Abstract
Within the wider discourse on economic planning, this paper critically interrogates the practical challenges of applying optimisation algorithms to achieve allocative efficiency on the scale of national economies. In questioning whether contemporary information technology solves the shortcomings of Soviet command planning, the paper provides a historical introduction to optimal planning and engages with four fundamental problems optimal planners would face, namely: the concern of computational complexity, the challenge of generating the economic data necessary for the constitution of such a system, the issue of reconciling a static optimum with the dynamism of real economies, and ultimately the relation of optimisation solvers to a hierarchy of ends. Beyond these epistemic concerns, the greatest risk of such a planning framework lies in the possibility of a slow descent into an authoritarian system that would mirror the Soviet experience, owing to its reliance on bureaucratic institutions to make the machine work.
Introduction
‘The silliest thing to do is not to calculate. The second most silly thing to do is to follow blindly the results of your calculation’ – Michal Kalecki
Since the conception of the discipline, economics has been understood as the rational means of identifying the best use of scarce resources between competing ends, regardless of whether the end is the survival of Robinson Crusoe, the prosperity of one's family, the wealth of nations, an egalitarian harmony or simply the generation of individual profits. Without a doubt, the means and ends discovered in this process, formerly referred to as 'political economy', can differ fundamentally. History nevertheless reveals surprising overlaps of techniques that cross the lines of opposing economic systems, which provide some empirical proof of the universal character or essential relatedness of economic problems and their possible solutions, or at least unveil a specific type of economic rationalisation at play across different cultures – should one prefer a more constructivist viewpoint.
One such method is mathematical optimisation. Today all successful companies deploy some form of optimisation algorithm in their internal planning processes, whether for supply chain scheduling, determining warehouse layouts, routing cargo fleets, or the optimal use of machinery. But can such a technology, which has proven its efficacy on the enterprise level, be applied to work in the interest of workers within a post-capitalist society collectively planning production? For many critical observers, such algorithms are lumped together with other algorithmic management technologies and seem merely to be a cause of alienation through the intensification of labour. However, an economic system that wishes to reduce working hours and avert ecological collapse by reducing resource consumption, through means other than austerity, would also have to deal with such issues as the optimal use of resources. This is also why the coordinative challenge of a socialist mode of production remains that of transcending allocation through the price mechanism with a more rational form of distributing scarce resources. At stake is whether mathematical approaches can assist in the coordination of material flows to realise a post-capitalist future. 1
To better understand this fundamental challenge, and to grasp why optimisation algorithms are even discussed today, it might be helpful to conceptualise the economy as being in search of allocative efficiency in order to realise the greatest possible satisfaction of needs. This search space is of course restricted by natural and social constraints and has both a dynamic and a static dimension. Despite their practical entanglement, one can think of dynamics as dealing much more with the uncertainty of future changes, while statics relate more to the present state of the economy. A short example can illustrate this. Say a given economy has the capacity to produce a million units of a certain microprocessor, but total demand exceeds this quantity. At this point, the dynamic planning decision could be made to increase supply or develop a new type of chip, both requiring investment in new capital stock such as lithography machines. If one wishes to hand over such investment decisions neither to the market and financial speculation nor to bureaucrats, but to socialise the investment function, then this economy would require what Aaron Benanav (2020) calls democratic 'planning protocols', or what Maxi Nieto (2021) refers to as 'investment councils'. Yet, to implement this, it will still require allocative decisions in the present that in many cases will take place under scarcity. Resources required for a new semiconductor fabrication plant might also be demanded elsewhere in the economy, and limiting factors will likely not allow all productive demands to be satisfied. 2 The same applies to our processors. Until production is ramped up to meet demand, one would face a state of shortage, requiring again decisions on where this scarce good is best allocated. While, in their opposition to finance capital, there seems to be a rough consensus among socialists on the dynamic aspects of planning towards the democratisation of the investment function, the widest dissent can be encountered in solving this static allocation problem, which comes down to the question of who gets what in the economy at a given point in time. 3
So far, our imaginative space seems to be constrained to five general trajectories for dealing with the static aspect of allocative efficiency. The first trajectory for solving this problem is the status quo of the market order. It is the price mechanism, still present in market socialist proposals, which steers the flow of scarce resources towards the most profitable enterprises. As a point of departure, this is what socialists still dedicated to the idea of an economic plan wish to transcend. Second, following on from Soviet command planning, these allocative decisions could be taken up again by bureaucrats. Marx disparagingly called this the 'papacy of production', and – after the experience of actually existing socialism – any such return to economic despotism seems to present the least desirable alternative. Third, as an inversion of such bureaucratic centralism, one could agree to grant the immediate producers full control over the allocation of the products of their labour, which would amount to an equally irrational dictatorship of the periphery, or an anarchism without markets, owing to their particular interests and limited view of societal needs. A fourth trajectory, favoured by most anti-authoritarians today, would be to solve this problem through a deliberative negotiation process involving everyone affected in order to reach consensus. Based on voluntarist principles, it is the utopian belief that the allocative efficiency of a world economy would harmoniously emanate from the multitude. In a slightly more bureaucratic form, this hope is articulated today in Pat Devine's (1988) call for negotiated coordination. Even in the unlikely case that consensus on the allocation of millions of scarce goods could be achieved among the participants 4 by horizontal means, with a myriad of meetings taking place simultaneously, each individual allocation decision would have cascading effects within the economy, resulting in a recursive loop of permanent renegotiation that would paralyse the economy in an inconclusive state and reinforce desperate calls for rationing by price or bureaucratic hierarchies to resolve persistent conflicts. Finally, the fifth and last resort humanity seems to have for coming to terms with the complexity of this allocative task is some form of algorithmic mediation.
In an attempt to provide some answers regarding the emancipatory potential of such algorithmic coordination, the following paper begins with a first part providing a historical introduction to optimal planning, while also briefly sketching classical Soviet command planning and its shortcomings. The second part engages with more recent proposals to apply optimisation algorithms to resolve the socialist inquiry for allocative efficiency and will outline four fundamental problems optimal planners would face, namely: the concern of computational complexity, the challenge of generating the economic data necessary for the constitution of such a system, the issue of reconciling a static optimum with the dynamism of real economies, and ultimately the relation of optimisation solvers to a hierarchy of ends, questioning whether such an optimal planning framework might be able to surmount both the economic and political problems of the Soviet system.
Optimisation on both sides of the Iron Curtain
The first optimisation algorithm for production planning was formulated by the mathematician Leonid Kantorovich (1960) in 1939 in the context of Soviet industrialisation, when he was approached by a Plywood Trust in search of assistance to maximise its output in order to hit its plan target. Given information about the productivity of eight peeling machines for five different types of wood and an assortment plan specifying product proportions, the optimisation problem consisted in identifying the best possible use for each machine. What had previously been solved by machinists and managers was now calculated by Kantorovich with a novel computational optimisation technique. Central to his method of 'resolving multipliers' was the calculation of opportunity costs, which he later named 'objectively determined valuations', indicating which combination of the expenditure of materials and labour to apply in order to achieve an optimal output. As Roy Gardner (1990: 643) points out: 'Kantorovich's fundamental economic insight' was that an 'optimal plan is inseparable from its prices'. From the early beginnings at the Plywood Trust, Kantorovich grasped the potential this technique had for the entire Soviet planning apparatus, because his objectively determined valuations function similarly to money prices in allowing economic actors on the ground to compare the opportunity costs or economic consequences of different production methods with regard to the global optimality of the system. In theory, this enables a form of in natura planning by making possible precisely the cost-benefit analysis, or inquiry into substitution relations, that Ludwig von Mises had claimed at the advent of the socialist calculation debate to be impossible in the absence of a market.
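In modern terms, the structure of the Plywood Trust problem is easy to state: each machine divides its working time among the wood types, and the solver maximises the scale of output while holding the assortment proportions fixed. A minimal sketch with entirely hypothetical productivity figures (Kantorovich's original data are not reproduced here):

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n_machines, n_woods = 8, 5
# Hypothetical productivity: units of peeled veneer per machine-day.
p = rng.uniform(1.0, 5.0, size=(n_machines, n_woods))
r = np.array([0.1, 0.1, 0.3, 0.3, 0.2])   # assortment proportions, sum to 1

n = n_machines * n_woods                   # time-share variables t[i, j]
c = np.zeros(n + 1); c[-1] = -1.0          # maximise z, the assorted output scale

# Equality: output of wood type j must equal its assortment share of z.
A_eq = np.zeros((n_woods, n + 1))
for j in range(n_woods):
    for i in range(n_machines):
        A_eq[j, i * n_woods + j] = p[i, j]
    A_eq[j, -1] = -r[j]
b_eq = np.zeros(n_woods)

# Inequality: each machine's time shares sum to at most 1.
A_ub = np.zeros((n_machines, n + 1))
for i in range(n_machines):
    A_ub[i, i * n_woods:(i + 1) * n_woods] = 1.0
b_ub = np.ones(n_machines)

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq)
print("maximal assorted output:", -res.fun)
```

The dual values attached to the machine-time constraints are, in effect, the resolving multipliers of Kantorovich's method: they price each machine's scarce time in terms of forgone output.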
Yet under the then-prevailing political circumstances Kantorovich was initially hesitant to circulate his ideas. This was because, following the outbreak of the Second World War and against the backdrop of the Great Purge, he feared his work might be used outside the country (Kantorovich, 1987: 256) and would therefore run the risk of implicating him in 'counterrevolutionary tendencies', owing to the proximity of his proposal to the bourgeois concept of opportunity cost, paired with the general aversion of Stalinist planners at the time to applying mathematics to the field of economics. Nevertheless convinced of its practical value for the Soviet economy, he soon began to cautiously lobby for the wider application of his ideas in optimising planning processes on a larger scale. After being turned down by Gosplan, the major state planning commission, during the Second World War he halted his campaign for the time being, in part for his own safety (many had been deported or executed for less) but also to place his mathematical abilities at the disposal of the motherland's war effort. 5
Meanwhile, on the other side of what soon became the Iron Curtain and without any knowledge of Kantorovich's previous work, George Dantzig, a likewise gifted mathematician, was inspired by the military logistical problems of the Second World War to develop a similar optimisation method in 1947 under the umbrella of the Army Air Forces Operations Research, 6 which would become the foundational method of what is today known in the West as linear programming. After William Orchard-Hays wrote the first commercial-grade software for solving linear programs in 1954 while working at the RAND Corporation, Dantzig's 'simplex method' became an essential part of capitalist production by offering solutions to routing and scheduling problems in production planning. Over the following decades, the adoption of linear programming schemes and other forms of mathematical optimisation expanded into every corner of the commercial world.
Ironically, at the height of Joseph McCarthy's red scare, certain individuals involved in developing mathematical solutions for economic problems, often coming directly out of the American Military-Industrial-Academic Complex, dared to express thoughts similar to Kantorovich's in terms of expanding the application of algorithms to coordinate entire economies. For instance, in the introduction to the proceedings of the first conference on linear programming, held at the Cowles Commission for Research in Economics in 1949, the organiser Tjalling Koopmans 7 weighed in on the socialist calculation debate by stating: The problem of efficient production then becomes one of finding the proper rules for combining these building blocks. […] To Mises' arguments regarding the unmanageability of the computation problems of centralized allocation, the authors oppose the new possibilities opened by modern electronic computing equipment. (Koopmans, 1951: 6–7)
Not only Koopmans but indeed a number of socialist scientists at research institutions in the West seriously engaged with the idea of engineering the economy at the time. To some extent, the Pentagon even treated such a computerised approach to economic allocation as a threat and as a potentially viable alternative to a capitalist market economy guided by price signals. In a classified CIA report on Leonid Kantorovich's book The Best Use of Economic Resources (1965), originally published in Russian in 1959 – following Stalin's death and the commencement of 'Destalinisation' – to popularise his ideas, the far-sighted reviewer concluded that: this book is exceptionally significant because of the contribution it can make to Soviet economic theory and to planning practice in so far as it is accepted. If Kantorovich's concepts and methodologies were fully implemented there would have to be some shift in the locus of economic power – an unlikely development. The Soviet vested interests, be they party officials, managers, or planners, are not likely to relinquish control over the multitudinous economic decisions made at the intermediate layers of the Soviet economy. (CIA, 1960: 5)
For the American security apparatus, mathematical programming was regarded as an essential part of a perceived Soviet embrace of cybernetic theory and a wider scheme to develop a 'Unified Information Net' (Conway and Siegelman, 2005: 318), for which the agency opened 'a special branch to study the Soviet cybernetics menace'. In 1962, President Kennedy's top aide warned that the 'all-out Soviet commitment to cybernetics' would give the Soviets 'a tremendous advantage' and that 'by 1970 the USSR may have a radically new production technology, involving total enterprises or complexes of industries, managed by closed-loop, feedback control employing self-teaching computers'. If the American negligence of cybernetics continued, he concluded, 'we are finished' (as quoted by Gerovitch, 2008: 336). However, any such concerns were unjustified: an information network for computerising the planning process and overwriting the predominantly analogue and largely uninformed material balance planning process with a unified planning framework utilising mathematical optimisation techniques never materialised in the Soviet Union. It remained instead only a futuristic vision, or a threatening backdrop to the Cold War.
The Soviet system
Classical Soviet command planning 8 consisted of three levels: at the top reigned Gosplan, responsible for implementing the political goals of the Politburo regarding plan targets for key industries and for ensuring the consistency of the national production plan for highly aggregated key goods. Immediately below Gosplan were the industrial ministries, in charge of linking centre and periphery by breaking down the aggregated instructions passed down by the central planning commission in order to distribute the output targets to the branches of industry under their auspices, and of aggregating the reports of the enterprises on the way up. At the bottom were the individual enterprises, responsible for the fulfilment of the obligatory plan targets. In certain periods, different intermediate bodies – from Sovnarkhozy to associations (see Gorlin, 1974) – were called into existence, located between the enterprises and ministries in the planning hierarchy.
Despite the hierarchical nature of this planning process, in which Gosplan drafted the national production and distribution plan while ensuring the material balance between inputs and outputs, enterprises could still influence the plan by sending counter-proposals up the chain of command. With the centre digesting this information provided from below, the planning procedure would then go through several rounds of iteration. Yet, either way, the enterprises were subject to the binding instructions of planning bureaucrats and had little scope for self-direction: their superiors held sway over both what they should produce and with what materials they should achieve their production targets, and producers did not have much influence over the choice of their suppliers either. Led by party-appointed managers, the internal organisation of the workplace mirrored the pyramidal form of the Soviet planning apparatus. Democracy at the workplace or within the wider economy was de facto non-existent.
While the competitive structure of the market tends towards overproduction, resulting in a buyer's market where customers are ideally in a position to choose between suppliers, the Soviet economy was an economy of shortage (Kornai, 1980), a dictatorship of the supplier, where customers only had the Hobson's choice of taking what they were allocated or leaving it. Although there were no official horizontal links between enterprises without the mediation of bureaucratic superiors, the result of chronic misallocations was that an informal grey zone emerged, tolerated by the officials, where enterprise supply agents haggled directly with each other, using the allocated resources they had no use for to secure the necessary inputs for their production unit. In this system, most economic calculations and allocations of resources were realised predominantly in physical terms. Although prices existed, they were determined not through market forces but by the central authorities. Thus, value indices served less of an allocative purpose than a 'control and evaluation function' (Bornstein, 1962: 66). Yet even the latter function had hardly any consequences for the wider economy, as the inefficiency of the system was further intensified by the lack of any mechanism for weeding out unproductive or superfluous producers through bankruptcy: Soviet enterprises were running with a 'soft budget constraint', which meant that individual enterprise losses were 'paid by some other institution, typically by the State' (Kornai, 1986: 4), and the responsible managers usually retained their positions regardless of performance.
The systemic problem of shortages was only exacerbated by the endemic overinvestment inherent to the Soviet economy, an investment hunger caused by the overambitious plan targets of the political leadership, the unsustainable prioritisation of military and industry over consumer needs, as well as the empire-building of sectoral and regional ministries, all of which led to the fragmentation of investments and long delays in the finalisation of projects. This critical defect of chronic shortages not only had quantitative effects, such as unplanned downtime in manufacture due to 'unproductive slack' (Kornai, 1980: 103), but also reduced the overall quality of available goods. There were hardly any consequences for producers who provided inferior quality goods or products with specifications for which there was no demand. Furthermore, long-term thinking was disincentivised; what mattered instead was the short-termist fulfilment of quarterly plan targets. Consumer feedback was highly dysfunctional in the Soviet Union, particularly for end consumers. Production seemingly detached from consumer needs, together with a dysfunctional pricing system, led to an economic reality of large stockpiles of goods where they were overpriced, and long queues and black markets where they were underpriced.
Another serious problem of the system was that the centre was not able to process disaggregated information, thus requiring the mid-level ministries as an interface. One of the better-known consequences of this were the aggregated target indicators passed down to the enterprises, which created perverse incentives. In the absence of functional feedback channels and without further specifications, physical output targets created curious distortions from consumer needs due to the strategic manipulation of factory managers gaming the system. Among countless such stories, a popular anecdote is the manufacture of Soviet nails, which were either tiny in size when the plan target was expressed as a certain quantity of pieces or enormous when it was articulated in tons. Another incentive problem was that producers disguised their productive capacities in order to fulfil their plan targets more easily, and also inflated their input requirements to ensure that they were allocated the necessary amounts. The overall supply uncertainty for producers led to further inefficiencies, such as the hoarding of large stockpiles as well as the in-house production of critical components, leading to less specialised factories and a decrease in overall efficiency.
Ultimately, Soviet planning was built on the idea of material balances to ensure plan consistency. For a plan to be 'feasible' meant merely, first, that the resources used in the plan were actually available, and second, that the output of suppliers matched the input of other producers, that is, consistency of inputs and outputs throughout the economy. In theory, this was achieved with material balance sheets, yet in practice the centre was overwhelmed by the task: the data provided was not very reliable, it lacked the crucial information on the availability of resources, and the central planners were often lagging behind, unable to process the available data fast enough due to bureaucratic limitations grounded in the overwhelmingly analogue state of the planning infrastructure. This is why the central organs had to formulate their plans using coarse categories, which led to troublesome aggregation errors: while the national production and distribution plan might be balanced in aggregate terms, it was inconsistent on the disaggregated level, so that the plan had to be amended regularly throughout the year to correct the consequential errors. But even if the centre had been able to formulate a feasible plan in disaggregated terms for the national economy by means of material balances, this would not have allowed it to determine whether the plan was a good or even the best one. It was in this environment that certain Soviet scientists hoped to enhance the planning process through mathematical optimisation methods, sometimes also referred to as 'economic cybernetics' or 'planometrics' (Zauberman, 1962), whose application Kantorovich framed as follows: the system of optimal planning by no means presupposes the full centralisation of economic decisions. On the contrary, thanks to the fact that together with the national economic plan a system of prices and valuations (output/capital norms, rent for land and natural resources, investment efficiency norms and so on) consistent with it is compiled, the possibility arises of taking decisions maximally consistent with the interests of the national economy locally. This is conducive to the wide utilisation of the initiatives of economic collectives, the possibility of mobilising resources and uncovering reserves locally, allows the expansion of the rights of separate economic units, and the construction of a system of valuations and stimulation of the work of separate units, such that, that which is profitable for society as a whole is profitable also for every enterprise. In other words, such a system creates the theoretical basis for the solution of the problem of the combination of centralized management of the economy with wide rights and initiative locally on the basis of economic methods of control. (As quoted by Ellman, 1968: 117)
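The contrast between balancing and optimising can be made concrete in matrix terms: a plan of gross outputs x is merely consistent if it covers intermediate use Ax plus final demand d – a check (and fixed-point calculation) sketched below with hypothetical coefficients. Nothing in it says whether the plan is a good one, which is precisely the step optimal planning adds.

```python
import numpy as np

# Hypothetical technical coefficients: A[i, j] = units of good i
# consumed per unit of good j produced.
A = np.array([[0.0, 0.2, 0.1],
              [0.3, 0.0, 0.4],
              [0.1, 0.1, 0.0]])
d = np.array([100.0, 50.0, 80.0])    # final demand per good

x = np.array([150.0, 130.0, 120.0])  # a proposed plan of gross outputs
print("surplus (+) / shortage (-):", x - A @ x - d)

# The consistent plan that rounds of balancing grope towards:
x_star = np.linalg.solve(np.eye(3) - A, d)
print("balanced gross outputs:", x_star.round(1))
```

Soviet planners attempted this reconciliation by hand, for coarse aggregates, through successive rounds of adjustment.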
While mathematical programming would necessarily take on a central role in this procedure, its advocates well understood that such algorithms would not solve all of the problems mentioned above and would instead have to be introduced alongside market mechanisms and other reforms that democratise the planning process. So despite the perceived technocratic rule of applied mathematics over economic matters, these planning theorists in fact believed that their approach would ultimately allow for an increase in the autonomy of the periphery.
This discursive ascent of optimal planning from operations research on the enterprise level emerged in the wake of the Soviet cybernetic movement, which provided a wider philosophical language but also a material push towards the networking of the country. Cybernetics, which had initially been shunned as a reactionary pseudoscience under Stalin, was rehabilitated after his death through influential figures such as the pioneering cybernetician Anatoly Kitov and soon became state doctrine under Nikita Khrushchev's vision of automating the economy. Yet ambitions to network the planning apparatus through information-technological infrastructure (hardware) and to apply mathematical models as well as the necessary reforms (software) on the national level ultimately failed due to a lack of political will. There was a surge in the popularity of economic cybernetics from the late 1950s to the early 1970s, spearheaded by the 'System of Optimal Functioning of the Economy' (SOFE) research programme led by Nikolai Fedorenko and Vasily Nemchinov, including several attempts to build national information networks, most notably the 'All-State Automated System' (OGAS) envisioned by the cybernetician Victor Glushkov. 9 However, all of these attempts were met with heavy resistance from both market reformers and conservative forces amongst self-interested administrators committed to the status quo, who well understood that a successful introduction of such technology would result in a loss of their power or might even make them obsolete. Many local projects by ambitious optimal planners failed because the ministries simply did not provide the relevant statistical data. Already in 1961, the Polish economist Oskar Lange compared the situation of the Soviet planometrician to that of 'a chemist who is denied access to his laboratory or an astronomer prevented from observing the sky' (as quoted in Smolinski, 1973: 1190). Another important factor in why optimal planning was never implemented on a larger scale in the Soviet Union was justified scepticism concerning the viability of the available computational power at the time, linked to the astronomical investment costs attached to realising this vision. The risk of failure was simply too high, and liberal market reforms came with the promise of being the less risky option.
The widest application of optimal planning in the Soviet Union was realised by Kantorovich himself in 1970, in optimising the production scheduling of the steel industry, addressing about '1,000,000 orders, involving 60,000 users, more than 500 producers and tens of thousands of products'. After six years of collecting the necessary data, Kantorovich formulated the optimisation of production schedules and attachment plans as a linear program with 'more than a million unknowns and 30,000 constraints'; actual savings in steel after implementation came to 'only 108,000 tons, although the calculated saving was 200,000 tons', due to imperfect information. Nevertheless, it turned out that the use of computers in planning the steel industry had a major advantage in addition to enlarging output by making better use of productive capacity. It enabled the degree of aggregation of requirements during the planning process to be reduced, and hence reduced the divergence between output and requirements (Ellman, 1973: 72–75).
Prior to the collapse of the Soviet Union there had been a variety of such projects, yet hardly any of their advocates went so far as to optimise the economy within a single model, a hope that was 'dubbed computomania by its opponents' (Gardner, 1990: 646). János Kornai, for instance, who developed and applied methods of optimal planning on behalf of Hungary's socialist government and gradually became disillusioned with the prospect of efficient economic planning through his experience in the trenches of the Soviet-style planning system, stated as early as 1970 that it is 'a science-fiction idea to cover all relevant problems of an economic system in a single model' due to the sheer number of variables and equations, calling instead for 'a united system of models' (Kornai, 1970: 14–15). 10
Similarly, in a final statement before his death in 1986, Leonid Kantorovich maintained that the ultimate goal of his vision was not the formalisation of a single model but the creation of a single complex of interconnected models encompassing the entire national economy, as well as systems of forecasting, planning, and centralized and decentralized management of the national economy that provide working people and managers of individual levels and sectors of the economy broad opportunities to display their initiative. (Kantorovich et al., 1987: 17)
In addition to the complexity and enormity of the problem in terms of the number of variables, which speaks against a singular model, it also proved difficult to derive an objective function for the economy as a whole. Maximising the output of a plywood trust in the right proportions or minimising material consumption in the steel industry is one thing, but towards what end should the entire economy be optimised? What would be the criterion of such optimality? There was an intense debate on this question in the Soviet literature, which Alec Nove summarises in the following terms: The party's general economic policy objectives are too general, too diffuse, to serve as an operational criterion. If asked by the leadership to produce a programme which contains optimal targets for the year 2000, the economic profession cannot escape the problem by taking the targets adopted by the leadership as its optimisation criterion. After all, the leadership asks the advice of the economic profession about what these targets should be! How is one to distinguish means from ends? Various proposals are mooted: maximise the national income; maximise labour productivity; minimise costs for a given set of outputs; and so on. To maximise human welfare is altogether too vague, and the more intelligent Soviet specialists never forget that some aspects of welfare (leisure, quality of life, environment, etc.) do not figure in national income statistics. But in the Soviet case even the purely material part of welfare is poorly reflected in aggregate statistics. […] The whole notion of somebody defining an objective function rests on the incorrect assumption that there is only one actor. (Nove, 1991: 101)
The decentralising turn initiated by the Liberman reforms of 1965 was deemed complete when Gorbachev's perestroika put the final nail in the coffin of any attempt at nationwide algorithmic coordination. With the ensuing collapse of the Soviet Union, further research on optimal planning came to an abrupt halt. Yet, as it was never fully implemented in the Soviet Union, the question remains open as to whether the assessment would be different with contemporary technology at hand. Are our technical means today sufficient to allow the centre to plan in disaggregated terms and to realise a mathematically optimised production and distribution plan, and if so, would this solve the shortcomings of the Soviet system mentioned above, especially in regard to consumer satisfaction and workers' autonomy?
The return of optimal planning and its computational complexity
While many planometricians like János Kornai turned their backs on optimal planning after the implosion of the Soviet Union, the torch was subsequently picked up by the Western Marxists Paul Cockshott and Allin Cottrell in their influential book Towards a New Socialism, published in 1993, where they developed a cybernetic planning model that supplants classical linear programming with a more efficient 'harmony algorithm', laying the groundwork for the discursive resurgence of using algorithms for the economic coordination of material flows. In its simplicity, this rather mechanistic framing of the economy envisions the socialist allocation problem being solved within a unified model consisting of a set of gigantic input-output matrices in disaggregated terms, providing information about production methods in relation to a target vector, that is, the plan target for final consumer goods. What is eventually optimised in the model is plan fulfilment, or in other words, the harmony of supply and demand. This is achieved mathematically through a singular objective function (i.e. the harmony function), subject to a set of constraints, that minimises the deviation of the final production output from the initial plan target for final consumer goods. As an outcome of this optimisation process, one receives a detailed allocation plan for the entire economy. 11 Their decision for a single objective function is built on the assumption that contemporary information technology would allow the process to run in disaggregated terms, which constitutes a crucial difference from the situation of poor aggregate statistics that Soviet planners faced. In theory, this would allow detailed allocation decisions to no longer be undertaken by bureaucrats endowed with particular interests, but by a seemingly objective algorithm. In this way, the motivation for using such algorithms is not only to increase allocative efficiency but, despite its centralist characteristics, also to wrest away the political power formerly held by Soviet bureaucrats and to replace the arbitrariness of their subjective decisions with a more objective process guided by formal logic.
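As a rough illustration of the class of problem involved – not Cockshott and Cottrell's harmony algorithm itself, whose harmony function and solution heuristic differ – one can minimise the total shortfall of net output against the target vector, subject to a labour constraint, with an off-the-shelf LP solver (all figures hypothetical):

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical data: 3 final goods, 4 production methods.
N = np.array([[ 1.0, 0.5, 0.0, -0.2],   # net output of each good per unit
              [-0.3, 1.0, 0.8,  0.0],   # intensity of each method
              [ 0.0, 0.0, 1.0,  1.0]])
labour = np.array([2.0, 1.0, 3.0, 1.5])  # person-hours per unit intensity
L = 500.0                                # labour available
target = np.array([120.0, 90.0, 150.0])  # plan target vector

g, m = N.shape
# Variables: m intensities s, then g shortfalls u; minimise total shortfall.
c = np.concatenate([np.zeros(m), np.ones(g)])
A_ub = np.vstack([
    np.hstack([-N, -np.eye(g)]),          # N s + u >= target
    np.hstack([labour, np.zeros(g)]),     # labour cap
])
b_ub = np.concatenate([-target, [L]])

res = linprog(c, A_ub=A_ub, b_ub=b_ub)    # all variables >= 0 by default
s, u = res.x[:m], res.x[m:]
print("intensities:", s.round(2), "| shortfalls:", u.round(2))
```

The output of such a computation is precisely an allocation plan: activity levels for every production method, with any residual shortfall made explicit.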
More recently, other proposals have been put forward applying the simplex method (Dapprich, 2022), machine learning (Samothrakis, 2021) or an interior point algorithm (Härdin, 2021b) for their optimisation. 12
Regarding their technical feasibility, such algorithmic calculations rest upon a twofold challenge. In the literature this is usually understood as the two parts of Friedrich Hayek's 'knowledge problem', which can be subdivided into the issue of generating and that of processing economic data. 13
While usually attributed to Hayek, the latter concern was already articulated by his colleague Lionel Robbins in 1934, who acknowledged that the socialist planning problem could be framed as a system of equations, but believed that in practice this solution is quite unworkable. It would necessitate the drawing up of millions of equations on the basis of millions of statistical tables based on many more millions of individual computations. By the time the equations were solved, the information on which they were based would have become obsolete and they would need to be calculated anew. (Robbins, 1934: 151)
Meanwhile, regarding classical linear programming schemes, this argument still holds true today, due to their rather high computational complexity. In computer science, the run time of algorithms is differentiated into complexity classes according to big O notation, the most important classes being: constant O(1), linear O(n), log-linear O(n log n), polynomial O(n^k) or exponential O(e^n). Given the same input size n, the run times of these classes differ substantially. Algorithms in exponential time are only feasible for the tiniest data sets, but even those with polynomial run time can bring contemporary supercomputers to their limits if the input is large enough. For a long time, it was not known whether linear programming techniques such as the simplex method had exponential complexity. Today we know that for most problems the simplex algorithm runs in polynomial time, with a complexity of about n^3. It is certain that during the 70s and 80s not enough computing power could have been mobilised in the Soviet Union to handle an input of millions of variables, 14 and even with the computational means of the present, an algorithm of that complexity would be practically infeasible for calculating a plan for a national economy in disaggregated terms.
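The practical gulf between these classes is easy to illustrate with back-of-the-envelope arithmetic. Assuming (generously) a machine performing 10^12 elementary operations per second, and ignoring the constants hidden by the notation:

```python
import math

OPS_PER_SEC = 1e12   # assumed throughput; generous for a single machine

for n in (10**6, 10**9):              # a million and a billion variables
    cubic = n**3 / OPS_PER_SEC        # an O(n^3) algorithm
    nlogn = n * math.log2(n) / OPS_PER_SEC
    print(f"n = {n:.0e}: O(n^3) ~ {cubic / 86400:,.0f} days, "
          f"O(n log n) ~ {nlogn:.2g} s")
```

At a million variables an n^3 method is merely slow (on the order of days); at the disaggregated scale of billions of variables it slips beyond any realistic horizon, while an n log n method stays within fractions of a second.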
However, the recent proposals build on the insight that in dynamic economies there is no need for an exact optimum, and approximations will likely be sufficient for the task. By sacrificing a degree of optimality for computational feasibility, algorithms like Cockshott's harmony algorithm, which is of order O(n log n) (Cockshott, 2019b: 314), and the interior point method used by Härdin (2021b), which allows input sizes of up to several billion variables to be computed in a matter of hours on a single machine, render the computational part of the knowledge problem null. Computation time is further reduced by the fact that the plan does not have to be recalculated in its entirety, since operations only have to be performed on variables that have changed between iterations, and by the fact that such a matrix would be very sparse: the overwhelming majority of entries in a disaggregated input-output table will be null, since only a fraction of all existing products will ever enter as an input into the production of another good. 15 Distributed computing would moreover allow these calculations to be made not in a single computation centre but across a decentralised computer network.
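The sparsity point can likewise be made concrete. A hypothetical table of 100,000 products with around 200 direct inputs each is 99.8 per cent empty; stored densely it would occupy tens of gigabytes, stored sparsely a few hundred megabytes – a gap that only widens at realistic product counts:

```python
import numpy as np
from scipy import sparse

n_products, inputs_each = 100_000, 200   # toy scale; real tables are larger
rng = np.random.default_rng(0)

# Each product (column) consumes ~200 randomly chosen inputs (rows).
rows = rng.integers(0, n_products, size=n_products * inputs_each)
cols = np.repeat(np.arange(n_products), inputs_each)
vals = rng.uniform(0.01, 1.0, size=rows.size)
A = sparse.csr_matrix((vals, (rows, cols)), shape=(n_products, n_products))

dense_bytes = n_products**2 * 8          # one 64-bit float per entry
sparse_bytes = A.data.nbytes + A.indices.nbytes + A.indptr.nbytes
print(f"density: {A.nnz / n_products**2:.1e}")
print(f"dense: {dense_bytes / 1e9:.0f} GB, sparse: {sparse_bytes / 1e6:.0f} MB")
```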
Thus, the technical problem of processing the economic data appears to be tractable, although the authors of a recent Austrian critique informed by complexity economics and Hayek's theory of mind claim otherwise, arguing that the 'complete control of any complex system is impossible, due to the problem of self-reference' (Moreno-Casas et al., 2022: 574–575). However, such a framing of computational complexity goes far beyond the technical issue of number crunching, which seems to be – and here even the Austrians might agree – not the critical limitation of such an economic framework. What is at stake is not whether our technical systems can handle the data load, but how accurate economic data is generated and fed into such a model, and how the static map is then laid back over the dynamic territory.
The formalisation of production knowledge
Following the trajectory of replacing a competitive market with an optimal allocation by algorithmic means, the fundamental challenge such a framework faces is not the processing of data, but the generation of accurate economic information to even constitute such a system of (non-)linear equations. This should become clear as soon as we change our point of view. While, from the macro-economic perspective of the optimal planner, the economy appears as a matrix, from the micro viewpoint of immediate producers its parts must first be formulated as what is called a 'production function', an individual production plan; together, these constitute such an input-output matrix. While each row of the matrix would indicate the use of a specific product across different industries, each column would constitute a unique production method. Both the recent optimal planning proposals discussed above, which at their level of abstraction do not deal with such micro-economic questions in sufficient detail, and the vague critique expressed by the Austrians concerning access to dispersed and tacit production knowledge, which fails to deal with the models in detail, do not engage closely enough with this problem of articulating production functions. This section aims at bridging this gap. To a degree, it is a concession to the Austrians, without going so far as to concede their impossibility claim; yet it might bring some light to serious problems socialist planners would have to find answers for.
For the foundation of such an optimal planning framework, each workplace would be required to draft a production function for each of its production methods by mapping the maximal output obtainable from a set of inputs. The components of such a production function are referred to as 'technical coefficients' in the literature. In the absence of a producer market, it would not be enough to express these production functions in monetary terms; they would have to be disaggregated input-output functions in order to generate the information on which an optimal allocation is computed. Such a level of detail is necessary so that the centre does not run into the same problems the Soviet Union did when dealing with aggregated data. Disaggregation also resolves a large part of the dysfunctional target indicators mentioned above: there would no longer be a need to hand down unspecified output targets in tons or other physical units. By granting producers a certain degree of freedom in deciding what they want to produce, and allowing them to choose between input alternatives with exact specifications and product codes, detailed information on the total demand for specific intermediary goods would arise from the bottom up. Likewise, detailed demand information for consumer goods would have to be generated. 16 To what degree these statistics would then function as binding plan targets for the responsible producer, and by which means, discipline or incentive, they would ultimately be enforced, would be for the social body to decide.
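What such a drafted production function might look like as a data record is suggested below. The schema – product codes, per-method input coefficients, labour by skill, by-products – is a hypothetical illustration, not taken from any of the cited proposals:

```python
from dataclasses import dataclass, field

@dataclass
class ProductionMethod:
    """One column of the disaggregated input-output matrix."""
    output_code: str                    # product code of the good produced
    output_qty: float                   # units produced per run of the method
    inputs: dict = field(default_factory=dict)        # product code -> quantity
    labour_hours: dict = field(default_factory=dict)  # skill type -> hours
    byproducts: dict = field(default_factory=dict)    # e.g. emissions, waste

# A workplace drafts one such record per method it operates:
method = ProductionMethod(
    output_code="CHIP-7NM-A",
    output_qty=1_000_000.0,
    inputs={"WAFER-300MM": 9_500.0, "PHOTORESIST-KG": 120.0},
    labour_hours={"process-engineer": 8_000.0, "technician": 22_000.0},
    byproducts={"CO2-EQ-T": 340.0},
)
```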
Regarding fears for workers' autonomy, as expressed by anarcho-communists like Jasper Bernes (2020), it has to be stressed that in principle such an optimisation process does not require any form of centralist despotism regarding plan targets. It would work just as well under the condition of leaving decisions on individual plan targets to the workplaces themselves. In this decentralised, high-trust scenario, statistics would function only as indicative targets, a mere orientation towards what is best for the economy as a whole. The final decision on the extent to which these are complied with could be left entirely to the immediate producers, by defining the highest possible output each workplace would be willing or able to achieve. This information could be integrated into the system either by overriding the target vector in the case of consumer goods or by introducing a set of constraints representing the highest possible output of each workplace, as sketched below. What an optimisation solver would then calculate is the best possible plan that does not violate these conditions. The only thing that would be decided centrally by algorithmic means in such a scenario would be where finished products have to be delivered. This central question of plan targets ultimately comes down to whether we want to live in a dictatorship of the producer or of the consumer, whether one wishes to minimise the external compulsion labour faces or extort the greatest degree of material wealth from it. Yet the labour-disciplining coercion that plan targets introduce, and the need to harmonise worker autonomy with other societal needs, is not a problem specific to optimal planning – it sits at the heart of a socialist mode of production. 17
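Technically, such self-chosen ceilings are trivial to accommodate: they enter the program as upper bounds on each workplace's activity variable, and the solver returns the best plan that respects them. A minimal sketch with hypothetical numbers:

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical: 3 workplaces producing one good each; maximise total output
# delivered towards a target, but never above what each workplace itself offered.
offered_ceiling = np.array([120.0, 80.0, 60.0])   # set by the workplaces
labour = np.array([1.0, 2.0, 1.5])                # hours per unit of output
L = 300.0                                         # labour available

c = -np.ones(3)                                   # maximise total output
res = linprog(c,
              A_ub=labour[None, :], b_ub=[L],
              bounds=list(zip(np.zeros(3), offered_ceiling)))
print("planned outputs:", res.x)                  # never exceeds the ceilings
```

The solver fills labour-cheap capacity first, but no workplace is ever instructed to exceed the ceiling it declared for itself.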
An optimisation solver, as proposed by Härdin, would also allow producers to suggest alternative configurations for production methods, with the algorithm then choosing the plan alternative that is best for the economic system as a whole. In this way local producers would have far more leverage in choosing their suppliers, proposing different input alternatives and even ranking their choices, whereas in the Soviet Union enterprises in most cases received quotas for materials as their superiors saw fit. As part of this cybernetic discovery procedure, dynamic valuations could be calculated, similar to how Kantorovich imagined them, 18 that would serve as the basis for future cost-benefit calculations. In determining substitution relations between alternative components, such metrics for opportunity costs would provide producers not with information about static costs, as labour values do, but with dynamic information on the relative scarcity of goods, similar to money prices, thus offering immediate producers much more control over what is actually allocated to them compared to the Soviet system. As indicators of bottlenecks, these valuations would serve a central function in the economy's feedback loops: they would not only assist producers in making plan adjustments on the micro level but would simultaneously inform investment decisions on the macro level about where to expand industries. Enabled by such valuations, one might think that most planning could in fact be done at the rim, and that much of the work formerly done by the ministries subordinated to Gosplan, in aggregating the upward-flowing information and disaggregating the downward-flowing instructions, would become superfluous the moment the centre was able to process disaggregated data. In theory, this nullifies the argument brought forward by Ludwig von Mises and Friedrich Hayek on the importance of having dynamic valuations to figure out substitution relations, later synthesised by Don Lavoie (1985) and raised again more recently by Boettke and Candela (2023) in their case against 'Technosocialism': at first glance, optimisation solvers could indeed create an environment that cultivates economic action rather than controlling it entirely from the top down, by granting the periphery full autonomy in mapping their productive space. The only thing the centre would do in the end is determine the optimal combination of the individual plans drafted by the immediate producers themselves.
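With a linear programming solver, such valuations fall out of the computation for free, as the dual values (shadow prices) of the resource constraints. A sketch with hypothetical figures, using scipy's HiGHS backend, which exposes the duals as 'marginals':

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical: maximise output of one good from two methods that draw on
# two scarce resources; the duals value those resources in output terms.
A_ub = np.array([[2.0, 1.0],    # resource 1 used per unit of each method
                 [1.0, 3.0]])   # resource 2
b_ub = np.array([100.0, 90.0])  # available stocks
c = -np.array([1.0, 1.2])       # negated: linprog minimises

res = linprog(c, A_ub=A_ub, b_ub=b_ub)
# Marginals: change in the (minimised) objective per unit of extra resource.
# Their negatives play the role of 'objectively determined valuations'.
valuations = -res.ineqlin.marginals
print("plan:", res.x.round(2), "| valuations:", valuations.round(3))
```

A valuation of zero would mark a resource in surplus; a high valuation marks a bottleneck, which is exactly the feedback signal the paragraph above describes.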
In practice, however, it would not be a simple task to formalise the tacit production knowledge involved into detailed production functions. To get an idea of how such a production function for a specific product might look, an example might be helpful. Take, for instance, a car manufacturer, whose production function would define the required inputs: the exact quantities of different parts with exact specifications down to the last bolt; estimates of the different types of labour required, expressed in hours, which would have to be broken down further; the share of machinery time, whose depreciation rates would have to be approximated; other administrative costs and overheads; the rent for the factory; electricity and so on. All this would have to be expressed in the right ratio to map inputs to a specific quantity of output. Then, on the output side, one would have not just a singular car unit, or depending on the granularity a specific number of units, but also all waste and by-products, such as the greenhouse gas emissions arising from this productive activity, if one wishes to internalise externalities, bringing with it problems of measurement and deception. This only becomes more complicated the more heterogeneous the output is. Cars with the same specification might be treated as equivalents, but what about logistics, where every route is a unique output? 19 Or what would the production function of a university look like?
These are tasks that are not performed in such detail under capitalism, where production functions are mostly used in macro-economic contexts and budgeting within enterprises is done in aggregate monetary terms rather than in natura, as it is simply unnecessary, challenging and therefore too costly to formalise tacit production knowledge in this way. This is why not a single person at a car manufacturer like Volkswagen is in full possession of the knowledge of what is required to produce a specific car unit. The easier aspects to determine are certainly the inputs that are directly consumed, in our example the parts a car is made up of; for the rest of the inputs there will at best be estimates, which always bring with them the danger of distortion and misallocation. Nor would one want to compose these production functions with regard to total output, which might be easier for an enterprise, by simply tracking what comes in and out and expressing it as a simple input-output equation. Doing so would result in a highly static production function that is only accurate for one specific output quantity. But for most intermediate goods the demand will not be known before the optimisation process, and there would also not be much for the algorithm to optimise, because there would be no information about elasticity in the system. At best the solver could determine the feasibility of all existing plans, similar to Soviet material balance planning. Any change in outputs would require producers to manually adjust their production functions according to the new target, which would clog the iterative planning process and strain producers in their day-to-day operation. Most producers would then be obliged to provide not one production function but variants for different demand scenarios. Taking into account the dynamism of real economies, paired with the wish to streamline the optimisation process, it might appear more beneficial the more granular these production functions are, up to a breakdown of what is required for a single unit, in order to dynamically scale them up and down with changing demand. In this vein, Philipp Dapprich (2022: 7), for instance, understands optimisation as the simple process of identifying optimal scalars to determine the intensity of each production function in the economy. 20 But practically this too will be infeasible without manual adjustments.
Not only are production functions with lower output sizes more challenging to compose; regarding their elasticity, they also come with a serious risk of distortions, for no matter how well they are defined, one cannot blindly extrapolate them: the ratio between inputs will alter significantly with changing output targets, as with economies of scale one will always be confronted with non-linearities and what in optimisation theory is called non-convexity. In the case of our car factory, significant investments will be required to produce the first car. But to make ten thousand cars, one will not need to expand the machinery of the factory by the same amount, but might only need more labour and parts. And even labour requirements will not scale linearly in most cases. Likewise, with the multiplication of simple input-output equations, the associated producers might be confronted with irrational instructions, such as the addition of one third of an assembly line. Moreover, this car factory would likely produce more than one model, and each model would probably have different specifications, which would require determining the share of each machine, business operations software or other overhead to attribute to each of the different production methods. And with any change in output, this ratio would have to be rearranged anew.
This is why, for the construction of a single-year plan, Cockshott proposes to separate a flow matrix for direct inputs, raw materials that are immediately used up and could easily be scaled up and down, from 'a corresponding capital stock matrix, specifying the amount of machine Y needed to produce an annual flow of P of output x', 21 while the net output for the current planning period is expressed as a target vector. With this measure he separates the linear material consumption of circulating capital from the non-linear consumption of fixed capital. But it is highly questionable whether this really does justice to the complexity of the problem. Other optimal planners demonstrate far greater awareness of the problem, and interesting research is being done to address these issues with the help of mixed integer programming and piecewise linearisation to approximate the non-linearities (Härdin, 2021a; CibCom, 2022: 43–44); yet regardless of the feasibility of these solutions, it means producers would require data scientists within their enterprises, or external expertise, to assist in modelling this. Another option would be to use artificial intelligence to draft these production functions on the basis of different demand variants. Yet, given the current state of this technology, it is doubtful whether one wants to rely on such alchemy for so critical a part of the planning process.
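To give a flavour of how mixed integer programming captures such non-convexities: in the classic fixed-charge formulation, a binary variable carries the one-off set-up expenditure (tooling the assembly line), while a continuous variable scales the per-unit inputs only once that switch is on. A toy sketch with hypothetical numbers:

```python
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

# Variables: x = cars produced (continuous), y = line tooled up (binary).
# Minimise resource cost = 5*x + 2000*y, subject to x <= 10_000*y (no output
# without the fixed set-up) and x >= 300 (the plan target).
c = np.array([5.0, 2000.0])
constraints = [
    LinearConstraint(np.array([[1.0, -10_000.0]]), ub=0.0),  # x - 10000*y <= 0
    LinearConstraint(np.array([[1.0, 0.0]]), lb=300.0),      # meet the target
]
res = milp(c, constraints=constraints,
           integrality=np.array([0, 1]),          # x continuous, y binary
           bounds=Bounds(lb=[0, 0], ub=[np.inf, 1]))
print(res.x)   # -> [300., 1.]: produce 300 cars, pay the set-up cost once
```

The point of the illustration is the cost of realism: every such non-convexity adds integer variables, and integer programming is dramatically harder than the linear case discussed above.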
So rather than simply scaling production methods up and down, in reality an algorithmic planning framework will demand constant meddling with these functions. Doing this somewhat accurately will be a challenge that should not be underestimated; already thinking about the task produces headaches. Furthermore, it is highly unlikely that all the production requirements, fixed capital, intermediary goods and overheads necessary to keep an operation running can be known in complete detail and in their correct ratios from the start. In most cases, the best that could be achieved is a rough approximation, not much different than in a market economy, as much of this knowledge will only emerge along the way. It will therefore be necessary to dynamically alter these functions over the course of a planning period to add missing inputs, which must be added proportionally to each production method to which they contribute, often requiring an alteration in the ratio between technical coefficients.
This Sisyphean task would be a constant companion of all planning processes at the level of the enterprise. Perhaps some input is not available and must be replaced, a collapse in demand for some products alters production requirements, certain producers might have forgotten to enter critical inputs into the flow matrix, or some machinery might break down earlier than expected. All this would require manual adjustment. Compared to a market economy, this would mean not simply buying a replacement on the market, but inserting it into these production functions. This would constitute a considerable workload of administrative labour that is simply unnecessary in a market system, and it would come on top of other extra work within a socialist economy, such as council meetings, as it cannot be done at a distance by managers in the planning department but requires information from the 'man on the spot' (Hayek, 1948: 83) involved directly in the production process. With this in mind, practical experiments aiming to prove the viability of such a framework should focus less on technical simulations than on the formulation of production functions for a variety of production processes, to gain information on their accuracy, their elasticity and the time necessary to describe them.
Dynamics, uncertainty and social domination
Just as non-linearities must be approximated with linear functions, a static optimum would have to be constantly recalculated to approximate the dynamism of the economy. What would be the interval for such adjustments? Would it be necessary to go down to hours, minutes, or even seconds? Here lies a fundamental contradiction between the need to manually adjust production functions, which takes place on a human time horizon, and the need for permanent plan revisions calculated at algorithmic velocity. Humans cannot keep up with shorter time intervals, but longer ones will introduce erroneous signals and dreadful lag into the economy. Also in question is whether a mere contraction of adjustment intervals would truly suffice to deal with the probabilistic nature of real economies and the uncertainty that unknowns infuse into the system. Economic motion, it appears, is simulated here by stringing together momentary and shaky snapshots that would to a degree always remain speculative. 22
This fundamental challenge of integrating the dimension of time back into the calculations is another serious problem such an optimal planning framework would have to face, and it leads to several consequential problems concerning the temporality of actual allocation. 23 As part of the problems that arise when relating the static map back to the dynamic territory, the application of the model – the enforcing act of imposing it on the real world – could therefore be understood as the third aspect of the knowledge problem. This becomes evident when thinking about the deviation between model and material reality. What the optimisation solvers are calculating is a one-year plan that will never be truly optimal due to inaccurate economic information. But what might it mean for allocative decisions on the ground when, in the face of the dynamism of the economy, a new one-year plan has to be calculated every other moment? If there were some economic rhythm in the form of a year-long planning period, then what has already been allocated during the elapsed time would have to be taken into account, possibly by subtracting it from the newly calculated yearly allocation.
Although we might no longer run into aggregation errors, we still have to deal with what one might call the ‘temporal breakdown of allocations’. For the calculation of an optimal allocation it would be necessary to determine at what point in time certain quantities of specific goods will be available for distribution. This is also linked to the essential question of the order in which the different producers benefiting from the yearly allocation will be supplied. In other words: Who gets their resources and components first? This seems to scream for some coordinative entity that will have to make such allocative decisions, weighing the risk of shutdowns between different industries and enterprises against each other, if this information cannot be integrated into the model. While most optimal planners like Cockshott or Dapprich simply disregard this critical issue of time, the only solutions that have been brought up so far to mitigate the problem are to further differentiate products ‘with respect to the time period during which they are produced’ (Kantorovich, 1976: 26), which significantly increases the number of variables, and to reduce the planning horizon from a year to shorter time periods, as advanced by Dave Zachariah and Loke Hagberg (2023) in their proposal for receding horizon planning – which was in fact also the reality in Soviet planning, where the quarterly plans, not the yearly plan, orchestrated economic activity on the ground. However, there remains a fundamental tension between short-term plan fulfilment and long-term investment planning: Do we optimise for plan fulfilment in the current time period or at some point in the future? If it is the former, how can it be assured that projects receive their required resources in the present if they only begin to contribute towards plan fulfilment years down the road? If it is the latter, there exists the serious risk of overinvestment and shortages in the present.
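A rough sketch of this rolling logic, with an intentionally crude stand-in for the planner itself, might look as follows; nothing here reproduces Zachariah and Hagberg's actual algorithm.

# Crude sketch of receding horizon planning: re-solve a plan for the
# remaining target each period, execute only the current step, and
# roll forward. solve_plan is a hypothetical stand-in for an LP solve
# of the kind sketched earlier.
import numpy as np

def solve_plan(remaining_target, periods_left):
    """Toy planner: spread the remaining target evenly over the horizon."""
    return remaining_target / periods_left    # output scheduled for this period

yearly_target = np.array([1200.0, 900.0])
delivered = np.zeros(2)
PERIODS = 4                                   # e.g. quarterly plans

rng = np.random.default_rng(0)
for period in range(PERIODS):
    plan = solve_plan(yearly_target - delivered, PERIODS - period)
    actual = plan * rng.uniform(0.9, 1.05, size=2)   # fulfilment is never exact
    delivered += actual                       # shortfalls roll into the next solve

print(yearly_target - delivered)              # residual error at year's end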
Any measures to deal with the dynamism of the economy will likely reintroduce the Soviet problem of supply uncertainty for enterprises, which will inevitably emerge with repeated plan revisions and which Kornai (1994) considered one of the most harmful aspects of the Soviet system. Already granted allocations could be withdrawn again at any point when it appears more optimal to the analytic engine. There would not be any contractual commitments, only algorithmically determined truths, which would likely be in constant flux, as the choice here is between rigidity and uncertainty. On the one hand, one aspires to such dynamic adjustments in order to react to inevitable planning errors and unforeseeable changes in the economic system – a planned economy is damned to do so in the light of epistemic limitations and the contingency of the real world. On the other hand, it is very likely that in the absence of contractual commitment the same tendency observed in the Soviet Union will re-emerge, whereby enterprises hoarded critical components out of fear of shortages, and through their actions only worsened the situation for the whole economy by intensifying shortages or even creating them in the first place.
Here, the interests of individual producers stand in contradiction with an optimal allocation of scarce resources within the wider economy, because one cannot simply assume that all agents are rational actors who will sacrifice their self-interest in favour of the common good. As in the Soviet system, producers might not report economic information truthfully or might try to game the system. So it is not just the epistemic capacity to articulate these production functions that might be an obstacle to realising such an optimal planning framework; it is also the will of all participants to do so in good faith. For instance, by dramatically overstating the risk of a production stop or by inflating their input requirements, producers could increase the chance of receiving the required amount in time, to the disadvantage of others, which, if not inhibited, could result in a highly dysfunctional spiral of inflating costs or distorted exigency. Through the comparison of different producers in the same industry, an optimisation solver could route around inefficient production units, but one would have to be cautious here not to reintroduce some form of ‘mute compulsion’ (Mau, 2023) with such measures.
Another critical problem is that one has to take into account outright malicious behaviour by agents wishing to sabotage the planning procedure; whether this comes from within enterprises or from some foreign actor wishing to see the system fail is of lesser importance. The more pressing issue, however, is the fragility of such a model. The system has a single point of failure: if only a single constraint is violated, the whole plan has to be recalculated. This does not even demand bad intentions and data poisoning attacks, but could result from banal errors in data input, such as when a digit is entered twice by mistake (see the sketch following this paragraph). All of this demands some sort of superintendence regarding data entry, which becomes even more problematic with small-scale economic activity. Well aware of the struggle to integrate this into such a framework, most optimal planners today align with the Marxist orthodoxy of merging small businesses into larger entities – turning restaurants into canteens, organising trades within combines, merging factories into conglomerates, and so on – to socialise production in the hope of reducing administrative redundancies and ensuring some control over their actions. But there also exist diseconomies of scale; big is not always beautiful. Spontaneous and small-scale economic activity fulfil critical functions in capitalist societies today. Will a socialist economy simply supplant them? If not, how can they be integrated, and their plans monitored? 24
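The following toy example, reusing the invented matrices from above, shows how one slipped digit – 33.0 entered instead of 0.33 – leaves the solver without any feasible plan at all:

# One mistyped technical coefficient makes the whole toy plan infeasible.
import numpy as np
from scipy.optimize import linprog

A = np.array([[0.1, 0.2],
              [33.0, 0.1]])       # should read 0.33: a single data-entry slip
d = np.array([300.0, 200.0])

res = linprog(c=np.ones(2),
              A_ub=-(np.eye(2) - A),
              b_ub=-d,
              bounds=[(0, None)] * 2)
print(res.status)                 # 2 = infeasible: no plan exists as entered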
This brings with it an attendant group of questions regarding how much autonomy production units would truly have – not only in setting their output targets, choosing their inputs and determining what to produce in the first place, but also in how a production unit would be recognised as such at all. How are production units formed? Through which process are they accredited? Does any workplace have immediate access to all goods, or do there exist bureaucratic gates, wherein some superior coordinative entity must first unlock an item before it can be inserted into a production function? Can enterprises directly insert their requirements, or does a superior have to confirm their actions? Regardless of his opposition to linear programming schemes, Stafford Beer’s understanding of a ‘viable system’ (Beer, 1981) would here suggest some sort of reconciliation between decentralisation and centralisation. Although the Soviet experience has shown that a certain degree of autonomy in the periphery is desirable, it would still have to be carefully balanced in such an algorithmic planning framework. Without any control instance in the form of democratically legitimised coordination boards for sectors, industries or regions, the framework would simply be too fragile, and it would be far too easy to exploit the optimisation algorithm to secure an unjust advantage in the allocation, or to outright sabotage the planning procedure by adding outlandish technical coefficients for which the algorithm would not find a feasible solution. Of course, a technical outlier monitoring system for spotting these errors could be introduced, but there would still be the need for human oversight in the form of some kind of institution, composed of bureaucrats or delegates, however one might call them.
Could coordination boards of delegates fulfil this function in a democratic way? Or are we not rather confronted with the threat of bureaucratisation and the re-emergence of a mid-level coordinator class in charge of the control over enterprises, resulting in a cat-and-mouse game of surveillance that is the breeding ground for nepotism and other forms of misuse of power? Even if these coordination boards were democratically elected and could be recalled by majority decision, their officials would be in a position in which well-established players held much control over industries and could keep out others who might wish to disrupt them by introducing creative destruction, or they could advocate inappropriate demands in support of their industry or region to the disadvantage of others. Creating another control instance to appeal against the decisions of a potentially corrupted coordination board might certainly be one possible solution, but here we once more encounter temporal issues. In keeping the economy running, there would be no time for long deliberative and consensus-oriented discussions; economic urgency would demand swift decisions from these boards and other control instances if one wished to avoid the economy grinding to a standstill. Likely, a fear of horizontalist paralysis would gradually erode any radical democratic structures of these institutions and might turn the whole apparatus into a bureaucratic machine, not differing much from the papacy of Soviet command planning.
A more elegant solution to this problem would be to come up with an incentive system that rewards producers for submitting truthful information, thereby aligning particular interests with the common good. The Soviet system failed to articulate something like this. As stated above, the target indicators were in many cases counterproductive, contradictory and the cause of many of the inefficiencies in the economy. So far, no formula has been discovered that cures these ills entirely. However, interesting research into rethinking such indicators has been undertaken more recently by David Laibman, who developed the outline of a possible performance measure for a socialist economy (Laibman, 2015: 317–323), which includes a ‘comparison rate’ based on industry norms or enterprise history, alongside qualitative and social performance measures, and which would incentivise worker groups to use resources more efficiently. Producers who provided inflated information about their input requirements would see their performance measure reduced. Although this is certainly an incentive to provide adequate information, it is doubtful whether it truly dispels the need for central control over data entry or would completely eliminate the temptation to overstate the urgency of one’s own productive demands. False information, provided willingly or not, remains the Achilles’ heel of any such economic system. This problem, however, is not exclusive to optimal planning realised through algorithms; it afflicts other planning frameworks based on deliberative negotiation as well. Only when enterprises are truly independent and allocation is achieved through rationing by price does one avoid this problem, but then the invisible hand of the market is back in the driver’s seat with all its devastating side effects, and one can no longer speak of a rational administration of the economy in the light of ecological collapse, as the profitability of enterprises alone will determine the allocation of scarce resources.
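The incentive logic can be illustrated with a deliberately simplified score – to be clear, an invented reading of the idea, not Laibman's actual comparison rate, which is embedded in a richer set of measures:

# Invented illustration of the incentive logic: score producers
# against an industry norm so that padding one's claimed input
# requirements lowers the score. Not Laibman's actual formula.
def comparison_rate(norm_inputs_per_unit, claimed_inputs_per_unit):
    """A score above 1.0 means more efficient than the industry norm."""
    return norm_inputs_per_unit / claimed_inputs_per_unit

print(comparison_rate(10.0, 10.0))   # 1.00: in line with the norm
print(comparison_rate(10.0, 14.0))   # 0.71: inflated requirements penalised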
Towards a hierarchy of ends?
Next to the challenge of composing and administering a global system of production functions, another serious problem emerges when interfacing the plan target with the optimisation process itself. In all optimal planning models, the final demand for consumer goods – which is to say the plan target of collective needs – is expressed formally as a target vector. Those unfamiliar with the mathematical jargon can think of the target vector as a list that represents the quantified needs of society, or the anticipated demand for all consumer goods. This list encompasses all the private and public goods such a society wishes to produce in a specific planning period. As already outlined above, the procedure of optimisation then means nothing other than the best fulfilment of this plan target within given constraints – in other words, identifying the set of individual production plans that deviates the least from this ideal target on the side of consumption. So before the optimisation process can begin, society has to figure out what it desires for the coming planning period. But how might this list be conceived? In most cases it might be expedient to use the statistical state of the art for the quantitative determination of future needs rather than asking consumers to lay down their anticipated needs in the form of wish lists; in other cases there will be no way around customer orders or deliberative planning procedures, in the form of participatory budgeting on the municipal level, for instance, or a referendum with regard to the energy transition and the investments following from it. This challenge of needs registration has already been addressed elsewhere by the author by conceptualising it as a forecasting problem (Grünberg, 2023). 25
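In code, such a list is nothing more than an ordered set of quantities per product, as in this invented miniature fragment:

# An invented miniature target vector: quantified needs per product
# for one planning period, in a fixed product order.
import numpy as np

products = ['bread (t)', 'housing units', 'bicycles', 'insulin (packs)']
d = np.array([40_000.0, 12_000.0, 250_000.0, 3_000_000.0])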
However, within an optimal planning framework there exists the necessity of ranking the gravity of societal needs in a way that can somehow be integrated into the algorithmic optimisation process itself. 26 This problem arises under the premise that society will likely desire more than it is able to produce, particularly in view of the looming ecological catastrophe on the horizon. Any concrete used for the foundations of wind turbines might be missing for the construction of new schools, offices or factories. Until unlearned, human desire will outrun our productive apparatus as well as the carrying capacity of our planet, and a socialist economy too would still be required to negotiate over the use of scarce resources between competing ends.
That needs will have to be ranked is another critical point that is not sufficiently addressed in an optimal planning framework. There, in classical cybernetic manner, needs are treated as black boxes by the optimisation solver. But without additional information, all needs are situated on the same plane of importance in the gaze of the algorithm. All that matters to the algorithm is how the addition or exclusion of particular use-values, measured in output added towards the plan target, affects the objective function. It knows only quantities and is, on its own, completely blind towards qualities. Yet without such a hierarchy of needs, optimisation solvers could generate plans that are mathematically optimal but still run counter to actual consumer needs and the productive requirements of industries in the real world.
Most of the models have an integrated mechanism that ensures some degree of proportionality between different products. In Cockshott’s model, the optimisation process is determined by the objective function of his harmony algorithm. Similar to the reward functions in machine learning, this function 27 aims at maximising what he calls ‘harmony’, the equilibrium state between supply and demand, through the simple means of punishing underproduction more than overproduction. In the process of maximising the mean harmony of the system, this ensures that everything deviates from the plan target in roughly the same proportions. Other proposals, such as the one by Dapprich (2022: 9–10), are even more rigid in keeping the proportions of final products intact through target constraints functioning as an assortment plan. While such proportionality impedes a total loss of control over the calculation of an optimal plan solution, it should still be commonsensical that some needs are more important than others. One might wish to provide enough food and housing for everyone before producing gaming consoles and perfume, despite the fact that a specific quantity of the latter group of products is requested in the target vector. Or the underproduction of a critical industrial component might have such devastating consequences for an industry that production for an entire supply chain comes to a standstill for months, which should be avoided at all costs. While sticking to rigid proportions might result in an economy that crudely works somehow, giving up on them would lead to even greater misallocations, where some targets are met fully but other goods might not be produced at all, without planners having any control over this process.
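To fix ideas, here is one possible shape such an asymmetric objective could take – an illustrative stand-in consistent with the verbal description, not necessarily the exact function Cockshott publishes:

# Illustrative asymmetric 'harmony' shape: shortfalls are punished
# linearly, surpluses rewarded with sharply diminishing returns, so
# maximising the mean spreads deviations roughly proportionally.
import numpy as np

def harmony(output, target):
    r = output / target
    return np.where(r < 1.0,
                    r - 1.0,       # underproduction: steep linear penalty
                    np.log(r))     # overproduction: flattening reward

outputs = np.array([ 90.0, 110.0])
targets = np.array([100.0, 100.0])
print(harmony(outputs, targets).mean())   # the quantity the solver maximises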
There certainly exist technical means of influencing this. In the case of Cockshott’s model, one would regain some control by introducing weights to incentivise the reward function to produce critical goods before others. Another option would be to integrate constraints in absolute terms to hardwire the plan fulfilment of crucial goods like food (a minimal sketch of both levers follows the quotation below). The problem, however, is not necessarily the technical implementation but the political process of even establishing such a hierarchy of needs. It is the identical problem of establishing allocative priorities in the absence of profit, which we already encountered in the previous section when speaking of temporal breakdowns. What are basic needs? Which needs are more important? And who will decide which components are critical for an industry and should be prioritised? Not only really existing socialism struggled with this; experiences from war planning in the United States during the Second World War have likewise shown how dysfunctional such a system of prioritisation can become under a high degree of decentralisation:

The Supply Priorities and Allocations Board recognised early that efficiency lay in establishing an allocation versus spending time on priorities. Trying to establish priorities corrupted the system, because everybody wanted everything now and certainly ahead of everyone else. Because too many systems received A-1 ratings, the Office of Production Management established a higher rating, A-1-A. Then too many systems got that rating, so a new priority rating system was established that rated materiel from A-1-A through A-1-J, and when that system became clogged, an AA band had to be superimposed. Then the system broke down. (Gropman, 1996: 50)
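In the toy linear program used earlier, the two levers just mentioned might be sketched as follows (all values invented):

# Invented sketch of both levers: weights in the objective favouring
# a critical good (food), plus a hard lower bound so its output can
# never be traded away, while the console target may be relaxed.
import numpy as np
from scipy.optimize import linprog

A = np.array([[0.1, 0.2],
              [0.3, 0.1]])
K = np.array([[2.0, 0.0],
              [0.5, 1.0]])
stock = np.array([1000.0, 800.0])
d = np.array([300.0, 100.0])         # food target in full, console target relaxed
weights = np.array([10.0, 1.0])      # food weighted as far more urgent

res = linprog(c=-weights,            # maximise weighted gross output
              A_ub=np.vstack([-(np.eye(2) - A), K]),
              b_ub=np.concatenate([-d, stock]),
              bounds=[(400.0, None), # hardwired floor on gross food output
                      (0.0, None)])
print(res.status, res.x)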
This is a universal problem inherent to all planning frameworks: the question of who will make the decision to set such priorities, if it is neither the market nor the immediate producers. For producer goods, it would likely have to be some sort of aforementioned coordination board endowed with this authority, turning immediate producers into supplicants again, thus recreating their dependence on such institutions and, with it, an incentive to exaggerate the gravity of shortages by intensifying the circulation of distorted information. As a truly horizontal and consensus-oriented negotiation process would regularly result in decision paralysis and a stuttering economy, the question arises as to whether such coordination organs can be designed truly democratically, so that they make unbiased decisions promoting the common good, whatever that might be – and, if so, whether over time they will not become corrupted and regress into some bureaucratic abomination.
For consumer goods, it might be even more complicated to establish such a hierarchy of ends. If it is not bureaucrats but the consumers themselves, all that could be done ex ante, before production takes place, is to rank needs in aggregate terms – for instance, by deciding to prioritise universal basic services or certain product groups such as housing, food, healthcare, education and transportation over other material wants. This is because consumers cannot simply articulate a detailed pyramid of material needs ex ante by consciously ranking millions of different products in the form of wish lists, as Daniel E. Saros (2014) believes – no such socialist homo economicus exists. Socialist consumers, too, have epistemic limitations. But with coarse categories one again runs into a number of problems. While it might be possible to reach an intersubjective consensus at the bottom, the further one moves upward from basic needs toward what one might consider wants, the more subjective preferences begin to diverge, resulting in a tyranny of the majority that would not be very nuanced, especially regarding marginal utility. Even when a democratic process determines that, for instance, the category ‘clothes’ is deemed more important than ‘toys’, in reality the best consumer satisfaction might still be achieved with a slight underproduction of clothes and at least some toys, rather than with full plan fulfilment of clothes without producing a single toy. Further, one would be confronted with the delicate business of differentiating needs from wants within certain categories. How artisanal can the production of a cheese be before it becomes a luxury item?
In contrast to the naïve hope of establishing a hierarchy of needs ex ante and in disaggregated terms through participatory means, it might be more promising to quantitatively approximate demand for final consumer goods at different price levels through statistical means. So beyond turning the target vector into a ranked list by determining weights for categories, in Cockshott’s model, for instance, further calibration regarding marginal utility could be achieved by formulating individual harmony functions based on statistical demand estimates at different price levels, so that the effect underproduction has on the mean harmony of the system is determined separately for each good. But can this be modelled in any meaningful way, and are prices truly a good measure of the intensity of human desire? In the face of the complexity of the task, it seems the best that could be done in terms of establishing a hierarchy of ends ex ante is the ranking of coarse categories, the guesswork of such machinic crystal balls, or some combination of the two. Where Hayek claimed that in economic planning a hierarchy of ends could only be deduced by central planners from formal principles, such an inductive approach would allow for a far more democratic and data-driven interpretation.
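A speculative sketch of this calibration – invented numbers, and only one of many conceivable formulations – could derive a per-good penalty from estimated price elasticities of demand, so that shortfalls of inelastic necessities weigh far more heavily than shortfalls of elastic wants:

# Speculative: calibrate the underproduction penalty per good from an
# estimated price elasticity of demand. Inelastic goods (necessities)
# are punished harder for shortfalls. All figures invented.
import numpy as np

def per_good_harmony(output, target, elasticity):
    r = output / target
    steepness = 1.0 / np.abs(elasticity)     # inelastic -> steep penalty
    return np.where(r < 1.0, steepness * (r - 1.0), np.log(r))

targets      = np.array([100.0, 100.0])
outputs      = np.array([ 90.0,  90.0])
elasticities = np.array([ -0.2,  -1.5])      # insulin-like vs perfume-like
print(per_good_harmony(outputs, targets, elasticities))  # [-0.5, -0.067]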
But if one thing is certain, it is that the future will likely turn out differently than expected, and like any other economic system, a cybernetic planning framework would have to rely on ex post corrections to constantly improve such a hierarchy of needs through the feedback of consumer markets. It might even be more adaptive in processing such feedback loops than market economies or analogue planning frameworks. Yet basing the calibration of such a hierarchy of needs entirely on passive consumption patterns would create a system that does not function fundamentally differently from the status quo. If it remains a mere autonomous process that increases the production of combustion engines due to a statistically determined rise in demand, it will be just as unconscious as the capitalist market. There would need to be another layer in the planning process that challenges such automatisms of establishing algorithmic truths that threaten our livelihood – an instance of conscious contestation, one could say – endowed with the power to overrule a hierarchy of ends established through algorithms and consumer markets, by increasing the price of harmful products to disincentivise their consumption or by banning them outright.
In view of this, it should be clear that, regardless of their democratic status, administrative institutions will be required within such an algorithmic framework to establish and tweak a hierarchy of ends, which ultimately remains more a political than a technical issue. Irrespective of the challenges outlined in this paper – and contrary to critics like the philosopher John O’Neill, who stands in the tradition of ecological economics rooted in the writings of Otto Neurath and Karl William Kapp and repeatedly argues that optimisation is a dead end because it is impossible to optimise two variables at once – such an optimal planning framework would still be able to address the metabolic rift. Despite the necessity of staying within metabolic limits, the satisfaction of human needs remains the true end of political economy. That means the reduction of material throughput is never an end in itself but merely constitutes an exigent necessity in view of ecological collapse. And as long as production takes place within sustainable limits, questions of minimising the consumption of one resource over another become secondary. Sustainability could be met within an optimal planning framework by defining an ecological ceiling through a set of resource constraints, similar to how Kate Raworth (2017) conceptualises it with her ‘doughnut model’ (a minimal sketch of such a ceiling follows this paragraph). 28 Gradual adjustments to further reduce the material throughput of certain resources could always be realised manually by tightening these constraints, thus consciously narrowing the metabolic boundaries under which an optimisation process takes place. What remains unresolved is the determination of a political framework under which this procedure could take place, both in the decision process and, even more so, in ensuring compliance. It would be torn between circumventing the collapse of the global ecosystem, which seems to call for authoritative measures, and democratic principles: What if, just like in our capitalist present, a democratic majority is not only unwilling to alter unconscious consumption habits but also consciously votes in favour of increasing the production of combustion engines or of slackening the carbon dioxide constraint? What if the greater part of society chooses global catastrophe? Will it then be the enlightened vanguard party of ecological Leninists who will exorcise the unsustainable irrationality held by the majority?
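How such a ceiling might enter the toy formulation used throughout is easy to sketch: a set of resource constraints whose right-hand side is the budget that planners tighten over time. The intensities and budgets below are invented.

# Invented sketch of an ecological ceiling: resource-use constraints
# R x <= b_eco added to the toy plan; tightening b_eco is the manual
# narrowing of metabolic boundaries described above.
import numpy as np
from scipy.optimize import linprog

A = np.array([[0.1, 0.2],
              [0.3, 0.1]])
d = np.array([300.0, 200.0])
R = np.array([[1.5, 4.0],          # tonnes CO2 per unit of gross output
              [0.2, 0.9]])         # freshwater per unit of gross output
b_eco = np.array([3000.0, 700.0])  # total CO2 and water budgets (the ceiling)

res = linprog(c=-np.ones(2),
              A_ub=np.vstack([-(np.eye(2) - A), R]),
              b_ub=np.concatenate([-d, b_eco]),
              bounds=[(0, None)] * 2)
print(res.status, res.x)   # an infeasible status would mean the target
                           # vector itself exceeds the planetary budget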
While it remains entirely infeasible, from an economic standpoint concerned with questions of allocative efficiency, to transcend the global market order without the use of some form of algorithmic mediation – central planners would either be overwhelmed by the economic complexity or, for those preferring a radically horizontal variant, planning would simply be hamstrung by an indissoluble negotiation process – optimal planners have to recognise that it will never be possible to truly automate away the centre. In embracing an algorithmic road to socialism, not only will technical personnel be necessary to write and maintain the code, but supervision will also be required in carefully feeding in the economic data and in setting the weights and constraints that fine-tune the model. The menace of optimal planning is not its incapacity to realise the necessary economic calculations, and maybe it is not even the lack of a fitting incentive structure, the difficulty of reconciling static efficiency with temporal dynamism, or the challenge of formalising the production knowledge that has to be fed into the machine, but the threat of the return of a red bureaucracy. Even if the technical and epistemic concerns could be dispelled, an economically feasible planned economy would come at a political cost. Although, compared to the Soviet Union, workers would benefit from far greater autonomy within such a framework, it is certain that some form of mid-level planning commissions would have to be implemented to make this machine work. A robust civil society and a return to the republican principles Marx himself held might keep such an administrative apparatus in check through the electability and revocability of these functionaries, but it is equally likely that hierarchies will slowly crystallise, resulting in a system where the critical decisions are once again made by those sitting closer to the levers.
Declaration of conflicting interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding
The author(s) received no financial support for the research, authorship, and/or publication of this article.
