Abstract
Centuries before the advent of computers, the German philosopher and mathematician Gottfried Wilhelm von Leibniz (1646–1716) sketched out a “computational ontology” in which information operates as an organic principle that imposes order on the world, molding and driving it, in such a way that the world gains a form of consciousness, and thought produces its being at the same time as it thinks itself.
Centuries before the advent of computers, the German philosopher and mathematician Gottfried Wilhelm von Leibniz (1646–1716) sketched out a rather unsettling new picture in which information operates as an organic principle that imposes order on the world, molding and driving it, in such a way that the world gains a form of consciousness, and thought produces its being at the same time as it thinks itself. In this “computational ontology” the particular and the whole, unity and diversity, permanence and change were reconciled and could not be conceived of independently of one another.
The starting point for much of Leibniz’s reflection on computing lay in his attempts to solve one of the most enduring philosophical problems of his time, namely that of the composition of the infinite continuum that made up our contingent world. Leibniz would devote much of his life to refining methodologies, especially mathematical innovations such as his differential calculus, an early algorithmic procedure based on the summation of infinitesimal differences, to approximate movement. Anticipating modern-day computing, he also devised a binary code that could represent all possible numbers and encode any proposition—and would more broadly symbolize all of creation (the 1 standing for God and 0 for nothingness) (Eco, 1995, pp. 284–287).
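Leibniz’s dyadic system can be restated in modern terms. The sketch below (a present-day illustration, not Leibniz’s own notation or procedure) converts natural numbers into the binary representation he showed could express every number using only the digits 1 and 0:

```python
def to_binary(n: int) -> str:
    """Convert a natural number to its binary (dyadic) representation."""
    if n == 0:
        return "0"
    digits = []
    while n > 0:
        digits.append(str(n % 2))  # remainder gives the lowest-order binary digit
        n //= 2                    # shift right by one binary place
    return "".join(reversed(digits))

# The first few natural numbers in Leibniz's two-symbol notation:
for n in range(1, 9):
    print(n, "->", to_binary(n))
```

For Leibniz, the point was less practical than metaphysical: that two symbols suffice to generate all numbers mirrored, in his eyes, creation out of God (1) and nothingness (0).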
Leibniz, however, did not restrict this endeavor to the realm of mathematics, but conceived it as part and parcel of a broader philosophical project that he had been contemplating since his 1666 On the Combinatorial Art. He envisaged this so-called “combinatorial art” as a generally applicable epistemological formalism that would render knowledge production mechanical and applicable to all disciplines. In particular, he set out to create his own artificial universal language by drawing on the work of the Catalan philosopher and poet Raymond Lull (1232–1316), the English clergyman and natural philosopher John Wilkins (1614–1672), and the Scottish linguist George Dalgarno (1616–1687). In addition to serving as an instrument of communication, this formal language would serve philosophical purposes and help perfect the human mind by reducing all reasoning to a kind of calculation. Arguments and ideas would be broken down into primitive concepts representing an alphabet of human thought, which would then be combined according to logical principles (Leibniz, 1923–, VI, 4, 922). 1 In this way, simple algorithmic procedures would help derive mathematical and nonmathematical truths alike effortlessly and irresistibly, even when dealing with complex metaphysical issues (Leibniz, 1923–, II, 1, 241–2: to Oldenburg, 28 December 1675). This truly blind calculus, which postponed indefinitely any appeal to meaning, allowed the thought process to be mechanized, decades before the mathematicians George Boole and Gottlob Frege attempted to formalize symbolic logic (Dascal, 1978, p. 222; Davis, 2000; Eco, 1995, p. 286).
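The mechanical character of the combinatorial art can be loosely illustrated in modern terms: given a small “alphabet” of primitive concepts, complex notions are generated exhaustively by rule rather than by insight. The primitives below are invented for illustration; Leibniz never fixed a definitive list:

```python
from itertools import combinations

# Hypothetical 'alphabet of human thought' -- these four primitives are
# placeholders for illustration, not Leibniz's actual candidates.
primitives = ["being", "extension", "thought", "unity"]

# Mechanically generate every two-element complex concept,
# with no appeal to the meaning of the terms being combined.
complex_concepts = [" + ".join(pair) for pair in combinations(primitives, 2)]

print(len(complex_concepts))  # C(4, 2) = 6 combinations
print(complex_concepts[0])    # "being + extension"
```

The “blindness” Leibniz prized is visible here: the procedure enumerates every combination by form alone, which is precisely what he hoped would make disputes resolvable by calculation.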
Such formalism would provide the “solid basis” on which to build a new epistemological edifice, one that would establish logical connections across the full range of human thought and hence offer a glimpse of the world’s rational understructure (Leibniz, 1903, p. 401). Not only would such a calculus help settle disputes by allowing the straightforward resolution of all conflicts of knowledge, but it would also record and map existing human knowledge and act as an instrument of discovery, advancing knowledge and scientific progress towards a kind of global repository, Leibniz’s so-called “general science” (Antognazza, 2011, p. 92). 2
According to Leibniz, such fantasies would be materialized with the advent of actual physical calculating machines, and he remained a lifelong enthusiastic evangelist for the potential of machines—and all cognitive tools—to assist humankind. He devised a stepped reckoner, a mechanical calculator with a gear mechanism, that could multiply and divide, and more generally he upheld the benefits that would accrue from the use of computational devices: “And now that we give final praise to the machine we may say that it will be desirable to all who are engaged in computations which, it is well known, are the managers of financial affairs, the administrators of others’ estates, merchants, surveyors, geographers, navigators, astronomers… Also, the astronomers surely will not have to continue to exercise the patience which is required for computation… For it is unworthy of excellent men to lose hours like slaves in the labor of calculation which could safely be relegated to anyone else if the machine were used” (Leibniz, 1929, pp. 173–178).
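The stepped reckoner mechanized multiplication as repeated, place-shifted addition of the multiplicand, one digit of the multiplier at a time. The sketch below illustrates that arithmetic principle only; it is a modern restatement, not a description of the machine’s actual gearwork:

```python
def shift_and_add_multiply(multiplicand: int, multiplier: int) -> int:
    """Multiply two natural numbers by repeated addition of the shifted
    multiplicand -- the arithmetic principle behind the stepped reckoner."""
    total = 0
    shift = 0
    while multiplier > 0:
        digit = multiplier % 10           # current decimal digit of the multiplier
        for _ in range(digit):            # add the shifted multiplicand 'digit' times
            total += multiplicand * 10 ** shift
        multiplier //= 10                 # move on to the next digit
        shift += 1                        # shift one decimal place left
    return total

print(shift_and_add_multiply(27, 43))  # 1161, i.e. 27 * 43
```

Reducing multiplication to iterated addition is exactly the kind of “labor of calculation” Leibniz thought unworthy of excellent men and fit to be relegated to a machine.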
While humans would eventually delegate repetitive and mind-numbing tasks to machines and various computing devices, Leibniz never imagined that we would be supplanted by these tools. 3 He envisaged mechanization and automation as starting points, never as ends in themselves: tools, as well as various other fictions such as infinitesimals, were not designed to displace human intellect or abolish human effort, perseverance, or creativity, but to empower man and help him unleash his inner potential. 4
While dreams of the vast creative and emancipatory possibilities inherent in formalizing reasoning endure, Leibniz already discerned their limits, including their inability to express truths of fact in the face of the infinite complexity of the world (Eco, 1995, p. 287), as well as the uniqueness of the human mind. What Leibniz had grasped even then was that computational devices can operate whilst lacking any true understanding of the world: three hundred years ago, in one of his more prescient comments, Leibniz exposed the essential emptiness underlying artificial computation in his famous “Mill” thought experiment, which anticipates John Searle’s Chinese room. In this thought experiment, which involves entering a machine enlarged to the size of a mill, Leibniz contrasts a machine’s seemingly perceptive and conscious overt behavior with the actual void of its internal, purely physical mechanism: “That being so, we should, on examining its interior, find only parts which work one upon another, and never anything by which to explain a perception. Thus, it is in a simple substance, and not in a compound or in a machine that perception must be sought for” (Leibniz, 1875–90, VI, 609). 5
Human minds for their part were divine mirrors differing from God’s only in degree “as a drop of water differs from the ocean, or rather as the finite from the infinite” (GP VI, 84). 6 In a letter of 25 July 1707 to the theologian Michael Gottlieb Hansch on Platonic philosophy, Leibniz had exalted pure intellection as the highest form of cognition—above demonstration itself. For him, pure mechanization should not constitute the end-all of man’s intellection but blossom into intuitive knowledge, an insight later echoed by Alan Turing when he upheld the irreplaceability of human intuition (Leibniz, 1960, 593; Turing, 1939, pp. 161–228). 7 In fact, while much attention has been devoted to his attempts at mechanizing the thought process (and attendant mathematical innovations such as the calculus), Leibniz’s most significant contribution to computation lies perhaps in the particular kind of “computational ontology” he advanced, namely the harmonious convergence of thought and being he discerned in the dynamic unfolding of the world.
Leibniz was writing at a time when the mechanistic model had emerged as the privileged conceptual framework through which to elucidate the world, the state, and even the body. In his posthumously published Treatise on Man, the French philosopher René Descartes (1596–1650) had effectively equated living bodies to complex automata. By introducing his concept of “natural machine” in his Système Nouveau (1695), Leibniz set out to limit the claims of this “integral mechanism” which, whilst useful for epistemological purposes, could not possibly account for the infinite complexity and subtlety of the “artifice” at play (Fichant, 2003, pp. 1–28).
In the Monadology (1714), in which he expounded some of the fundamental principles of his philosophy, Leibniz articulated the distinction between the nature of organism and mere clockwork: natural machines, unlike their artificial counterparts remained “machines in their least parts to infinity,” but also the same machine throughout the various changes it underwent, “being merely transformed through different enfoldings” (GP VI, 543; IV, 482). The organic nature of any living body was thus essentially structural and predicated on an infinite composition, each smaller machine “enfolded in greater machines to infinity” (GP VI, 543). In this manner, each “machine of nature” formed a “kind of automaton,” not merely in its totality as a whole, “but also in its smallest distinguishable parts,” expressing all other machines nested in it and unfolding in concert with them. By contrast, an artificial machine could never overcome its state of aggregation to achieve the nestedness constitutive of its natural counterpart, a true unity which alone endowed it with sensation and perception (GP VI, 599; Leibniz, 1923–, II, 2, 249: to Arnauld, 9 October 1687). Whilst organism conformed with strict mechanism, ultimately it operated as an organizational principle rather than a particular biological entity, and emanated from the metaphysical realm (GP III, 340). 8 No part of matter was so small that it would not admit entelechy (GP II, 376: to Arnauld). The whole organic world was borne out of “the workmanship of God”: each minute machine unfolded harmoniously according to an internal law, a unique telos which itself expressed the “predetermined plan” which governed the whole (GP II, 250: to de Volder, 20 June 1703).
Within this scheme, reason went beyond formal and instrumental logic to embrace the organic, molding and driving it in such a way that the world gained a form of consciousness. Leibniz conceived of the world as an infinitely divisible continuum in which everything was interconnected, as he wrote to the French sceptic philosopher Foucher in 1693: “I am so much in favour of actual infinity that… I hold that it affects it everywhere, for better marking the perfections of its author… Consequently, the least particle ought to be considered as a world full of an infinity of different creatures” (GP I, 416: to Foucher, 16 March 1693). References to actual infinity feature prominently in Leibniz’s work, often poetic or metaphorical: “Each portion of matter may be conceived as a garden full of plants and as a pond full of fishes. But each branch of every plant, each member of every animal, each drop of its liquids is also some such garden or pond” (GP VI, 618). The continuous structure of the world, in fact, harked back to its very act of creation and the “divine mathematics” it had involved (GP VII, 191). Out of the infinity of logically “compossible” combinations of simple concepts and potential “multiverses,” God had freely chosen to bring into existence the “best of all” possible “existential series,” one which actualized the greatest degree of perfection (GP VII, 304).
Far from conflicting with diversity, order constituted its very precondition; the supreme mathematician had crafted a world which simultaneously ensured maximal freedom and diversity whilst precluding any type of arbitrariness. Nothing ever happened without a reason, and each state was simultaneously the product of that which had immediately preceded it and “pregnant with the future” (GP VI, 610). Within this configuration, the cosmos was akin to a sheet of paper or a tunic “in such a way that an infinite number of folds can be produced, some smaller than others, but without the body ever dissolving into points or minima” (Leibniz, 1903, 614–615). The infinite number of particular constituents coalesced into a unified web “in which… so many links clasp one another so firmly that it is impossible… to fix the exact point where one begins or ends” (GP IV, 106–110). A fluid and dynamic reality folded and unfolded indefinitely in a regular “uninterrupted” process of continuous transformation, a dynamic generation in which the minutest of instances “disappeared” into the following state, a “logic of becoming” which the French philosopher and Leibniz scholar Yvon Belaval construed as prefiguring the later Hegelian act of Verschwinden, which entailed the subsistence of permanence within change, of unity in difference (Belaval, 1976, p. 305).
The cybernetician Norbert Wiener famously touted Leibniz as a “patron saint for cybernetics” for promoting the processing of information using artificial machines (Smith, 2022, p. 101; Wiener, 1948, p. 12). Leibniz sketched out a new and rather unsettling ontology, one instantiated through a self-referential and infinitely layered structure which ensured the preservation of sameness within difference. This nestedness was in fact inscribed within the very fabric of the world and the figure of the monad, the atom-like foundational indivisible and indestructible substance which constituted it. Monads were self-contained, self-regulating substances, each monad living “in its own closed universe with a perfect causal chain from the creation or from minus infinity in time to the indefinitely remote future” (Smith, 2022, p. 101; Wiener, 1948, p. 41). Leibniz likened the monad to a “spiritual automat[on]” that unfolded “spontaneously... through all its states” according to its own particular internal law (GP VI, 610). At the same time, whilst self-contained and “windowless,” each monad was firmly embedded within an infinite network in which its actions operated in coordination with all others according to a preestablished harmony set in motion by God: through this “harmonic concordance” each monad expressed “what happen[ed] in every other substance and in the universe as a whole,” each reflecting the universe albeit from its own particular perspective (GP II, 337: to Arnauld; see also Danek, 1990).
By contrast, modern-day algorithmic models such as machine learning systems act as AI “mirrors” that reflect back to us partial and distorted representations and outputs—solutions that tend to recast past patterns as our future and from which we in the present are largely excluded. Whereas tasks were previously typically mechanized only once they had first been understood and become tacit knowledge, algorithmic systems have increasingly been taking over our cognitive as well as critical faculties, whilst fanning the flames of crisis and polarization—contrary to Leibniz’s hope that mechanizing cognition would foster peaceful communication and interaction. By capturing meaning production itself, they can compromise our ability not only to reason but to negotiate together a shared understanding—however chaotic and discordant—through which we can exercise agency, engage in autofabrication, collective action and deliberation, and project ourselves into the future individually and collectively (Vallor, 2024, p. 11). Many critical operations necessary to moral and political life are bypassed altogether—and largely without scrutiny or accountability—in the name of efficiency, rapidity, and “optimization” (Vallor, 2024, pp. 114 and 119). 9
Crucially, machine learning systems assume an increasingly large role in shaping human realities, even as the pretensions of these systems to be the natural extension of human cognition are regularly exposed and discredited—for they lack any understanding or intelligence in themselves. 10 They draw patterns on the basis of statistical correlations—rather than causality, symbolic logic, or any concept of “sufficient reason.” 11 While their outputs often mimic human reasoning, these systems largely engage in experimental fabulation and conjecture to compute the unknowable, often with primarily surveillance or commercial intent. 12
With Generative AI we have crossed a threshold, transitioning from the outsourcing of memory and some data processing to the production of discourse itself through the prediction of next tokens—with often little grounding in factual reality. Generative AI’s attempt to render the world more knowable and manageable—through the imposition of fictions and simulacra of reality—seems to underpin a new kind of twenty-first-century metaphysics, this time largely dictated by governments and corporations (Borowski, 2025). 13 Large Language Models have been portrayed as harbingers of “artificial general intelligence” (AGI), which in turn has been touted as a “miracle” facility that, far exceeding human abilities, will solve difficult problems better than humans themselves ever could; an algorithmically generated future is thereby cast as the only one desirable, or in fact possible. According to this vision, Generative AI promises to realize a long-held fantasy, namely that of a world managed by a carefully choreographed program designed to maximize value, surveillance, and control.
In this sense, Leibniz’s ontology and modern-day digital computation certainly share features: both rely on non-linear, self-reflexive structures that continuously instantiate new formal systems on top of one another. 14 Leibniz, however, envisaged a form of automaticity that achieved perfect autonomy—and the striking marriage of thought and being—of which modern-day AI systems can only dream.
Funding
The author disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: Funder information: Leverhulme Trust/Isaac Newton Trust, ECF-2025-437.
Declaration of Conflicting Interests
The author declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
