Abstract
The study of law as a social process should combine an analysis of structures from a political economy perspective with a sociological focus on the practices of lawyering in mediating social relations and conflicts through the formulation and interpretation of legal texts. This approach is applied here to software, which has become the oxygen of the world economy, powering the digitalisation that has transformed economic activities and social life. The forms this has taken have been moulded by lawyers, battling over intellectual property rights in computer programs, enshrining them in national law and international standards, as well as devising the international tax avoidance strategies that have helped propel the giant digital-tech transnational corporations to global dominance. These contests have taken place through processes of formulation and interpretation of the legal concepts that both reflect and shape social struggles over economic and political power, mediated by law, in contemporary corporate capitalism.
Lawyering Practices in Corporate Capitalism
A socio-legal understanding of law in the economy should combine a perspective from political economy on the historical development of legal principles, structures and concepts, with an actor-centred sociological approach focusing on the contestations that shaped these changes (Miola and Picciotto, 2021). I suggest that this should focus especially on the practices of lawyering: how lawyers do what they do. These practices are too little examined by sociologists, who are concerned with social positions and roles, while for those trained in law its techniques are taken as natural. While critical legal scholars have become adept at dissecting the rhetoric of law, particularly as a cultural practice, they generally neglect both the political economy context and a sociological perspective on lawyers’ roles and practices.
Such an approach is particularly important in considering the role of law in constructing what Pistor has called the ‘code of capital’ (Pistor, 2019). Starting with the emergence of large-scale industry in the late 19th century, a rising class of business lawyers built on the foundational forms of private law governing markets – property, contract and delict – to create the key institutions and concepts that shaped the transformation into large-scale corporate capitalism (Hovenkamp, 1991). The main legal forms they fashioned have been business organisations, particularly the corporation (Roy, 1997), the increasingly complex and abstruse credit instruments mediating accumulation in financialised capitalism, and the intellectual property rights that facilitate the commercialised exploitation of technical innovation and cultural creativity. In parallel, lawyers moulded the key interfaces between corporate capital and states through changing forms of public law, notably the fiscal framework, and the myriad forms of regulatory law, originating early in the 20th century, that came to the fore from the 1980s in neo-liberal post-industrial capitalism (Picciotto, 2011).
Lawyers mediate between the interacting realms of political and economic power, through the formulation, interpretation and application of texts that legitimise and regularise patterns of social and economic relations of power. Lawyers both formulate and interpret legal concepts and principles, and their practices mediate between the political realm, which provides the necessary underpinning for the normative system based on monopolising legitimate coercion, and the real world of social and economic activities. Business lawyers in particular are ubiquitous and can move strategically between different sites where authoritative norms are formulated, such as legislatures, regulatory authorities, and international bodies, and those where they are interpreted and applied, such as enforcement agencies, tribunals and courts. In some fields, especially those involving specialised technical knowledge, the career trajectories of actual individuals move through the ‘revolving doors’ between the private and public spheres (Seabrooke and Tsingou, 2021). More widely, there is always an interdependence between public officials and private professionals, and their close interactions create ‘epistemic communities’ in the many arenas where the purpose and meaning of legal and regulatory regimes are debated and common understandings are shaped. Theorists of regulation have come to understand that regimes develop over time, through recursive or dialectical processes, and that the domination of private over public interests through regulatory capture takes place not only through direct material means (various kinds of corruption), but more insidiously through information asymmetries and cognitive capture (Picciotto, 2017: 689; Rilinger, 2023).
Crucially, the normative frameworks lawyers construct and maintain are flexible and adaptable. Lawyers themselves assert their aim and ability to provide certainty and predictability, particularly for economic transactions, and this is considered to be the law's function both popularly and in positivist social science. In practice, however, lawyers’ work interrogates and manipulates the indeterminacy of legal concepts and texts, the meanings of which they help to mould and stabilise through their practices.
This indeterminacy is inherent, for three main reasons (Picciotto, 2007: 15–17). First, as linguistic philosophy shows, meaning depends on the social context and practices. From a sociological perspective, this means that specialised technical language such as legal discourse is given meaning through the professional practices of ‘cognitive communities’ of specialists. Secondly, legal rules, particularly in liberal legality, are expressed to varying degrees as abstract and general norms, leaving considerable scope for interpreting how they might apply to specific situations. Thirdly, legal rules are normative, so their interpretation is necessarily teleological. To put forward an interpretation of a legal rule is to propose the desirability of one norm rather than another. Although lawyers often opine on the meaning of legal rules as if that were the obvious way to understand them, they are always to some extent advancing a version that is in the interests of their client, or for some desired goal, whether thought of or expressed in mercenary or emancipatory terms.
Thus, contrary to legal formalism's view of law as an internally coherent and logical system, in practice it is fluid and contested, and shaped by the social processes of interpretation. In their discussions of the meaning of legal rules, lawyers often criticise their lack of clarity or ambiguity, which they tend to attribute to poor drafting, but they are in their element in legal grey areas, debating the law's meaning and exploiting its uncertainty. They fight for what Pierre Bourdieu has called ‘le droit de dire le droit’ (the right to state the law), to justify their interpretation as the ‘correct’ one, and thereby sanctify their representation of the world with ‘the perceived objectivity of orthodoxy’ (Bourdieu, 1987: 839).
Hence, the law's role in stabilising normative expectations emerges from the cognitive community which establishes the shared understandings (in Bourdieu's term, the ‘habitus’) that normalise accepted interpretations. The capacity to deploy legal resources gives powerful advantages to ensure domination of this process of normalisation through professional techniques and practices. Hence, power in today's corporate capitalism is buttressed by the ability to mobilise the elite lawyers who dominate professional discourses (Pistor, 2019: 158–192), and to deploy these legal resources strategically in the multiple sites of legal contestation and debate that lie between the public sphere of politics and the state and the private sphere of structuring and managing commercial and corporate transactions. From this perspective lawyers, especially those representing large corporations, act not merely to enable these powerful entities to avoid or evade the intent of legal rules and regulations, but to shape the accepted meanings of the rules (Picciotto, 2007). Thus, the mark of the truly powerful is that they can be law-makers, not law-breakers.
Locating the ideological work of lawyers in this wider context entails seeing their activities as real material practices, as implied by the term ‘constructive ideologists’. By marshalling and wielding cognitive resources, for example, by publishing analyses, participating in professional and policy forums and debates, submitting comments and submissions to consultations and inquiries, drafting both bespoke and standard-form contracts, instigating or defending legal proceedings, and writing legislative proposals and draft legislation, they aim to shape real-world social practices and institutions. This also means that their creativity has real ontological limits: they cannot imagine fantasies into existence, just as architects must ground their designs in sound engineering. Turning their visions into reality depends not just on the cogency of their arguments, but on the material resources wielded in advancing them, which help to dominate the received wisdom, and on their effectiveness in shaping real-world social practices.
This article examines one of the key threads in the complex tapestry of contemporary corporate capitalism, computer software, and focuses on its legal framing to explore how the law has shaped the development of the human intellectual activity of computer programming and integrated it into the latest phase of corporate capitalism. It relies on a distillation from sources that provide extensive empirical detail (e.g., Bessen and Hunt, 2007; Campbell-Kelly, 2003; Con Diaz, 2019), writings by and published interviews with some of the key actors (Báthory-Kitsz, 1980; Edstrom and Eller, 1999; Gates, 1976; Kaplan, 1995; Stallman, 2002; Torvalds and Diamond, 2001; Warren, 1976; Williams, 2002), some participant observation in international tax policy debates using a reflexive methodology (Picciotto, 2021: 636–637), and evidence from a range of legal texts, including contracts and licences, official reports, legislative and treaty provisions, and court decisions in key cases. The focus is mainly on the US, which quickly dominated this technology, and therefore on US law, although the paper will also cover the important process of internationalisation.
The Shaping of Software Through Law
Software has been at the heart of the computer and information technology revolution and has become the oxygen in the lifeblood of the global economy. The social practices involved in producing, disseminating and using software must be understood in the wider historical, political and economic context in which they emerged and developed. It is these practices that gave rise to the contestations that became mediated by legal concepts and debates, which in turn shaped those practices in significant ways.
Computer programming began after the Second World War as a research activity in state-funded institutions, and among enthusiasts. Writing software is a distinctive activity, combining scientific rationality with inventive and even artistic creativity, and has become a new type of social and even political expertise (Ensmenger, 2012). Like many intellectual activities, although it requires solitary concentration, it greatly benefits from the interchange of ideas and techniques, and large-scale programming projects have required coordinated teamwork, especially to design, develop, refine, and extend complex applications continuously over long periods, though this is now being transformed by artificial intelligence.
These characteristics of software have resulted in continuous contention over the legitimacy and scope of proprietary protection, particularly through the private property paradigm of exclusive rights.
Programming and the Rise of the Software Industry
A computer program can be defined as a set of instructions to operate, or to execute tasks on, a general-purpose computer. A program develops in several stages, including initial design, formulation of an algorithm, writing ‘source code’ and then ‘object code’, but the ultimate result is machine code, readable only by a computer. Hence, software and hardware are symbiotic, and the distinction between them is fluid. Instructions can be digitalised, and permanently etched onto an integrated circuit on a semiconductor chip, and such chips are now an integral part of many devices. The microprocessor which is the central processing unit or brain of a computer is also built on a chip, and hence is part of the computer's hardware. However, its stored-memory technology means that it is programmable, allowing software that runs in real-time and can interact with human operators, enabling a computer to perform far more than passive data processing functions.
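The progression from human-readable source code to machine-oriented code can be shown with a minimal sketch (illustrative only: Python's bytecode stands in here for the ‘object code’ a compiled language would produce, and the function is a hypothetical example, not drawn from the cases discussed):

```python
import dis

# Source code: a human-readable set of instructions, the 'high level' form
# that a person can write and 'read'.
def add_tax(price, rate):
    """Return the price with a tax rate applied."""
    return price * (1 + rate)

# The interpreter translates the source into bytecode: an opaque stream of
# numeric opcodes directed at the machine rather than at a human reader,
# analogous to the object/machine-code stages described above.
raw = add_tax.__code__.co_code
print(type(raw), len(raw))  # a bytes object, unintelligible as text

# A disassembler can render the opcodes in mnemonic form for human
# inspection -- but this is no longer the author's expressive text.
dis.dis(add_tax)
```

The same program thus exists simultaneously as expressive text and as instructions embedded in the operation of a machine, the ambivalence at the heart of the copyright and patent debates traced in this article.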
Software not only turns computers into powerful tools and even substantially autonomous machines, but it also connects both machines and users through digitalised hardware. Hence, software applications enabling complex and often extensive human activities and interactions have increasingly transformed economic and social relations around the world over the past half-century. In more fundamental terms, the rise of software involves major changes in the relationship between mental processes and the physical world, and a shift to the immaterial or intangible economy.
Following its scientific research origins, programming received major state funding for military purposes during and after the Second World War. The US academic-military-industrial complex powered the growth of computer technology and its dominance by the US, especially with the shift to digitalisation, and then the emergence of personal computers in the 1970s (O’Mara, 2019). The first large-scale programming project was the SAGE air defence system, devised mainly at MIT for computers built by IBM, for which 700 programmers were mobilised through a project contracted in 1955 to the non-profit RAND Corporation. This fed into the development of the SABRE airline reservation system by IBM, leading also to a boom in software contracting (Campbell-Kelly et al., 2013: 144–159, 178–180).
The development of custom software for large corporations and public bodies remains an important market segment, but the difficulty of recovering the high development costs led to a shift to business packages, the second large market segment. This growth of a market for software as a product, that is, standardised programs for many users, was also stimulated by IBM's decision to ‘unbundle’ the sales of its software and hardware in 1969, due to both the rising cost of software development and antitrust concerns (Bessen, 2022: 166).
The emergence of personal computers (PCs) in the early 1970s opened the third market segment, for software applications aimed at a large number of individual users (Campbell-Kelly, 1995, 2003). However, a mass market developed only when computers became truly user-friendly due to the graphical user interface (GUI), linked to a keyboard and mouse. This originated in research funded by the Advanced Research Projects Agency (ARPA) from the 1960s, taken forward at Xerox's Palo Alto Research Centre (PARC), then commercialised and popularised by the Apple Macintosh launched in 1984, quickly emulated by Microsoft with its Windows operating system (Campbell-Kelly et al., 2013: 258–266).
This turned the computer into a device that can be controlled by the user for a wide range of purposes, through software applications that could be copied into the computer's memory. These were initially supplied on physical media, and later digitally over the internet. The programs controlling the operating system were generally developed by or licensed to manufacturers of PCs for sale to consumers ‘bundled’ with the hardware, often together with some standard applications such as home office suites or other ‘firmware’. This has continued with later-generation devices, particularly mobile phones. Hence, software is often supplied for free, and many applications are monetised by sales of services, either to the users or to third parties, particularly advertising.
Although mass-market consumer software products have had the greatest impact, and dominate our understanding of software, the custom-contractual and business package sectors aimed at corporate users are still very important, particularly for actual sales of software to customers. Despite the product-like characteristics of mass-market applications, the supply of software is not just a discrete sale transferring ownership of a commodity, but creates a continuous relationship. Indeed, business models in both the business and consumer sectors have increasingly shifted to software-as-a-service, particularly with the advent of cloud computing. Thus, software creates a complex web that structures social and economic relationships linking together people across the world, but under the control of the software provider. Much of this control is built into the design of the software itself, but the appeal of software is the power it gives users, and its portability to different devices, control of which also depends on norms and law.
The Appropriation of Software Through Law
In the initial phase of the emergence of commercial firms supplying custom software, the relative rights of the parties could be regulated by contract, and commercially confidential information could be protected as trade secrets and through non-disclosure agreements (NDAs). Once software was commodified as a product for wide distribution, control of its use exceeded the scope of contract, and so came to depend on proprietary and use rights.
The assertion of proprietary rights was far from natural, particularly as software did not easily fit existing forms of intellectual property rights. However, both copyright and patents had proved capacious concepts, while sometimes contestations over protection had resulted in the creation of more specific ‘neighbouring rights’: for example, for performances, broadcasts, plant varieties, and geographical indications of origin.
Software posed the particular problem of falling between the two paradigms of patents and copyrights. An algorithm is essentially a mathematical sequence, so treating a computer program as a patentable invention seemed inappropriate. On the other hand, although the initial design of a program involves textual and pictorial elements such as flowcharts that resemble works already covered by copyright, and the ‘source code’ (written in ‘high level’ language) can be ‘read’ by humans, it must be ‘translated’ into ‘object code’ that is unintelligible to humans, but directs the functioning of a machine. Hence, the program itself is not a form of human communication like the literary, artistic or scientific works traditionally protected by copyright. Furthermore, programming is an iterative and continuous process, involving debugging, refinement, and further development, and the architecture of an application usually combines many modules of code. So a program does not fit the authorial model of a distinct, original piece of work, and it is hard to define what under copyright law would be a ‘derivative’ work, and to legitimise the prohibition against extending or modifying code without permission. Indeed, from the economic-utilitarian perspective, to hinder sharing or revising the work of others in this context impedes improvement.
Nevertheless, as programs increasingly found commercial applications from the 1950s, lawyers developed techniques and legal arguments to get around the objections of patent and copyright offices, and obtain proprietorial protections in support of the competitive strategies of different firms (Con Diaz, 2019). In the analogue age, business lawyers in sectors such as data-processing and telecommunications had already become skilled at writing patent applications for electromechanical computing, framing them as the mechanical embodiment of mathematical ideas. Indeed, patents had been a key weapon in the competitive battles that helped firms such as IBM and AT&T to build dominant positions in electro-mechanical industries (Cortada, 2019: ch. 3). Although software is designed to run on a general-purpose computer, some began to write patent claims describing programs as designs for machines, finessing the legal principle that mental steps could not be patented (Con Diaz, 2019: 26–34). In contrast, the large manufacturers of mainframe computers, generally leased to business customers, treated software as ancillary and encouraged and facilitated the free sharing of programs, partly to deflect the antitrust concerns of the Department of Justice. IBM was particularly under its scrutiny, and in the early 1960s took a position opposing patenting.
However, lawyers eyeing the growing number of software development firms argued that the restriction of software patenting was reinforcing IBM's dominance. In 1967, the report of a Presidential Commission (with IBM's CEO as a member) recommended against allowing software patents, and a Bill was tabled to explicitly exclude patentability for a set of instructions for a calculating machine. However, it received a flood of objections, and the Patent Office said that legislation would be premature. Proceeding on a case-by-case basis provided flexibility that in practice allowed patent lawyers to keep pushing the boundaries in the US, and by making claims in other countries such as Canada and the UK they also created regulatory competition. An ingeniously drafted patent claim for a ‘sorting system’ was accepted in 1968, and publicised in Computerworld magazine (Con Diaz, 2019: 66–71). Judges on the Court of Customs and Patent Appeals (CCPA) were favourable to such claims, but they were resisted by many patent examiners, as well as by hardware firms such as IBM and Honeywell, supported by the academic view of programming as an abstract intellectual process, rather than a form of engineering. The debate came to a head when a Bell Labs patent for a computerised telephone exchange reached the Supreme Court, which in 1972 rejected it (Gottschalk, 1972).
This setback for the rising software firms shifted their attention to copyright. Congress had begun a long-overdue revision of the US copyright law in the early 1960s, before the advent of software products. At that stage, the main issue concerned the impact on print publishing of photocopying, and even computerised copying, although a few lawyers also speculated whether a computer program could itself be a copyright work (CONTU, 1978: 81–82). At that time, US law required registration of works in which copyright was claimed, and the US Copyright Office ruled in 1964 that it would accept the registration of software only under its ‘rule of doubt’, and subject to publication of a program's code in human-readable form (CONTU, 1978: 82). In the context of rapidly changing technology and conflicting interests, the Copyright Law enacted in 1976 granted copyright in sweeping terms to all ‘original works of authorship fixed in any tangible medium of expression’ while excluding protection for ‘any idea, procedure, process, system, method of operation, concept, principle, or discovery’, and granting a broad permission for ‘fair use’. The specific thorny questions of how these general legal principles should apply to photocopying and computerisation were referred to a Commission on New Technological Uses of Copyrighted Works (CONTU).
The Commission deliberated during the period of emergence of commercial prospects for software products, following IBM's unbundling decision, as well as the emergence of the first microprocessor-based personal computers. These machines caught the imagination of communities of enthusiasts in a libertarian culture of phreaking, hacking, sharing and conviviality (Turner, 2006). They were also the seed beds of future tech billionaires such as Wozniak, Jobs, Gates and Allen, and tensions soon emerged between commercial imperatives and the collaborative culture (Johns, 2010: ch. 16; Con Diaz, 2019: ch. 8).
When the kit for the Altair computer was released for hobbyists in 1975, Paul Allen and Bill Gates, with help from another student, wrote a programming language for it, a variant of BASIC, which had been developed by grant-funded researchers and was widely used on mainframe computers. They formed Microsoft to market this software, and agreed to a licence for its distribution by the Altair's manufacturer, which would pay them a royalty per copy sold (Wallace and Erickson, 1993: 92–93). When unlicensed copies of Microsoft's version of BASIC began circulating among enthusiasts, the young Bill Gates wrote to the Homebrew Computer Club Newsletter attacking this as theft (Gates, 1976). A subsequently published interview, in which a programmer questioned him about Microsoft's copyright claims for the software, evidences his close involvement with the legal issues and the work of the CONTU, and his strong view that Microsoft needed to control the dissemination of its computer code to incentivise and reward investment in marketable software products (Báthory-Kitsz, 1980).
The CONTU report recommended an addition to the 1976 Act to make it clear that machine-readable computer programs were included in its wide scope of protection for works of authorship. The Commission clearly felt that some form of protection was needed, and noted the uncertainties around patentability, and the limitations of trade secrets. The majority's report reflected the utilitarian and commercial view of copyright, but it was counterpointed by a notable dissenting opinion from author John Hersey, with support from some other members. Hersey cogently argued that computer programs are fundamentally different from works of authorship that communicate between human beings since their only object is ‘to control the electrical impulses of a computer’. Although writing is involved in developing a program, and the source code may be described as a set of instructions, these ‘eventually become an essential part of the machinery that produces the results’ (CONTU, 1978: 28). He argued that to ‘shoehorn’ software into the concept of copyright would produce great distortions; he favoured some sui generis form of protection, but this was not explored.
Without further ado, and without debate, in 1980 Congress enacted the changes recommended by the CONTU report. The legislation also allowed the owner of a copy of a program to make a copy or adaptation to the extent necessary to use it with a machine, or for archival purposes. Far from resolving the issue, however, the legislation opened a Pandora's box of issues, as will be discussed in the next section.
Meantime, patent lawyers had not been discouraged by the decision in Gottschalk, which could be seen as due to a poorly drafted patent specification. They were rewarded by increasingly favourable decisions from the CCPA, which upheld a bold patent claim for financial record-keeping software; this was ultimately rejected by the Supreme Court, but only for obviousness, leaving the door ajar on the key issue of patentability (Dann, 1976). Claims for software for industrial machinery were more persuasive, and the CCPA developed a two-step test, so that a claim for software to be embodied in a machine would be patent-eligible. On this basis, a patent for software for a rubber moulding press was accepted by the Supreme Court, although only by a 5–4 majority (Diehr, 1981). This finally opened a door through which patent lawyers were happy to enter in droves.
Thus, by the beginning of the 1980s proprietary rights could be asserted in software through both patents and copyright. The software industry's lawyers did not limit their efforts to the US, but joined other technology-led big business lobbies in a drive to secure global rights, receiving bipartisan political support for industries seen as the wave of the future that could be led by US firms. Driven by this big business alliance, the US, which until 1891 had encouraged ‘piracy’ by refusing to give copyright protection to foreign works, and even after that had required local publication (Seville, 2006), changed tack and in 1989 joined the Berne Convention. In 1984 and 1988 Congress extended to intellectual property the powers in s.301 of the Trade Act to scrutinise and sanction other countries’ regulatory ‘trade barriers’, and used this pressure to ensure the inclusion of the Agreement on Trade-Related Aspects of Intellectual Property Rights (TRIPS) in the package of agreements establishing the World Trade Organisation in 1995 (Drahos and Braithwaite, 2002; Sell, 2003). This requires all WTO member countries to protect computer programs, ‘whether in source or object code … as literary works’; to ensure the availability of patents for both products and processes ‘in all fields of technology’, and to protect layout designs of integrated circuits. Thus, the US led the pressure for the creation of an international framework of legal standards for proprietary rights through copyright, patents, and sui generis protections, which helped propel US tech firms to global dominance.
Battles Over Interpretation
However, these changes in the formal law opened up new terrains of conflict and debate. Bill Gates's open letter denouncing software piracy sparked diverse opinions among the early computer enthusiasts. While some were eager to seize the commercial opportunities opened up by the new technology, dismissing others as hobbyists, the dedicated programmers scoffed at Gates's concerns and continued to share computer lore and program code. An editorial by Jim Warren in a programmers’ newsletter mocked Gates's ‘proprietary preoccupation’, suggesting that Microsoft could generate sufficient income from licensing software to computer manufacturers while charging users a low enough price that they would pay for the convenience (Warren, 1976). The newsletter also published the source code for a version of BASIC by Li-Chen Wang, who added the ironic claim ‘@copyleft; all wrongs reserved’, and encouragement for others to improve it (Con Diaz, 2019: 174).
Steve Wozniak, who had formed a group of fellow enthusiasts at the Homebrew Computer Club, had used MOS Technology's cheap 6502 chip (for which a patent claim had been filed), to build a computer which was little more than a circuit board and operating system. His friend Steve Jobs suggested that it could be sold to hobbyists as a test machine, initially with no real commercial aim; indeed, Wozniak offered it to Hewlett Packard, concerned that they might claim rights to the software, because it was developed partly while he was in their employment. They declined, regarding it as a ‘hobby computer’, and gave him a release, so he and Jobs launched the Apple I (Con Diaz, 2019: 178). They then determined to form a ‘real company’, and this time hired lawyers to file patent claims as well as copyright registrations for some operating system software, which was based on BASIC (Con Diaz, 2019: 179–180). The Apple II launched in 1977, as well as rivals from Commodore, Tandy and others, were marketed as personal computers (PCs), creating a rush to develop applications software for games, education and business.
The ensuing battles over property rights played a key part in shaping the development of the markets for both hardware and software. At their heart was the ambivalent nature of computer programs, which can be supplied independently, but must be implemented through hardware, and can also be embedded in the PC's microchips. Particularly important was operating system software, essential for the functioning of computers, and central in specifying how each device works.
The limits of copyright protection were soon tested when a group of engineers formed the Franklin Computer Corporation to market a rival to Apple using fourteen of its operating system programs, four embedded on chips. Apple's move for a blocking injunction was countered by reopening the legal argument that programs that are an integral part of a machine should not be considered copyrightable, now adding the utilitarian rationale that this would encourage competition between manufacturers of compatible computers, and widen the market for applications software. This argument was viewed sympathetically by the trial court, which denied the injunction, but it was rejected on appeal, based on the CONTU report and the explicit wording of the ensuing legislation (Apple v. Franklin, 1982; Con Diaz, 2019: 196–203).
This ruling played a major role in enabling Apple to build a dominant position in integrated computer systems, and ultimately to become the world's biggest tech giant, based on a strong proprietary business model that aims to lock consumers into its software-based ecosystem, creating a huge and highly profitable pool of captive consumers for the associated hardware, later expanding into streaming content and even payment systems and banking.
Microsoft's alternative strategy of licensing its operating system software widely to competing equipment manufacturers was also successful, especially due to Gates's astute business move in securing a licence from IBM in 1980 for an operating system for its PC. As historians of computing point out: ‘At the time that Microsoft made its agreement with IBM for an operating system, it did not have an actual product, nor did it have the resources to develop one in IBM's time scale. However, Gates obtained a suitable piece of software from a local software firm, Seattle Computer Products, for $30,000 cash and improved it. Eventually, the operating system, known as MS-DOS, would be bundled with almost every IBM personal computer and compatible machine, earning Microsoft a royalty of between $10 and $50 on every copy sold’ (Campbell-Kelly et al., 2013: 247). This steady flood of income financed Microsoft's expansion to dominance in applications software, mainly through its Office suite, widely licensed to businesses and other organisations.
Apple's legal victory over Franklin did not stem the free circulation of software among user groups, some even supported by Apple, as well as the use of replication programs, which were advertised as a means of making backups permitted by the legislation, but also used for the more dubious purpose of making copies for friends (Con Diaz, 2019: 181–184). Chips were also relatively easy to copy, and with the rapid international diffusion of the technology, clones of the Apple II soon appeared in the Far East, so Apple enlisted the US Customs to block imports. However, there was limited legal protection from copying the chips themselves: the Copyright Office accepted registrations of chip design drawings, but not of the chip layouts, and patenting was both slow and problematic, due to the need to prove originality and novelty.
Alarm at the emergence of foreign competition, particularly from Japan, made Congress receptive to protection, but the extension of copyright to such evidently utilitarian objects as chips proved a step too far, and debate highlighted the danger that patent protection could hinder the emulation of ideas and concepts through fair reverse engineering. The compromise solution was the Semiconductor Chip Protection Act of 1984, which provided a specific form of 10-year protection for an ‘original mask work’, but with a broad exception for reverse engineering, allowing both copying for analysis and reproduction of elements, if the result can be shown to be original. Its protection has been little relied on in practice, which surprises lawyers (Hoeren, 2016; Kasch, 1992), but this is likely because of the entry barriers to semiconductor manufacturing, due to the enormous upfront capital investment needed, and dependence on the tacit knowledge of engineers, 1 as well as the continued availability of patents. Indeed, the relaxation of patentability standards sparked a surge in the patenting of semiconductor technologies, enabling firms to build patent portfolios to attract venture capital and strengthen negotiating positions for technology sharing through cross-licensing (Hall and Ziedonis, 2001).
The fuzziness of the distinction between hardware and software was accentuated with the introduction of the graphical user interface (GUI), coupled with keyboard and mouse, which made PCs user-friendly. The competitive interactions between the firms that began to exploit this technology opened up complex issues that revealed the limitations of copyright protection.
The GUI arrived with a bang with Apple's much-publicised launch of the Mac in 1984, though its predecessor the Lisa had flopped commercially the previous year, as had Xerox's Star in 1981, both due to high pricing. Xerox had built on ARPA's research on the GUI to develop its Smalltalk software, which was not published or registered for copyright protection; Apple then worked on its own programs, which were registered as copyright. Microsoft in turn had worked with Apple on software for the Mac GUI, as well as writing word-processor and spreadsheet applications for it, and then followed the Mac by launching Windows 1.0 in October 1985. This was largely based on Apple's GUI, so Microsoft licensed Apple's GUI visual displays. However, the agreement was drafted by Microsoft's lawyers in wide terms, which could be read to allow future derivatives (Apple-Microsoft, 1985; Con Diaz, 2019: 214–215). Buoyed by its revenues from IBM and the increasing success of its word-processor and spreadsheet applications, Microsoft then developed Windows 2.0, which functioned virtually identically to the Mac, aiming to license it to firms such as HP to compete with the Mac (Campbell-Kelly et al., 2013: 265–266).
Responding to this threat, Apple sued Microsoft, claiming that the Windows display of menus and icons on a desktop controlled by a mouse infringed Apple's registered copyright for an audiovisual work. This angered Xerox, which after all had begun the commercialisation of the GUI, but had abandoned its own PC development. Xerox sued to have Apple's copyright registration declared invalid, claiming that it was harming Xerox's own attempts to license its GUI, but the court held that it had no jurisdiction to deregister (Xerox v. Apple, 1990). Microsoft proved a tougher opponent, and finally beat off Apple's suit, on the grounds that its 1985 licence from Apple extended to future derivatives of the visual display such as Windows 2 (Apple v. Microsoft, 1994).
These rulings left open the wider issue of what constitutes copying, which became acute as the development of decompilation enabled the analysis of computer code, permitting reverse engineering. This facilitated the portability of programs into different languages, as well as the design of competing software based on the same or a similar ‘sequence, structure and organisation’. Such activities could be regarded as ‘non-literal copying’ under traditional copyright doctrine, which had accepted that using the essential elements of a work, such as plot and characters, can constitute infringement. This challenged the fundamentals of programming: the legitimacy of emulating and building on others’ innovations, and of implementing a concept such as the ‘look and feel’ of a computer interface by different means. There was soon a flood of litigation, and the courts initially took a broad view of what could constitute non-literal copying, but this weakened as conflicts between rivals intensified. Thus, Lotus successfully blocked the development of software that mimicked the menu and commands of its spreadsheet (Lotus v. Paperback, 1994), but then failed against Borland, although the Supreme Court was evenly divided (Lotus v. Borland, 1996). This narrow division of opinion on a crucial point of interpretation had enormous consequences for future software development.
The partial limitation of copyright protection shifted attention back again to patents, which can protect against emulation by different means, although the threshold for protection (novelty and non-obviousness) is higher than copyright's concept of an original authorial work. Exploiting the opening created by the decision in Diehr (1981), lawyers quickly succeeded in obtaining patents for all kinds of software applications, needing only to claim a program that could be loaded onto a machine to perform an innovative process in the real world, including manipulation of data (Con Diaz, 2019: 239–256). In the 1990s this was extended to business methods, confirmed by a decision of the Federal Circuit approving a patent for software for an innovative method of managing a financial portfolio (State St Bank, 1998). The number of patents issued involving software climbed from around 1,000 a year in 1980 to nearly 25,000 in 2002, some 15% of all patents (Bessen and Hunt, 2007: 169). The debate spread internationally, notably to Europe, where it became more politicised due to conflicting business interests (Guadamuz González, 2006).
Software patenting in this period was mainly by large established manufacturing firms (Bessen and Hunt, 2007: 171–173), but could also be used by insurgents aiming to disrupt dominant firms in sectors such as finance, leading to legal challenges. Asked to rule on the rejection of a claim for software for hedging risk in energy markets, the Supreme Court accepted that claims for process patents did not need to be tied to a machine, but confirmed the rejection of the claim in contention on the grounds that hedging is not a novel technique, so the claim was for abstract mathematical ideas (Bilski, 2010).
However, obstacles such as the novelty requirement did not prevent the emergence of ‘patent trolls’ or ‘non-practising entities’, which invest in patent portfolios as well as patent litigation around the world to extract rents by a form of legal blackmail (Chen et al., 2023; Watkins, 2013). The US Congress created an easy procedure to challenge patents for ‘covered business methods’ (CBMs), which had some initial success, but only raised the legal stakes, as seen in the rise of Unwired Planet LLC, which built a portfolio of over 2500 patents, even beating off Google's objection to a patent for ‘using location-based services over mobile wireless networks’, which the courts held was not for a CBM (Perchyts, 2018).
Thus, the success of lawyers in ensuring that software could be protected by both copyright and patents, far from providing a clear and predictable framework of rights, only opened new, and for them lucrative, terrains of legal debate and contestation. Law itself, and the power to deploy it, became a potent weapon in the competitive struggles that shaped the power to control the development and use of software. Lured by the prospect of riches from designing an application, startups could protect their code and build patent portfolios to attract venture capital, provide leverage for a licensing deal or achieve a buyout from an established firm. For example, in 1998 Amazon, when it was still just an online bookseller, obtained a patent for its 1-Click ordering process; this was used against its rival Barnes & Noble, then licensed to Apple, although Bezos claimed to favour patent reform (Stone, 2013). The 1-Click software facilitated ready access to credit card information, which proved a goldmine for both Amazon and Apple.
Established large corporations also have the power to exploit the indeterminacy of law to strengthen their dominance, by deciding whether to buy out or beat off innovators. Thus, in the early years of Windows in 1987, when software engineer Jerry Kaplan founded GO to develop a pen-based GUI, with venture capital backing, he approached Microsoft to develop applications for it. After seeing a demonstration, Gates decided that GO was too much of a competitor and resolved to copy the idea just enough to kill it. Microsoft acquired handwriting-recognition software from a two-person company, bought out a Chinese developer of kanji-recognition software, and modified Windows sufficiently to be able to demonstrate a pen-driven GUI that performed just like GO's (Edstrom and Eller, 1999: 118–139). This was enough to dissuade hardware manufacturers, including IBM, from teaming up with GO, leaving Kaplan understandably riled at Microsoft's unscrupulousness in exploiting the grey areas of copyright (Kaplan, 1995). Hence, the power to exploit law itself becomes a weapon of competition, by deploying lawyers not just to ensure legal protection, but to outmanoeuvre others in its grey areas.
Hacking Copyright: Copyleft and Open Source
In parallel, the collision between the commercialisation of software through copyright protection and the sharing culture of programmers stimulated the construction of a very different legal paradigm, albeit one also based on copyright, often referred to as ‘copyleft’. 2 Due to the efforts of some dedicated software pioneers, as well as considerable investments in law, this eventually gained great momentum and made a permanent impact.
Despite the assertions by Gates and others of the need for proprietary protection to stimulate innovation, the free sharing of code among researchers and enthusiasts in the 1970s enabled the rapid and fluid development of software outside corporate control. Notably, Unix was a modular suite of operating system programs begun in 1969 by employees at Bell Labs (Weber, 2004: 25–28), shared freely with other programmers, and then widely licensed to universities at nominal cost, because until 1984 Bell's parent AT&T was forbidden from commercial activities outside telecommunications (Tozzi, 2017: ch. 1). Similarly, the EMACS software for writing text-editor macros had a modular, extensible design which facilitated its proliferation and continuous, decentralised development (Kelty, 2008: 184–186). This fostered a culture of free sharing, playful cleverness and fun in writing software which was termed ‘hacking’ (Stallman, 2002/2019; Turner, 2006: 117; Tozzi, 2017: 37), though the word later acquired the pejorative connotation of using a computer to gain unauthorised access to data.
A foremost figure was Richard Stallman, who thrived in the hacker culture of MIT's artificial intelligence lab. He was the main developer of EMACS, and saw in its flexibility and extensibility a moral imperative: users must be allowed and encouraged to copy, extend and adapt software (Levy, 2010; Williams, 2002). To facilitate this he started what he described in a User's Manual for EMACS in 1981 as a ‘software-sharing commune’. This was based on the conditions that, in exchange for being supplied a copy of the program gratis, users were free to redistribute it as received, and also to make improvements or additions, which should be distributed separately, and must be donated under the same terms to the commune (Kelty, 2008: 186). These informal norms were not always strictly observed, but EMACS spread widely in the modular way Stallman intended.
The attempts at commercial control of software led to increasing enforcement of NDAs, and Stallman was shocked by being refused access to software source code, especially when this was by other programmers. 3 Consequently, in 1983 he launched a project to develop an alternative operating system to Unix, which he called GNU (GNU’s not Unix). For someone working with limited funds and a handful of supporters this was a very ambitious undertaking, and it was further hindered by others refusing him permission to use parts of source code that had been incorporated into programs now protected by copyright – ironically, even parts of EMACS. Hence, he fell back on writing a GNU version of EMACS from scratch, outlining the principles of the project in The GNU Manifesto (Stallman, 1983/1987): centrally that ‘everyone will be able to obtain good system software free, just like air’. Importantly, this would be ensured not by releasing software into the public domain – experience had shown that under copyright law this actually facilitated appropriation (Stallman, 2002: 22). Instead, GNU EMACS was released in 1985 under similar informal conditions to previous versions, allowing modification and redistribution, and stipulating that no distributor could restrict further redistribution. At the same time, he and others launched the Free Software Foundation (FSF) to propagate the GNU principles.
In Stallman's first statement of principles the term ‘free’ was used loosely, and copyright in software was seen negatively, because he considered that it was used, particularly through NDAs, for the inappropriate and fundamentally immoral purpose of preventing sharing and improvement. However, working through the FSF he learned to ‘distinguish carefully between “free” in the sense of freedom and “free” in the sense of price’ (Stallman, 1983/1987: 8). In fact, he was always clear that many business models could and should be built on software, including distributing copies in convenient form and offering support and other services. Furthermore, he noted that other programmers were releasing software with a notice claiming copyright, but granting permission to copy subject to conditions such as prohibiting commercialisation. Hence, with help from others including lawyers, he drafted what he called the General Public Licence (GPL) version 1.0, to be used in standard form for the release of GNU modules (GNU, 1989; Williams, 2002: ch.9). This was much more law-like than the informal user instructions and copyright notices previously used, though avoiding legalese. He explained it essentially as a hack: ‘Copyleft uses copyright law, but flips it over to serve the opposite of its usual purpose: instead of a means for restricting a program, it becomes a means for keeping the program free’ (Stallman, 2002: 22).
At the same time, Stallman and his colleagues engaged with the debates around ‘look and feel’, and formed the League for Programming Freedom to organise protests against the Lotus lawsuits, with the slogan ‘innovate, don’t litigate’, as well as intervening in the litigation and the wider debates (Con Diaz, 2019: 225–229; Williams, 2002). For them, patents were a greater threat than copyright, because they could be used to block emulation and reverse engineering. The GPL, by ‘flipping’ the way in which copyright was used, built on its conceptualisation as an authorial right to allow programmers greater control over diffusion and commercialisation, although they would soon discover its drawbacks.
Stallman's approach to programming was vindicated when a 21-year-old Finnish student, Linus Torvalds, independently overcame a major obstacle blocking the GNU project by leading the development of a kernel for it, using the same collaborative approach. Coming from an unexpected quarter, this success confirmed the GNU perspective of collaborative software development, while introducing greater decentralisation than coordinated teams (Raymond, 2000). Torvalds released the code for Linux in 1991 under ‘share-alike’ conditions just as Stallman had, but with an additional restriction against asking for payment. However, the following year he adopted the GPL, which did not have this restriction. 4
This finally enabled developers to release complete operating systems combining Linux and GNU programs. These spread rapidly in the 1990s, greatly aided by the invention of the World Wide Web, the software for which was released initially as public domain 5 and then, particularly with the Apache web server software, under open-source licences, so that by mid-2000 Linux was running on one-third of web servers (Weber, 2004: 55). This success was attributed to the superiority of the decentralised ‘share-alike’ model over the hierarchical approach in managing the inherent complexity of designing interacting suites of software. This thesis was advanced most famously in a talk by Eric Raymond, first delivered at the Linux Kongress (sic) in 1997, ‘The Cathedral and the Bazaar’ (Raymond, 2000), although the contrast he drew was too stark. Decentralised development could be orderly, although often rambunctious, and it could involve hierarchy – even for Linux, Torvalds operated as the chief and through lieutenants (Weber, 2004: 64). Rather, the freedom proclaimed in moral terms by Stallman responded to the practical needs of programmers in their everyday work, even those employed in commercial firms, for software to be continuously modifiable, portable and interoperable through wide networks of users (Weber, 2004: ch. 3).
The skeleton of this new ecosystem was provided by licensing, which also developed in a decentralised way, as different groups and factions formulated their own licence models. Geek programmers also became experts on licensing terms, while companies developing business models based on open-source software hired lawyers and created their own legal departments, and the FSF spawned the Software Freedom Law Centre in 2005.
Amid the various licences with diverse terms, the condition that became particularly controversial was that derivatives of the software must be distributed under the same conditions. 6 While this was viewed as a basic moral principle by Stallman and his followers, others (including Torvalds) considered it too restrictive. The advocates of fewer restrictions favoured speedier development, viewing Stallman's management of the GNU project as too controlling and even dictatorial. This was the rationale behind the Debian project, launched in 1993 to build a free GNU/Linux operating system based on a ‘social contract’, formulated in 1997 by Bruce Perens as the Debian Free Software Guidelines. Raymond, whose essay had made him a chief proselytiser for decentralised software development, as well as a beneficiary from its increasing adoption in business models, saw that the term ‘free software’ was an obstacle, and steered the emergence of the Open Source Initiative in 1998 (Williams, 2002: ch. 9). This adopted Debian's more liberal definition, which included all the freedoms of the FSF, and also allowed the free distribution of derivative works (Weber, 2004: 85–86, 113–115). This linguistic twist meant that the GPL, which requires the free distribution of derivatives, qualifies as open source, but many licences defined as open source do not meet the less permissive GPL standard.
The laxer restrictions on the distribution of derivative versions facilitated business models that could combine open-source and more restrictively protected software. This opportunity was grasped by Netscape, which decided to harness the power of open source in its internet browser's battle with Microsoft's Explorer. After some deliberation, this was done by disentangling parts of its core code that could be released under a full GPL-style licence as Mozilla, while other parts were distributed to third parties, or retained for itself, under an open-source-compliant licence allowing proprietary derivatives. This ‘forking’ of both the code and the licence terms had some initial success, but proved an uneasy compromise (Weber, 2004: 122–124).
Firms that had built established positions through proprietary software, notably Microsoft, were initially unnerved by the potential of open source (Weber, 2004: 126–127). However, they focused their hostility on the GPL's free distribution obligation, denounced by Bill Gates as ‘Pac-Man-like’ (Ricciuti, 2001), and demonised as a ‘viral’ clause. Eventually, even Microsoft found ways of collaborating with open-source developers such as Canonical, particularly on software for cloud computing and the ‘internet of things’, and by 2014 Microsoft's new CEO declared that it ‘loves Linux’ (Tozzi, 2017: 245). This volte-face was also a tribute to the success of Google, which used its acquisition of a startup called Android to develop, in conjunction with industry partners, a Linux-based open-source operating system that has proved the only serious mass-market rival to Apple's iOS and Microsoft Windows, especially for mobile devices. Microsoft riposted with an investment in the artificial intelligence company OpenAI, which paid off handsomely by feeding into all Microsoft's software products, signalled spectacularly by OpenAI's launch of ChatGPT in late 2022, stealing a march on its rival Google's Bard.
Hence, open-source and proprietary software are now used and combined in many ways. Clearly, the irruption of copyleft into the internecine corporate battles over proprietary rights changed the landscape, with a sharp shift towards ‘a concept of property configured around the right and responsibility to distribute, not to exclude’ (Weber, 2004: 86). However, although they are polar opposite perspectives, both are based on copyright, so the GPL did not displace, but in effect confirmed the conceptualisation of software in terms of rights to control its use. The modular nature of software involves many kinds of interactions between and combinations of different programs, and even the FSF issued a licence variant, the Lesser GPL, to allow some linking of free and ‘unfree’ software. Furthermore, both the copyleft and OS licences, as well as their interfaces with commercial licences, opened up many contentious issues, mediated by legal interpretation (McGowan, 2005).
Exploiting Ownership
The control of intangible property rights became central to the growth of the tech sector, particularly the big tech transnational corporations (TNCs) that have come to dominate the world economy. By 2022 eight of the top 10 TNCs (by market capitalisation) were software-based, either linking software with distinctive hardware (Apple, Nvidia, Tesla and Taiwan Semiconductor), providing software platforms to supply other services such as advertising or sales intermediation (Alphabet, Amazon and Meta) or primarily software applications providers (Microsoft). They and a host more, such as Airbnb, Alibaba, Cisco, Expedia, Oracle, TenCent and Uber, have catalysed the increased centrality of software to many economic sectors, from agriculture to transportation. Software has enabled new forms of social interactions, facilitating social control of users through its design and ownership, dominated by these firms.
Ownership has also been central to the international expansion and oligopolistic concentration of these TNCs, placing them at the heart of global ‘surveillance capitalism’ (Zuboff, 2019). Ownership ensures what Pistor has called the key legal attributes of capital, notably priority and universality (Pistor, 2019: 13–15). Backed by the powerful ideology underpinning private property, ownership rights can be used to create presumptions that public law and regulation should not usurp private rights. Lawyers can adapt and combine the flexible concepts of property and contract, and this part of the paper considers first how the legal trickery of software licensing does so to provide legal backing for the control of users. It then discusses how the grey areas in the definitions of proprietary rights and their ownership have been exploited to reinforce the big tech firms’ competitive advantages in an important area of public law – taxation, particularly international tax. 7
Controlling Users Through Licensing
Central to the control of users of mass-market software has been the End User Licensing Agreement (EULA), through which all users of software accept wide-ranging legal conditions, amounting to private legislation (Phillips, 2009). The EULA emerged with the software application products of the early 1980s: its introduction was later explained by the developer of the WordStar word processor, Seymour Rubinstein, as intended to bring home to buyers that they did not ‘own’ the software but acquired only a right to use it (Dvorak, 1998: 87).
EULAs entail a powerful combination of a contract and a conditional grant of intangible property rights, which creates legal confusion. The contract is not a sale, and claims to bind users who have no monetary or other direct relationship with the software provider. A contractual link was devised by deeming it to be accepted by opening the ‘shrink-wrapped’ physical copy, which with digital distribution has become a ‘click-through’ contract. The one-sided nature of EULAs initially raised doubts about their enforceability as consensual contracts, but this was approved in the US by Judge Frank Easterbrook, in a wide-ranging decision in which he provided a utilitarian justification from a law-and-economics perspective to overcome any traditionalist liberal concerns about consensuality (ProCD, 1996; Phillips, 2009: 28–31).
This gave free rein for EULAs to become increasingly extensive and arcane. The GPL also has become highly complex since Stallman's initial version 1.0, particularly due to battles by the FSF's lawyers to preserve a wide ambit for the copyleft ‘commons’, notably to block firms such as Microsoft from asserting patent rights over versions of Linux (Phillips, 2009: 133–140). The GPL does not claim to be a contract, but is more realistically framed as a conditional grant of rights to use the software; this view was upheld by the courts (Jacobsen, 2008), but raises problems of its own, since if it cannot be considered a binding contract the GPL's permissions may be unilaterally revoked by a licensor, creating potential confusion over later relicences (Phillips, 2009: 125–132).
A EULA typically grants the right to ‘use’ the software, but is subject to a wide range of restrictions. As pointed out in a leading manual (Tollen, 2015: section C.1), in copyright law a ‘right to use’ software is unclear, since it obscures the key distinction between the copy which is the necessary physical embodiment of the copyright work, and the intangible right to control the reproduction of the work. ‘Use’ of the copy entails installing it on a computer, which involves copying it into the computer memory, hence requiring permission of the copyright owner. Permission could be considered implicit in the supply of the software, and in many countries, legislation also specifies that this is not an infringement, so the EULA has been said to create the need for itself (Phillips, 2009: 11). However, the EULA's permissions generally go beyond any statutory rights, for example extending to all users, 8 allowing them to make a backup and install it on several devices, and covering other elements supplied, for example, graphics.
In exchange, EULAs generally provide that only specified uses are permitted, hence imposing extensive restrictions even on users’ legal rights. For example, EULAs often restrict resale of the copy, although this does not entail making a copy, and hence does not normally infringe copyright in countries (such as the US) that accept the ‘first sale’ doctrine (Rothchild, 2004). They often limit ‘fair use’ rights, notably reverse engineering of the software code, even to ensure interoperability with other programs (Phillips, 2009: 31). Furthermore, they grant the software provider limitations on liability, as well as rights to access the user's computer, and to install both updates and other software, including adware and spyware. Finally, they usually specify that compliance with all these terms is a condition of the licence, leaving users open to liability for copyright infringement that could far exceed damages for breach of contract (Phillips, 2009: 35–36).
This legal trickery uses the formality of granting a licence to copy the program in order to operate it (which should be regarded as implicit and may be permitted by statute), to attach limitations on users’ rights and additional obligations that have been described as ‘servitudes’ (Van Houweling, 2008). The control of users conferred by these rights has enabled software providers to build enormous databases and act as portals to global consumer markets for a large range of services, generating an avalanche of revenues, particularly for the main software-based tech TNCs.
Avoiding Tax
The ownership of software has also been central to strategies of international tax avoidance that have helped to propel the giant tech TNCs to global dominance. The digitalisation of economic transactions enabled by software has greatly exacerbated fundamental flaws in international tax arrangements that have enabled TNCs to use techniques of tax avoidance based on exploiting legal grey areas.
Direct taxation of income or profits began to become an important source of state revenues early in the 20th century, as states asserted the power and right to tax the income or profits of their residents, as well as that derived from activities within their jurisdiction by non-residents. Those engaged in international business, the early TNCs, soon began to complain of ‘double taxation’ due to the potential overlap between states’ jurisdictions to tax, while also developing techniques to avoid these taxes. 9 Following negotiations through the League of Nations in the 1920s, a loose form of international coordination emerged, based on model conventions which states could use as a basis for bilateral treaties, accompanied by Commentaries intended to assist interpretation. Significant divergences in perspective between states that were mainly home or residence countries of TNCs and those which were only hosts for their activities resulted in two different versions of the League model, and attempts after 1945 to have the United Nations resume the work of the League were abandoned.
The work on setting international tax standards shifted to the Committee on Fiscal Affairs (CFA) of the Organisation for Economic Cooperation and Development (OECD), 10 which gave priority to TNC concerns about double taxation, and aimed to stimulate international trade and investment by restricting taxation of non-residents. In 1967, at the initiative of the US, the United Nations formed a tax committee, 11 to formulate a more balanced treaty model. The UN Committee has played a subordinate role, and its model treaty adopted the same structure and much of the text from the OECD, but it has included stronger provisions for taxation of income at source. Both models are couched in relatively simple terms, which however leave considerable scope for interpretation. The Commentaries, which are supposed to aid in this, actually add an additional layer of complexity, creating contentious issues such as whether interpretation should be ‘ambulatory’ (whether changes to the Commentary apply retrospectively). Substantial sections of the OECD model's Commentary have been included in that of the UN model, which also frequently records the divergent views of Committee members on the meaning of various terms and provisions. The treaties are generally incorporated directly into national law, sometimes with supplementary legislation, creating special tax regimes, and intricate interactions between different levels of rules. Thus, the interpretation of the relatively simple language of the treaties themselves has become a matter of labyrinthine complexity, which is clearly of great importance for the large TNCs that are mainly affected (Picciotto, 2015: 168).
The CFA and the UN Committee have been key arenas for the formulation of international standards on tax, typical of the technocratic institutions of ‘global governance’, in which lawyers have played a key role in shaping contemporary corporate capitalism (Picciotto, 2011, 2021b). The spread of tax treaties facilitated the growth of TNCs assisted by tax avoidance, based on exploiting the grey areas in the interpretation particularly of the key concepts of income and its source, and of residence (especially for legal persons such as corporations) in the interactions of national tax laws as mediated by international rules. The complexity of the interactions of these layers of norms creates further hermeneutic fluidity, as terms and concepts are deployed and applied in different arenas and contexts. Exploiting these interactions, TNC tax advisers devised complex corporate structures, creating affiliates in convenient jurisdictions ‘offshore’ to act as intermediaries or conduits between the home country parent and operating company subsidiaries in host countries. Such conduits could be used to hold assets or perform functions, making charges to the operating companies which reduced their tax liabilities, while the income extracted would be subject to zero or low taxation in the conduit country. This ‘double non-taxation’ was a major factor in the competitive advantages of TNCs, helping to finance their rapid expansion in the last decades of the 20th century.
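The arithmetic of such conduit structures can be illustrated with a minimal sketch. All figures and tax rates below are hypothetical illustrations of the ‘double non-taxation’ mechanism described above, not data from this article: a deductible charge from an offshore affiliate strips profit out of the host country, where it would be taxed at the full rate, into a conduit jurisdiction taxing it at a near-zero rate.

```python
# Stylised sketch of conduit-based profit shifting ("double non-taxation").
# All amounts and rates are hypothetical, chosen only for illustration.

def tax_payable(profit: float, rate: float) -> float:
    """Tax due on a given profit at a flat rate."""
    return profit * rate

# Without a conduit: a host-country subsidiary earns 100 of profit,
# taxed at an assumed host-country rate of 30%.
host_rate = 0.30
profit = 100.0
tax_direct = tax_payable(profit, host_rate)

# With a conduit: an offshore affiliate holding the intellectual property
# charges the subsidiary a royalty of 80, deductible against its profit.
# The royalty income is taxed at an assumed near-zero conduit rate of 2%.
royalty = 80.0
conduit_rate = 0.02
tax_host = tax_payable(profit - royalty, host_rate)
tax_conduit = tax_payable(royalty, conduit_rate)
tax_with_conduit = tax_host + tax_conduit

print(f"tax without conduit: {tax_direct:.1f}")   # 30.0
print(f"tax with conduit:    {tax_with_conduit:.1f}")   # 7.6
```

On these assumed figures, routing the royalty through the conduit cuts the group’s total tax from 30 to 7.6, while the 72.4 of lightly taxed income accumulates offshore, available to finance expansion.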
The tech TNCs were able to build on these techniques, pioneered particularly by research-intensive firms such as pharmaceuticals, as well as electronics and computing. They pushed these techniques to such extremes that even the OECD's CFA felt obliged to launch a reform effort in 2012, in the period of fiscal crisis following the great financial crash which brought TNC tax avoidance to public attention. News reports even discussed arcane structures such as the ‘Double Irish Dutch Sandwich’, versions of which were deployed by many tech firms, including Apple, Microsoft and Google. Apple had set up in Ireland as early as 1980, before it even launched the Mac, benefiting until 1990 from a tax holiday on export income. At the same time, it also established a holding company, Apple Operations International (AOI), with subsidiaries designated to perform specific functions, notably sales, to funnel global revenues through Ireland. In 1991 it negotiated a confidential tax ruling, under which the vast bulk of the revenues from worldwide sales that flowed to AOI and its subsidiaries were deemed not taxable in Ireland, since under Irish law at that time corporate residence was based on the place of management and control, which could be established by notionally holding its Board meetings in Bermuda. 12
These and similar arrangements, also adopted by Microsoft and many others, exploited the ability to transfer rights in software and other intellectual property to a suitable affiliate (such as AOI), to act as a ‘cash-box’, receiving income from worldwide sales which could be taxed at near-zero rates, and could be held offshore to finance investments and acquisitions anywhere. Apple, for example, was estimated to have amassed between 1992 and 2020 some $370b of such ‘stateless income’, avoiding taxation by both residence and source countries (Curtis and Chamberlain, 2021: 819). Profits from exploiting intellectual property rights were generally routed through the Netherlands, benefiting from its network of tax treaties, through which some 14,300 special purpose entities were estimated to have funnelled €10.2 trillion of revenues in 2010 (Drucker, 2013). Tax revenue losses from all TNC tax avoidance were estimated in 2013 to amount to $509b for OECD countries, and $213b for developing countries, or 0.6% and 1.7%, respectively, of their gross domestic product (Crivelli et al., 2015: 20).
For the tech firms, this depended importantly on ensuring that payments for the use of software were not characterised as royalties for intellectual property rights, which seemed likely to apply to software once it became widely protected by copyright, and extended worldwide by the TRIPS agreement after 1995. In tax treaties royalties were defined as ‘payments of any kind received as consideration for the use of, or the right to use, any copyright of literary, artistic or scientific work, any patent, trade mark, design or model…’, which now applied to software. Most countries taxed royalty payments at source, but OECD countries had decided by a majority to cede this right among themselves to the country of residence through treaties based on the OECD model. However, some had not, and many treaties were based on the UN model, which provided for source taxation of royalties.
The threat of taxation of payments for the use of software was blunted in 1992, when the OECD's CFA agreed, by a substantial majority, to add a section to the Commentary of its treaty model, adopting an ingenious interpretation of this provision, based on a report which argued that payments for software should almost always fall outside this definition (OECD, 1992). This addition to the Commentary made a distinction for tax purposes between the acquisition of rights ‘to develop or exploit the software itself commercially, for example by development and distribution of it’, and acquisition of software ‘for the personal or business use of the purchaser’. This restricted the scope of the phrase ‘use of or the right to use copyright’ in the treaty article, based on the interpretation that allowing users to operate a software program does not entail granting a ‘right to use’ the copyright in the software, even though it does entail making a copy in the computer's memory. 13 On the other hand, the report reasoned, a complete transfer of all rights did not transfer rights to ‘use’ the copyright, so any payment for such a transfer should also not be treated as a royalty, but as the purchase price. A similar distinction was adopted in domestic law, notably in the US, 14 and these paragraphs of the OECD Commentary were also included in the Commentary to the UN model in 1997, as ‘relevant’ to its interpretation. This interpretation of the ‘right to use’ is evidently the opposite of that asserted to uphold the deployment of software licences, including EULAs, that depend on the view that users are indeed granted proprietary rights, which stem from copyright.
Lawyers for TNCs in large firms such as Baker McKenzie, able to combine expertise in both tax and intellectual property, used this interpretation to block attempts to tax payments for the use of software, particularly at source. This impacted especially on developing countries, which are mainly markets for software rather than suppliers, and particularly in relation to business software, since such payments are deductible as costs, reducing the tax payable by the customers. They generally did have the right to tax royalty payments, both in their domestic law and in their treaties, based on the UN model. However, the OECD Commentary's interpretation in 1992, and its inclusion in that of the UN model, provided authoritative backing for legal arguments in national courts. Notably in India, which extended copyright protection to computer programs in 1994, decisions by tax officials applying withholding tax to payments to foreign suppliers of software were soon appealed to tax tribunals, which mostly accepted the argument that payments for the right to use software could not be considered royalties.
Practitioners were alarmed by a tribunal ruling against Microsoft in 2010 that rejected the interpretation in the OECD Commentary, noting that it had not been accepted by all OECD members, and that the Commentary in 2008 had recorded India's reservation of its right to tax royalties at source (Microsoft, 2010; Mehta, 2011; Sinha and Simha, 2010). On appeal, the High Court of Karnataka also upheld the Revenue's position (Samsung, 2011), but the Delhi High Court did not, so over a hundred appeals eventually ended up in the Supreme Court. The Supreme Court accepted the interpretation in the OECD Commentary, which it said had ‘persuasive value’. It took the view that India's reservation in the OECD Commentary was not in ‘categorical language’, because it stated that India ‘reserved the right’ to tax royalties, and ‘is of the view that’ some payments ‘may constitute royalties’, compared to other reservations which stated that ‘India does not agree’. India had not taken steps to renegotiate its treaties, and businesses were entitled to rely on the interpretation in the OECD Commentary (Engineering Analysis, 2021).
In fact, developing country members of the UN Committee, especially those from India, had been attempting since 2011 to secure the inclusion in the UN model's Commentary of an explanation of their dissent from the OECD position, but had been hindered by confusing semantic and technical arguments. Finally, only a month after the Indian Supreme Court's decision, the UN Committee approved two short paragraphs to be added to the Commentary explaining this alternative interpretation. Subsequently, in April 2023, it agreed to include a new section, which lays out three positions on the provision (UN, 2023). This still leaves open uncertainties over the interpretation of existing treaties concluded by non-OECD countries, especially those which have not had the opportunity to register their positions, and where tax authorities’ attempts to tax software payments have been challenged, as in Kenya (Seven Seas, 2021).
Many software applications have of course been supplied gratis, and monetised through the sale of services. This digitalisation of economic activity created further opportunities to exploit the scope for interpretation to avoid tax. Both domestic law and many tax treaties provide for source taxation of payments for services, in addition to royalties. This threat was also deftly avoided, by arguing that these provisions, which generally refer to professional and technical services, are limited to those involving human knowledge and skill, excluding services such as advertising delivered through a software algorithm. This argument succeeded, notably in a key case in India involving Google and Yahoo (Right Florist, 2013). This led the Indian government to formulate a special tax, the Equalisation Levy, the forerunner of similar taxes on digital services introduced by other countries in Europe and elsewhere. These are of dubious validity under tax treaties, and viewed by the US as targeting the mainly US-based big tech companies; indeed, the US responded by threatening trade sanctions against the measures as unfair trading practices (Picciotto, 2021: 21). This US retaliation might itself be challenged under WTO rules, but those procedures have been hampered by the paralysis of the WTO's Appellate Body since 2019, due to the US blocking the appointment of its members.
This necessarily succinct account aims only to sketch some of the ways in which the taxation of TNCs, which has been key to their growth, has been contested through debates over increasingly technical and arcane issues of the formulation and interpretation of legal concepts and principles.
Conclusions
The example of software paradigmatically shows how economic activity and social life have been shaped under the domination of corporate capitalism, through the key role played by lawyers in enabling the assertion and deployment of ownership rights in key technologies, by exploiting the grey areas of uncertainty in law. Software is in some ways an egregious case, since its central importance has made it particularly contentious, although a similar story can be told about biotechnology, the other key post-industrial technology (Picciotto, 2011: 393–420). Nevertheless, close examination of any area of law, particularly economic regulation, shows that it is the fluidity and uncertainty of legal concepts and principles that provide the space for mobilising legal resources to legitimise and normalise the subjection of innovations resulting from human creativity to the continued domination of capitalism, albeit in changing forms.
The example also demonstrates how the process of encoding into legal forms operates to channel social conflicts into forms that normalise this domination. The legal framing of software was not a single clear policy decision, but a protracted and contentious process of conceptual and terminological exploration, interpretation and adaptation. Even the legislative decision to ‘shoehorn’ software into copyright only opened up a new terrain of debate and legal practice, including the radical adaptation of copyright – a legal form designed to ensure private exclusionary rights – to create the subversive ‘copyleft’ vision of a sharing commons. Yet there are structural limits to this malleability. The legal encoding ensured the prioritisation of private proprietary rights (even if non-exclusive) over collective or public purposes, even though the technology had resulted from, and continued to benefit from, enormous state expenditures and publicly funded research.
Copyleft could not create an autonomous economic sphere, but it did formulate an alternative vision, which was deeply rooted in the actual material practices of writing software, projecting it as a dynamic and liberatory technology. Hence, its conflict with the proprietary model underlying copyright was not just in the realm of ideas, but concerned the construction of contemporary social, political and economic life-forms. Thus, although it was accommodated into corporate capitalism through open-source software, it enabled the preservation of much of the emancipatory potential of programming, still substantially controlled by, but not completely captive to, financialised corporatist incentives. The channelling of social conflicts through law does not simply preserve existing forms of domination, but can help to shape transformations, although these also depend on wider political and economic factors.
This continued domination is also due to the power of the forms of private law rooted in concepts of private ownership to assert priority over public law that attempts to protect collective and public interests. The necessarily brief discussion of taxation in the previous section showed how the malleability of private law concepts enabled the subversion of attempts to tax the enormous rents resulting from the appropriation of software by large TNCs. Space precludes discussion here of the similar debility of antitrust and competition law, let alone the suggestions for new forms of regulation that have belatedly been recognised as needed to at least tame the many harmful effects of the proliferating applications of software under the aegis of privately owned corporations. Property rights give their owner priority of action, including the ability to design transactions involving those rights, clothing them in the most advantageous legal language. Thus, the big tech TNCs have avoided tax on their income from exploiting software by being allowed to characterise transactions as not involving the use of property rights in it, while also formulating those transactions as licences to use some of those property rights, in order to exert extensive control over users of the software.
Public law is hobbled by a reluctance or even inability to disregard or restrict what are regarded as legitimate private rights powerfully fetishised as property. On the other hand, private law rights of property and contract can be used to create what amounts to private legislation regulating activities and transactions, and legitimising the dominion of the software's owner over its users. Private rights of property are also entrenched constitutionally as first-generation ‘human rights’, more extensively than later-generation social and economic rights. Furthermore, they have now become enshrined at a global level in public international law, through the embedding of intellectual property rights in the WTO by the TRIPS agreement (Sell, 2003), and more widely in the extensive protections of TNC investments and market access rights through trade and investment treaties, backed by supranational private arbitration (Van Harten, 2007).
The more fundamental problem is the lack of alternative legal forms that provide a suitable balance between the collective and the individual, particularly in framing economic activity. Even the company, the central institution of corporate capitalism that institutionalises collective work, is based on shareholding as a form of private property. The corporate form certainly provided a basis for software firms to coordinate the work of programmers in teams, but in the assigned role of employees. Programmers have been amply compensated, and software firms have made extensive use of share-based remuneration, which in effect gives them a share in the firm's capital. However, this ties their incentives to the firm's dependence on finance, particularly from the venture capitalists who have dominated the industry's development, and gives them no formal role in decisions over the design or purposes of the software they help to produce. Alternatives such as employee-owned firms or cooperatives exist, but they remain marginal.
The very adaptability and versatility of private legal forms creates obstacles to any more imaginative rethinking. Opportunities for more radical paradigm shifts occur at times of social crisis, war, or even revolution, but at such times the pressure of rapid change calls for quick solutions, and adaptation of existing forms is seductive. Although creative lawyering can also play a role as part of movements for wider social change, it must be firmly grounded in a grasp of the broader realities based on critical sociology and political economy.
Acknowledgements
I am grateful to Celine Tan and others at Warwick Law School for the opportunity to present an early version of this paper at the Centre for Law, Regulation & Governance of the Global Economy (GLOBE), and for comments and discussion; and to one of the anonymous referees in particular for incisive comments that helped make significant improvements.
Declaration of Conflicting Interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding
The author(s) received no financial support for the research, authorship, and/or publication of this article.
