Abstract
Despite limited progress within international institutions, the need to articulate a regulatory framework for cyber operations in outer space is becoming a pressing concern. One precondition for regulation is a common terminology, shared across cybersecurity and outer space, that can inform the negotiation of standards, policies and laws. While the UN Institute for Disarmament Research has recently issued a baseline policy glossary, binding technical definitions are missing, and the lack of a binding international cybersecurity regime compounds the obsolescence of a binding outer-space regime dating back half a century. As the IEEE SA embarks on the drafting of the first-ever technical standard for cybersecure-by-design outer-space missions, scoping and conceptual challenges abound. Technical standards are US-centred, non-binding, engineering-intensive exercises in which lawyers and Asian jurisdictions are only marginally involved; nevertheless, as China’s framework for cybersecurity is refined and its involvement in outer-space policing deepens, its disengagement from Western-driven standard-setting bodies appears unsustainable. Drawing on the specific challenge of defining what makes a cyber system ‘mission-critical’, I demonstrate the need to examine how domestic cybersecurity laws from a diverse range of States identify ‘critical’ information infrastructure. Generalising therefrom, I advocate a jurisdictionally inclusive process that combines American supremacy in technical standard-setting for outer-space missions with Chinese normative contributions to cybersecurity regulation, including on data localisation and mandatory multilevel cyber-hygiene requirements. I further argue that involving legal experts from a diverse range of jurisdictions and sociolegal cultures may enhance the global reception of standardisation outputs, thus securing higher degrees of voluntary compliance therewith.
This could foster cooperation and promote regional and global satellite cybersecurity.
Introduction
With outer space (OS) gaining momentum as a terrain for both civilian and military activity, its global security becomes increasingly topical. At present, no cybersecurity framework is immediately applicable to outer-space missions, let alone globally; the first embryonic attempts to devise such a framework either remain at the level of high-level overviews or face the hurdle of integrating OS concepts and lexicon with those from cybersecurity. For example, if one needed to decide what makes a cyber system in OS ‘critical’, one could scrutinise ‘criticality’ under OS customs or ‘critical information infrastructure’ (CII) under cybersecurity: both would matter, and the most appropriate synthesis would need to be attained (Ear et al., 2024; Jakobson, 2013; Pecharich et al., 2016). The Institute of Electrical and Electronics Engineers Standards Association (IEEE SA)’s P3349—Space System Cybersecurity Working Group can currently be deemed the most notable effort to reconcile OS and cybersecurity expertise towards the standardisation of secure cyber systems for OS missions. Initiated in Fall 2023 in response to a hybrid governmental-academic call for action issued by United States (US), Australian, United Kingdom (UK) and European Union (EU) researchers and policymakers (Falco et al., 2022), this process is articulated along the traditional classification of OS systems—space segment, user segment and ground segment (Casaril & Galletta, 2014, p. 3; Kodheli et al., 2021, p. 78), plus link segment and integration layer—reflected in subcommittees.
Despite the praiseworthiness of this venture, it might still be perceived as overly technical and not truly global by policymakers and regulators, especially outside the Atlantic. If the latter are not invited to trust the process, the outcome of this standardisation may be technically optimal yet risk being dismissed or sidelined by those who could ‘socialise’ and support its implementation. Without their endorsement, it might not be widely adopted; it might equally fail to persuade lawmakers, arbitrators and judges to integrate it into their laws, arbitral awards and judgements, thereby granting it legal standing as a source of law.
In fact, technical standards in this arena routinely fail to articulate technology as a policy-oriented—as opposed to merely technical—compromise capable of engaging powerful players such as China and of embedding technical solutions within appropriate policy expectations beyond the US-UK-EU axis. In drafting a standard for cyber-secure OS missions, one should refer to cybersecurity frameworks as they have blossomed in recent years across the globe, and should identify key tenets that would make the standard likelier to be accepted in those regions, in potential compliance (or at least non-friction) with said frameworks. A topical case of reference is that of China, for the following reasons: (a) the weight of its space programme (Qisong, 2021) and economy; (b) its mature cybersecurity regulatory framework; (c) its CII focus; and (d) its rejection of—or exclusion from—previous professedly ‘global’ standardisation undertakings. Currently, standardisation in OS affairs (and generally) is primarily aimed at ‘translating’ general policy recommendations and guidelines into technical solutions that can inform contractual agreements between customers and suppliers in the space industry (e.g., Kato et al., 2013, p. 2). Nevertheless, as will be illustrated here, technical standards may also represent powerful enablers and catalysers of regional and global cooperation around security concerns that involve public and private actors alike. For this to materialise, and especially to avert the reverse effect (technical decoupling and insecurity threats), a number of policy-legal conditions beyond technical excellence must be in place.
The IEEE effort will not be pursued in isolation. Rather, it will be placed within a broader regulatory and sociolegal framework from which it will need to draw terminology and boundaries. The next section will offer background context on such a framework, introducing parallel—albeit far less technical—efforts being made across other tables, including United Nations (UN) bodies. It will outline the concerns and aspirations underpinning this IEEE exercise, and the unsuitability of any outer-space regime to provide adequate definitional and substantive support thereto. It will thus illustrate the reasons why the composite regulatory ecosystem on cybersecurity may prove more helpful in feeding lexicon, precedents and real-world scenarios into this IEEE process. The third section will delve into the cybersecurity ecosystem from the often-underrated perspective of technical standardisation. More specifically, it will turn to China and position it as an unwilling outlier, stranded at the periphery of West-driven standardisation outputs owing to the latter’s self-assigned ‘global’ standing and reach. It will debunk the policy neutrality of cybersecurity standards, arguing that they are embedded in sociotechnical cultures rooted in regional and state-promoted strategies for value-laden domination. In light of standards being (sometimes unwittingly) expressive of epistemic networks that far exceed mere technical expertise and that factually ground superiority claims in geopolitical dominance, a case can be made for legal and policy expertise to be factored in from the early stages of the drafting process, so as to secure jurisdictional inclusiveness and trust, thereby increasing regional interoperability—that is, the likelihood of the standard being widely adopted beyond the West.
One implication of recruiting lawyers and policy advisors to secure ‘jurisdictional inclusiveness by design’ is that their terms of reference will stand at the less technical end of the spectrum, making it necessary to situate extremely technical discussions against broader trends, interests and sources. Among the latter, norms and policies from the global to the domestic level cannot be disregarded for contextual reference, even within exceedingly technical works such as this IEEE pursuit. The fourth section will therefore paint an overview of the cybersecurity regulatory landscape from a Chinese perspective, insisting on a few core contributions offered by China towards a shared policing of cyberspace. While not necessarily desirable, these contributions mark a clear posture vis-à-vis cybersecurity regulation that cannot merely be ignored—as is often the case when West-hosted ‘global’ endeavours are embedded in pretentiously neutral technical epistemic communities. Despite the theoretical salience of this discussion, one may still object to its practical relevance: if normative contributions cannot add anything substantial to technical outputs, why refer to them and strive for a jurisdictionally inclusive approach at all? The fifth section dispels this irrelevancy myth by offering a brief case study on the way China’s identification of ‘critical’ information infrastructure within its cybersecurity laws may meaningfully inform the criticality classification of cyber systems, along a scale that draws from cybersecurity experience as opposed to more general guidelines and best practices on ‘mission-critical’ items. Put otherwise, by conceptualising ‘criticality’, this section will seek to demonstrate that lexicon, methodologies and expert insights deriving from normative sources on cybersecurity from ‘beyond the Empire’ may well inform the selection of the most suitable technical solutions to attain cyber-system security in outer-space missions.
The sixth section will summarise the (IEEE-addressed, yet easily generalisable) recommendations that immediately follow from the analysis. The seventh section will provide a conclusion.
In Between the Cyber and Outer Spaces: Preliminary Notes on Regime Deconfliction
Paradoxically, even though cyberspace has lately garnered much deeper (albeit ‘patched’) regulatory engagement from States, we do have treaty definitions of OS and (some) space objects, but no agreement yet on what expressions like ‘cyberspace’ or ‘cyber operation’ stand for. For the sake of the present analysis, cyberspace will be addressed in its infrastructure dimension only, that is, leaving aside the ‘content’ transiting therethrough and stored therein. This means that transmissions are also covered, but only insofar as their technical nature as data signals and deciphering protocols is concerned, without regard for the semantic meaning of the communications.
No generally accepted rule exists as to how existing cyberspace laws and policies apply to cyber operations in OS more specifically, nor have comprehensive proposals to devise new rules been tabled. The tenth chapter of NATO’s so-called ‘Tallinn Manual 2.0’ does try to propose a way for cybersecurity rules to be framed alongside space governance, but it merely stands as a West-intensive scholarly output with neither authoritative status nor jurisdictional representativeness (it in fact excludes contributions from most of the world’s major space economies, especially those in East Asia). Yet, as OS capabilities diversify, geopolitical faultlines deepen (Flynn, 2024) and the space economy grows (Terzi & Nicoli, 2024), common and authoritative regulatory lexicon and framing are urgently needed. ‘Historically, the space industry has relied on security-through-obscurity, but this approach can no longer be tolerated as the industry opens up to new players and technologies’ (Calabrese, 2023, p. i). The concern is that cybersecurity threats—including, for instance, radio frequency interference, jamming, replay, tampering, elevation of service, malware, denial of service and spoofing attacks on satellites—will make the exploration and exploitation of space unsustainable (Cyr et al., 2023; Diro et al., 2023, pp. 6–9; Kavallieratos & Katsikas, 2023, pp. 6–8; Palmroth et al., 2021, pp. 7–8; Shabbir & Sarosh, 2018, pp. 133–135; Zhuo et al., 2021). While these threats might be perceived as speculative or distant in time, they are in fact the ‘natural’ development of half a century of continuous and increasingly ingenious cyberattacks against satellites and space systems more generally (O’Neill, 2022; Pavur, 2021, pp. 28–34).
Public investments—and the rhetoric surrounding them—also evidence enhanced preparedness, with, for example, India establishing both the Defence Cyber Agency and the Defence Space Agency in 2019 to join efforts against satellite-threatening cyberattacks (Lele, 2023, p. 3). This is unsurprising: just like most space technologies, satellites are dual-use communication enablers for both civilian and military purposes (Fleming et al., 2023; Pekkanen et al., 2023, p. 2), and they have always represented a premier intelligence-gathering solution (Bateman, 2023). ‘As a result, many [S]tates are willing to go to extreme lengths to protect, or destroy, important satellites’ (Hammack, 2021, p. 233). Moreover, as traditional, social and collaborative robots (or ‘cobots’) are increasingly deployed for space operations (Bluethmann et al., 2003; Papadopoulos et al., 2021; Scoles, 2018; Freeland & Martin, 2024, p. 11), hacking them might unlock mission-endangering scenarios.
To address the concerns mentioned earlier and sustain derisking, one would first be tempted to consider the OS framework as opposed to the cybersecurity one; as recalled earlier, it builds a system of hard and soft norms, some of which are international treaties: the 1967 Outer Space Treaty, the 1968 Rescue Agreement, the 1972 Liability Convention and the 1975 Registration Convention. Prima facie, reliance on these binding legal sources reads like a promising option, but their potential contribution to securing OS cyber systems would be only rudimentary: first, they are obsolete, having been conceived in the pre-cyber era; second, they are vaguely drafted and suffer from a lack of accuracy and consistency. The latest attempt to negotiate a shared OS security glossary has been the Open-Ended Working Group (OEWG) on Reducing Space Threats Through Norms, Rules and Principles of Responsible Behaviours, sponsored by the UN Office for Disarmament Affairs (UNODA), which hosted its Fourth Session in September 2023. One would be disappointed to learn that, on top of the outcomes’ non-bindingness, diplomatic jargon abounds and concrete progress stalls. From a technical standpoint, though several delegations (including from East Asia) participated proactively, the OEWG’s work can be deemed superficial on cybersecurity governance, as it confined itself to noting well-known shortcomings and threats, including the difficulty of attributing cyberattacks to state and non-state actors (Azcárate Ortega & Lagos Koller, 2023, pp. 32–33), without achieving consensus on any refreshing solution. The appended glossary, redacted by a group of experts and edited by UN Institute for Disarmament Research (UNIDIR) researchers (Azcárate Ortega & Samson, 2023), is noteworthy, but not yet sufficiently technical, nor officially sanctioned by most UN members. This aligns with OS law often proving unserviceable and (notoriously) underdeveloped, especially when it comes to cybersecurity.
As Lucas-Rhimbassen et al. (2021, pp. 118–119) put it, turning ‘space resources into cyber resources [… is] carrying the potential of transforming the space market infrastructure irrespective of international space law’, that is to say, international space law (ISL) is becoming nearly irrelevant. On the public security side more generally, parallel remarks are due: as it stands, ISL is not well-equipped to prevent space weaponisation in general (Khalid, 2021; Saxena, 2023, pp. 147–149; Yan, 2023); even less so will it prevent cyber confrontation in space more specifically.
With the aforementioned in mind, inspecting the cyberspace regulatory ecosystem might prove more fruitful, despite its non-bindingness under public international law—apart from general state responsibility and related obligations, of course. In truth, scrutinising cybersecurity is all the more salient because numerous aspects of interest here will be extrapolated from domestic cybersecurity frameworks as opposed to international ones. Domestic frameworks are domestically binding and exceedingly specific; moreover, they concur in shaping international legal principles, and inspiration can be drawn from them to envision the way some ‘core’ tenets can fulfil the expectations of the majority of States. As we shall see, this matters not merely for normative negotiations but also for shaping technical standards applicable to all. Challenges abound, but they are not insurmountable. The first is to draw inspiration from ‘representative’ cybersecurity frameworks in order to integrate technical and policy aspects into technically sound yet broadly politically acceptable standards. The second is to achieve the first without disregarding the value stances of any geoeconomic district. The third relates to the ‘ontological’ unpredictability of the extent to which patterns of engagement or disengagement in ‘Earth cybersecurity’ are likely to be mirrored in cybersecurity for OS, which extends far beyond the orbital layers so as to encompass operations from, within and to other planets or spacefaring objects. Besides these limitations, and mindful of our objective here (i.e., securing cyber systems for OS missions), cybersecurity frameworks remain more promising than the (binding, yet vague) space ones. Let us investigate how they were shaped through standardisation, to later explore what one could extrapolate from domestic cybersecurity laws and policies with the purpose of feeding into novel and useful global standards—like the one the IEEE is aspiring to draft.
China vis-à-vis the Extractive Ethno-Epistemology of Technical Standard-Drafting
On paper, technical standards are voluntary, non-partisan, industry-driven, trade-supportive, expert-drafted specifications aiming to foster safety and interoperability for new technology solutions within and between States. Despite their private, consensual and self-regulatory nature, they hold strategic, commercial and legal significance as they sharpen regulatory competition, shape research and development, feature in (and ‘define’) standard-essential patents and factually harden into law when public adjudicators or arbitrators decide technology-intensive disputes (Baron et al., 2020; Liao & Parkouda, 2023; Vecellio Segate & Daly, 2023, pp. 26–30). This explains why international standard-setting organisations’ influence is mostly ‘invisible’, yet ranks high among the most pervasive shapers of contemporary societies, markets and norms (Murphy, 2015, p. 443). Scholars (e.g., Vallejo, 2021) are increasingly persuaded that a ‘private administrative law’ based on standards has long been a factual reality that courts and regulators have only recently come to terms with, and whose systemic societal and geoeconomic implications still lie largely unexplored in research and institutional practice alike. While the European Commission is reportedly taking cognizance of this ‘new’ form of authority (Andersdotter & Olejnik, 2021), the issue is severely underestimated or unheard of across most other political, regulatory and especially technical knowledge circuits.
Cybersecurity has long been the subject of intense standard-setting efforts, whose impact has, however, been less global than it could otherwise have been, mostly due to institutionalised extractivism and uncollaborative attitudes. At their core, standardisation agencies encompass little more than secluded groups of élite engineers from the 20 or 30 most ‘highly ranked’ universities worldwide, which happen to be mostly American or, on a declining trend, British; whatever those engineers’ nationality (and citizenship), they will at least have graduated from, or been employed by, a ‘highly reputable’ Western university or corporation, upholding and subscribing to West-centred values and narratives. This unspoken seclusion is rooted in a long history of ‘output legitimacy’ and capacity to attract and retain functional deference, based on expertise, the concrete ability to standardise solutions to technical problems, membership and affiliations, and teleological ties to the Transatlantic Empire (Mumford & Shires, 2023, p. 628, pp. 646–647). Even as neoliberal élites become increasingly transnational and value-neutral (Vecellio Segate, 2022b, pp. 3–30, 609–671, 693–753), their deference to and support for standardisation are arguably embedded in those very same imperial expansionist motives—or at least ‘cognitive habits’—that led to the establishment of standardisation societies in the first place. When the private society called the International Electrotechnical Commission was founded in London in 1906, the ‘[h]armonization of previous divergent standards […] create[d] winners and losers. A functional explanation for the creation of the IEC ignores these distributional implications’ (Büthe, 2010, p. 18). In fact, it was mostly created to achieve interoperability for the industry in practice, which did work (and fairly rapidly!) in that world, as little resistance could be opposed—or alternative proposed—to the Empire’s superior technical capability.
Yet, markets and technology function differently today: one cannot dismiss contributions from (impending or actual) superpowers and great powers and still hope to achieve cross-jurisdictionally interoperable solutions. Along today’s global supply chains, or at least when technical solutions are conceived to apply globally (as is definitely the case with secure cyber systems for outer-space missions), distributional implications should be accounted for to the highest possible extent: solutions should work in practice, but without a priori ignoring inputs from traditionally ‘decentred’ knowledge communities. The cost of excluding them would rest with the non-adoption of the solutions themselves by a significant fraction of state and non-state actors (Lozada, 2021), which is all the more problematic as ‘vendors are more concerned with the actual de facto standardization process than with whether a specification receives additional recognition from [standard bodies]’ (Simcoe, 2009, p. 269). When faced with competing standards, the industry might prefer to abandon standardisation altogether, or pursue it in disregard of public institutions and interests (Cargill & Bolin, 2009, p. 310). On the public side, exclusion would also steer further decoupling of means and intents, which is antithetical to any genuine cybersecurity pursuit—especially regarding ‘the commons’ such as OS. The supposed ‘absolute best’ technology-wise (Sheehan et al., 2021) should leave appropriate room for win-win compromises policy-wise; in view of a seemingly inevitable multipolar future, ‘winners and losers’ no longer seems an affordable outcome for problems of concern to the whole of humanity.
Standardisation agents recurrently reason that, unlike political negotiating tables, standardising efforts are technical in nature, and therefore aimed at accommodating ‘the best and brightest’ inputs alone. This argument is fundamentally prejudiced and deeply flawed. Standardisation exercises are not academic, scientific pursuits; and standards, while hopefully built on solid science and sound expertise, are not scientific outputs. Epistemically, ‘harmonization requires not only technical but also political cooperation, since standards themselves are not direct mirrors of reality but are co-produced responses to technoscientific and political uncertainty’ (Jasanoff, 2013). They are not meant to seek any natural validity, contribute to human knowledge or pursue absolute, abstract ‘perfection’; technical solutions need to work and ‘feel trustworthy’ in practice, which means that they must suffice to solve a technical issue reliably and safely, but also be policy-interoperable enough to be adopted by those who have the authority to adopt, disseminate and enforce them: lawmakers, policymakers and regulators. For this to occur, they must reasonably resonate with those actors’ expectations as to priorities, language, conceptual background, timescales, aims and operative follow-ups. Different aims will substantiate the need for different solutions: aims are never technically neutral. Furthermore, even the very same technical solution can be arrived at through divergent routes, not all of which are equally acceptable to different parties: the same technical outcome may be more or less acceptable to public authorities depending on its legal-ethical justification and policy function within the broader picture, and the way in which those are phrased and conveyed.
Such a way is hardly going to be regulatorily neutral. Indeed, it was argued already 20 years ago (Benoliel, 2004) that technical standards in the cyberspace domain (and arguably across other realms as well) premise themselves upon ‘technological necessity (or even inevitability)’, ‘technical expertise’ and ‘neutrality’ (i.e., a supposedly self-evident impossibility to be or function otherwise) in order, in fact, to take a stance vis-à-vis the traditional regulatory environment. Within a jurisdiction, this might well work to defy domestic laws, opposing the argument of optimal techno-solutionism to the social balance reached through compromise as crystallised into law. Globally, the stance might be to advance the normative and economic interests of the Empire, especially in Internet governance and the telecommunications sector broadly defined (Chimni, 2004, p. 15; Schiller, 2023, pp. 536–537, p. 560; Shulman, 2021, p. 338; Tang, 2020, pp. 4559–4561; Thussu, 2022, p. 1585), surreptitiously backing its worldviews via recourse to its lexicon, methodologies and cumulated experience. Somewhat paradoxically, technical standards need to be formulated via far more inclusive and ‘democratic’ procedures than scientific pursuits like a physics experiment. On a domestic level, inclusivity translates into accounting for different societal orientations and values. Internationally, it equates to incorporating the stances (i.e., preferences and priorities) of as many States as possible, especially those with high stakes in the future of the relevant policy area.
If laws are to be referenced, or at least taken due note of, then public lawyers and specialists in law and technology should be involved ‘by design’, from an early stage, and especially when definitional and scoping preliminaries are set out. Their involvement will certainly cause delays and short-term frictions, yet it will prove beneficial in the long run, helping ensure that: (a) language is applied consistently and ‘geopolitically tactfully’; (b) the output appears ‘readable’ to policy and regulatory professionals as well, as opposed to just engineers and computer scientists; and (c) courts and arbitral tribunals are encouraged to more readily ‘harden’ the final standard into law, via professional affinity and lexical accuracy, with, for example, product liability and risk management schemes already forecast.
At present, in the US and elsewhere, lawyers have no meaningful exposure to (let alone training in) technical standards, until the latter are litigated in court to substantiate safety, intellectual-property, criminal-liability, international-trade or other types of submissions (Coglianese, 2023, p. 19). By ‘lawyers’ I primarily mean socio-legal researchers, advisors and similar professionals (all the better if endowed with a genuinely interdisciplinary background), not traditional practising attorneys. Standards are drafted without lawyers, who are only briefly involved—if ever—at the final stages, when the conceptual architecture of the standard can no longer be amended. The outcome will be technically authoritative, and the industry might still enthusiastically rely thereupon at first, only to discover later that its regulatory implications should have been thought of, too. Of course, fine expertise aside, the lawyers joining standardisation bodies should be chosen in harmony with those very same principles of jurisdictional inclusion, open-mindedness and personal integrity that this article identifies for selecting engineers. Despite deeper ethics training and closer ethical scrutiny from their professional regulatory authorities (Pace, 1994), enforcement is not necessarily comprehensive (Nicolson & Webb, 2020, pp. 84–122), so that lawyers too are prone to regulatory capture, imperialistic demands, the pursuit of personal interests, rent-seeking and unethical behaviour (Adams, 2020, pp. 989–990). Hence, just like engineers, lawyers should represent a wide variety of legal systems and educational backgrounds, with zero tolerance for conflicts of interest—though the latter are admittedly controversial to define in practice.
Diversity should be substantial as opposed to formalistically based on citizenship (for instance, one could hold a Chinese passport but have studied at Princeton, speak poor Mandarin, have a family in California and have served US companies for decades). Functional cooperation and mutual trust between lawyers and engineers from an ethical perspective are fundamental: ‘[e]thics tells practitioners of unacceptable outcomes, but it does not guide them in what they need to do to avoid that outcome in practice’ (Maslen et al., 2021, p. 46).
If legal concepts do not inform (at least indirectly) a standard’s drafting, implementation uncertainty will arise on the regulatory side as soon as the technical solution is assembled into a marketable device. Instead, early involvement will provide context for solutions that matter in practice and that States would be more likely to endorse; trust from the authorities will be enhanced, and the policy interoperability of the standard across jurisdictions will be made quicker, safer and smoother. This might read as if implications lie entirely on the public side; a closer-to-reality take is that industries are, in fact, extremely sensitive to (and, to varying degrees, dependent on) this ‘business diplomacy’ background: policy-unready standards will make them unready to accept standards of different ‘geopolitical derivation’. One reason for their alertness is that under most varieties of ‘capitalism’, private companies are increasingly tied to ‘their’ States; this is immediately evident with, for example, China’s state-owned enterprises, but it stands equally true in, for example, the US, owing to revolving doors and regulatory capture (Vecellio Segate, 2022b, pp. 3–30). Standards tend to emphasise the primacy of technical ‘excellence’ all the more so to support America’s private sector, which is increasingly tied to public players in the space economy.
The aforementioned follows linearly from the rationale of States lending ‘recognition’ to standardisation in the first place. In other words, why should States (perhaps other than the neoimperial ones) concede such a degree of policy room to meta-rules arranged by private agents? One answer is that standardisation has long been resorted to as a regulatory expression of geopolitical protectionism, most recently through the World Trade Organisation’s Agreement on Technical Barriers to Trade (TBT Agreement), alongside the related Agreement on the Application of Sanitary and Phytosanitary Measures (SPS Agreement). Strictly trade-wise, Du (2018, p. 884) has contended that ‘there is no evidence that private standards are motivated by protectionism’, but they definitely are protectionist measures competition-wise, and even more obviously as a tool to exert geopolitical pressure and exercise (or contest, or resist) hegemony.
US scholars and policymakers have long invested great amounts of intellectual and political capital in the self-serving construction of China as an unmistakable and uncompromising threat (Ruiz Casado, 2024), not least in space affairs. More specifically, since US Administrations have long articulated their discomfort at China potentially outpacing America in key areas of space exploration and prevailing over the US in the wider race for space-governance rule-setting (Bowe, 2019; Chapman, 2016), the ‘misappropriation’ of technical arguments to advance (or retain) geopolitical priority (Hogan & Newton, 2015) could be expected. In fact, Pace (2023, p. 9) has already noted that US mistrust of China on cybersecurity is likely to spill over into poor cooperation on outer-space dossiers. While most IR scholars (and especially constructivists) submit that said spillover is due to perceptional and discursive rituals of statecraft, technical standardisation provides insights into the way technical arguments of superior expertise might replicate non-cooperation and disengagement from one policy area to another via technical resemblance and commonality—for example, here via the embedding of general cybersecurity assumptions and practices into standard-drafting for outer-space missions. In a recent Report to Congress, the US Office of the Secretary of Defense (2020, p. 147) expressed its preoccupation at China granting some of its major technology corporations ‘the lead for setting national technical standards and enabl[ing] extensive cooperation with China’s national security community’. This is precisely what happens when technical standardisation processes are captured by strategic interests (and agencies) as opposed to seeking solutions that can work on a multipolar chessboard—something that China has learnt from US standardisation modes and rituals.
Standard-drafting’s geopolitical non-inclusiveness has long failed Chinese experts and disfavoured Chinese viewpoints (and industries). Pushing China to the edges of standardisation is the official policy of the US and its European allies. As Wouters (2023, pp. 66–67) articulates,
Western governments, including the European Union (EU) and its Member States, have rallied to counter the Chinese offensive. While their actions are officially inspired by concerns for the protection of personal data and the privacy of individuals – in other words, by human rights – there are other matters at stake, including the question of corporate influence in public standard-setting bodies, the coherence with existing standards, and, last but not least[, …] the preservation of Western normative dominance.
For decades, China has been pushed to the periphery of such exercises, to such an extent that it later turned to isolationist self-confinement and only most recently to selective disengagement and equally selective strategic alterity. Up until recently, ‘Chinese ICT firms [… have had to] absorb the costs associated with standardization and compliance, while having very little say in the creation of those standards’ (Mavroidis & Wolfe, 2017, p. 7). Under the ‘Standards 2035’ programme and related flagship initiatives, and building inter alia on the global leadership Huawei has attained across a remarkable portfolio of digital technologies, Chinese leaders are drastically changing course (Chávez Mazuelos, 2022, p. 45; Nanni, 2024, p. 129). No matter how superficial or formalistic their rejection of Western ‘superior expertise’ is, they no longer accept technical subordination and are pressing for decoupling from Western aerospace and (cyber)defence technology. Perhaps more surprisingly, they are already working to have other States follow suit and informally upgrade China’s status from a normative follower to a (selective) normative leader, at least within a regional (East Asia) or developmental (‘developing countries’) basin. While behaving as a persistent outlier to the dominant standard-setters, China is investing massive political and financial capital into building its own (no less imperialistically minded—Li, 2019, p. 2) alternative systems (and indeed ‘state followers’, especially from the so-called ‘Global South’), along a complex matrix of South-South exchanges and relationships (Garlick & Qin, 2024; Heeks et al., 2024; Majerowicz & de Carvalho, 2024). Depending on the field, China is either still trying to pierce the curtains of dominance by aligning its ‘working modes’ with the (Western) host technical élite, or it has already renounced that pursuit in favour of its own alternative.
Its pursuit of an alternative is facilitated by a wide portfolio of domestic actions, ‘from formal coordination mechanisms in strategic sectors, such as wireless mobile standardisation, to financial incentives, such as subsidies and stipends, or informal standardisation guidance from party-state officials to private-led standardisation organs’ (Rühlig, 2023b, p. 104). Ownership restructuring and hybrid alliances are being experimented with, too (Qiu, 2023, p. 212). This action portfolio confirms that China has learnt to ‘play the game’ by the same Western rules, and that it is now ready to project its statecraft through technical standard-setting, across any sector that is not yet (or only weakly) dominated by West-backed standard-setting agencies (Rühlig, 2023a). This might sound comforting to Chinese corporations (especially those relying on internal demand rather than global capital channels), but it will not favourably serve common global agreement around key technical challenges. To address those, only mutual trust and regulatory rapprochement will uncoil the current ‘soft confrontation’ and finally unlock shared progress.
There are several policy areas related to the OS, like OS debris and traffic management, where the lack of globally agreed technical standards has long been deemed to increase the likelihood of international incidents, including between US and Chinese satellites (Frandsen, 2022, p. 237; Sgobba, 2022). As the cybersecurity of space systems ranks higher and higher among the top security concerns, this lacuna will most likely cause inter-State cyber-incidents in space. Hence, if the aim is cyber-securing OS missions, no alternative to cooperating with China exists; this persistent—yet essential—outlier must be re-engaged, as there are compelling reasons to believe that a balanced valuing of its contribution would make the sought standard more widely adoptable around the planet and thus ultimately more useful. In the realm of outer-space cybersecurity, it is plausibly not too late to avert technical standardisation decoupling. While precisely the IEEE is listed among the ‘transatlantic’ standardisation institutes to which China seeks to provide a strategic alternative (Rühlig, 2023c, pp. 21–22), Chinese leaders have hardened their stance in response to specific protectionist moves from the EU—related to, for example, the Huawei dossier (Perarnaud & Rossi, 2023, p. 16)—that are far removed from satellite cybersecurity per se. Outer-space policing warrants—and might indeed witness—more relaxed alternative-seeking, and thus deeper scope for collaboration. Nonetheless, collaboration will not happen in a vacuum; mobilising and channelling trust is time-consuming and politically costly in the short run. If involving lawyers and crediting weight to the ‘policy context’ is of the essence, then one way to re-engage China is to study its strategy and legal documents on the matter to be standardised.
China’s Contribution to Cybersecurity Norms
China is a notably assertive power in global cybersecurity negotiations, with a distinct identity along dichotomous lines that can be contrasted with ‘Western’ stances. Together with Russia, and often through the Shanghai Cooperation Organisation (SCO), of which both States are members, it advocates for a binding convention on responsible state behaviour in cyberspace. What is remarkable here is that the very same Sino-Russian alliance is shaping UN negotiations on space security, with Russia and China demanding a legally binding instrument (Azcárate Ortega & Lagos Koller, 2023, p. 23): the same preference they expressed while negotiating international cybersecurity norms. Variable geometries showcase the posture of the Sino-Russian alliance across proliferating organisations and policy platforms (like the BRICS and the G20), meaning that their stances do vary slightly depending on the negotiating table; this makes the identification of opinio iuris (Vecellio Segate, 2019, p. 89), which is useful to assess these jurisdictions’ legal position on relevant dossiers, less straightforward.
Nevertheless, a number of central tenets can be discerned. Most prominently, China has invested considerable political capital in emphasising territorial ‘sovereignty’ over internet infrastructure (Claessen, 2020, p. 148; Fung, 2022; Vecellio Segate, 2019, p. 101). Yet sovereignty is almost alien to the current normative architecture on OS, where traditional state-centred legal concepts such as authority and enforcement are replaced by soft obligations on shared responsibility for mankind (Pace, 2023, p. 4). This embodies yet another misalignment between cybersecurity and space governance, and between the legal vocabularies available to States for addressing each within international institutions. Another notable stance, of special relevance for cyberterrorism, relates to the internal dimension of terrorism, which tends to emphasise rebellion, secessionism and separatism over external sources of political terror.
How useful is the aforementioned with respect to the aim at hand, that is, the assurance of system cybersecurity in OS? Fairly limited. This is because a three-fold conceptual leap applies: first, one shall assess the extent to which definitions and doctrines from international regimes like international humanitarian law or international security law shape domestic legal frameworks; second, one shall apply those general concepts to cyberspace, turning, for example, warfare, terrorism or espionage into cyberwarfare, cyberterrorism and cyberespionage; at this juncture, as the third step, one should further adjust those cybernorms to the specificity of OS missions, personnel, devices and systems. The normative stretch is so wide that it seems at best contentious, on this basis, to predict anything of service to the drafting of a technical standard—if not contextually, at a very high level of generality. Turning directly to domestic law might thus hold more promise.
Domestically, starting with the Cybersecurity Law in 2017, China has enthusiastically issued more than forty legislative and executive documents on and around cybersecurity, pursuing the establishment of a coherent, integrated, securitised legal framework alongside trade secrets protection (Vecellio Segate, 2020a) as well as data protection (Creemers, 2022). In light of the whole framework having already been extensively illustrated in scholarship (e.g., Huang, 2022), there is no need to recapitulate it here; I will just mention two among the most recent moves: the Draft Regulation on Standardising and Promoting Cross-Border Data Flows, released in September 2023, and the so-called Twenty Articles on the Data System, released in December 2022. On the whole, these documents tend to insist on cybersecurity’s developmental rationale (Bui & Lee, 2022, p. 655, pp. 673–674; Cheung, 2018), along with the involvement of a plethora of technical standardisation agencies. Standardisation is indeed considered a capacity-building strategy for developing economies (Calderaro & Craig, 2020, pp. 920–926; Collett, 2021, p. 305; Hurel, 2022, p. 73)—though one shall acknowledge that China no longer falls squarely into that category.
As a whole, the spectrum of China’s cybersecurity norms, standards, implementing measures, principles, judicial interpretations, constitutional provisions and statutes does provide crucial information to be fed into the IEEE SA standardisation process being described here. What inferences can be drawn for a potential Chinese contribution to cyber-safe OS missions? I have selected three essential elements to report here. First, China has introduced a ‘horizontal’ conception of privacy whereby citizens are entitled to protect their privacy rights against violations by fellow citizens or corporations, but not against public authorities, which are authorised to warrantlessly access any piece of information about any citizen or enterprise (Vecellio Segate, 2022a). As Western conceptions of privacy chiefly revolve around protection against ‘vertical’ interference by the State, horizontal privacy is often dismissed by Western commentators, when in fact it can work fairly effectively in certain societies.
Second, and quite originally, China has mandated binding incident-reporting and cyber-hygiene obligations for all corporations storing, transferring or processing data relating to Chinese entities or individuals (Gong, 2024; Ning & Wu, 2024). In this respect, the obligation to share information about impending threats is worth noting. Empirical studies (e.g., Chang & Huang, 2023, p. 9) have confirmed that organisations are unlikely to share information on cybersecurity threats with each other and with public authorities unless strictly bound to do so. In the West, information-sharing had already been gaining ground across regulated sectors (most notably energy, finance, transportation and healthcare), but not beyond them. As for routine cyber-hygiene, the range of physical and digital measures is wide (Vecellio Segate, 2020b, pp. 125–126), but the novelty rests with its bindingness, at different degrees but for all corporations—as opposed to more permissive (and largely self-certified) compliance targets. This is essential in the space economy, where most private actors are becoming so wealthy and powerful that they operate under the assumption that regulatory reach can (or even should) be eluded. With reference to the American landscape itself, Shadbol (2021, p. 18) lamented that
Like NASA[,] the private space asset industry is currently improving its security, but […] it is impossible to evaluate many private sector companies wh[ich] are not transparent regarding their cybersecurity efforts. SpaceX, Virgin Galactic[,] or other space asset developers, owners and operators do not make their technology readily available for security researchers to test.
The third and probably most relevant original contribution by China to domestic modes of regulating cybersecurity is concerned with CII. This concept flows naturally from cyber-hygiene being demanded of all corporations to different degrees: the more demanding requirements will be placed on the infrastructure that is vital to the State’s informational survival. To that end, light cyber-hygiene requirements can scale up to stricter emergency preparedness; of interest here, some of these enhanced measures may entail the CII’s disconnection from clouds or other internet-based networks (Varadharajan & Suri, 2023, pp. 3–4), the pre-emption of data fusion from less vetted informational entry points, the allocation of more technical staff, but also tool-workload recalibration, access-privilege auditing and reassignment, anomaly and intrusion detection, and pre-emptive threat hunting and neutralisation. One example of an original Chinese contribution to the rather complex ‘infrastructure criticality’ debate—somehow ‘decentring’ it—is a multilevel protection scheme whereby all operators are routinely audited (Creemers, 2023a, pp. 119–122), jointly with their procurement chain (Qi et al., 2018, pp. 1347–1350). In China, the robustness of the system shall be coupled with the elaboration of (equally multilevel) response plans, as well as with staff reliability, loyalty and immaculate moral standing (Creemers, 2023b, pp. 11–12)—which neatly captures the centrality of the ‘human factor’ for cyber-system security and shields such systems from corporate moral hazard (Tijerina, 2022, p. 200). That all operators are audited is key to our discussion here, as a cyber protection scheme that overfocuses on criticality levels and critical infrastructure is likely to prove unfit for purpose.
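By way of illustration only, the cumulative logic of such a multilevel scheme—baseline cyber-hygiene for every audited operator, with obligations stacking up as the assessed protection level rises—can be sketched as follows. The level thresholds and measure names here are hypothetical, invented for exposition rather than drawn from any Chinese statute or implementing measure.

```python
# Toy sketch of a multilevel protection scheme: every operator carries
# baseline obligations, and higher assessed levels add stricter measures.
# All level numbers and measure names are hypothetical illustrations.

BASELINE = {"patching", "incident_reporting", "staff_training"}

ESCALATING = {
    2: {"procurement_chain_audit"},
    3: {"access_privilege_auditing", "anomaly_detection"},
    4: {"multilevel_response_plan", "staff_vetting", "intrusion_detection"},
    5: {"network_disconnection_option", "preemptive_threat_hunting"},
}

def required_measures(level: int) -> set:
    """Cumulative obligations for a protection level (1 = lightest, 5 = strictest)."""
    if not 1 <= level <= 5:
        raise ValueError("protection level must be between 1 and 5")
    measures = set(BASELINE)
    for threshold, extra in ESCALATING.items():
        if level >= threshold:
            measures |= extra
    return measures
```

The point the sketch makes is structural: no operator escapes auditing altogether; only the stringency of the audited obligations varies with the assessed level.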
Cyberattacks, even more than ‘traditional’ attacks, rarely spare significant collateral damage: while adding security layers to protect critical system functions and components, it is wise to stop threats from propagating from initially unintended targets (Backman, 2023; Casaril & Galletta, 2014, p. 6), which are likely to fall within the ‘non-critical’ infrastructure. Adversarial attacks also feature lateral movement across originally unintended system components, and are of concern for flight-software security (Curbo & Falco, 2023, pp. 73–74)—and for large satellite constellations (Manulis et al., 2021), whose satellites ‘are not only interdependent but also similar in their [cyber] vulnerabilities’ (Blount & Cesari Zarkan, 2023, p. 5). If we frame criticality scoring as a form of risk assessment, ‘urgent or special measures [are not the exclusive preoccupation]. Instead, riskification focuses on long-term strategies directed towards building the resilience of referent objects’ (Siudak, 2022, p. 328).
Lastly, criticality-dependent data-localisation requirements, too, have been gaining ground, starting from the Chinese experience and later spreading to Russia, the EU, Brazil, India and so forth—but not the US or the UK (Tang, 2022, p. 2401). These requirements are essential in the current geography of commercial and strategic competition over data centres, where Euro-American and (South-) East Asian (including Chinese, but also, e.g., Singaporean) actors endeavour to consolidate (or challenge) global dominance by asserting jurisdiction over extraterritorial data flows (Rossiter, 2017; Vecellio Segate, 2024, p. 20).
In technical terms, labelling a cyber system ‘critical’ equates to certifying that: a) it represents an ‘attack surface’, within the ‘subsystem’ of a system ‘segment’ (Bradbury et al., 2020); b) the capability of attacking such a surface exists or is likely to emerge soon; and c) a successful attack would severely compromise the mission or dangerously spill over onto society—or even humanity overall. When Chinese domestic technical standards contribute to defining what information infrastructure is ‘critical’, and foreign companies interact with Chinese counterparts through such infrastructure, domestic standards factually exhibit extraterritorial effects. Indeed, ‘because multinational companies often choose to comply with the strictest technical standard’ (Rühlig, 2024, p. 107), technical dominance generates conformity and lowers adaptation costs for the companies incorporated in the jurisdiction where the ‘most persuasive’ standard-setter is located.
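The three-pronged test just outlined lends itself to a minimal formalisation. The sketch below is merely illustrative—the class and field names are my own placeholders, not IEEE or statutory terminology—but it makes explicit that the ‘critical’ label attaches only when all three conditions hold conjunctively.

```python
# Minimal formalisation of the three-pronged criticality test (a)-(c).
# Names are illustrative placeholders, not standardised terminology.
from dataclasses import dataclass

@dataclass
class CyberSystemAssessment:
    attack_surface: bool     # (a) the system is an attack surface within a segment's subsystem
    capability_exists: bool  # (b) a capability to attack it exists or is likely to emerge soon
    severe_impact: bool      # (c) a successful attack would compromise the mission or spill over onto society

def is_critical(a: CyberSystemAssessment) -> bool:
    # The label attaches only if all three prongs are satisfied.
    return a.attack_surface and a.capability_exists and a.severe_impact
```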
To be sure, not all Chinese contributions outlined in this section are necessarily desirable outside China. Nonetheless, they do contribute to cybersecurity policing in several directions, most prominently by decentring criticality within a multilevel protection scheme built on progressive, binding and data-localisation-premised cyber-hygiene requirements. The novelty of China’s contribution is seldom given much credit, yet it deserves sounder acknowledgment, to be translated into closer inspection of its stances towards cyber-secure systems across a variety of domains. Transplanting the ‘Chinese way’ elsewhere, or drawing inspiration therefrom for global cybersecurity standards, might not always be desirable; but it is an option to genuinely and open-mindedly consider, as dialogically as possible.
Case-Study: Mission-Critical Systems Versus CII
In aeronautics and astronautics, any mission corresponds to a criticality level that, depending on the specific definition within each institution or corporation, quantifies system vulnerability, failure-risk likelihood and potential disaster magnitude. Hence, criticality stands as a core variable in mission security management, including on the cyber side. Indeed, criticality is central to business continuity and disaster recovery, as well as to strategising on funding allocation across mission areas. No matter how it is defined, the end-goal is to ensure mission survival, even when key assets or functions have been attacked or disabled.
To account for the criticality of a cyber system in OS missions, two approaches can be followed. The first is the business-as-usual approach, drawing on (usually broad, in the range of three to five) security-informed mission classes, where pre-eminent consideration is credited to the criticality of the mission as a whole, with the criticality of the cyber system deriving therefrom (Zatti, 2020). Documents abiding by this logic provide rough indications that are to be interpreted on a case-by-case basis by system designers/programmers and/or final users. The second approach is to devise a cyber-tailored criticality scale that accounts for the specificity of vulnerability, risk and failure in the cyber domain. Both approaches have merit, but if a technical standard on cyber systems is to be drafted, then the second route is perhaps more advisable and far more likely to be embraced by the standardising agency. It is precisely at this juncture that China’s domestic cybersecurity framework creeps in: as introduced in the preceding section, one of the most detailed and innovative legislative undertakings with regard to cyber criticality is embedded in China’s Cybersecurity Law and its follow-up regulations, specifications and implementing measures.
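The contrast between the two approaches can be sketched schematically. In this hedged toy model, the mission classes, thresholds and multiplicative scoring rule are all invented for illustration; an actual standard would have to define the inputs and cut-offs normatively.

```python
# Toy contrast between the two approaches to cyber-system criticality.
# Classes, thresholds and the scoring rule are invented for illustration only.

# Approach 1: criticality inherited wholesale from the mission class.
MISSION_CLASS_TO_CRITICALITY = {"A": "high", "B": "high", "C": "medium", "D": "low"}

def mission_derived(mission_class: str) -> str:
    return MISSION_CLASS_TO_CRITICALITY[mission_class]

# Approach 2: a cyber-tailored scale scoring the system itself, using the
# three variables of vulnerability, failure-risk likelihood and disaster
# magnitude, each normalised to [0, 1].
def cyber_tailored(vulnerability: float, likelihood: float, magnitude: float) -> str:
    score = vulnerability * likelihood * magnitude  # simple multiplicative risk model
    if score >= 0.5:
        return "high"
    if score >= 0.1:
        return "medium"
    return "low"
```

A highly exposed cyber system aboard a low-class mission may still score ‘high’ on its own merits—precisely the kind of system-specific exposure that a purely mission-derived label would obscure.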
Turning to domestic cybersecurity sources for inspiration is a legitimate and fruitful choice (Vettorel, 2024). As reported supra, the OS framework is too superficial to provide meaningful insights; and where detailed guidance exists, it is drafted by reputable US institutions such as the National Aeronautics and Space Administration (NASA) or the National Institute of Standards and Technology (NIST)—for example, NIST SP 800-160 Vol. 2 Rev. 1, NIST IR 8270, or NIST IR 8401/8441—but follows a mission-centred as opposed to cyber-tailored approach, which makes it a second-best source for the purpose at hand. In this sense, identifying what makes a cyber system ‘critical’ for OS missions provides the cardinal illustration of the desirability of integrating cybersecurity laws into OS governance, achieving regime deconfliction and finding a ‘reference compromise’ among the most developed domestic frameworks around the world. The golden rule is ‘synthesis in diversity’: while this article zooms in on China, other frameworks should be equally considered from all regions, depending on their overall maturity stage and thus their ability to supply quality inputs. This relatively elementary finding is momentous, as it confirms that, OS frameworks being unhelpful, any international arrangement on the matter should seek a compromise between domestic regulatory frameworks on cybersecurity. No analysis had ever proposed this term of reference and justified its policy desirability, especially towards technical standardisation.
Besides the OS framework’s superficiality, yet another reason exists for referring to cybersecurity definitions. With regard to antennas, rovers, space vessels and satellites, there are components and functions that are inherently ‘cyber-intensive’, so to speak (Tedeschi et al., 2022). Receivers and transmitters for communications to Earth (or otherwise planetary) ground terminals inherently rely on what most States would encompass within their CII scope, including segments of the cabled Internet over which they exercise territorial or prescriptive jurisdiction. Other signals and transmissions bearing effects on ground systems and personnel are likely to fall within the same category. Yet focusing on CII cybersecurity-wise helps shift the frame from mission-confined risks to potential spillovers onto the whole of humanity, which could indeed be caused by cyber catastrophes executed from—or otherwise involving—OS systems. Indeed, retrieving ‘criticality’ levels from mission classifications alone confines them to their relative importance to the mission, while obscuring the wider impact of failing systems on society. Instead, such impact lies right at the core of criticality considerations as retrievable from cybersecurity regulation, where how ‘critical’ an information system is deemed to be also depends on the potential impact of its failure on societal cohesion and stability, and ultimately regime survival. In sum, no doubt exists that scoring system criticality—as opposed to mission criticality—when assessing potential cyber threats helps ensure that, while regulatory objects are moving targets (a system may enhance or degrade its criticality over time), policy objectives are more consistently pursued.
Although satellites are already considered critical infrastructure within most statutes and policy guidelines, including in the US, the EU and by international organisations like NATO or the OECD (Martin, 2023, pp. 6–10), this is not necessarily true for space systems as a whole (Cilluffo & Montgomery, 2023; Pavur, 2021, pp. 27, 50). In any case, it is not so much a matter of deciding what is critical or not; the question rather rests with the implications of such labelling: with how to treat critical infrastructure like satellites policy-wise, that is, what cybersecurity requirements to enforce (and on which actors exactly) to achieve their enhanced protection. If the aim (enhanced protection) is not clearly articulated and its precise boundaries are not made acceptable to all relevant stakeholders (including public authorities from different jurisdictions), then no technical solution can be granularly envisioned, and no end-result can be attained. Definitory, scoping and operational practices on CII under domestic cybersecurity laws are inextricably linked to technical solutions for securing the cyber component of OS systems and the transmissions they enable. And while the divide between Western and non-Western mission-confined criticality scales would be insurmountable because of NASA’s still unchallenged leadership, cyber-tailored scales allow ampler room for jurisdictional inclusivity, in that cybersecurity legislation is catching up quickly across States from all regions, and is already showcasing substantial diversity in sociocultural premises.
Finally, it seems worth underlining that the present case-study was intended as a mere exemplification; in fact, instances of regulatory takes from China worthy of consideration for the IEEE effort scrutinised here (and similar undertakings) are potentially countless. Another salient case-study would be that China’s emphasis on internal as opposed to external terrorism could make a stronger case for emphasising the ‘human factor’, up to, for example, assessing taxonomies of potential attack motivations, and thus countering social-engineering techniques (not least via mission-information segmentation and the design of specific penetration tests emulating adversary identity misappropriation/substitution). Cybersecurity-wise, incorporating human factors into risk assessments is considered essential for the cyber system’s components which are more directly exposed to the end-user (Morgan et al., 2020; Pollini et al., 2022; Rohan et al., 2023)—the IEEE indeed categorises those components under the ‘user segment’. Since 9/11, and even prior to that, all US administrations have consistently framed the cyber protection of satellites—just like countless other policy activities—as a counterterrorism objective, most often as part of America’s ‘War on Terror’ (Buenneke, 2004, pp. 240–251). China’s cybersecurity focus on endogenous as opposed to exogenous terrorist threats could help add nuance and balance to the US narrative. An in-depth discussion of this aspect (and related caveats) is, however, deferred to another paper.
Policy Recommendations
‘China’s space cooperation is disproportionally underdeveloped in comparison with the degree of advancement of its capability’ (Wu, 2023, p. 1); closer engagement would be beneficial to all, and it partly depends on mutual assurance of missions’ safety, not least from a cyber perspective. In fact, China has repeatedly stated that ‘[c]ountries that are major players in outer space should take up primary responsibility for safeguarding peace and security in outer space’ (Ministry of Foreign Affairs of the People’s Republic of China, 2023). Admittedly, opportunities for dialogue and engagement—the only open window onto technical standards that can serve the whole of humanity and thus secure the cyber-safety of OS missions—are eroding fast. Scholars portray the US-China competition for centrality in digital-infrastructure control as a ‘Second Cold War’, and they place technical standardisation processes right at the core of this neomilitarised rhetoric and practice (Schindler et al., 2023, pp. 12–13). As far as OS operations are concerned, however, residual room for global solutions is still available.
The IEEE SA drafting process is of resounding importance and worth persevering with. However, it also bears the implementability and ‘trustworthiness’ limitations of most similar contemporary processes, in terms of ‘policy interoperability’ across jurisdictions as well as ‘epistemic’ and somehow ‘collegial’ standing before the law. This article hopefully clarified that it is difficult, yet not impossible, to improve on these aspects. After all, as unexpected as it might sound to some, Chinese and Western engineers are already proficiently cooperating within international standard-setting bodies—not least the IEEE itself—towards, for example, de-risking AI applications (Cantero Gamito, 2023; von Ingersleben-Seip, 2023, pp. 798–801).
What seems further essential to emphasise is that this hopeful call for ‘re-engagement’ is confined to technical standardisation, especially vis-à-vis the specific IEEE effort illustrated here. When it comes to broader cyber governance at the multilateral level, the ‘liberal’ West’s default mistrust of any Sino-Russian proposal might have already provoked an irreconcilable fracture. US scholars like Raymond & Sherman (2023, p. 17) have branded the SCO as acting ‘as an incubator and as a vector of authoritarian multilateralism’, and similarly uncompromising comments are frequently found in US-published (cyber)security studies; in light of the US’s structural hypocrisy and double standards in global (cyber)security policing (Chen & Yang, 2022, p. 51; Katagiri, 2021, p. 8), a more conciliatory and constructive tone would be advisable, but the trend is worsening. Persevering in singling out jurisdictions such as China (and Russia), whose cooperation is objectively essential to securing a safe cyberspace and OS alike, appears myopic and dysfunctional; confrontational grammars couple with counterproductive outcomes even across corporate ecosystems, insofar as the Sino-Russian bloc is as active as the Western one in engaging with private actors and fostering multi-stakeholderism in public international fora and negotiating platforms (Douzet & Gery, 2021, p. 109). While compromises are always challenging when dealing with technological trade secrets and especially state secrets, the co-drafting of technical standards can prove a springboard towards mistrust de-escalation and win-win capability enhancement in areas of common interest such as space exploration for scientific purposes. This holds especially true if the newly gained trust capital is then consolidated via joint training sessions and the collaborative deployment of cyber-safe spacecraft and personnel that effectively apply those shared standards.
For the sake of simplicity, the discussion above has pointed to the US and China, or ‘the West’ and ‘the rest’; this choice has inevitably reiterated the obsolete dichotomous thinking that we should all try to dispel. One could elaborate on India as a ‘swing’ cyber power (Barrinha & Turner, 2023, pp. 16–17; Vecellio Segate, 2019), or on Gulf countries that, too, will soon become powerful actors in the race for OS domination; but the takeaway point from this admittedly concise examination was to emphasise the salience of achieving an inclusionary process, by combining expertise and sharing concepts and terminology from both the cyberspace and OS regulatory frameworks. To achieve this, legal experts should be included as early as possible, so as to ensure that the outcome proves as trustworthy and adoptable as possible for as many actors as possible. As the world heads towards multipolarity, technical quality must be coupled with wide state subscription: both are important variables in defining the impact of a standardisation process and its final output (that is, the technical standard itself).
Conclusion
Western powers could advance fair arguments that China, like Russia (Cooney, 2024; Schreiber, 2022; Strobel, 2024), tends to advocate for shared security solutions within international fora, only to then act itself as a primary source of destabilisation and insecurity. Nevertheless, this does not seem an adequate reason to refrain from seeking a more participatory attitude and mutual understanding, at least on the standardisation side, which is supposed to serve all parties equally, to the overall benefit of humanity’s safety and progress. Exclusion provides fertile pretexts for alternative-seeking, as well as alibis to deepen confrontational expressions of competitive behaviour—first rhetorically (belligerent propaganda), then through alternative lawmaking and policymaking, and eventually militarily. The Artemis Accords between the US and several other governments are a recent exemplification of the consequences of trying to isolate China on the international stage, even when it comes to ‘soft’ or ‘informal’ space-law agreements. As reported in Reed (2024), Harvard law lecturer Memme Onwudiwe has emphasised how
[B]ecause the Artemis Accords are being done bilaterally, and because there are laws, for example, preventing China from being directly connected to NASA, even if China wanted to join the Accords, they couldn’t. (The Wolf Amendment, which Congress passed in 2011, bars NASA from directly working with China or Chinese affiliates.) So, Russia and China have basically articulated their own agreement to create their own research facility in space that they’re going to call the International Lunar Research Station. And like the Artemis Accords, they’re opening that up to other governments.
Instead, involving China (and other often-excluded powers) in purportedly global rule-making (technical standard-setting, in this case) makes it more likely that they will prefer to abide by rules they themselves contributed to shaping, rather than persuading (or coercing) other governments into their alternative sets of priorities, strategies and ambitions. On a related note, the People’s Liberation Army 61486 and 61398 units, allegedly in charge of conducting cyberattacks worldwide, once presumably fell within a broad Strategic Support Force (PLASSF) that was tasked with both cyber and space operations jointly (Costello, 2016). Today, that Force has been split into three components, two of which are dedicated to cyber and space operations respectively (Lin & Liao, 2024; Wang, 2024); and yet, China’s focus on information warfare, on Earth and in space, has not shifted appreciably (Bruzzese & Singer, 2024). All of this reiterates a call for restraint and cooperation, at least in selected areas of technical standard-setting, before decoupling and confrontation irreversibly and irreparably take the lead. The consequences could represent a disastrous point of no return for global OS governance.
This article has offered the first analysis of the space-cyber nexus in international standardisation from a Chinese perspective. It has demonstrated that contextual insights derived from the analysis of China’s trajectories of engagement and disengagement in cybersecurity standard-setting and norm-crafting—and even more importantly, its domestic regulatory environment on cybersecurity—represent essential references for those who seek to draft technical standards in this field, even for application to OS missions. To exemplify, it has shown how to combine mission criticality with CII as defined under domestic laws. In so doing, it outlined a tension between US supremacy in technical definitional exercises and the novel potential contribution by actors such as China on the normative side. This, in turn, unveiled the long-standing struggle—or at least ‘epistemic competition’—between engineers and lawyers in policing cyberspace and OS technologies and operations. I have thus advocated for a jurisdictionally inclusive process that seeks to reengage US-rooted standard-setting claims of expertise with alternative policy priorities whose incorporation would make those standards globally adoptable and more likely to harden into law.
This article has focused on China for two main reasons. The first is its status as a prospective superpower and the main contender to the US for leadership of the international order. OS is poised to become a battleground not only for traditional warfare but also for cyber-confrontation and information warfare as major components of hybrid conflicts (Brown, 2020; Pražák, 2022, pp. 180–185)—in fact, all conflicts today are partly or exclusively informational. The more asymmetrical and by-proxy a conflict, the more informational it will likely be (Nakayama, 2022, p. 223). But even if competition between China and the US were ever to erupt into open conflict, it would be dangerous enough to fight it on Earth; it is vital for the West to agree with China on a baseline common ground for secure OS operations with potentially catastrophic spillover effects. More optimistically, as liberalists tend to hypothesise, it is precisely such commercial cooperation and political trust that will prevent escalation, by making overt conflict economically undesirable and politically too costly. Whatever the case, if technical standards are not value-neutral, they may well play a role in reducing or augmenting geopolitical friction and confrontation. The more jurisdictionally inclusive and trustworthy standards prove, the less they will be dismissed out of mistrust, decoupled from, perhaps even subjected to politicised sanctions, and eventually weaponised for inter-State confrontation. As Nawaz et al. (2022, pp. 30–31) recall, when the US denied China access to its Global Positioning System (GPS), the latter engineered its own alternative, and insecurity ensued: since the cybersecurity of Positioning, Navigation and Timing (PNT) systems is essential to securing safe missions, technical competition might easily turn into ‘the sabotage of dual-use command, control, communications and intelligence (C3I) systems’ (Yuan, 2023) and thus overt confrontation across both—and more—domains (Johnson, 2021, p. 362).
Even on the peaceful (or ‘non-offensive’, which is not quite the same) side, China is part of a ‘rest’ that no longer accepts playing a second-class role in OS affairs, along with the already mentioned India, but also, for example, South Korea (Ahn, 2019; Davies, 2023; Hong, 2022) and, perhaps more notably, Japan (Casaril & Galletta, 2014, pp. 13–14; Lele, 2012, pp. 95–108).
The second reason why China was selected is that it is not only an objector to (selected) Western values of relevance here but is also elaborating its own comprehensive and integrated alternative discourse—alongside like-minded State partners, a strategy long mastered by its Western counterparts. It is doing so across multiple regional, bilateral, multilateral and even multitasking platforms, not least through the Digital Silk Road (El-Kadi, 2024; Haner & Knake, 2021, p. 12; He, 2024). President Xi never fails to link China’s space ambitions to its Belt and Road Initiative (BRI; Chase, 2019; Nie, 2019; Pekkanen, 2017; Schulhof et al., 2022, p. 1), which is of interest here because the BRI is, in essence, an infrastructural project that, via technology transfer and supply-chain contracting, unmistakably ‘exports’ digital standardisation (de Seta, 2023, pp. 248–249; Rühlig & ten Brink, 2021; Ahmed Raslan, 2024, pp. 28–29)—including on cybersecurity (Nanni, 2024, p. 30).
If the ‘Western’ (or so-perceived) standard ignores Chinese contributions, the Sino-Russian bloc (and corollary jurisdictions) will ignore the standard. From this IEEE SA standard-in-the-making and beyond, it occurs to me that States are running out of time and political capital to avert such a full-scale ‘decoupling’ scenario—if not manufacturing-wise (Majerowicz & de Medeiros Aguiar, 2018), at least on the policy plane. Divergent standards would encourage the fragmentation of digital supply chains and the Internet’s decentralisation into multiple ‘Internets’, with unknown security implications (Hoffmann et al., 2020). Theoretically, threats can be contained more successfully if the network is segmented into different protocols, but that scenario would also remove the constraints that make blowing up a common Internet inconvenient to most parties, on Earth just as in OS.
This analysis has also touched upon the reasons why synergetic, context-savvy technical outcomes have some chance of succeeding in this field, despite the long-stalled normative negotiations on responsible cyberspace behaviour at the UN and elsewhere. The building and maintenance of spacecraft often result from international efforts, and so does the safeguarding of their operational capacity. While OS missions concern resource extraction and war, so that restraint and de-escalation remain high on the agenda, OS is equally—and most importantly—about scientific research and perhaps even species survival. Hence, there are excellent reasons to ensure that States can make the most of it collaboratively. Achieving consensus around representative cybersecurity standards, and monitoring their effective implementation, are steps in that direction that cannot be delayed.
Acknowledgements
The author thanks the two anonymous peer-reviewers of this article for their assistance in refining his arguments. For countless technical discussions, the author further expresses his gratitude to selected team members of the User Segment of the IEEE SA P3349—Space System Cybersecurity Working Group, and especially to Professor Carsten
Declaration of Conflicting Interests
At the time of writing, the author was a member of the User Segment and Standards Body Coordination (outreach) Sub-Working Groups within IEEE SA’s P3349—Space System Cybersecurity Working Group; however, all information and materials reported and discussed in this article are publicly available (online and/or offline).
Funding
The author received no financial support for the research, authorship and/or publication of this article.
