Abstract
The goal is to map out some policy problems attached to using a club good approach instead of a public good approach to manage our internet protocols, specifically the HTTP (Hypertext Transfer Protocol). Behavioral and information economics theories are used to evaluate the standardization process of our current-generation HTTP/2 (2.0). The HTTP update under scrutiny is the recently released HTTP/2 version based on Google’s SPDY, which introduces company-specific features alongside best-practice applications. A content analysis of email discussions extracted from a publicly accessible IETF (Internet Engineering Task Force) email server shows how the club good approach of the working group leads to underperformance in the outcomes of the standardization process. An important conclusion is that in some areas of the IETF, standardization activities may need to include public consultations, crowdsourced volunteers, or an official call for public participation to increase public oversight and more democratically manage our intangible public goods.
Introduction
How is the internet governed? The research question may appear simple, but an answer is as complex as our favorite technology (Roche, 2016). At one extreme, libertarian users of the darknet rally for an unregulated virtual space (Graham & Pitman, 2020; Rochefort, 2020; Rudesill et al., 2015). At the other extreme, the recent political abuses involving myriad Facebook users appear to increase the likelihood of stricter government regulation of large social platform technology companies (The Guardian International Edition, 2019; Sun, 2020). It seems that current internet governance oscillates between laissez-faire parlance and ongoing government oversight, depending on who is nudging us on. The current research contributes to these discussions about democratic oversight, improved management, and technocratic governance.
Developed in behavioral economics at the turn of the millennium, the concept of “nudging” is useful to explain and analyze current online governance and decision-making (Bammert et al., 2020; Foster & Frijters, 2017; Peacock, 2018; Sunstein, 2018). But nudging is not easy to isolate in social interactions because it is a gentle signal intended to guide fellow actors’ minds, thoughts, and actions while preserving their agency (Sunstein, 2018). The current study offers a glimpse into the mechanics of nudging within a circle of experienced, candid, and well-acquainted senior computer professionals tasked with critically assessing the merits and publishing the standards of our essential internet protocols.
The key thesis is that if internet protocols are public goods with some characteristics of natural public goods, this has profound consequences for their governance. More detailed explanations are offered in the second section, but first we need to understand how a little-noted but important nongovernmental standard-setting organization for computing standards operates.
The Internet Engineering Task Force (IETF), the main body for the publication of internet standards, coordinates the release of standards pertaining to internet and computer technology (Oever & Moriarty, 2012). The goal of the IETF is to ensure that all moving parts of the internet continue to communicate smoothly with each other. The organization strives to produce timely, relevant, and robust engineering standards that are widely adopted because they are deemed useful. The volunteers working for the IETF have a mutual understanding to adhere to a general rule of “rough consensus and running code” and often share an affinity toward cyberlibertarianism: As long as no one is harmed and code keeps functioning well, engineered interventions are unnecessary (Borsook, 2001; Mosemghvdlishvili & Jansz, 2020).
These goals appear compelling, but the reality is that more often than not, standards are value-laden choices (Nissenbaum, 2001). For example, underlying political and social tensions were on full display during the IETF standardization process of the so-called “cookie” technology at the turn of the millennium (Kristol, 2001). Today, a handful of industry participants are resourceful and organized enough to steer standards through the IETF process, which happens online (Peacock, 2018). But these resourceful online companies may do well to remember that it was public funds that helped to build and secure the internet and that all our pioneering internet protocols are in the public domain (Abbate, 1999; Leiner et al., 2009). Public funds are at the origin of the dotcom success, not market mechanisms.
The goal of this article is to map out the policy problems arising from a misguided club good instead of a public good approach to manage our internet protocols, and in this specific case, the HTTP/2. I revisit the standard issued in 2015 for the HTTP/2, our Hypertext Transfer Protocol, which, at the risk of oversimplifying, is our internet “messenger”: a protocol that communicates to a remote server all requests from local clients, and vice versa. The HTTP is based on two important ideas: decentralization and generality (Blumenthal & Clark, 2001; Isenberg, 1997; Kergel, 2019).
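To make the “messenger” role concrete, the following minimal sketch shows the kind of plain-text exchange the protocol carries: the client sends a request message to a remote server, which answers with a status line, headers, and a body. The host name, path, and header values are illustrative assumptions only.

```python
# A minimal, self-contained sketch of an HTTP/1.1 request/response pair.
# The messages are built as plain strings to show the wire format; the
# host, path, and header values are hypothetical examples.

request = (
    "GET /index.html HTTP/1.1\r\n"   # request line: method, path, version
    "Host: example.org\r\n"          # which site the client is asking for
    "Accept: text/html\r\n"
    "\r\n"                           # blank line ends the header block
)

response = (
    "HTTP/1.1 200 OK\r\n"            # status line from the remote server
    "Content-Type: text/html\r\n"
    "Content-Length: 12\r\n"
    "\r\n"
    "<p>Hello</p>"                   # the requested content itself
)

def parse_start_line(message: str) -> list[str]:
    """Split the first line of an HTTP message into its three parts."""
    return message.split("\r\n", 1)[0].split(" ", 2)

method, path, version = parse_start_line(request)
print(method, path, version)  # GET /index.html HTTP/1.1
```

Everything the web does, from shopping carts to video, is layered on exchanges of this shape, which is why the protocol’s design choices carry such weight.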
Decentralization of functionalities for network traffic means no central entity may control the flow of information at any level of the network (Isenberg, 1997). Ideas like decentralization or “stupid networks” ensure that innovations, designs, codes, and restrictions are allowed only at the edges or on top of the network while traffic passes unhampered below. Generality improves the reliability of all applications operating at the edge of a network (Blumenthal & Clark, 2001). Inherent in the principle of generality is a “lack of controls in the Net that limit or regulate what users can do” (Blumenthal & Clark, 2001, p. 74). Decentralization reduces complexity and, when paired with generality, produces a stable core network while external applications independently serve their specific purposes (Blumenthal & Clark, 2001, p. 71).
Decentralization and generality are powerful ideas and aim to speed up online connections, so why do our web applications appear to take longer and longer to load? Explanations for slowing transmission times are manifold, but one of them is the exponential growth of video and audio transmissions and the display-advertisement industry (LUMA LLC, 2019). With few exceptions, most internet users’ access to web platforms is monetized in the form of behaviorally targeted advertisements (Hoofnagle & Whittington, 2014; Peacock, 2014). Owners of online platforms block content until users grant access to user data, and at that point, consumer data extraction happens in “real time.” As a rule, online consumers receive multiple bandwidth-intense, behaviorally targeted online advertisements together with content (Johnson et al., 2017).
Currently, network bandwidth-intense transmissions from the display-advertisement industry are unavoidable, profitable, and scalable, but most importantly, they progressively block the head of the transmission line: As more visual adverts are embedded on websites, network bandwidth use and Transmission Control Protocol (TCP) connection consumption increase significantly, resulting in progressive latency (Chesire, 1996; Flach et al., 2013; Peacock, 2018). Another well-known problem is that the TCP is known to introduce extra latency (Kharat & Kulkarni, 2019). Of course, additional problems like old router technology and insufficient transmission lines play their part in inhibiting increased network bandwidth use (Chesire, 1996; Krenc, 2019, 24f.). Looking at our present-day development toward ubiquitous connectivity, scholars urge a focus on decreasing latency (Ma et al., 2019). In addition, terms like industry 4.0, smart society, smart grids, or vehicle-to-everything connections conjure up scenarios where more automated network transmissions take place, stemming from devices with little to no human intervention. As a first step, in 2012, solutions were sought to prioritize first-party transmissions and decrease latency in content delivery to online consumers, both to increase user retention and to ensure first-party access to user data (Peon, 2013).
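A back-of-the-envelope sketch may illustrate the connection-consumption point made above: each additional TCP connection a page opens costs at least one network round trip for its handshake before any content flows, so advert-heavy pages that contact many origins pay a growing setup price. The round-trip time and connection counts below are illustrative assumptions, not measurements.

```python
# Hedged, illustrative arithmetic: the setup cost of opening many TCP
# connections, assuming each handshake costs `handshake_rtts` round trips
# and connections open sequentially (a worst case, chosen for clarity).

def page_setup_time(rtt_ms: float, n_connections: int,
                    handshake_rtts: int = 2) -> float:
    """Rough connection-setup cost in milliseconds."""
    return rtt_ms * handshake_rtts * n_connections

# A lean page contacting 2 origins vs. an advert-heavy page contacting 20,
# both over a hypothetical 50 ms round-trip link:
print(page_setup_time(50, 2))   # 200.0 ms
print(page_setup_time(50, 20))  # 2000.0 ms
```

Real browsers open connections in parallel and reuse them, so the absolute numbers are pessimistic; the point is only that setup cost grows with the number of origins a page must contact.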
The goal of this article is, first, to show the differences between intangible club goods and intangible public goods and map out some policy problems attached to using a club good approach instead of a public good approach to manage our internet protocols, specifically the HTTP (Hypertext Transfer Protocol); a second related goal is to pinpoint the inefficiencies attached to our current technocratic solution and introduce the idea of public oversight to the management of our intangible public goods. To understand the intricacies of the approach, the next section contains a discussion on how the public good concept applies to internet protocols. Following that discussion, I will return to the HTTP/2 standardization process and explain how the empirical evidence underlines theoretically relevant points for the management of a public good in the public domain. I conclude with an analysis of my results and some tentative suggestions to improve future outcomes.
Theoretical Foundations: Intangible Goods, Public Goods, Club Goods, and Private Goods
In the mid-20th century, Samuelson formally established the distinction between a private and a public good (Samuelson, 1954, 1955). The history of public goods reaches further back, of course, but Samuelson sparked academic debates on classifications, utility, distributional, and welfare effects in market and nonmarket situations, including hybrid concepts. Let us look at these concepts from a perspective of how many people share in the utility of a good, on a scale of one to infinity, theoretically including past and future generations. The first question is how many people share the benefits of the good.
A purely private good offers utility to its owner and is therefore of limited interest in the current context. More relevant for understanding the political dimensions of advancing the HTTP in the public interest are the next two types of goods. An ideal-typical public good captures the utility of N → ∞ and presents a fair approximation for the theoretical number of past and future HTTP users. So-called club goods, by contrast, are hybrids with loosely defined lower and upper boundaries in the order of larger rather than smaller groups (1 < N < ∞).
Club and public goods are distinguishable by the ease of excluding third parties from consuming the good in question. For example, unusually high prices for opera tickets create club goods and are easily enforced. By contrast, erecting a paywall to limit access to Wikipedia would defeat its very purpose of sharing and improving everyone’s knowledge by offering easily accessible facts and fictions. If we apply a sliding scale, we observe that the public good concept becomes increasingly relevant as third-party exclusion becomes increasingly difficult or nonsensical (Samuelson, 1955). Very much like Wikipedia, using an internet protocol such as the HTTP must be widely encouraged, low-threshold, and agreeable, and therefore, it may be concluded that what we are dealing with here is a public good. Put differently, a government may decide to develop its own version of a session layer “HTTP,” a “TCP/IP (internet protocol),” or any other protocol and perhaps create its own “internet,” although it would come at a very high price. The central ideas of generality and decentralization would need to be sacrificed to see this endeavor through to the end. Instead, the decision by Sir Tim Berners-Lee to place the HTTP code in our public domain has helped preserve its public good attributes.
Buchanan (1965) discusses how physical attributes affect all distinctions between club goods and public goods. An upper-level distinction is between a tangible public good, for example, the international parts of the Atlantic Ocean, and an intangible public good, for example, a country’s weather warning system. Intangible goods, like memories, information, or stories, remain with the originator long after their distribution, enable multiple exchanges, and tip timelines for beneficiaries toward infinity. Of course, political power and legal restrictions can limit access to intangible goods, but enforcement is expensive.
Consequently, the HTTP is an intangible public good, but it is used internationally. The provision of international public goods poses a dilemma, because classic writings tie them to (national) public finance (Desai, 2003; Musgrave & Peacock, 1958) and local people are able to formulate their own rules. A national framework lends itself well to discuss the three P’s of provision, preferences, and political bargaining of public goods (Musgrave & Peacock, 1958, p. 65). Perhaps the IANA (Internet Assigned Numbers Authority) stewardship transition process may serve as a timely example for the provision, preferences, and political bargaining of a current intangible public good, the production, and registry maintenance of our internet domain addresses (Kruger, 2015).
Proponents of an erroneous club good classification appear oblivious to the impact of potentially infinitely large user numbers on their theoretical models; what is more, the use of open source with an exponential distribution of benefits indicates the opposite of an incorrectly assumed “finite flow of benefits” (Raymond, 2013). However, with more accurate assumptions and classification, further problems immediately rear their heads, namely the management difficulties of public goods: overuse, free-riding, and the identification of efficient mechanisms to improve the situation.
Before Hardin (1968) published his seminal work on the tragedy of the commons, Samuelson (1955) and Buchanan (1965) had identified “free riding” as a potential problem for public goods. These early theories detail the failure of market mechanisms to manage and distribute public goods. Theoretically, if an anonymous crowd gains unrestrained access to public goods in a capitalist market environment that rewards selfish behavior, a race ensues to reap the benefits before others do, and overuse quickly depletes the public good in question (Buchanan, 1965; Hardin, 1968; Ostrom, 2012). Notably, most theories specify this dismal outcome for tangible goods, while the above-mentioned problem of online latency due to the massive and exponential growth of the extractive targeting industry serves as a textbook example for the deterioration of an intangible public good.
Depletion is by no means an inevitable outcome for public goods (Cumming et al., 2020). Well-managed public goods have existed historically and still do today (Dittmar & Meisenzahl, 2020). Empirical research shows that several key factors appear to foster century-old polycentric properties, like public management, a well-adjusted catalog of fines, or close public scrutiny (Ostrom, 2012). The question is whether the same mechanisms might serve us for the management of an intangible public good. Such discussions ought to include the provision and distribution preferences of the public good in question. At first glance, it appears to make little sense to worry about free-riding if an intangible public good suffers depletion or deterioration at a very slow rate. As usual, though, the devil is in the detail. For example, the gradual delay of data transmissions known as latency built up at a very slow pace over two decades.
After switching the HTTP from a stateless to a stateful session in 1998, it took almost 20 years for the extractive targeting industry to become so massive as to slow down transmission times for everyone. By way of example, anyone who is so inclined is allowed to erect a “billboard” on our online freeway while paying only the material costs entailed. To stretch the example a bit further, most “drivers” will be obliged to have their IDs copied and must stop to acknowledge numerous billboards before proceeding to their end destination. Although we are all moving near the speed of light online, if enough IDs need copying and drivers must stop, it slows down traffic, resulting in latency. Due to the lack of public management, a well-defined catalog of fines, and little public scrutiny, the provision of the online freeway is deteriorating.
Let us next look at examples of the relevant distribution of preferences on our online freeway and appreciate their sociopolitical dimensions. One important current conjecture is personal indifference to information distribution. What does that mean? In a nutshell, the important assumption is that users do not care who else receives a bit of public online freeway as long as they themselves receive a share. But indifference is unrealistic in an otherwise competitive market for intangible goods, where participants closely guard who is privy to information and who is not (Stiglitz, 2017). Of course, sometimes information is withheld accidentally but more often purposefully, and it is shared only with a few select friends, relatives, or members of a specific occupational network. Pay-offs can be large. If legally unchallenged, companies gain significant positional advantages by withholding important information from competitors. Hence, Buchanan (1965) calls for an inclusion of participants’ preferences for the distribution of public goods, like acts of discrimination or selectivity, to produce a more realistic analytical model. Thus, a more inclusive approach offers us a better understanding of why IETF WG (working group) members use information selectivity to steer the HTTP/2 standardization process. This is discussed further below.
As it stands, the HTTP has an important role for a significant number of automated data transmissions. By conceptualizing an internet protocol as an intangible public good, we capture incentives for free-riding, introduce information asymmetry, and—as the commons deteriorate—reduce the rate of depletion for intangible public goods compared with tangible public goods. In addition, attention needs to be paid to selectivity and preferences for information sharing. The new HTTP/2 has received a noticeable “nudge” in the direction of providing benefits to select corporate actors in its newest version. To detect underlying preferences, I highlight some choice social interactions at the beginning of the standardization process, when the IETF HTTP working group discussed the goals of the future protocol, or what would become known as HTTP/2. First, though, we need to understand why the HTTP/1.1 required revisions because an undertaking like this is nontrivial, to say the least.
The Standardization of HTTP/2
Until the mid-1990s, our HTTP was designed to be stateless, without the ability to remember the user, a constraint defined in what is called a REST, or representational state transfer, software architecture style (Fielding, 2000). To support specific applications and add a “memory” of prior visits (i.e., to enable a virtual shopping cart), the HTTP was modified to include an additional “header” and provide a “memory” or, in precise terms, state information. But because the “cookie” mechanism violated basic REST principles, it was highly controversial (Peacock, 2014). Still, at the time when the cookie mechanism was standardized in the mid-1990s, nobody expected it to balloon into a data extraction business much beyond a small online shopping cart.
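The mechanism described above can be sketched in a few lines. Using Python’s standard cookie helpers, the server attaches a Set-Cookie header to its first response, and the client echoes the value back on every subsequent request, which is the entire “memory” the stateless HTTP gained. The cookie name and value are hypothetical placeholders.

```python
# Minimal sketch of cookie-based state on top of a stateless protocol.
# "cart_id" and "abc123" are illustrative placeholders, not real data.
from http.cookies import SimpleCookie

# Server side: attach state (e.g., a virtual shopping cart) to a response.
server_cookie = SimpleCookie()
server_cookie["cart_id"] = "abc123"
set_cookie_header = server_cookie["cart_id"].OutputString()

# Client side: store the value and send it back with the next request,
# giving the server a "memory" of the prior visit.
client_jar = SimpleCookie()
client_jar.load(set_cookie_header)
next_request_header = f"Cookie: cart_id={client_jar['cart_id'].value}"

print(set_cookie_header)    # cart_id=abc123
print(next_request_header)  # Cookie: cart_id=abc123
```

The same echo-back mechanism that remembers a shopping cart also lets a server recognize and profile a returning visitor, which is how this small header grew into the tracking apparatus discussed next.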
Today, automated personal data extractions from data requests are happening on a truly impressive scale (LUMA LLC, 2019). Old and new online-platform companies scale their personal data extraction business, extract myriad private data points, transmit multiple scripts, and embed spyware or other insidious applications. While personal data extractions are initiated and processed, websites freeze, central processing unit (CPU) use inflates, software crashes, transmissions grind to a snail’s pace, and as data-request round trips take ever longer to traverse the net, impatient users click elsewhere (Peon, 2013). After the turn of the millennium, latency started to pose an existential business problem for data extraction businesses, particularly if online platforms were scaled to capture the data of billions of users (Peon, 2013).
If we look at the statistical distribution of online-platform visits, a power law distribution shows that a handful of companies captures most of the online traffic and the majority of companies taper off in a very long tail. According to Wikipedia, two of the most popular websites worldwide are U.S. Google search and their video platform YouTube, both subsidiaries of Alphabet Inc. (since 2015). This giant corporate holding is singled out because the current HTTP/2 is based on a protocol called SPDY (read: speedy), coded by their engineers (The Chromium Project, n.d.).
When latency started affecting its most popular services, an in-house project at Alphabet Inc. was started with the goal of engineering an alternate transfer protocol and speeding up the corporation’s browsers and content servers. In Silicon Valley, word spreads quickly, and a few other large online companies embedded the desirable SPDY features in their transmission procedures. But we get a glimpse of the complexities involved: SPDY worked well on Alphabet Inc.’s own impressive infrastructure, yet the insularity of the solution made no significant difference to overall transmission times. Perhaps it is fair to assume that Google’s engineers believed a standardization of their new SPDY protocol by the IETF would deliver more significant results.
Internet standards include sets of processes and rules for interoperability based on nonproprietary codes (Internet Society, n.d.). How and when Google requested the IETF area director to task the HTTP working group with SPDY’s standardization is an open question, because no recorded meetings exist. The first official record is a draft standard version zero, published in February 2012 (https://datatracker.ietf.org/doc/rfc7540/), followed by a recharter of the working group, issued by the IESG secretary the next month, in March 2012. The recharter includes a timeline to standardize SPDY by October of the following year, 2013, at the latest (IESG Secretary, 2012). An informed guess might place Alphabet Inc.’s request to the HTTP application area director somewhere in November or December of 2011.
In the end, more than 3 years would pass before the HTTP/2 standard RFC7540 was finalized, edited by Mike Belshe, Roberto Peon, and Martin Thomson, who all worked, at some point, for Google or Google-funded projects like Mozilla (Belshe et al., 2015). The significantly extended project time frame to standardize SPDY may serve as a first indication of an apparent public goods problem, which will manifest itself in the section below, where we will look at the project goal discussions in the HTTP working group.
The Standards Discourse
How to Become an Internet Standard
Updating the HTTP is a bit like fixing things under the hood of the internet or, more officially, “. . . maintaining and developing the ‘core’ specifications for HTTP” (IESG Secretary, 2012). Improvements to latency have been the recurring reason for the transitions from HTTP/1 to HTTP/1.1 to HTTP/2. The secretary of the Internet Engineering Steering Group (IESG), the administrative arm of the IETF, charters the WG. Beforehand, the area director needs convincing that an undertaking is advisable and sufficiently staffed. All dates and content of charters are public information. For this reason, we know that early in 2012, while the working group was still busy putting the finishing touches on HTTP/1.1, a new charter for HTTP/2 was already issued to the same WG (IESG Secretary, 2012).
The application area director will have initiated the issuance of a new charter after a so-called “Birds of a Feather” meeting in which engineers from Alphabet Inc. must have convinced him of the need to do so. As a rule, no publicly available minutes exist for such meetings (Oever & Moriarty, 2012). What makes the material below so compelling is the intensity of the debate that followed. All decisions are made by email, although they sometimes overlap with face-to-face meetings, in this case the IETF 83 conference in Paris (March 25–30, 2012). Perhaps coincidentally, most defenders of SPDY attended the IETF 83 in Paris while most opponents were absent. A number of meeting agenda items concern the HTTP/1.1 standard, although four out of six slideshows and most of the notes in the minutes refer to SPDY (https://datatracker.ietf.org/meeting/83/agenda). Apparently, the discussions around SPDY dominated the face-to-face sessions.
As we all know, goals matter when a group begins to problem solve, and the next section offers more ideas about how experts yield to or resist nudges to abandon a public goods approach in favor of a club good approach when remodeling our HTTP standard.
SPDY=HTTP/2.0 or Not?
Public goods rarely come with special rights or entitlements for any user or group. Consequently, Google’s success in initiating a new industry standard is a good example of the privatization of global rule making (Büthe & Mattli, 2011). Google is handed a first-mover advantage by the standardization of its in-house coded protocol SPDY, intermittently changing the HTTP into a club good by yielding to Alphabet Inc.’s preferences. Because knowledge and use of the code differed prior to its introduction, the new industry standard will impose adjustment and switching costs on everyone in the industry unconnected to Google (Büthe & Mattli, 2011). Given the stakes involved, the process is not merely technical, but a political and therefore conflictual one, as the following excerpts from the discussions show.
A first indication that ambivalence is involved comes from the 51 emails in the discussion thread SPDY=HTTP/2.0 or Not?, an unusually high number of emails posted during the last week of March 2012. Names are initialed to maintain a focus on what is contributed rather than who contributes and are presented below in a condensed empirical content analysis. Quotes from members are typeset in italics, with original sources dated and numbered. The dates and numbers reference the archived emails on the publicly accessible archive server.
In the debate, contributors demonstrate their specialized expertise competently, debates are blunt, a devil’s advocate emerges, but—and this is important—no initiative to seek further input beyond active WG members is taken, initially. Traits of groupthink surface, most prominently concurrence-seeking and pressure toward uniformity (Janis, 1982, p. 244).
The debate starts with JR, a proponent of SPDY, commenting on technical details in the SPDY draft and requesting changes (03/24/12, 0936.html). He gets a same-day reply from WCh, an SPDY team member from Google, who promptly submits updates to the HTTP/2 Internet-Draft (I-D), though he complies only selectively with JR’s requests. In return, JR asks WCh whether SPDY represents the next HTTP—and includes an answer to his own question:
“. . . If we see SPDY as a transport layer only yes; if we consider it HTTP/2.0; maybe not” (03/25/12, 0941.html).
Although initially supportive, JR appears hesitant to accept SPDY as the next generation internet protocol, nudging members to reconsider the approach. The indecision voiced by JR offers PHK an opening to enter the fray:
“Why do some people still consider it a workable idea to just goldplate SPDY as HTTP/2.0? Isn’t the idea to make HTTP/2.0 more desirable than HTTP/1.1? If we don’t make it more desirable for the majority of the web, people will vote with their packets, and HTTP/1.1 will continue to be the default protocol. (See also: IPv6)” (03/25/12, 0943.html).
What is implicit in PHK’s statement is that internet standards are more easily adopted when based on code kept in the public domain, unlike SPDY. The trivializing tone of the following exchange becomes obvious only after further considerations are voiced:
“That’s the point of the recent recharter to the HTTPBIS WG: {omitted link} Specifically, if people have proposals, encourage them to write Internet-Drafts. :)” (03/25/12, 0944.html).
On the surface, PSA appears to encourage debate, but he demands the near unworkable, namely, for dissenters to quickly whip up an alternative I-D and present it together with their dissent. Writing an I-D for an internet protocol is a nontrivial endeavor and requires a significant investment of time, energy, and financial resources. Analytically, PSA silences counterarguments of dissenting members to “protect” the group from adverse information (Janis, 1982, p. 175). His contribution receives a number of replies, mainly emails clouded with jargon and mostly tangential to the discussions and therefore omitted here. Then, PHK pinpoints the problem of having self-appointed mindguards in their working group.
“And just why should people spend time on I-D’s, when it for all intents and purposes looks like httpbis is now chartered to goldplate SPDY as HTTP/2.0? If you really want I-D’s to discuss, the very least you could do is to make it clear to people that they are not wasting their time writing them. As long as this ‘Ohh nothing has been decided yet, but look at that SPDY, ain’t it shiny?’ charade is going on, nobody in a sane state of mind is going to waste their time on an I-D with no future” (03/25/12, 0947.html).
PHK is a network engineer with strong convictions for open data initiatives and open source software. Perhaps a convincing alternative might have been the issuance of a call from this group for public input to update our next generation HTTP. Essentially, PHK is publicly calling for a request to discuss alternatives to SPDY at a stage in the process where the introduction of any alternatives would stall the current charter of the WG to standardize SPDY. PSA’s prior suggestion may thus be regarded as merely rhetorical, one that fully embraces Alphabet Inc.’s market hegemony in browser technology and online content delivery.
All things considered, however, PSA’s complacency may stem from his sincere belief that little can be done to counter Google’s first-mover advantage, not even if the results affect billions of users and millions of non-U.S. online corporations. Google simply has the resources to continue on in a long and drawn-out standardization process (Büthe & Mattli, 2011). Three years later, PSA’s expectations would be borne out by the publication of RFC7540 (Belshe et al., 2015).
When PHK opens the floor for more ideas about possible alternatives to SPDY, another self-appointed mindguard intervenes: the chairperson (MN), whose task is to shepherd the standardization of SPDY. Accordingly, MN redirects the WG focus back to the task at hand with arguments similar to those made previously: SPDY is a blueprint and available for standardization (03/25/12, 0950.html). His role in the debate is to guide task facilitation, achieve goal cohesiveness, and “get stuff done.” To meet expectations, he appears compelled to interfere with critical thinking and override alternative actions (Janis, 1982, p. 9). Again, any alternatives to SPDY would make the current charter of the WG obsolete and would possibly be regarded by Google as a functional failure of the IETF standardization process. For brevity, much of the ensuing discussion is omitted here, although it is worthwhile accessing via the above archive.
As deadlines mount without producing results between 2012 and 2015, MN becomes increasingly aware of the public goods problem involved in the standardization of internet protocols and consequently includes a larger group of experts in the consultations. Public goods have this finicky characteristic of getting a wider circle of people involved who suddenly find they have a stake in their management. Given the circumstances, the chair goes above and beyond his initial convictions in the end. Understanding the underlying policy issues at hand offers a sound explanation for the slow grind and changes in process. Without further directive, a chair can merely follow procedure: ask for WG members’ advice until a result appears to garner “rough consensus,” put the final draft to a vote, and issue the new RFC.
But some features in SPDY promote questionable protocol qualities that may not enhance decentralization and generality. For example, a new feature called “push” decreases the number of data transfers by preempting anticipated data transmissions. It is meant to reduce the number of “round-trip” data requests by sending resources on their way before they are requested. The attribute serves YouTube well because of its near-monopoly status and cross-platform cookie deployment with Google Search and Gmail, but for companies in the “long tail” of the power-law distribution, it remains experimental at best. With a near monopoly, Alphabet’s content servers have little trouble “finding out” which style features and resources are needed or whether they were sent along during a prior site visit.
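The round-trip saving that “push” promises can be illustrated with a minimal sketch. The model below is purely illustrative (the page and resource names are hypothetical, and real HTTP/2 push uses PUSH_PROMISE frames rather than anything modeled here): without push, a client must fetch the page, parse it, and then request its dependencies in a second round trip; with push, the server anticipates the dependencies and sends everything in one.

```python
# Toy model of round trips saved by HTTP/2 "push".
# Hypothetical site structure: index.html references style.css and app.js.
PAGE_RESOURCES = {"index.html": ["style.css", "app.js"]}

def round_trips_without_push(entry: str) -> int:
    """Client fetches the entry page, parses it, then issues a (parallel)
    batch of requests for the referenced resources: two round trips."""
    deps = PAGE_RESOURCES.get(entry, [])
    return 1 + (1 if deps else 0)

def round_trips_with_push(entry: str) -> int:
    """Server pushes the anticipated dependencies alongside the page,
    so everything arrives within a single round trip."""
    return 1
```

The sketch also makes the policy point visible: the benefit hinges on the server already knowing which resources the client will need, knowledge that a near-monopoly content provider possesses far more reliably than a long-tail site.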
A good solution to improve the standardization of intangible goods at the IETF would be an upfront assessment of whether the proposed I-D concerns a public good, a club good, or a private good. In the current case, a public call for expressions of interest would have invited public input and very likely changed the quality and the speed of the technical solution. To the best of my knowledge, such a call was never issued, although the FAQ for HTTP/2 states otherwise (see https://http2.github.io/faq/). In the interim, we find a working group tasked with producing a club good with the worthy goal of improving the speed of data transmissions, strung out by belated public contributions. Such inefficient use of volunteered expert time is uneconomical, with public involvement appearing as an afterthought, but it might be improved upon with the additional ideas discussed in the concluding section.
Conclusion
Technology is rarely a neutral instrument. The importance of changes to an internet protocol cannot be overstated, particularly not one used by the majority of the world’s human population and alongside developments like Big Data, the Web of Things, Industry 4.0, or Smart Cities. So, from a sheer numerical point of view, a solid public good approach appears appropriate. Mitch Kapor of the EFF and Lawrence Lessig phrased it well by stating that architecture is politics and code is law (Reagle, 1998).
When classified as an intangible public good, we can probably all agree that an internet protocol needs to maintain its integrity and quality. This raises the question of how best to support working groups in maintaining their sole focus on increasing the public weal, and whether the IETF in its present form fulfills its mandate to improve decentralization and generality. Members of its working groups are nudged by powerful corporate actors to decrease latency and improve increasingly centralized corporate data transmissions. Entrenched interests keep protocol developers at the IETF operating overwhelmingly out of the United States, and while the organization insists on a nongovernmental status, most individuals are closely connected to large corporate stakeholders. The latter steer national elections and political outcomes, and any internet protocol customization for their corporate needs assists their influence in the online public sphere. It may be something citizens wish for, but without democratic directives and public consultations, a small group of computer engineers is, at the moment, making decisions for all current and future citizens worldwide.
Public good rules apply to everyone, and thus market equilibrium remains a possibility if we transition from local to global internet governance. In 2016, IANA was transitioned to fit a global stakeholder model, and all contracts with the U.S. National Telecommunications and Information Administration (NTIA) were quietly canceled (Farrell, 2016). IANA’s transition followed Snowden’s revelations, but even without such public spectacle, the administration of and responsibilities within the IETF are able to rotate through global networks, given its current working procedures. Introducing a public accountability process that monitors and solves global transmission problems could yield efficient standards for better global internet protocols.
Some engineers at the IETF note the incoherence of having self-interested market participants manage public information goods. Perhaps the boundaries of market thinking are too confined, or perhaps user access or data traffic is more local than we believe, but self-interested goals insufficiently meet the management needs of intangible public goods. What is more, the social conditions that appropriate a public good and turn it into a sheltered club good are fluid, without distinct boundaries or cut-offs, so insufficient management is a process that accumulates over time. Nevertheless, with more attentiveness, vested interests might be efficiently met and appropriation attempts expeditiously declined.
For now, it appears as if some members fear that the recent club good approach to internet standards at the IETF is threatening to make their organization obsolete (Kamp, 2015). Surely, a dilution of the mandate of the IETF would be an example of how a well-intentioned volunteer institution might become ineffective whenever it operates in a theoretical vacuum. One solution may be to set up an international multistakeholder trustee organization that vets national interests with regard to introducing changes to our internet protocols.
The current study has some shortcomings. First, the empirical material merely illustrates some of the dilemmas of the theoretical problem at hand by adding real-life scenarios, but it needs a more stringent empirical test. The theoretical foundations will therefore have to be flanked by additional quantitative network analysis in future work to further underline the salience of the current results. Second, the study’s focus on the direction and severity of in-group nudges needs an additional historical lens to quantify the changes in levels of agreement and disagreement in a dynamic analysis spanning the last two and a half decades. Last but not least, latency unsurprisingly continues to be a problem, with more actors entering the fray to use their share of our intangible public goods, so the need for internet protocol changes is ongoing. In 2020, the working group is focusing on the QUIC transport protocol, which is intended to replace TCP. QUIC is in the process of being standardized by the IETF and carries the promise of improving latency as well as security, just like prior upgrades (Kharat & Kulkarni, 2019). Whether the involvement of the public slows down or speeds up the improvement of our internet protocols depends on the political view of the beholder. Right now, we are using but one tool in our toolbox, namely, small-group engineering to decrease latency and manage our intangible public goods. But every current and future generation shares a stake in a well-managed internet, a distributed network technology with an in-built propensity to scale voting and public input.
Declaration of Conflicting Interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding
The author(s) received no financial support for the research, authorship, and/or publication of this article.
Ethics Statement
Empirical data used in my research are publicly available from a publicly accessible email server that explicitly allows data scraping by third parties. The only restriction was a request to throttle downloads and not attempt bulk downloads, to which I adhered in my research. The previous area director of the IETF working group was emailed a copy and stated no objections.
