The Unreasonable Sufficiency of Protocols

DRAFT Version 0.99, March 6th, 2023

Venkatesh Rao, Tim Beiko, Danny Ryan, Josh Stark, Trent Van Epps, Bastian Aue

Thanks to Hasu, Micah Zoltu, Matt Garnett, Vitalik Buterin, Ben Edgington, Alex Stokes, and Josh Davis for helpful discussions.

Introduction

Complex coordination problems have an air of doomed intractability about them. We speak in fatalistic terms of economics being a “dismal” science, of sociological phenomena being dominated by the “tragedy of the commons,” of organizations being hopelessly “captured,” and of complex problems being “wicked.” Even our simplest mental models of coordination and cooperation problems, such as the Prisoner’s Dilemma in game theory, are built around default expectations of obviously worse outcomes dominating obviously better ones, and worst-case behaviors driving systemic outcomes.

Yet, in practice, we routinely solve coordination problems reasonably well. Workable solutions materialize, pushing through the gloom and doom which often accompanies theoretical views and cultural commentary. In light of this foreboding context, the outcomes appear almost suspiciously lucky, or serendipitous. To take just three examples:

  1. Vehicular traffic comprises millions of objects, each weighing up to several tons, moving at high speeds in close proximity. Yet traffic is able to flow reasonably safely thanks to a relatively small set of rules, starting with a consensus about which side of the road to drive on.

  2. There is an enormous variety of dangerous pathogens in our environment, yet the simple practice of washing hands thoroughly with disinfecting agents, pioneered by Ignaz Semmelweis, has proved to be radically effective, and arguably a more important factor in managing infectious diseases than many more advanced medical technologies.

  3. Billions of transactions involving sensitive information flow over the public internet every day, yet the vast majority succeed without incident, thanks to reliable packet-switched networking and secure public-key cryptography techniques.

Each of these simple examples features one or more protocols. A protocol is a relatively simple and codified set of behaviors that, when adopted by a sufficient number of participants (human and/or artificial) in a situation, reliably leads to good-enough outcomes for all.

These outcomes are often achieved in the face of non-trivial levels of defection, free-riding, and other bad-actor patterns. While protocols can and do fail – the Kyoto climate protocol is a prominent recent example – what is truly remarkable is that they defy expectations of failure as often as they do.

In the best cases, successful protocols work so well, they go beyond solving the nominal problem to catalyzing generative flourishing around the activities they codify. For example, reliable and trustworthy protocols for land titling often unlock remarkable levels of economic flourishing, by allowing privately owned land to be used as collateral for capitalist ventures. On the public end of the spectrum, good protocols for environmental stewardship can bring endangered species back from the brink of extinction, and restore delicate ecosystems.

Yet, precisely because they turn into invisible backdrops when they work, good protocols tend to become visible only when they fail, reinforcing pessimistic views of the problem domains they address. Before the Covid-19 pandemic, for example, few humans were even aware of the existence of global public health protocols that had contained the spread of other infectious diseases in previous years.

In many situations, a protocol is all you need to turn a seemingly impossible problem into a tractable one, where any residual ambiguities and indeterminacies are well within the capabilities of ordinary humans to resolve. Surprisingly often, protocols herd collective problem-solving behaviors away from tragedies of the commons into regimes of serendipity. As they evolve, good protocols tend to rise to the standard articulated by Milton Friedman:[1] they “make it profitable for the wrong people to do the right thing.” Rather than relying on exceptional levels of virtue or intelligence, protocols bring workable solutions within the reach of individuals with ordinary, fallible levels of those qualities, while also containing the effects of extraordinary levels of vice or stupidity.

In some situations, all that is needed for the emergence of a good protocol is the recognition and diffusion of good solutions that are also easy to imitate. For example, in the classic iterated prisoner’s dilemma game, the well-known tit-for-tat strategy[2] and its derivatives resolve the dilemma modeled by the non-iterated version of the game, and establish mutual cooperation as an evolutionarily stable strategy. While the strategy often emerges naturally in the wild, via natural selection, it can also be implemented as a formal protocol, and established by design. Such formalization of key insights, with or without technological enablement, is often at the heart of protocols that are “good” both in the sense of being desirable to participants, and being adaptive in their evolutionary environments.
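
To make the mechanism concrete, here is a minimal sketch of tit-for-tat in an iterated prisoner’s dilemma. The payoff values used (T=5, R=3, P=1, S=0) are the standard textbook choices, and the round count is an arbitrary illustrative assumption:

```python
# Tit-for-tat in an iterated prisoner's dilemma: a compact, imitable
# rule that establishes mutual cooperation. Payoffs are the standard
# textbook values; the number of rounds is an illustrative assumption.
PAYOFF = {  # (my move, their move) -> my payoff; "C" cooperate, "D" defect
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def tit_for_tat(opponent_history):
    """Cooperate first, then mirror the opponent's previous move."""
    return "C" if not opponent_history else opponent_history[-1]

def always_defect(opponent_history):
    return "D"

def play(strategy_a, strategy_b, rounds=100):
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        move_a = strategy_a(hist_b)  # each player sees the other's history
        move_b = strategy_b(hist_a)
        score_a += PAYOFF[(move_a, move_b)]
        score_b += PAYOFF[(move_b, move_a)]
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))    # (300, 300): sustained cooperation
print(play(tit_for_tat, always_defect))  # (99, 104): defection gains only a one-round edge
```

Note that the cooperative outcome requires no virtue or intelligence from the players, only adherence to a compact, easily imitated rule – exactly the kind of insight that formalization into a protocol can capture.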

Good protocols do not just treat solutions to problems as works-in-progress, with bugs and imperfections to be worked out over the long term, but the specifications of the problems as works-in-progress as well. Good protocols learn, grow, and mature in ways that catalyze thoughtful stewardship and sustained generativity. Bad protocols, on the other hand, if they avoid early mortality, tend to become increasingly neglected over time, leading to extended periods of sterility and stagnation, and succumbing to capture and corruption. Deep-rooted problems get patched over and over with ever more complex surface-level fixes, leading to increasing fragility.

Yet, as we argue later in this essay, bad protocols are typically subject to sufficiently strong evolutionary pressures that they tend to get replaced by better ones. While it is important to resist the techno-optimist temptation to overstate this argument to a pollyannaish degree – highly adaptive bad protocols do exist, and can persist long enough to do lasting damage – there is a case to be made that protocols are natural engines of progress, with the logic of history generally favoring good protocols (in both valuative and evolutionary senses of the word) over bad ones.

Good protocols, in short, are the embodiments of A. N. Whitehead’s famous assertion that “civilization advances by extending the number of important operations which we can perform without thinking of them.” Not only do good protocols deliver civilizational advances, they do so in sustainable ways. “Stability without stagnation” (a guiding principle of the Rust programming language[3]) is the condition good protocols aspire to, and surprisingly often, manage to achieve and sustain for long enough to produce and consolidate significant civilizational advances.

The title of this essay is inspired by that of Eugene Wigner’s classic 1960 article, The Unreasonable Effectiveness of Mathematics in the Natural Sciences.[4] The article not only established a resonant headline template that has inspired many snowclones,[5] but also a heuristic for identifying engines of serendipity: unreasonable performance relative to naive expectations.

While protocols vary in their effectiveness, the remarkable thing about them is they are unreasonably sufficient. They solve more of the problem than we expect, more completely than we expect, relative to their size and complexity. Good protocols, in short, manage to catalyze good enough outcomes with respect to a variety of contending criteria, via surprisingly limited and compact interventions.[6]

As a result, despite the ritual moaning that is invariably part of the cultures surrounding established protocols, they inspire just enough voluntary commitment and participation to overcome the centrifugal forces of defection and exit, and establish a locus of continuity and history. Good protocols tend to form persistent Schelling points in spaces of problems worth solving, around solutions good enough to live with – for a while. And surprisingly often, they manage to induce more complex patterns of voluntary commitment and participation than are achieved by competing systems of centralized coordination.

The goal of this essay is to shine a spotlight on this remarkable characteristic of protocols, offer a working conceptualization and account of their essential nature, and lay out an initial agenda for further explorations. Our goal here is to help accelerate, amplify, and structure the contemporary conversation about protocols, and to this end, we invite readers to vigorously critique and poke holes in the draft ideas laid out here.

With this essay, and the broader Summer of Protocols program it is part of, we hope to help catalyze a broader, deeper, richer, and more optimistic conversation about all aspects of protocols, from the highly technical and mathematical, to the social, political, and cultural. Protocols, we believe, deserve to be first-class concepts in any discussion of coordination phenomena, at every level: from handshakes to civilizational futures. We believe that protocols, especially ones mediated by computers, will play an increasingly important role across all aspects of modern human life. The literacy, capability, and imagination we bring to the invention of protocolized futures will determine whether those futures are good or bad.

This article grew out of a three-month-long discussion of the nature and future of protocols in a corner of the Ethereum community, and is intended to convey a sense of an ongoing, evolving conversation that we hope to broaden. As participants and stakeholders in the Ethereum ecosystem, we naturally have a particular interest in protocols mediated by computing technologies, especially cryptographic computing technologies, and the crypto-economic ecosystems they induce. While our treatment is necessarily shaped by the history, current priorities, and long-term visions of the Ethereum project, we have attempted to explore the world of protocols broadly, and we hope it is generally useful to all students of protocols. No particular technical knowledge is needed to follow the discussion, only broad curiosity about technology and culture.

The rest of this article is organized as follows. In Section 2, we offer a working definition of protocols, briefly distinguish them from adjacent concepts such as standards, APIs, and social conventions, and identify an initial set of interesting questions about them. In Section 3, we drill down into ten aspects of protocols, focusing in particular on aspects of the “unreasonable sufficiency” we identified as a key gestalt characteristic. In Section 4, we briefly survey some frontier problems in state-of-the-art protocols. Finally, in Section 5 we offer a thumbnail sketch of a protocolized future we believe is worth working towards.

2. What is a Protocol?

Despite their association with highly legible, hard-edged, and strongly (often mathematically) codified rules, behaviors, and patterns, protocols form an unusually nebulous category of social reality constructs. The term is applied to everything from handshake norms and dinner etiquette rules among individual humans, to climate treaties among nations and cryptographic consensus mechanisms among computers.

We offer the following working definition as a starting point:

A protocol is a stratum of codified behavior that allows for the construction or emergence of complex coordinated behaviors at adjacent loci.

The point of this definition is not to offer a dispositive characterization of protocols, but a convenient starting point for exploration of their underlying phenomenology. We hope philosophically inclined readers will take up the challenge of coming up with competing definitions.

In our discussions, we explored many questions that a satisfying account must aim to answer. We list a few of the more fertile ones below:

  1. What is the structural relationship between small-p protocols, in the sense of specific atomic behaviors like handshakes, and big-P protocols in the sense of entire behavior complexes, such as the one governing diplomatic relations among countries?

  2. What is the relationship between protocols and agency? Do protocols assume or require a set of participating agents with autonomy or free will?

  3. How do protocols mutate, and what are the limits on the mutability of a protocol beyond which it begins to lose coherence, identity, and utility?

  4. Protocols often mediate evolving relationships, especially ones with a natural adversarial element and endemic potential for conflict. These relationships often involve agents with long-term memories, creating an evolving historical context the protocol must handle. How do protocols accomplish such complex mediation?

  5. Protocols often serve as boundaries between related spaces, separating regimes of behavior via soft or hard rules of engagement. What is the nature of such boundaries?

To help think about these and other questions, it is useful to think about protocols along two basic dimensions: hard to soft, and atomic to systemic.

Loosely speaking, a hard protocol is one with relatively inflexible expectations, where small deviations cause errors. Traditional computer protocols are typically hard in this sense, but ones capable of sustaining more forgiving interactions, using AI elements for example, can be soft. A soft protocol, on the other hand, accommodates a wide range of behaviors. Human behavioral protocols, such as handshakes, are usually soft, but highly formalized examples, such as ceremonial military protocols, can have nearly machine-like hardness.
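
As a toy illustration of this distinction, consider two parsers for an invented “VERB:ARG” message format, one hard and one soft (the format and rules here are assumptions made purely for illustration):

```python
# A toy illustration of "hard" vs "soft" protocol expectations,
# using an invented "VERB:ARG" message format.
def parse_hard(msg: str) -> tuple[str, str]:
    """Hard protocol: any deviation from 'VERB:ARG' is an error."""
    verb, sep, arg = msg.partition(":")
    if not sep or not verb.isupper() or not arg:
        raise ValueError(f"malformed message: {msg!r}")
    return verb, arg

def parse_soft(msg: str) -> tuple[str, str]:
    """Soft protocol: normalize input and tolerate small deviations."""
    verb, _, arg = msg.strip().partition(":")
    return verb.strip().upper(), arg.strip()

print(parse_hard("GET:resource"))      # ('GET', 'resource')
print(parse_soft(" get : resource "))  # ('GET', 'resource')
try:
    parse_hard(" get : resource ")
except ValueError as err:
    print(err)  # malformed message: ' get : resource '
```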

An atomic protocol is one that is difficult or pointless to further decompose, while a systemic protocol is one that is made up of many atomic protocols, often organized in strata.

In Figure 1, we offer a sampling of a variety of things that could reasonably be considered protocols, loosely organized along these two axes.

Figure 1. A sampling of protocols

Developing a more extensive and carefully organized inventory is one of the follow-on goals of this study.

It is also useful to ponder the question of what protocols are not. Protocols, as a category, naturally invite comparisons to several adjacent and overlapping categories, such as social conventions, industry standards, rule books, legal codes, and behavioral grammars.

In the case of computer-mediated protocols, there are also the overlapping notions of APIs (Application Programming Interfaces), “stacks” and “platforms.” Usefully distinguishing protocols from these adjacent notions is arguably one of the key tasks in developing a satisfying and useful theory of protocols.

In our conversations, we noted several preliminary ideas:

  1. Protocols are richer and more dynamic than typical social conventions and industry standards, and exhibit some evolutionary tendencies driven by an internal logic.

  2. APIs embody part of the design of protocols, but do not define them.

  3. Unlike stacks and platforms, protocols tend to define and regulate flows of codified behaviors rather than stocks of technological artifacts.

These points are intended as a sampling of initial thoughts that came up in our conversations, and we hope readers will take up the challenge of developing a more comprehensive map of the broader landscape of kindred concepts of which protocol is a member.

3. Ten Dimensions of Sufficiency

In this section we briefly survey ten aspects of protocols that repeatedly came up in our initial discussions, and identify a fundamental question relating to each that we hope will serve as a provocation for further investigation. We frame each of these dimensions as a particular kind of sufficiency. While specific protocols often feature more fine-grained patterns of sufficiency of affordances for their particular purposes, in this essay, we focus on aspects that we believe are shared by all good protocols.

Many of these sketches of sufficiency might strike readers as being somewhat tautological: if a protocol wasn’t sufficient along these ten dimensions, arguably it wouldn’t exist at all, either failing to get established in the first place, or failing too fast. As with the argument about the so-called Goldilocks zone of orbits for planets capable of sustaining life, there is a suggestion of circular reasoning here: By definition a planet incapable of sustaining life would not give rise to a species capable of wondering about the question.

That said, it is not obvious that so many domains should admit coherent and generative equilibrium conditions governed by codifiable protocols at all. The null hypothesis for a given coordination problem is not that the workable solution will be “sufficient” but that there will be no solution at all, or that a sustainably terrible solution will establish an unshakeable incumbency. To extend the analogy to planets, our overarching question about protocols is perhaps equivalent to asking why planets cohere at all, out of belts of asteroidal rubble.

3.1 Sufficiently Generative

Good protocols are usually hard-edged, parsimonious, compact, legible, and slow-changing. Surprisingly often, however, the emergent layers of coordination they induce are fluid, profligate, expansive, illegible, and in a state of constant flux. Good protocols are sufficiently generative in that they create a self-sustaining amount of value. This makes it worth dealing with the encumbrances they impose on behaviors, and the negative externalities they create in their shadows.

The internet, for example, is based on a set of limited and rigidly defined protocols such as TCP/IP, but the cultural economy it induces is one of wild exuberance. Modern public health protocols are based on a fairly limited suite of sanitary, pharmacological, and dietary interventions. Yet they proved highly effective against infectious diseases and malnutrition, and led to a sustained population explosion over a century, marked by a sharp decline in childhood mortality and increase in lifespans.

Generativity, however, always comes at a cost. The explosive wealth-creation of the internet has been accompanied by an explosion in the quantity of e-waste destined for landfills, and a sharp increase in chemical contamination problems. The population explosion created by public health protocols has stressed the carrying capacity of the environment, threatening the survival of other species.

Good protocols, however, seem to trigger virtuous cycles that help mitigate their own externalities over time, and give rise to better descendants. The same industrial technologies that created a great deal of waste and pollution also power protocols for recycling and waste management. The same public health technologies that led to overpopulation also power protocols for population control.

This characteristic yin-yang feature of successful protocols is at the root of their unreasonable sufficiency. Good protocols seem to strike a robust balance between ensuring order at some loci, and inducing serendipitous creative chaos at adjacent loci. As a result, within their sphere of influence, they create conditions of exceptional serendipity, or at least significantly reduced malevolence, as in the example of public health protocols.

Key question: What determines the generativity of a protocol, and how does that generativity change over time?

3.2 Sufficiently Legible

In their classic work, Metaphors We Live By, Lakoff and Johnson argued that we understand all domains of experience through conceptual metaphors that structure our understanding in terms of correspondences to other domains. Conceptual metaphors, arguably, are at the heart of how we make our complex social realities legible to ourselves.

The metaphor of parent-child relationships, for example, is widely used to structure our understanding of relations between nation-states and their citizens, and between corporations and employees. The metaphor of Darwinian evolution is similarly used to structure our understanding of the behavior of markets. In more technological domains, the document and stream metaphors are often used to structure our experience of computing technologies. These conceptual metaphors are not just useful, they offer aesthetically satisfying mental models.

Protocols, however, seem unusually resistant to broadly illuminating and monolithic conceptual metaphors. This resistance is perhaps at the root of their illegibility relative to peer concepts. Notably though, protocols are not so illegible as to be impossible to talk about or work with. They can be made sufficiently legible through a variety of metaphors that are, if not aesthetically satisfying, at least workable.

The essential phenomenology of protocols that a compelling conceptual metaphor must illuminate is the interplay of a “hard” codified aspect[7] that evolves over time, and a parallel “soft” cultural tradition, which typically manifest structurally as strata exhibiting different characteristic behaviors. In the blockchain sector, the popular overarching metaphor of ossification is often used to talk about the evolution of the hard aspect of protocols. Related metaphors of gradual material transformation, such as sclerosis, annealing and petrification, get at different timescales of “hardening,” and illuminate subtly different phenomena. Stewart Brand’s architectural metaphor of “pace layering,” derived from the life histories of physical buildings, and gesturing at strata evolving at different speeds, is another candidate metaphor in the same spirit.

But other protocol phenomena suggest more limited metaphors that do not always harmonize well with popular overarching conceptual metaphors. For example, as they mature, protocols seem to present an evolving set of bottleneck problems that limit progress, and as they are solved, new and typically harder ones appear at other loci, serving as a source of growing resistance to change. In our discussions, the metaphor of “tightening knots” proved helpful in talking about this phenomenon, but it does not harmonize with the metaphor of ossification in a clean way. Another metaphor with isolated utility is that of escape velocity[8] for describing a protocol that has achieved sufficient functionality to be safely ossifiable.

Another class of illuminating metaphors can be found in livelier biological processes than ossification. Evolution is arguably based on the natural protocol of the genetic code, with a vocabulary of just four bases, but induces the entire biosphere. The principle of homeostasis, which governs the complex set of interlinked equilibria in a healthy body, can serve as a metaphor for the emergent dynamic balances of protocol-governed spaces.

Key question: What are the best metaphors for talking about protocols?

3.3 Sufficiently Stewardable

Modern computer-mediated protocols offer a powerful kind of optionality: the ability to engineer indefinitely sustained automated behaviors into technological infrastructures. These automated behaviors can not only be designed to require no ongoing human oversight, they can be designed to preclude such oversight. Automation can range from simple unsupervised bots and “crash-only” infrastructure software, to smart contracts and “doomsday” devices of the sort made famous by Dr. Strangelove.

The fact that it is possible to design functionally advanced protocols without room for stewardship makes the question of stewardship particularly important, because a non-trivial argument can be made that perhaps the best kind of stewardship is no stewardship.

The counter-argument is that stewardship is necessary both because protocols rarely solve the problems they address with any degree of finality, and because the problems themselves evolve. In addition, protocols inevitably induce their own second-order problems down the road. Only a process of “muddling through” with a series of imperfect and limited solutions (via what is sometimes called the method of successive approximations[9]) can arrive at good mature equilibrium states.

Even past an initially rapid evolutionary phase, if a protocol converges to a stable mature state, ongoing inspection and monitoring are still required because protocols typically only codify a common minimum core of a set of problems. A degree of ongoing management may be required both to complete the specifications of problems and solutions for specific circumstances, and to accommodate the slower, but not zero, rate of ongoing learnings at maturity.

One of the biases of Ethereum governance[10] is the idea that the presence of active and attentive stewards and curators willing to “muddle through” for a long time is arguably not just an option, but a requirement for a healthy protocol, at least until it reaches some level of maturity. This has been one of the distinguishing cultural markers of the Ethereum ecosystem relative to other blockchain ecosystems.

While different protocol communities arrive at different conclusions about the right level of stewardship, good protocols seem to thread the needle between too much and too little automation, and too much and too little room for discretionary governance decisions, stabilizing at the right level for their circumstances. They are sufficiently stewardable.

What makes protocols powerful engines of coordination is that they structure imperfect and limited solutions for ongoing active and thoughtful management over an indefinitely extended lifespan, in an imperfectly understood and evolving context. Often, thoughtful stewardship requires repeatedly replacing hard problems at some loci with easier problems at other loci. But on occasion, especially when growth in capabilities is needed, it requires the reverse: replacing easier problems at some loci with harder problems at other loci. The upgrade of the Ethereum network to Proof of Stake, to meet security, scalability, and sustainability needs, was arguably an example of such risky capability growth.

Such episodes involve an evolving burden of systemic risk and technical debt that requires stewardship with foresight and planning to manage, and optimistic collective expectations to commit to such management.

Key question: What are general principles of good protocol stewardship?

3.4 Sufficiently Evolvable

Protocols are a category of constructs that, like standards, appear subject to evolutionary dynamics. Individual protocols mutate and evolve, and competing protocols within broader sectoral ecosystems contend with each other to colonize specific techno-ecological niches. On larger time scales, protocols co-evolve with technological capabilities, and newer protocols, designed to exploit newer capabilities, often (but not always) supplant aging ones. In the history of electrical communication networks, for example, which begins with the telegraph and Morse code, the TCP/IP protocol eventually emerged as the one apex protocol that ruled the rest. But its continued dominance is by no means guaranteed.

The evolutionary history of protocols, however, is not the same thing as either the history of the underlying technologies, or the histories of societies and institutional landscapes built on top. Rather, they form a somewhat independent layer of the civilizational stack that mediates between material and social realities. Protocols appear to be sufficiently evolvable to allow good protocols to gradually dominate. The governing evolutionary processes are neither as wild and anarchic as those that appear to drive natural evolution and basic scientific advances, nor as tame and civilized as the ones which stable societies (somewhat wishfully) imagine govern their fates.

Unlike evolution in underlying material realities, which tends to be strongly creative-destructive, and driven by randomness, protocols tend to admit a significant degree of deliberate exploratory design, and potential for mitigating the violence of ungoverned creative destruction. The possibilities induced by a set of composable elements and codified behavioral regimes can be systematically explored, and better possibilities deliberately chosen through social-choice processes. While the absolute merits of the resulting outcomes can be debated,[11] the key point is that social choice processes, rather than blind and potentially violent and painful evolutionary dynamics, shape outcomes.

Unlike purely social realities, which often aim to create stable institutional environments for humans, and confine processes of creative destruction to fully domesticated roles, protocols usually aim to catalyze valuable equilibrium shifts, in both material and social built environments, without incurring the cost of significant creative destruction.

As with any evolutionary process, the range of possible futures changes over time, expanding as innovative possibilities are uncovered. An important normative consideration in the study of protocols, therefore, is what new protocols, and what improvements in old ones, are enabled by specific innovations.

For example, blockchains, as a class of protocols, were enabled by a sequence of discrete innovations made since the 1980s that eventually enabled Bitcoin by 2009.[12] More recently, the mRNA vaccines underlying the Covid-19 response protocols developed since 2020 are based on innovations going back to the 1960s.[13]

The recent explosion of capabilities in machine learning is particularly interesting to watch, since clear protocols for the training and deployment of these capabilities have yet to emerge, and demands for such protocols, to manage various real and perceived risks, are growing in urgency as the raw technologies increase in capability. Given the exceptionally promethean nature of the technology, developing sufficiently evolvable protocols will be a particular challenge.

While state-of-the-art protocols appear to be sufficiently evolvable for good protocols to outcompete bad ones, it is not clear that they are evolvable enough to allow them to keep up with the evolution of the problems they address. For example, even if energy-use protocols based on sustainable energy outcompete ones based on fossil fuels, it is not clear that they will evolve fast enough to contain climate change.

Key question: Can protocols be made evolvable enough to keep pace with the problems they target?

3.5 Sufficiently Legitimate

Unlike comparable constructs, such as nation-states or corporate organizations, protocols seem to exhibit a natural tendency towards decentralization, and systematically cause governance know-how to migrate from human minds to reliable processes, driving complex systems increasingly towards rule-of-law regimes.

While there are examples of protocols built around privileged participants, such as those governing interactions with monarchs or powerful bureaucracies, or around irreducibly centralized resources, such as space telescopes, the most powerful and generative ones seem to be built around peer-to-peer relationships and interactions. While real protocols often do not meet utopian standards of decentralization, democracy, equality, or justice, surprisingly often, they do well enough that they enjoy sufficient legitimacy to continue existing politically. Voluntary participation, even if with grudgingly given consent, is preferred by enough potential participants that they are sustainably politically viable.

Reinforcing this tendency, protocols embody the idea of rule of law much more strongly than other kinds of institutions (with many in the blockchain world adopting the stronger principle of “code is law,” and mechanisms that automatically translate formal stakes into voting rights), and typically exhibit a much stronger resistance to exceptions, especially formal ones.

Where possible, technological mechanisms are often designed into modern protocols to make a subset of the rules self-enforcing, with some of the inescapable power of the laws of nature, and substituting hard physical constraints for soft social guardrails and norms. One important way this is achieved is via strong emphasis on backwards compatibility. A respect for past versions of a technology creates strong historical continuity, and grounds evolution in a culture of precedents, which helps grow legitimacy over time. Where no suitable technological mechanisms are available, the tendency manifests as an urge towards bureaucratization.

In computer-mediated protocols, this normative tendency is often articulated as an explicit value. For example, the Internet Engineering Task Force (IETF) operates by the principle,[14] “We reject kings, presidents and voting. We believe in rough consensus and running code.”

This technologically amplified and hardened tendency towards peering and decentralization is often associated with utopian or dystopian imaginaries and speculative tendencies. Powerful protocols often seem to point towards specific classes of futures, and inspire specific utopian or dystopian visions. These in turn, drive patterns of critique fueled by one sort of idealism or another.

Successful protocols invariably face endemic legitimacy challenges from influential voices, yet manage to maintain sufficient legitimacy to persist in practice.

Key question: What makes a protocol legitimate?

3.6 Sufficiently Constrained

An aspect of particular interest in large, distributed, computer-mediated protocols is the nature of the fundamental limits governing their behavior. In computer science, hypothesized limits often take the form of formal and semi-formal trilemmas, which conjecture that only 2 of 3 desirable properties can be satisfied together. The CAP theorem (concerning the desirable properties of consistency, availability, and partition tolerance) is perhaps the best known. In blockchain computing, two familiar trilemmas are the Scalability Trilemma[15] (scalability, security, decentralization) and Zooko’s Triangle[16] (human-meaningfulness, security, decentralization of naming systems).

Such constraint models are not limited to computing systems, of course. In macroeconomics, a similar fundamental constraint model, the Mundell-Fleming Impossible Trinity, often serves as the basis for interesting arguments and debates about economic policies.

It is perhaps no accident that such constraint models illuminate the workings of protocols. In our discussion of sufficient generativity (Section 3.1), we noted that good protocols are “hard-edged, parsimonious, compact, legible, and slow-changing,” and these characteristics are precisely those associated with explanations that we view as being scientific, in the spirit of Occam’s razor. The constraints that emerge in good protocols are in some ways similar to the conservation laws that emerge in scientific theories of natural phenomena.

Good protocols are sufficiently constrained to force thoughtful consideration of trade-offs, costs and benefits, and thorough evaluation of designs, leading to good engineering outcomes. They are neither so underconstrained that arbitrary tastes can drive outcomes, nor so overconstrained that there are no good solutions to problems at all. Instead, they encourage a search for opinionated but principled solutions to core problems. An interesting cultural effect is that protocols are relatively unfriendly to autocratic and arbitrary leadership, which has led to the ironic use of the phrase Benevolent Dictator for Life (BDFL) to gesture at the fact that technological leaders are typically not entrusted with dictatorial powers for life. But they are expected to benevolently bring strong and tasteful opinions and ideas to bear on the toughest problems, while being self-aware enough to cede the stage when they are unable to do so.

The study of constraints is perhaps one of the most important areas in the study of protocols, and the one that comes closest to rising to the level of a science. Our ability to design and build better protocols is strongly driven by the quality of our understanding of fundamental limits, and cultures of tastefully opinionated leadership for navigating them.

Key Question: What are the important limits and constraints of protocols, and to what extent can these limits and constraints be significantly shifted over time?

3.7 Sufficiently Learnable

A feature common to protocols and adjacent categories such as APIs, grammars, or rules, is their relationship to literacy. Every protocol, arguably, is by definition also a literacy that takes effort to acquire and practice. The value of a protocol is a strong function of the ease with which participants can acquire literacy and fluency in the behaviors it codifies.

Besides basic skills in parsing and producing conventions and codes, protocol literacy encompasses familiarity with basic patterns and usage idioms, and a vocabulary sufficient for the role one hopes to play in a given protocol.

Strong literacies usually also induce thriving and self-sustaining cultures of lore as a side effect, comprising a public commons of aphoristic wisdom, advice, jokes, anecdotes, and stories.

In modernity, humans routinely acquire and deploy dozens of protocol literacies. Among the major ones are: using public transportation, obeying traffic rules, using cash and credit cards, operating bank accounts, using communication systems, accessing healthcare systems, and creating and managing accounts for online services. Some, such as driving, are complex and require some formal learning and licensure. Others, such as using an unfamiliar subway system, can be learned quickly and informally.

Both human and machine participants in a protocol must conform to certain behavioral expectations, especially in communicating with other participants, in order to operate successfully within the protocol. Failing to conform can lead to adverse outcomes ranging from simple confusion or unintelligibility to others, to the creation of risky situations.

Unlike in the natural sciences, protocol literacy does not necessarily arise from knowledge about natural phenomena, or constitute “truth” in any philosophical sense. But on the other hand, it is not entirely a matter of arbitrary social conventions or aesthetics either. While some elements, such as color schemes or terminology, can be somewhat arbitrarily determined by the design choices of pioneers, other elements are constrained by the domain. Railroad operations protocols, for example, must be informed by real knowledge about safe operation of trains.

An effective culture of literacy around a protocol ensures that all participants have the skills necessary to safely and productively participate in it. Mass or retail participants must have sufficient literacy to use protocols safely. For example, pedestrians and drivers must understand and respond to traffic signals. Expert participants and stewards must have enough literacy to govern the protocol and evolve it in the face of changing circumstances and evolving domain knowledge. Creating and sustaining a broad-based culture of literacy around a protocol is a non-trivial task, but it is often underestimated, and either treated as a promotional task, to be handled alongside marketing or public relations, or as a matter of foolproof user-experience design.

Historically, however, successful protocols have bootstrapped powerful literacies around themselves, and educational mechanisms to sustain them. They are sufficiently learnable to sustain their cultures. Even without formal teaching institutions (they need not be sufficiently teachable), they allow for learning to occur. Perhaps most importantly, the learning cultures around protocols can be partly or wholly permissionless, allowing them to accumulate learning outside of traditional teaching institutions, and beyond the supervision of authority figures like teachers and certifying authorities. Good protocols are not just sufficiently learnable, they are sufficiently hackable to do without formal educational institutions, especially early in their histories.

A particularly important engine of sufficient learnability and hackability for technological protocols is kits. Historically, important engineering domains governed by protocols have usually co-evolved with thriving kit cultures[17] which teach basic knowledge and tastes, but also supply basic bootstrapping resources, and help develop basic protocol literacy.

James Watt’s original steam engine, for instance, diffused through industry in the form of kits. Later, during the era of mass industrialization, the classic Meccano kit toy, developed in 1898, helped disseminate basic principles of mechanical engineering widely, along with knowledge of design conventions, standards, and protocols. Similar kit cultures fueled other technological revolutions, from rocketry and radio to early homebrew computers. These cultures largely grew around existing institutions, but beyond the reach of their formal authority or governing apparatuses. They were hacker cultures.

A further feature of modern technology that makes protocols sufficiently learnable through kits is the strong ethos of interchangeable parts, interoperable systems, and composability. A large fraction of the substance of modern protocols is devoted to these aspects of the underlying technology.

While kit cultures are harder to create in digital media, “software development kits” are a familiar construct in the emergence of new software technologies. The blockchain sector has a particularly strong kit culture, with its emphasis on composable and interoperable technologies.

Other domains, such as chemistry and biology, are harder to “protocolize” (and therefore democratize) precisely because kit cultures are more challenging to catalyze. But slow progress is visible even in these domains. In synthetic biology for example, CRISPR has created something of an engineering-style kit culture, and DNA testing kits are increasingly common.

Key Question: What makes for a strong culture of literacy around a protocol, and how can one be created around a new protocol?

3.8 Sufficiently Ludic

In his classic 1938 work Homo Ludens, Johan Huizinga argued that all major aspects of human civilization have an essentially game-like, immersive quality to them, occupying a liminal zone between seriousness and play.

This ludic quality is arguably essential for meaning-making, and is conceivably the spiritual essence of protocols. Various ceremonial military protocols are popular with the public around the world. In blockchain ecosystems, ceremonial cultural elements, with opportunities for broad community participation, are often integrated into important infrastructure initialization procedures, especially around the injection of the randomness required by cryptographic algorithms.
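
As a concrete example, here is a minimal sketch of a commit-reveal randomness ceremony of the sort used to initialize cryptographic infrastructure (the structure is heavily simplified for illustration): each participant commits to a secret, then reveals it, and the combined output is unpredictable as long as any single participant was honest.

```python
# A minimal commit-reveal randomness ceremony (simplified sketch).
# The XOR of all revealed secrets is unpredictable so long as at
# least one participant chose their secret honestly at random.
import hashlib
import secrets
from functools import reduce

def commit(secret: bytes) -> bytes:
    return hashlib.sha256(secret).digest()

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

# Phase 1: each participant publishes only a commitment to a secret.
contributions = [secrets.token_bytes(32) for _ in range(4)]
commitments = [commit(s) for s in contributions]

# Phase 2: secrets are revealed and checked against the commitments,
# so nobody can change their contribution after seeing the others'.
assert all(commit(s) == c for s, c in zip(contributions, commitments))

random_output = reduce(xor, contributions)
print(random_output.hex())  # the jointly produced random value
```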

Good protocols offer problem-solving contexts that resist the anomie of both oppressively coercive and bureaucratic order on the one hand, and anarchic bleakness on the other. As a side-effect, they appear to serve as engines of meaning-making. This side effect can be so valuable, the primary functions of protocols are sometimes even abandoned, while the meaning-making functions are preserved.[18]

It is possible to be too serious, as high-modernist bureaucracies often are, and too playful, as overwrought “gamified” corporate platforms, suffused with artificial cheer and contrived affects, often are.

Good protocols, arguably, are sufficiently ludic to serve as engines of meaning-making beyond their nominal functions, while also fulfilling their nominal functions. Not only do they catalyze explicitly ludic elements, such as NFT-based applications around blockchains, they tend to have intrinsic playfulness, visible in such aspects as naming conventions, running jokes, and popular memes.

Key Question: How do protocols create meaning through play?

3.9 Sufficiently Defensible

Protocols are vulnerable to a whole host of pathologies and systemic ailments, to the degree that any sufficiently successful protocol invariably features a minority of willing participants that is convinced it is irredeemably broken. Ailments that afflict protocols include, but are not limited to, capture by particular groups, endemic exploitation by hostile parasitic elements, cronyism, “gaming,” runaway extraction, and active ideological hostility.

Yet, despite this vulnerability, surprisingly many protocols manage to survive early mortality threats and achieve equilibrium states where they are sufficiently defensible to function anyway, even if in significantly diseased conditions. Surprisingly small groups of well-positioned stewards can keep established and critical protocols going long past the point where critics predict they should have succumbed to their varied apparently fatal vulnerabilities.

For those of a revolutionary bent, this particular ruggedness of established protocols often seems like a bug rather than a feature, a Chesterton-Fence-like aspect that allows aging protocols to overstay their welcome on the civilizational stage, perpetuating systems of iniquity and structural oppression.

For those of a conservative bent, this aspect represents a welcome element of resistance to ill-conceived radicalism.

Setting aside these necessary ideological debates, it is empirically observable that strong (if not necessarily “good”) protocols, once established, are hard to kill. They are sufficiently defensible against their natural threat environments.

Key Question: What is the nature of the balance of power between attackers and defenders of a valuable protocol, and what maintains it?

3.10 Sufficiently Mortal

The history of protocols suggests that while they can be extremely long-lived constructs relative to individual organizations or even entities like nations, they do seem to exhibit natural life cycles, with characteristic patterns of genesis, growth, maturation, decline, and death. While protocols can be hard to kill, and sufficiently defensible against their threat environments, they are neither impossible to kill, nor naturally immortal. They are sufficiently mortal that they do not persist indefinitely, choking the domains they organize. The League of Nations, which preceded the United Nations, is an example of a geopolitical protocol that died after it failed to fulfill its functions in the 1930s.

While the earlier stages of the life cycles of protocols are generally legible and explainable in terms of historical circumstances and the intentions of their creators, the terminal stages are relatively obscure.

What infirmities, fragilities and vulnerabilities naturally and inevitably emerge in a protocol with age? If a protocol is not killed by an environmental threat, what kinds of death-by-aging are possible? What determines the rate of obsolescence of a protocol relative to the problems it addresses? What determines whether a given protocol ages gracefully, eventually yielding to a worthy successor via a smooth transition, or collapses in a crisis, leading to a costly and painful transition to a successor?

Of particular interest to stewards of blockchains is the question of how to select and design for a desirable endgame, and whether that is in fact a worthwhile thing to do. Questions that naturally emerge include: should blockchains be designed to ossify, eventually hardening into an immutable end state, or should they remain capable of mutation and radical change through their whole lifespan?

Key Question: Can protocols be immortal, and if not, what determines their natural lifespans?

4. The State of Protocol Arts

While the general study of protocols is in its infancy, actually existing protocols today span the gamut of age, complexity, and technological sophistication. In this section, we briefly review five example domains that feature state-of-the-art protocols and some of the hardest challenges in protocol design and governance today.

4.1 Concentration Effects in Email Services

Email is perhaps the most widely used, nominally open, general-purpose communication protocol. In theory, any individual or organization can set up and operate an email server, and in the early days of the internet, this was true in practice as well.

But as the email protocol has aged and scaled, it has become effectively impossible for all but the largest organizations to provision email services. This is primarily because the growing problem of spam, over the years, drove the largest public email providers, such as Gmail and Outlook, to create a de facto oligarchy of spam-resistant email service provisioning. As a result, a protocol that was once open and relatively decentralized is now heavily centralized.
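
The deliverability gauntlet facing a would-be independent operator can be sketched schematically. In the sketch below, the check names correspond to real mechanisms (SPF, DKIM, DMARC, IP reputation), but the logic and thresholds are invented for illustration:

```python
# A schematic sketch of the layered acceptance checks a large email
# provider applies to inbound mail. The mechanisms named are real;
# the specific logic and thresholds here are invented.
from dataclasses import dataclass

@dataclass
class InboundMessage:
    sender_domain: str
    sending_ip: str
    spf_pass: bool        # sending IP authorized by the domain's SPF record
    dkim_pass: bool       # signature verifies against the published DKIM key
    dmarc_aligned: bool   # From: domain aligns with the SPF/DKIM identities
    ip_reputation: float  # 0.0 (known spammer) .. 1.0 (long-trusted sender)

def accept(msg: InboundMessage) -> str:
    if not (msg.spf_pass or msg.dkim_pass):
        return "reject"            # unauthenticated mail is dropped outright
    if not msg.dmarc_aligned:
        return "spam-folder"
    if msg.ip_reputation < 0.5:    # new, small servers start with no history
        return "spam-folder"
    return "inbox"

# A technically correct self-hosted server can still land in spam
# simply because its IP address has no established sending history.
msg = InboundMessage("example.org", "203.0.113.7",
                     spf_pass=True, dkim_pass=True,
                     dmarc_aligned=True, ip_reputation=0.2)
print(accept(msg))  # spam-folder
```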

With concentration effects strengthening, and no obvious resolution in sight, the global email protocol is now in an increasingly fragile condition.

4.2 Deforestation Monitoring

Climate action presents one of the hardest domains for protocol design, since it spans nations, public and private sector actors, global and local incentives, and complex on-the-ground conditions. An example of a specific hard climate-action problem is forest protection. The LEAF coalition, the largest forest protection initiative in the world, illustrates the challenges. The initiative involves multiple governments, billions in funding, private sector actors, physical monitoring systems (including satellites and ground sensor systems) to monitor logging, and protocols for connecting local communities to global governance and finance systems.

Climate-action protocols are unprecedented in scale and complexity. The problems they tackle are typically far beyond the individual capacities of even the most powerful participating institutions. As a result, some of the most valuable new learnings are emerging from climate initiatives.

4.3 IPv6 in the Internet Protocol

The internet of today relies on an addressing scheme, known as IPv4, that is running out of room for new devices. As more and more internet-enabled devices come online around the world, the problem has been rapidly getting more acute. As a result, the scarce address space is managed through complex technologies, such as network address translation (NAT), that exist solely to help share the limited pool of addresses.

An updated standard, known as IPv6, in development since 1998 and ratified in 2017 by the internet’s standards body, the IETF, expands the address space to create more than enough room to accommodate the growth of the internet. Robust implementations and transition models now exist.
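
The scale of the expansion is easy to illustrate with Python’s standard ipaddress module:

```python
# Comparing the IPv4 and IPv6 address spaces (2**32 vs 2**128),
# using only the Python standard library.
import ipaddress

v4 = ipaddress.ip_network("0.0.0.0/0").num_addresses  # 2**32
v6 = ipaddress.ip_network("::/0").num_addresses       # 2**128

print(f"IPv4: {v4:,} addresses")            # 4,294,967,296
print(f"IPv6: {v6:.3e} addresses")          # ~3.403e+38
print(f"Expansion factor: {v6 // v4:.1e}")  # 2**96, about 7.9e+28
```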

Transition to IPv6, however, remains slow and messy due to the sheer scale and distributed nature of the global internet’s infrastructure, illustrating the challenges in deploying even validated solutions to large protocol problems.

4.4 Global Plastics Recycling

In 2017, China instituted the National Sword policy, imposing much stricter standards on contamination levels in recyclable waste imports. Combined with the convenience-oriented single-stream recycling models in the West, the result was a rapid unmanaged pile-up of recyclables, with many communities defaulting to undesirable alternatives such as incineration.

The global trade in recyclables has since shifted to some extent to other parts of Asia, but overall, the problem of creating a sustainable recycling sector remains unsolved.

Global plastics recycling is an example of a protocol design and governance problem that is strongly driven by science and technology constraints. On the supply side, there is a growing materials transition problem, as the industry works to replace fossil-fuel-based plastics with zero or negative-carbon plastics, as well as replacing important plastics like PET (polyethylene terephthalate) with functionally equivalent but more sustainable plastics such as the biodegradable PEF (polyethylene furanoate). On the waste stream side, co-mingling and contamination sharply limit recycling levels due to chemical processing constraints.

Compounding the overall problem, consumer behaviors are strongly driven by aesthetic considerations, misinformation, aspirational behaviors, and a vast variety of confusing local laws, labeling conventions, and incentives. Designing a global recycling protocol that works, makes scientific sense, sparks rational consumer behaviors, and delivers environmentally just global outcomes is one of the hardest problems in protocol engineering today.

4.5 Blockchain Endgames

As the original blockchain, the Bitcoin protocol has set many precedents in protocol design and governance, and has often been the first protocol to encounter fundamentally new problems, especially at a significant deployed scale.

One such problem is that of approaching a fixed supply limit and smoothly transitioning past it to a stable new regime of operations. In Bitcoin, as the supply limit is approached, block rewards diminish, and miners who secure the blockchain must rely on other sources of income to fund their operations. Early in the history of Bitcoin, increasing transaction fees were considered the likely solution, but as the protocol has matured, problems with this hypothesis have become evident. Specifically, given the high price of bitcoin and the relatively low volume of Bitcoin transactions, it does not appear that transaction fees are rising high enough, fast enough, to support secure mining operations indefinitely in a stable way. One potential solution, tail emissions,[19] may provide a way out, via a scheme of fixed block rewards. Alternatively, use cases popularized on other blockchains, such as NFTs on Ethereum, may be “backported” to Bitcoin over time, increasing its potential to generate fees.
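
The arithmetic of the diminishing subsidy can be sketched as follows. The 50 BTC initial reward and 210,000-block halving interval are Bitcoin’s actual parameters; the tail-emission variant shown is a purely hypothetical illustration, not a specific proposal:

```python
# Bitcoin's issuance schedule, plus a hypothetical tail-emission variant.
SATS_PER_BTC = 100_000_000
HALVING_INTERVAL = 210_000  # blocks (roughly four years)

def block_subsidy(height: int) -> int:
    """Subsidy in satoshis at a given height, per Bitcoin's actual rule."""
    halvings = height // HALVING_INTERVAL
    if halvings >= 64:
        return 0
    return (50 * SATS_PER_BTC) >> halvings  # integer halving each era

def block_subsidy_with_tail(height: int, tail_sats: int) -> int:
    """Hypothetical variant: the subsidy never falls below a fixed tail."""
    return max(block_subsidy(height), tail_sats)

# Without a tail, total issuance converges to just under 21 million BTC,
# and the per-block subsidy dwindles toward zero.
total_sats = sum(block_subsidy(era * HALVING_INTERVAL) * HALVING_INTERVAL
                 for era in range(64))
print(total_sats / SATS_PER_BTC)  # 20999999.9769
```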

Whatever the outcome, the fixed-supply endgame in Bitcoin will be a very significant event in the history of blockchains, with a lot of lessons not just for the blockchain sector, but for protocols in general.

The Ethereum blockchain, while similar to Bitcoin in many ways, is also different in some key ways that have resulted in it evolving towards a different regime of endgame challenges.

In 2022, a challenging multi-year upgrade from Proof-of-Work to Proof-of-Stake was completed, and attention turned to the future. A roadmap, synthesized from ongoing community discussions by Vitalik Buterin, envisions an endgame defined by achievement of functionality escape velocity. The roadmap is designed to add a sufficient number of additional features to meet the requirements of an affordable global blockchain that is sufficiently secure, scalable, and decentralized.

Past this endgame, there is growing consensus that Ethereum should, following the precedent of Bitcoin, but with a much greater degree of deliberate design, “ossify” into a highly stable and nearly immutable condition. The growth of Layer 2 networks is both an important motivator, and an important enabler of such ossification. The base layer can ossify because non-core functions can migrate to other layers, which in turn require the base layer to ossify over time, in order to secure their own foundations.

While there is some rough consensus around this general roadmap and overall end-state vision, there is a great deal of debate and disagreement over exactly how much additional functionality is actually necessary to meet the envisioned needs, and what “functionality escape velocity” actually looks like. Within the Ethereum ecosystem, a variety of views are vigorously proposed and defended, ranging from the view that the protocol is perhaps already too complex and should have already ossified, to the view that ossification might take a decade or more. Given that “ossification” in Ethereum is envisioned as a matter of deliberate design rather than cultural emergence, an important challenge is to manage not just the technology roadmap, but the cultural process of agreeing on it as well, and evolving the culture of stewardship in parallel.

Whatever the future of the contemporary landscape of blockchains, it is clear that as a class of technologies, blockchains represent some of the most powerful protocols in existence. The challenges that various blockchain ecosystems are grappling with today arguably constitute a significant segment of the technological frontier of civilization.

5. Protocolizing the Future

Protocols are undoubtedly experiencing a cultural moment, one that is manifesting as a curious mix of nostalgia and futuristic excitement. Several recent articles explore the (re)emerging potential of protocols as a medium of progress, such as Specifying Spring ‘83 by Robin Sloan, and Protocols, Not Platforms, by Mike Masnick.

Recent events surrounding large social media platforms have served as the immediate provocation for this rapidly growing conversation, and for the resulting attention to social media protocols like ActivityPub (which powers Mastodon, the popular Twitter alternative). But the momentum of protocol-based technological thinking has arguably been developing for several years. Besides the obvious effect of blockchain-based protocols, cultural trends have also driven a growing interest in protocols as engines of cultural progress, political action, and alternate modes of technological evolution.

Perhaps the biggest driver of interest in protocols is the growing recognition that no other techno-institutional mechanism can grapple with fundamentally global problems like climate change and pandemic response. The past few years have made it clear that these problems lie beyond the reach of existing institutional forms like nation-states, and of pre-internet global institutions such as the United Nations, World Bank, and IMF. The internet has not just transformed the nature of globalization itself; it has also transformed how we expect global problems to be tackled. Peer-to-peer modes of global coordination at individual and informal levels are gaining in strength, while industrial-era approaches are weakening.

Economic factors are also driving the interest in protocols. The decline of the managerial class, and its gradual replacement by protocols comprising a hodge-podge of SaaS tools for businesses, is lending the entire economy an increasingly protocol-ish flavor. The rise of the gig economy, and the emergence of the “API” as a boundary condition for the labor market, the “sharing economy,” and the “creator economy,” all serve as tailwinds for the development of the protocol economy.

Subtle cultural factors appear to be at work too. As more people become “terminally online,” participating in cultural production mediated by subcultural social graphs governed by subtle social signaling and shibboleths, general levels of protocol literacy are on the rise. Highly legible status markers associated with industrial institutions, such as suits and educational credentials, are increasingly being replaced by a mix of illegible markers, such as knowledge of the memes and shibboleths gating entry into desirable subcultures, and post-industrial, protocol-based formal mechanisms, such as blockchain-based identity systems.

Against this backdrop of pent-up interest and cultural energy, we are at a critical civilizational juncture. How we respond to the power and potential of modern protocols will determine what kinds of futures are open to humanity.

The conversation around protocols has hitherto been narrow, and largely limited to engineers, scientists, and politicians. Broadening it along every possible dimension is critical for making the conversation bolder and more imaginative. Not only do we need more tinkering and hacking to invent and disseminate new protocols, we need more imaginative and courageous explorations of the futures that are made possible as a result.

Science fiction, in particular, has always been attracted to protocols, and likely has an important role to play.

Isaac Asimov’s three laws of robotics and his notion of psychohistory were arguably protocol-based design fictions. The early decades of science fiction were full of explorations of protocolized worlds, and protocols (often embodied by human bureaucracies with limited technological components) were as much the heroes of the stories as the human protagonists.

Sometime during the past few decades, however, as David Brin argued in an influential 2013 essay titled Our Favorite Cliche: A World Filled With Idiots, this changed: institutions, along with the protocols they stewarded, became the villainous antagonists of individualist protagonists operating on the margins and within the underbellies of protocolized worlds. Without creative interventions, the prevailing cultural hostility to industrial-age institutions and bureaucracies is likely to morph into an uncritical hostility to protocolized futures. Turning this hostility into curiosity via good storytelling is an interesting challenge, explored in another 2013 essay by Merve Emre titled Bureaucratic Heroism.

A big part of the challenge of telling more interesting stories around protocols, both fictional and non-fictional, is overcoming the default attitude of hostility and suspicion that reduces them to vaguely malevolent background forces in our environment. As Matt Webb argues in Who could write Protocol Fiction, this is not an easy challenge.

But as our world gets more protocolized, and more of our lives get organized by protocols, it is perhaps the most important creative challenge we can take on.

This work was supported by the Ethereum Foundation.

  1. “I do not believe that the solution to our problem is simply to elect the right people. The important thing is to establish a political climate of opinion which will make it politically profitable for the wrong people to do the right thing. Unless it is politically profitable for the wrong people to do the right thing, the right people will not do the right thing either, or if they try, they will shortly be out of office.” ↑

  2. See Robert Axelrod’s books, The Evolution of Cooperation, and The Complexity of Cooperation, for details. ↑

  3. See: https://doc.rust-lang.org/book/appendix-07-nightly-rust.html ↑

  4. Eugene Wigner, “The Unreasonable Effectiveness of Mathematics in the Natural Sciences,” Communications on Pure and Applied Mathematics, Vol. 13, No. 1 (February 1960). ↑

  5. The Unreasonable Effectiveness of Data by Google researchers, which kicked off the Big Data movement, is a prominent example. Another is an influential machine learning blog post by Andrej Karpathy, The Unreasonable Effectiveness of Recurrent Neural Networks. ↑

  6. This characterization is based on a generalization of Danny Ryan’s commentary on the sufficiency characteristics of Ethereum in particular. See the section on ossification in his Reflections 2023 essay. ↑

  7. See Josh Stark, Atoms, Institutions, Blockchains, for a detailed description of the characteristic “hardness” of protocols. ↑

  8. The idea of “muddling through” with a process of successive approximations was first identified as a pattern in successful approaches to complex governance and policy problems by Charles E. Lindblom in his classic 1959 article, The Science of “Muddling Through.” ↑

  9. While blockchains represent a radical break from the history of computing in many ways, particularly around technical features, traditions of stewardship exhibit more continuity with the past. The Ethereum stewardship model draws both from the governance traditions of Bitcoin, and older traditions such as those of the IETF. ↑

  10. “Standards wars,” such as the textbook example of VHS vs. Betamax, are often marked by influential voices arguing that the “worse” alternative, by some design consideration, prevailed. In his classic essay, Worse is Better, Richard Gabriel argued (and bemoaned) that this is in fact the common outcome. The point, however, is that a “standards war” between competing protocols is a social choice process that offers some decision-making agency to at least some of the humans involved, rather than a blind kind of evolutionary dynamic. In well-governed protocols, a great deal of stewardship effort goes towards ensuring the health of such social-choice processes. ↑

  11. See Bitcoin's Academic Pedigree by Arvind Narayanan and Jeremy Clark for an account of the history. ↑

  12. See The Long History of mRNA Vaccines by Chris Beyrer. ↑

  13. The principle is attributed to Dave Clark, an influential early figure in the history of the IETF. ↑

  14. See Why Sharding is Great by Vitalik Buterin. ↑

  15. See Kits and Revolutions by Michael Schrage, for a brief overview of the interplay of kits and technological revolutions. ↑

  16. A classic 1977 sociology paper by Meyer and Rowan, Institutionalized Organizations: Formal Structure as Myth and Ceremony, is a powerful exploration of this phenomenon. ↑

  17. See Peter Todd, Surprisingly, Tail Emission Is Not Inflationary. The idea was first developed in the Monero blockchain. ↑
