The IETF holds the final day of its 98th meeting in Chicago today (Friday 31 March), far away from here in Vilnius. The Internet is maturing, becoming indispensable to modern life, and transitioning to industrial types of use. Are the IETF’s methods fit for purpose for the future, and if not, what should be done about it?
My assertion is that the Internet Engineering Task Force (IETF) is an institution whose remit is coming to a natural end. This is the result of spectacular success, not failure. However, continuing along the present path risks turning that success into a serious act of wrongdoing. This will leave a social and political legacy that will tarnish the collaborative technical achievements that have been accumulated thus far.
Before we give gracious thanks for the benefits the IETF has brought us all, let’s pause to lay out the basic facts about the IETF: its purpose, processes and resulting product. It is a quasi-standards body that issues non-binding memos called Requests for Comments (RFCs). These define the core Internet interoperability protocols and architecture, as used by nearly all modern packet-based networks.
The organisation also extends its activities to software infrastructure closely associated with packet handling, or that co-evolved with it. Examples of these general-purpose needs include email message exchange and performance data capture and collation. There is a fully functioning governance structure, provided by the Internet Society, to review its remit and activities.
This remit expressly excludes transmission and computing hardware, as well as end user services and any application-specific protocols. It has reasonably well-defined boundaries of competence and concern, neighbouring with institutions like the IEEE, CableLabs, W3C, 3GPP, ITU, ICANN, GSMA, IET, TMF, ACM, and many others.
The IETF is not a testing, inspection or certification body; there’s no IETF seal of approval you can pay for. Nor does it have a formal governmental or transnational charter. It doesn’t have an “IETF tax” to levy like ICANN does, so can’t be fracked for cash. Nobody got rich merely from attending IETF meetings, although possibly a few got drunk or stoned afterwards.
The IETF’s ethos embraces widespread industry and individual participation, and a dispersal of decision-making power. It has an aversion to overt displays of power and authority, a product of being a voluntary cooperative association. It has no significant de jure powers of coercion over members, and only very weak de facto ones.
All technology standards choices are necessarily political (as some parties are favoured or disfavoured), yet overall the IETF has proven to be a model of collaborative behaviour and pragmatic compromise. You might disagree with its technical choices, but few could argue they are the result of abuses of over-concentrated and unaccountable power.
Inevitably many of the active participants and stakeholders do come from today’s incumbent ISPs, equipment vendors and application service providers. Whether Comcast, Cisco or Google are your personal heroes or villains does not detract from the IETF’s essential story of success. It is a socio-technical ecosystem whose existence is amply justified by the sustained and widespread adoption of its specifications and standards.
Having met many active participants over many years, I can myself attest to their good conscience and conduct. This is an institution that has many meritocratic attributes, with influence coming from reputational stature and sustained engagement.
As a result of their efforts, we have an Internet that has appeared in a remarkably short period of human history. It has delivered extraordinary benefits that have positively affected most of humanity. That rapid development process is bound to be messy in many ways, as is the nature of the world. The IETF should carry no shame or guilt for the Internet being less than ideal.
To celebrate and to summarise thus far: we have for nearly half a century been enjoying the fruits of a first-generation Internet based on a first-generation core architecture. The IETF has been a core driver and enabler of this grand technical experiment. Its greatest success is that it has helped us to explore the manifest possibilities of pervasive computing and ubiquitous and cheap communications.
Gratitude is the only respectable response.
OK, so that’s the upside. Now for the downside. First steps are fateful, and the IETF and resulting Internet were born by stepping away from the slow and stodgy standards processes of the mainstream telecoms industry, and its rigorous insistence on predictable and managed quality. The computing industry is also famous for its chaotic power struggles, since application platform standards (like Windows and Office) can define who controls the profit pool in a whole ecosystem (like PCs).
The telecoms and computing worlds have long existed in a kind of techno-economic “hot war” over who controls the application services and their revenues. That the IETF has managed to function and survive as a kind of “demilitarised zone for distributed computing” is close to miraculous. This war for power and profit continues to rage, and may never cease. The IETF’s existence is partly attributable to these parties’ need for a safe space in which to find compromise.
The core benefit of packet networking is to enable the statistical sharing of costly physical transmission resources. This “statistical multiplexing” allows you to perform this for a wide range of application types concurrently (as long as the traffic is scheduled appropriately). The exponential growth of PCs and smartphones has created intense and relentlessly growing application demand, especially when coupled with spectacular innovation in functionality.
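To make that economic point concrete, here is a toy back-of-the-envelope sketch in Python. It is my own illustration, not anything from an RFC: the numbers are made up and the binomial on/off traffic model is a deliberate simplification, but it shows why sharing a link statistically needs far less capacity than provisioning for every source’s peak.

```python
# A toy model of statistical multiplexing (illustrative numbers, not IETF material).
# Assume 100 bursty sources, each sending at 10 Mb/s when active and active 10% of
# the time, independently. Peak provisioning needs 1000 Mb/s; statistical sharing
# only needs enough capacity to keep the probability of overload tiny.
from math import comb

N, p, rate = 100, 0.10, 10          # sources, activity probability, Mb/s per source

def overflow_probability(capacity_mbps: float) -> float:
    """P(aggregate offered load exceeds capacity) under a binomial on/off model."""
    k_max = int(capacity_mbps // rate)            # sources the link can carry at once
    return sum(comb(N, k) * p**k * (1 - p)**(N - k) for k in range(k_max + 1, N + 1))

peak = N * rate                                   # 1000 Mb/s if every source peaks together
for capacity in (150, 200, 250, 300):
    print(f"{capacity} Mb/s -> overflow probability {overflow_probability(capacity):.2e}")
# The overflow probability collapses long before capacity reaches the 1000 Mb/s peak,
# which is the economic point of packet networking (provided traffic is scheduled well).
```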
So the IETF was born and grew up in an environment where there was both a strong political and economic need for a universal way of interoperating packet networks. The US government supplied all the basic technology for free, and mandated its use over rival technologies and approaches.
With that as its context, it hasn’t always been necessary to make the best possible technical and architectural choices for it to stay in business. Nonetheless, the IETF has worked tirelessly to (re)design protocols and interfaces that enable suitable network supply for the evolving demand.
In the process of abandoning the form and formality of telco standards bodies, the IETF adopted a mantra of “rough consensus and running code”. Every technical standards RFC is essentially a “success recipe” for information interchange. This ensures a “semantic impedance match” across any management, administration or technological boundary.
The emphasis on ensuring “success” is reinforced by being “conservative in what you send and liberal in what you accept” in any protocol exchange. Even the April Fool RFC begins “It Has To Work”, i.e. constructing “success modes” is the IETF’s raison d’être.
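As an illustration of that mindset, here is a minimal sketch in Python of the robustness principle at work: a conservative sender emitting one canonical form, and a liberal receiver tolerating messy input. The header format is hypothetical, my own invention rather than any particular RFC.

```python
# A minimal sketch of "be conservative in what you send, liberal in what you accept".
# The header format is hypothetical, not taken from any particular RFC.

def send_header(name: str, value: str) -> str:
    """Conservative sender: emit exactly one canonical form."""
    return f"{name.title()}: {value.strip()}\r\n"

def parse_header(line: str):
    """Liberal receiver: tolerate case differences, stray whitespace and a bare
    LF line ending, yet still recover the same meaning."""
    name, _, value = line.rstrip("\r\n").partition(":")
    return name.strip().lower(), value.strip()

assert parse_header(send_header("Content-Length", "42")) == ("content-length", "42")
assert parse_header("content-LENGTH :   42\n") == ("content-length", "42")
# Every such tolerance constructs a "success mode"; nothing here says what should
# happen when the input is genuinely malformed.
```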
Since RFCs exist to scratch the itches of real practitioners, they mostly have found an immediate and willing audience of “success seekers” to adopt them. This has had the benefit of maximising the pace at which the possibility space could be explored. A virtuous cycle was created of more users, new applications, fresh network demand, and new protocol needs that the IETF has satisfied.
Yet if you had to apply a “truth in advertising” test to the IETF, you would probably call it the Experimental Internet Interface Exploration Task Force. This is really a prototype Internet still, with the protocols being experimental in nature. Driven by operational need, RFCs only define how to construct the “success modes” that enable the Internet to meet growing demands. And that’s the essential problem…
The IETF isn’t, if we are honest with ourselves, an engineering organisation. It doesn’t particularly concern itself with the “failure modes”; you only have to provide “running code”, not a safety case. There is no demand that you demonstrate the trustworthiness of your ideas with a model of the world with understood error and infidelity to reality. You are never asked to prove what kinds of loads your architecture can safely accept, and what its capability limits might be.
This is partly a result of the widespread industry neglect of the core science and engineering of performance. We also see serious and unfixable problems with the Internet’s architecture when it comes to security, resilience and mobility. These difficulties result in several consequent problems, which if left unattended to, will severely damage the IETF’s technical credibility and social legitimacy.
The first issue is that the IETF takes on problems that it lacks the ontological and epistemological framework to resolve. (This is a very posh way of saying “people don’t know that they don’t know what they are doing”.)
Perhaps the best example is “bufferbloat” and the resulting “active queue management” proposals. These, regrettably, create a whole raft of new and even worse network performance management problems. These “failure modes” will emerge suddenly and unexpectedly in operation, which will prompt a whole new round of “fixes” to reconstruct “success”. This in turn guarantees future disappointment and further disaster.
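For readers unfamiliar with the mechanism under debate, here is a toy sketch in Python of a RED-style drop decision, the kind of logic an active queue manager applies to each arriving packet. The thresholds and weights are made up, and real schemes such as RED, CoDel and PIE differ in important details; it is offered only to make the discussion concrete.

```python
# A toy RED-style active queue management decision (illustrative parameters only;
# real AQM schemes such as RED, CoDel and PIE differ in important details).
import random

MIN_TH, MAX_TH, MAX_P, WEIGHT = 20, 60, 0.1, 0.2   # packets, packets, max drop prob, EWMA weight
avg_queue = 0.0                                    # smoothed estimate of queue length

def on_packet_arrival(current_queue_len: int) -> bool:
    """Return True if the arriving packet should be dropped or marked."""
    global avg_queue
    avg_queue = (1 - WEIGHT) * avg_queue + WEIGHT * current_queue_len
    if avg_queue < MIN_TH:
        return False                               # queue short: accept everything
    if avg_queue >= MAX_TH:
        return True                                # queue persistently long: drop
    # In between, drop with a probability that rises with the average queue length,
    # signalling senders to slow down before the buffer "bloats".
    drop_prob = MAX_P * (avg_queue - MIN_TH) / (MAX_TH - MIN_TH)
    return random.random() < drop_prob
```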
Another is that some efforts are misdirected because they perpetuate poor initial architecture choices made in the 1970s. For instance, we are approaching nearly two decades of the IPv4 to IPv6 transition. If I swapped your iPhone 7 for your monochrome feature phone of 2000, I think you’d get the point about technical change: we’ve moved from Windows 98 on the desktop to wearable and even ingestible nanocomputing in that period.
Such sub-glacial IPv6 adoption tells us there is something fundamentally wrong: the proposed benefits simply don’t exist in the minds of commercially-minded ISPs. Indeed, there are many new costs, such as an explosion in the security attack surface to be managed. Yet nobody dares step back and say “maybe we’ve got something fundamental wrong here, like swapping scopes and layers, or confusing names and addresses”.
There is an endless cycle of new problems, superficial diagnoses, and temporary fixes to restore “success”. These “fixes” in turn result in ever-growing “technical debt” and unmanaged complexity, and new RFCs have to work out how to relate to a tangled morass of past RFCs.
The IETF (and the industry at large) lacks a robust enough theory of distributed computing to collapse that complexity. Hence the potential for protocol interaction problems explodes over time. The operational economics and technical scalability of the Internet are now being called into doubt.
Yet the most serious problem the IETF faces is not a limit on its ability to construct new “success modes”. Rather, it is the fundamental incompatibility of the claim to “engineering” with its ethos.
Architecture and engineering are professions that engage in safety-critical structures and activities. The Internet, due to its unmitigated success, is now integral to the operation of many social and economic activities. No matter how many disclaimers you put on your work, people can and do use it for home automation, healthcare, education, telework and other core social and economic needs.
Yet the IETF is lacking in the mindset and methods for taking responsibility for engineering failure. This exclusive focus on “success” was an acceptable trade-off in the 1980s and 1990s, as we engaged in pure experiment and exploration. It is increasingly unacceptable for the 2010s and 2020s. We already embed the Internet into every device and activity, and that will only intensify as meatspace blends with cyberspace, with us all living as cyborgs in a hybrid metaverse.
The lack of “skin in the game” means many people are taking credit (and zillions of frequent flyer miles) for the “success modes” based on claiming the benefits of “engineering”, without experiencing personal consequences for the unexamined and unquantified technical risks they create for others. This is unethical.
As we move to IoT and intimate sensed biodata, it becomes rather scary. You might think Web-based adtech is bad, but the absence of privacy-by-design makes the Internet a dangerous place for our descendants.
There are lots of similarly serious problems ahead. One of the big ones is that the Internet is not a scale-free architecture, as there is no “performance by design”. A single counter-example suffices to resolve this question: there are real scaling limits that real networks have encountered. Society is betting its digital future on a set of protocols and standards whose load limits are unknown.
There are good reasons to be concerned that we are going to get some unpleasant performance surprises. This kind of problem cannot be resolved through “rough consensus and running code”. It requires rigorous and quantified performance engineering, with systems of invariants, and a semantic framework to turn specifications into operational systems.
The danger the IETF now faces is that the Internet falls ever further below the level of predictability, performance and safety that we take for granted in every other aspect of modern life. No other utility or engineering discipline could get away with such sloppiness. It’s time for the Internet and its supporting institutions to grow up and take responsibility as it industrialises.
If there is no action by the IETF, eventually the public will demand change. “The Internet is a bit shit” is already a meme floating in the zeitgeist. Politicians will seek scapegoats for public investments that fail to deliver their promised benefits. The telcos have lobbyists and were never loved anyway. In contrast, the IETF is not in a position to defend itself.
The backlash might even see a radical shift in power away from its open and democratic processes. Instead we will get “backroom deals” between cloud and telco giants, in which the fates of economies and societies are sealed in private as the billing API is defined. An “Industrial Internet” may see the IETF’s whole existence eclipsed.
The root issue is the dissonance between a title that includes the word “engineering” and an organisation that fails to enact that claim. The result is a serious competency issue, which creates an accountability deficit, which in turn risks a legitimacy crisis. After all, to be an engineer you need to adhere to rules of conduct and a code of ethics.
My own father was just a fitter on Boeing 747s, but he needed constant exams and licensing, like a medical doctor. An architect in Babylon could be put to death for a building that collapsed and killed someone! Why not accountability for the network architects designing core protocols necessary to the functioning of society?
As a consequence of changing times and user needs, I believe that the IETF needs to begin a period of deep reflection and introspection:
- What is its technical purpose? We have proven that packet networks can work at scale, and have value over other approaches. Is the initial experimental phase over?
- What are its ethical values? What kind of rewards does it offer, and what risks does it create? Do people experience consequences and accountability either way?
- How should the IETF respond to new architectures that incorporate our learning from decades of Internet Protocol?
- What is the IETF’s role in basic science and engineering, if any, given their growing importance as the Internet matures?
The easy (and wrong) way forward is to put the existing disclaimers into large flashing bold, and issue an RFC apologising for the lack of engineering rigour. That doesn’t cut the ethical mustard. A simple name change to expunge “engineering” from the title (which would provoke howls of rage and never happen) also doesn’t address the core problem of a capability and credibility gap.
The right way is to make a difficult choice: to scale up, scale down, or scale out?
One option is to “scale up”, and make its actions align with its titular claim to being a true engineering institution. This requires a painful process of identifying the capability gaps and gathering the necessary resources to fill them, whether directly by developing the missing science and mathematics, or by building alliances with other organisations that might be better equipped.
Licensed engineers with relevant understanding may be needed to approve processes and proposals; experts in security and performance risk and safety would provide oversight and governance. It would be a serious rebuild of the IETF’s core mission and methods of operation. The amateur ethos would be lost, but that’s a price worth paying for professional legitimacy.
In this model, RFCs of an “information infrastructure” nature would be reviewed more like how a novel suspension bridge or space rocket has a risk analysis. After all, building packet networks is now merely “rocket science”, applying well-understood principles and proven engineering processes. This doesn’t require any new inventions or breakthroughs.
An alternative is for the IETF to define an “end game”, and the scaling down of its activities. Some would transfer to other entities with professional memberships, enforced codes of behaviour, and licensed practitioners for safety-related activities. Others would cease entirely. Rather like the initial pioneers of the railroad or telegraph, their job is done. IPv6 isn’t the answer, because Internet Protocol’s foundations are broken and cannot be fixed.
The final option that I see is to “scale out”, and begin a new round of exploration, focused on new architectures beyond TCP/IP. The basic social and collaboration processes of the IETF are sound, and its model for exploring “success modes” is proven. In this case, renaming the spin-out the Internet Experiment Task Force might be seen as acceptable, even attractive.
One thing is certain, and that is that the Internet is in a period of rapid maturation and fundamental structural change. If the IETF wishes to remain relevant and not reviled, then it needs to adapt to an emerging and improved Industrial Internet, or perish along with the Prototype Internet it has nurtured so well.