We all take the predictability and reliability of other utilities for granted. So why is broadband such a frustrating exception? Why do our Skype calls fail mid-way? What makes Netflix buffer like crazy? How come our gaming sessions are so laggy?
No real experience intention
Imagine if the design of your electrical supply was optimised to apply the biggest possible voltage and current to anything that was plugged in. That would clearly be ridiculous!
Imagine if the design of your kitchen tap was optimised to deliver as much water as possible at the highest possible pressure the moment you turned it on. That would clearly be ridiculous!
Imagine if the design of your gas cooker was optimised to burn everything to a crisp as fast as possible in a white hot inferno. That would clearly be ridiculous!
So, why have we optimised broadband to deliver as much bandwidth as possible? That’s clearly ridiculous!
In order to work, applications need enough packets to arrive “fresh” enough. In other words, they are sensitive to quality, and need a sufficient quantity of good enough quality. Instead, we’ve aimed to deliver a maximum quantity with an undefined quality.
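To make "fresh enough" concrete, here is a minimal sketch in Python (with invented numbers and a hypothetical 150 ms playout deadline, not any real application's logic) of how a voice-style receiver judges packets. Two connections deliver exactly the same quantity of packets; they differ only in how many are still fresh enough to use on arrival.

```python
# A minimal sketch (hypothetical numbers) of why "freshness" matters more
# than raw quantity: a voice-style receiver with a 150 ms playout deadline
# counts only packets that arrive in time as useful.
from dataclasses import dataclass

PLAYOUT_DEADLINE_MS = 150  # hypothetical latency budget for a call


@dataclass
class Packet:
    seq: int
    one_way_delay_ms: float  # how "stale" the packet was on arrival


def useful_fraction(packets: list[Packet]) -> float:
    """Fraction of packets that arrived fresh enough to be played out."""
    on_time = sum(1 for p in packets if p.one_way_delay_ms <= PLAYOUT_DEADLINE_MS)
    return on_time / len(packets) if packets else 0.0


# Two hypothetical connections delivering the *same quantity* of packets:
steady = [Packet(i, 40) for i in range(100)]                     # low, stable delay
bursty = [Packet(i, 40 if i % 5 else 400) for i in range(100)]   # periodic stale bursts

print(f"steady link: {useful_fraction(steady):.0%} of packets usable")
print(f"bursty link: {useful_fraction(bursty):.0%} of packets usable")
```

The bursty link delivers every single packet, yet a fifth of them arrive too stale to play out; quantity alone tells you nothing about the experience.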
Unlike all the other utilities, this is disconnected from what the user values. There is no specific experience intention, merely "you get what you get".
Missing engineering specification
With a domestic AC power supply, we primarily define its quality through a stable voltage and frequency. With gas we have a regulated composition and energy content. With water it has to be potable and delivered under sufficient pressure.
So what’s the specification for the quality of broadband? It is, and please don’t laugh too hard, purely accidental. Yup, the quality delivered by every current ISP is an emergent property of random processes. Whilst it may be a stable and managed property, it is (unlike all those other utilities) not engineered to a specification with a known safety margin.
The quality of your broadband can and will suddenly shift (under load) in ways your ISP has effectively no control over. Some genius came up with the PR term of “best effort” to describe “out of control” and “not engineered”.
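To see what an engineered specification might even look like, here is a sketch with entirely hypothetical numbers: a bound of 100 ms on the 99th percentile of packet delay, engineered to 80% of that bound as a safety margin. No such check is part of a "best effort" service; the busy-hour result is simply whatever it turns out to be.

```python
# A sketch of the gap between "best effort" and an engineered specification,
# assuming a hypothetical service requirement: 99% of packets must arrive
# within 100 ms, engineered to 80% of that bound as a safety margin.
import random

SPEC_DELAY_MS = 100.0      # hypothetical contractual bound
SPEC_PERCENTILE = 0.99     # fraction of packets that must meet it
SAFETY_MARGIN = 0.80       # engineer to 80% of the bound, not right up to it


def percentile(samples: list[float], p: float) -> float:
    ordered = sorted(samples)
    return ordered[min(int(p * len(ordered)), len(ordered) - 1)]


def meets_spec(delays_ms: list[float]) -> bool:
    observed = percentile(delays_ms, SPEC_PERCENTILE)
    return observed <= SPEC_DELAY_MS * SAFETY_MARGIN


# Simulated measurements: a quiet hour versus the same network under load.
random.seed(1)
quiet = [random.uniform(10, 60) for _ in range(10_000)]
loaded = [random.uniform(10, 60) + random.choice([0, 0, 0, 150]) for _ in range(10_000)]

print("quiet hour meets spec:", meets_spec(quiet))    # True
print("busy hour meets spec: ", meets_spec(loaded))   # False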
Inappropriate operational mechanisms
With power, gas and water we understand that there are switches, valves and taps to regulate flow. With networks we have buffers. And we’ve chosen the wrong kind. Absolutely everywhere. Honest!
In every network you are likely to encounter, the default policy is to send as many packets as quickly as possible. After all, we wouldn’t want any expensive data link to become sinfully idle, would we? We want a network that is busy, busy, busy!
Regrettably, this is a really dumb thing to do. Other industries figured this out decades ago with their ‘lean’ revolutions. More work in progress and busyness is not the same as delivering value.
What is happening is that we are sending packets into networks faster than downstream data links can process them. The excess “work” we do can only have one effect: those packets get in the way of other data being delivered, without creating any value.
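A toy simulation (all figures invented, assuming a hypothetical bottleneck that can serve 100 packets per second) makes the point: once the offered load exceeds what the downstream link can process, throughput stops improving and the only thing that grows is the standing queue that every packet must wait behind.

```python
# A toy discrete-time simulation (invented numbers) of a bottleneck link.
# Offering load above the link's service rate does not raise throughput;
# it only builds a standing queue, so every packet waits longer.

def simulate(arrival_rate: float, service_rate: float, seconds: int = 60):
    """Fluid approximation: packets per second in, packets per second out."""
    queue = 0.0
    delivered = 0.0
    for _ in range(seconds):
        queue += arrival_rate               # work pushed into the network
        served = min(queue, service_rate)   # work the bottleneck can actually do
        queue -= served
        delivered += served
    throughput = delivered / seconds
    queueing_delay = queue / service_rate   # time needed to drain the backlog
    return throughput, queueing_delay


BOTTLENECK = 100.0  # packets/s the downstream link can process (hypothetical)

for offered in (80.0, 100.0, 150.0):
    tput, delay = simulate(offered, BOTTLENECK)
    print(f"offered {offered:5.0f} pkt/s -> throughput {tput:5.1f} pkt/s, "
          f"standing delay {delay:4.1f} s")
```

Real routers eventually drop or mark packets rather than queue them forever, but the delay that builds up before those drops is precisely the lag and buffering that users complain about.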
So we have optimised our networks for instability and overload, not for smooth flow of packets within the inherent limits of the system. This architecture error (called “work conservation”) is ubiquitous.
The core (and mistaken) industry belief is that the job of the network is to create as much “bandwidth” as possible by delivering as many packets as fast as possible. It doesn’t matter whether it is cable, cellular, DSL, fibre or any other bearer: everyone is selling on bandwidth with unpredictable quality.
This is not the same as delivering a predictable user experience. Whoever first switches to an outcome-centric and engineered performance model may well revolutionise the broadband industry.
For the latest fresh thinking on telecommunications, please sign up for the free Geddes newsletter.