In a previous article I wrote about ‘structural engineering’ for broadband, and its lack of technical sophistication. A key cause is the ubiquitous resource model that we use: ‘bandwidth’. This fails to adequately characterise broadband demand or supply. You should care because it undermines both the economic foundations of the telecoms industry and the legitimacy of our network engineering profession.
Resource models help us to manage risk
“It has been said that an engineer is a man who can do for ten shillings what any fool can do for a pound; if that be so, we were certainly engineers.” – Nevil Shute
The essence of broadband is the sharing of a common transmission resource among many users and applications. The trick is to strike an appropriate balance: too little sharing, and you give your users a great experience, but you won’t have enough revenue to cover your costs; too much sharing, and users won’t have enough isolation from competing users, and won’t value or pay for the resulting poor experience.
The right balance depends on the applications, customers, and context. In order to manage the balance, you need to define what it means to successfully deliver a service and to manage the unavoidable trade-offs. That requires some level of characterisation of the experience you intend to deliver, and of the consequent engineering requirement on the network. The heart of this process is the management of performance hazards. (See “How ‘hazards’ drive broadband economics”.)
We characterise demand and construct a matching supply using a resource model. When we select ‘bandwidth’ as our model, we sacrifice our ability to manage these hazards. This creates unmanaged business and customer experience risks.
Why is this?
1. ‘Bandwidth’ makes people think of averages
As noted in another previous article, there is no quality in averages. Would you be happy to eat any apple from a barrel of apples if I told you they were, on average, very fresh? I hope not, since there could be one very rotten one. Users are disproportionately sensitive to the outliers of the experience they get, and averages (like bandwidth) hide all the vital detail of real (bad) experiences.
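To make the point concrete, here is a toy sketch with made-up per-packet delay figures, showing how an average can mask the outliers that actually define the experience:

```python
# Toy example with made-up per-packet delays (ms): the average and median look
# far better than the outliers that users actually notice.
import statistics

delays_ms = [12, 11, 13, 12, 14, 11, 12, 13, 250, 12, 11, 320, 12, 13, 11, 12]

print(f"mean delay:   {statistics.mean(delays_ms):.1f} ms")    # hides the damage
print(f"median delay: {statistics.median(delays_ms):.1f} ms")
print(f"worst delay:  {max(delays_ms)} ms")                    # what gets noticed
```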
2. Networks have a non-uniform statistical multiplexing gain
The statistical properties of the network vary: from the edge, through the access network, to the core. The same bandwidth may cause an application to behave in very different ways. Notably the level of isolation improves as you aggregate more flows together and the ‘strong law of large numbers’ takes over. That means you can safely aggregate more flows, and get more statistical gain, the nearer you get to the core of the network.
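A minimal simulation sketch (using synthetic on/off traffic, not a real traffic model) shows how the peak-to-mean ratio of the aggregate falls as more independent flows are multiplexed together:

```python
# Synthetic on/off flows: each sends 10 units with probability 0.1, else idle.
# As more independent flows are aggregated, the 99.9th-percentile load shrinks
# relative to the mean - the source of statistical multiplexing gain.
import random

random.seed(1)

def peak_to_mean(num_flows, samples=5000):
    totals = [sum(10 if random.random() < 0.1 else 0 for _ in range(num_flows))
              for _ in range(samples)]
    mean = sum(totals) / len(totals)
    peak = sorted(totals)[int(0.999 * (len(totals) - 1))]  # ~99.9th percentile
    return peak / mean

for n in (1, 10, 100, 1000):
    print(f"{n:>4} flows: 99.9th-percentile load is {peak_to_mean(n):.1f}x the mean")
```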
3. Supply properties vary with load
A corollary of the above variation is that the statistical properties of the same bandwidth vary not only by location in the network, but also by time. The ‘first’ 1kbps – an ‘empty’ network – will get very different treatment to the ‘last’ 1kbps – a ‘full’ network. Which slice of bandwidth did you just buy: the uncontended first one, or the very jittery last one?
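A crude way to see this is the classic M/M/1 queueing formula, where mean delay is 1/(μ − λ); the link capacity and packet size below are purely illustrative, and a real broadband link is of course not an M/M/1 queue:

```python
# Crude M/M/1 illustration: mean delay T = 1 / (mu - lambda).
# The 1Mbps capacity and 1500-byte packets are purely illustrative.
CAPACITY_BPS = 1_000_000
PACKET_BITS = 1500 * 8

def mean_delay_ms(offered_load_bps):
    mu = CAPACITY_BPS / PACKET_BITS        # service rate (packets/s)
    lam = offered_load_bps / PACKET_BITS   # arrival rate (packets/s)
    return float("inf") if lam >= mu else 1000.0 / (mu - lam)

for load_kbps in (1, 500, 900, 990, 999):
    print(f"offered load {load_kbps:>3} kbps: "
          f"mean delay ~{mean_delay_ms(load_kbps * 1000):.0f} ms")
```

The ‘first’ kilobit per second sees only the serialisation delay; the ‘last’ one sees delays orders of magnitude worse.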
4. Supply properties vary between bearers
It’s no news that the speed of light imposes limits on our communications: sending data via geostationary satellite takes longer than sending the same data via a fibre-optic cable, all other things being equal. However, on broadband we also have quantisation issues, so 10Mbps of Ethernet is not equivalent to 10Mbps of WiFi, which is not equivalent to 10Mbps of ADSL.
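Some rough back-of-envelope arithmetic (with approximate distances and an assumed refractive index for the fibre) makes the propagation point concrete:

```python
# Rough propagation-delay arithmetic. Distances and the fibre refractive
# index are approximate, illustrative figures.
C_VACUUM_KM_S = 299_792
C_FIBRE_KM_S = C_VACUUM_KM_S / 1.47      # light travels at roughly c/1.47 in glass

geo_altitude_km = 35_786                 # geostationary orbit altitude
geo_path_km = 2 * geo_altitude_km        # ground -> satellite -> ground
fibre_path_km = 7_000                    # a long terrestrial route (example)

print(f"one-way via GEO satellite: ~{geo_path_km / C_VACUUM_KM_S * 1000:.0f} ms")
print(f"one-way via fibre:         ~{fibre_path_km / C_FIBRE_KM_S * 1000:.0f} ms")
```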
5. There are layering or encoding costs
This issue of quantisation hints at a bigger problem. When we divide up our data, and wrap it up in layer upon layer of transport protocols, it imposes additional costs that aren’t accounted for if we just measure the bandwidth of the data. Furthermore, tiny micro-scale interactions resulting from the phasing and interplay of the layers result in dramatically different macro-scale performance, even for the same bandwidth.
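As one illustration, standard Ethernet/IP/UDP framing alone means the ‘goodput’ seen by an application can be a long way below the headline bandwidth, especially for small packets; the payload sizes below are arbitrary:

```python
# Per-packet framing overhead for UDP over IPv4 over Ethernet.
# Header sizes are the standard ones; the payload sizes are arbitrary.
ETHERNET_BYTES = 14 + 4 + 8 + 12   # header + FCS + preamble/SFD + inter-frame gap
IPV4_BYTES = 20
UDP_BYTES = 8

for payload in (64, 200, 1400):
    on_wire = payload + UDP_BYTES + IPV4_BYTES + ETHERNET_BYTES
    print(f"{payload:>5}-byte payload: "
          f"{payload / on_wire:.0%} of the line rate is useful data")
```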
6. There are schedulability constraints
If you have a monoservice (single class) network, then the flows will have less isolation than in a polyservice network (multiple classes), by the deliberate construction of the network. The same bandwidth may thus have wildly divergent usefulness depending on what scheduling mechanisms are available for use.
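A toy slot-based simulation (not a model of any real scheduler, and with arbitrary parameters) hints at the difference: the same link capacity gives a latency-sensitive flow very different worst-case delays under FIFO versus strict-priority scheduling:

```python
# Toy slot-based model: one packet transmitted per time slot. A 'voice' flow
# sends one packet every 10 slots; a bursty 'bulk' flow fills most of the rest.
import random
from collections import deque

def worst_voice_delay(priority, slots=10_000):
    random.seed(7)                       # identical arrivals for both runs
    voice_q, bulk_q, fifo_q = deque(), deque(), deque()
    delays = []
    for t in range(slots):
        if t % 10 == 0:                  # periodic voice packet
            (voice_q if priority else fifo_q).append(("voice", t))
        if random.random() < 0.09:       # occasional burst of 8 bulk packets
            (bulk_q if priority else fifo_q).extend([("bulk", t)] * 8)
        q = (voice_q or bulk_q) if priority else fifo_q
        if q:                            # transmit one packet this slot
            kind, arrived = q.popleft()
            if kind == "voice":
                delays.append(t - arrived)
    return max(delays)

print("FIFO (monoservice):     worst voice delay =", worst_voice_delay(False), "slots")
print("Priority (polyservice): worst voice delay =", worst_voice_delay(True), "slots")
```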
7. Bandwidth is not divisible
The temptation with bandwidth is to assume that it can be divided up, and that 1/10 of a 100Mbps circuit is the same as buying a 10Mbps one. This skirts the issue of packet serialisation time, which will be much longer on the 10Mbps link, and results in wholly different performance for many applications.
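The arithmetic is simple: the serialisation time of a full-size packet scales inversely with the line rate, so the ‘same’ slice of bandwidth behaves quite differently:

```python
# Serialisation time of a 1500-byte packet at different line rates.
PACKET_BITS = 1500 * 8

for rate_mbps in (10, 100, 1000):
    micros = PACKET_BITS / (rate_mbps * 1_000_000) * 1_000_000
    print(f"{rate_mbps:>4} Mbps: {micros:>6.0f} microseconds on the wire per packet")
```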
The bottom line
The summary of all the above is simple: bandwidth doesn’t ‘add up’. It is not an appropriate general engineering metric for broadband. Yet we keep on using it.
The business consequences are serious. It causes mismatched expectations between telcos and their customers and suppliers:
- Salespeople unknowingly oversell the benefits of broadband, assuring customers at the point of sale that the product will easily grow with their business. It doesn’t work out that way, because the apparent ‘slack’ isn’t really there.
- It likewise encourages customers to believe that if they are only using half of the bandwidth, then they could double their usage, or dedicate the ‘unused’ bandwidth to another application. What they don’t realise is that this apparent slack is being ‘used up’ to create isolation due to poor scheduling.
- This same basic error also causes friction in supply chains within the telecoms industry, as we are unable to reliably contract supply and demand between different operators in order to deliver a working customer service.
This can’t continue, especially in a ‘software telco’ world where there is a danger of disastrous network failures through faulty predictive resource trading models. We can and must reach the same maturity level as other engineering disciplines, so that services adequately manage performance hazards and are fit-for-purpose. To achieve this goal we need to abandon bandwidth as a common resource model for broadband network engineering, and adopt a more appropriate and robust one.
This article is based on content from the Fundamentals of Network Performance course that I co-run with my business partners. We offer private training courses and workshops in the state of the art in this subject. Please get in touch for more details.
For the latest fresh thinking on telecommunications, please sign up for the free Geddes newsletter.