The rapid change of players and technologies in telecoms can blind us to slower-moving yet profound changes. This article makes the case that a subtle, ongoing shift in the structure and nature of the business is under way. Telecoms can be divided into three broad “epochs”, and we are in the midst of a transition from the second to the third. This transition is likely to change the structure of the industry, and will create new winners and losers.
How to decompose what happens on networks
The essence of the methodology is to note that:
- Networking is inter-process communications, no more and no less.
- The only thing the computational processes can observe is that the network induces loss and delay.
- That loss and delay can be decomposed into geographic (G), packet serialisation (S) and variable contention (V) effects – and nothing else.
The idea is summarised in the chart below, which takes a sample of packet delays and sorts them by packet size:
(Packet loss can also be charted as a probability density function, but for brevity and simplicity we will focus on delay effects alone from here on in.)
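To make the decomposition concrete, here is a minimal sketch in Python. The names and the estimator are illustrative assumptions (a regression over per-size minimum delays), not the methodology’s prescribed procedure:

```python
import numpy as np

def gsv_decompose(sizes_bytes, delays_s):
    """Estimate G, S and V from (packet size, one-way delay) samples.

    Assumption: for each packet size, the minimum observed delay
    approximates the contention-free floor G + S*size; anything
    above that floor is attributed to variable contention, V.
    """
    sizes = np.asarray(sizes_bytes, dtype=float)
    delays = np.asarray(delays_s, dtype=float)

    # Lower envelope: minimum delay seen at each distinct packet size.
    floor = {s: delays[sizes == s].min() for s in np.unique(sizes)}
    xs = np.array(sorted(floor))
    ys = np.array([floor[s] for s in xs])

    # Least-squares line through the envelope (needs >= 2 distinct sizes):
    # slope = per-byte serialisation time S, intercept = geographic delay G.
    S, G = np.polyfit(xs, ys, 1)

    V = delays - (G + S * sizes)   # per-packet residual contention delay
    return G, S, V
```

The intercept of the lower envelope approximates G, its slope approximates S, and whatever remains above the envelope is the variable contention V.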
This G/S/V model is crucial to understanding telecoms. It’s the “protons, neutrons and electrons” of networking chemistry. Each of the three basis components corresponds to an epoch of the telecoms industry.
The G epoch
Throughout most of the history of our species, message delivery was limited to the running speed of a human or horse. The original marathon runner, Pheidippides, gave his life to announce that the Athenians had defeated the Persians. We can safely assume it took many hours for the message to travel only a few tens of kilometres. (If you see someone running a marathon today, you can give them a mobile phone and kindly inform them they can just phone ahead. It’s the public-spirited thing to do.)
It took a long time for humanity to progress beyond this point. In the early 19th century, the British Admiralty was concerned with defeating the wicked French. They set up a semaphore system between Whitehall in London and Portsmouth on the English coast. Although a single letter could travel the 108 km in only 32 seconds, someone had to be watching for a new message. This was managed with sand timers that prompted a check every five minutes. Hence, in good weather and daylight, the geographic delay was of the order of a few minutes.
The telegraph cut this delay further still, and finally the telephone brought the geographic delay down to near the speed-of-light limit. For computer communications, local and wide area networks replaced sneakernet and the postal distribution of magnetic tapes or optical media.
This is summarised in the chart below.
When G was days, hours or even minutes, it didn’t really matter how fast you could scribe the message or read it back. Contention was infrequent in a world where transport was slow and expensive; the seasonal Christmas postal delays are perhaps the exception that proves the rule. Once G became small enough, other effects began to contribute more significantly to overall outcomes.
Curiously, G has been slowly rising ever since we gave up analogue telephony. As networks become more complex and we add more elements of processing, each element (e.g. packet address routing look-ups) adds more fixed overhead. Light also travels more slowly in fibre-optic cable than signals do in copper.
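As a rough back-of-envelope (the per-hop overhead below is an assumed figure for illustration, not a measurement):

```python
C_FIBRE_KM_PER_S = 200_000   # roughly 2/3 of c: light in glass is slower than in vacuum

def geographic_delay_s(distance_km, hops, per_hop_overhead_s=50e-6):
    """Fixed 'G' delay: propagation time plus per-element processing."""
    return distance_km / C_FIBRE_KM_PER_S + hops * per_hop_overhead_s

# e.g. a 1,000 km fibre path traversing 12 routed elements:
print(geographic_delay_s(1_000, 12))   # ~0.0056 s, i.e. about 5.6 ms
```

Each extra processing element nudges G upward, even though the fibre itself is unchanged.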
The S epoch
For the past 40 years, the focus for data services has been ever-increasing link speed. This has served us well, as message payloads became more complex and lengthy. The time to get a message from A to B was dominated by how quickly you could serialise each packet onto the link.
During this era, file transfers, both small and large, have dominated traffic. For example, a Web page is typically a number of files (HTML, style sheet, images) that are assembled into an interactive experience.
During this epoch, the marketing of networks has revolved around bandwidth. This concept conflates two different ideas: the total volumetric transmission capacity of the network, and the time to serialise a single packet. Because they typically both improve with each generation of technology, we tend to think of them together.
To see the difference, consider a “fire bullet” packet from a gaming application. Lower S will improve performance, whereas (ceteris paribus) more capacity will have no effect. When designing broadband systems, “bandwidth” is a poor concept to reason with, because of this imprecision of meaning.
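To put numbers on S: serialisation delay is simply packet length divided by link rate, regardless of how much total capacity the network has. A quick sketch:

```python
def serialisation_delay_s(packet_bytes, link_bits_per_s):
    """Time to clock one packet onto the wire: S = bits / rate."""
    return packet_bytes * 8 / link_bits_per_s

# A (hypothetical) 100-byte 'fire bullet' packet:
print(serialisation_delay_s(100, 10e6))   # 8e-05 s: 80 microseconds on a 10 Mbit/s link
print(serialisation_delay_s(100, 1e9))    # 8e-07 s: 0.8 microseconds on a 1 Gbit/s link
```

Adding more parallel capacity at the same link rate changes neither figure; only a faster link rate reduces S.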
Neither high total capacity nor low packet serialisation time is enough to deliver good broadband. A Blu-ray disc sent in the post has high capacity, and the data can be quickly written and read back. Plenty of bandwidth! The high geographic delay, however, makes it useful only for time-insensitive large file transfers. It is the combination of delays being low that enables a good outcome.
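Back-of-envelope, assuming a ~50 GB dual-layer disc and next-day delivery:

```python
disc_bytes = 50e9            # assumed dual-layer Blu-ray capacity
post_delay_s = 24 * 3600     # assumed next-day postal delivery

throughput_mbit_s = disc_bytes * 8 / post_delay_s / 1e6
print(throughput_mbit_s)     # ~4.6 Mbit/s of volumetric "bandwidth"...
# ...yet G is a full day, so only time-insensitive bulk transfers benefit.
```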
The V epoch
We are now in a new era. Ever more applications have tight requirements on latency: small cells, voice, video, gaming, screen sharing, media streaming, even rich interactive web sites. These are all multiplexed together with an increasing volume of bulk data traffic. The result is that contention-sensitive applications are competing for transmission resources with many other bulk data flows.
The result of all this contention is seen below, in a typical trace of delay from a home DSL line:
Source: Predictable Network Solutions Ltd
The “spikes” of delay occur when buffers fill: there is an excess of instantaneous demand over supply, which makes contention-sensitive applications fail. There is a world of research into how to manage and mitigate these issues. However, they are an inescapable feature of how today’s broadband networks operate.
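A toy discrete-time simulation (a sketch under assumed parameters, not a model of any particular access technology) shows how such spikes emerge whenever instantaneous demand exceeds supply:

```python
import random

def simulate_buffer(steps=10_000, service_rate=1.0, load=0.95, seed=1):
    """Single buffer drained at a fixed rate, fed by bursty arrivals.

    With mean offered load just below capacity, the queue is usually
    short but occasionally builds up, producing 'spikes' of delay.
    """
    random.seed(seed)
    queue, delays = 0.0, []
    for _ in range(steps):
        queue += random.expovariate(1 / load)    # bursty work arriving, mean = load
        queue = max(queue - service_rate, 0.0)   # drain at the service rate
        delays.append(queue / service_rate)      # waiting time a new arrival would see
    return delays

d = simulate_buffer()
print(max(d), sum(d) / len(d))   # occasional large spikes versus a modest mean
```

Even at 95% average load, the worst-case delay is many times the mean; the closer load gets to capacity, the taller and more frequent the spikes.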
As we have seen, low “G” delay was not enough on its own, so we had the “S epoch”. Now low G and S are insufficient to deliver good outcomes: we need something more, which is low V too – at least for those applications that are sensitive to it.
More bandwidth is not the answer
This means the role of the network changes. It becomes increasingly important for networks to take loss and delay away from flows that cannot withstand them, and re-allocate this impairment to those that can. That makes the network a trading platform.
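A minimal sketch of that re-allocation, assuming a toy two-class strict-priority scheduler (names and structure are illustrative):

```python
from collections import deque

def priority_schedule(arrivals, capacity_per_tick=1):
    """Serve latency-sensitive packets first, so that queueing delay (V)
    is re-allocated from the sensitive class onto the bulk class."""
    sensitive, bulk = deque(), deque()
    departures = []
    for tick, pkts in enumerate(arrivals):
        for cls in pkts:                         # each packet is 'sensitive' or 'bulk'
            (sensitive if cls == 'sensitive' else bulk).append(tick)
        for _ in range(capacity_per_tick):
            queue = sensitive if sensitive else bulk
            if not queue:
                break
            cls = 'sensitive' if queue is sensitive else 'bulk'
            departures.append((cls, tick - queue.popleft()))
    return departures                            # (class, queueing delay in ticks)

# One transmission slot per tick, with bulk and sensitive traffic competing:
print(priority_schedule([['bulk', 'bulk', 'sensitive'],
                         ['bulk', 'sensitive'], [], [], []]))
# Sensitive delays stay at zero; the bulk flows absorb all the waiting.
```

The total impairment is unchanged; what changes is who bears it.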
Moving from an S-dominated world to a V-dominated one is a huge conceptual and practical shift. Old technologies for sharing resources become less effective, and new ones are required. Old business models for packaging and selling connectivity lose their lustre, and new ones replace them. Old companies tied to particular beliefs about bandwidth and networks will wither, and new ones will rule the industry.
The canals didn’t turn into railroad companies, and the railroads failed to master trucking and container shipping. Likewise, the next epoch of telecommunications is likely to involve some tumultuous change. Those who think that it’s all just about ever more bandwidth are not going to contend well.
For further fresh thinking on the limitations of bandwidth, please get in touch.
To keep up to date with the latest fresh thinking on telecommunications, please sign up for the Geddes newsletter.