The new science of network quality removes a constraint on service innovation in telecoms and cloud. I outline the three strategies now available.
I am preparing the content for the all-star Scientific Network Management for Cloud Computing workshop in London on Friday 8th December. Here is the core strategic choice that now faces all executives in the telco-cloud ecosystem.
∆Q metrics and high-fidelity measurements remove a core constraint in packet networking: the previous inability to relate network performance to user experience in a robust manner.
This opens up the potential to apply well-proven quality management techniques from other industries, such as lean, six sigma, and theory of constraints. These methodologies have already transformed the product value and cost structure of the manufacturing and services sectors.
See my earlier article for more details of the switch from a supply-led to a demand-led model for telecommunications.
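To make the ∆Q idea above concrete, here is a minimal sketch in Python. The class and attribute names are mine, purely illustrative, and not any standard ∆Q library: the point is simply that quality attenuation can be captured as a delay distribution in which loss is "delay beyond any useful bound", and compared directly against what an application needs.

```python
# A minimal, hypothetical sketch of relating network performance to user experience:
# quality attenuation as per-packet delays (with loss as non-delivery), checked
# against an application's percentile requirements.

from dataclasses import dataclass
from typing import List, Optional, Tuple

@dataclass
class DeltaQ:
    """Observed quality attenuation: per-packet one-way delays in seconds,
    with None marking packets that never arrived (loss)."""
    delays: List[Optional[float]]

    def fraction_within(self, bound: float) -> float:
        """Fraction of packets delivered within `bound` seconds.
        Lost packets count as exceeding every bound."""
        ok = sum(1 for d in self.delays if d is not None and d <= bound)
        return ok / len(self.delays)

@dataclass
class QualityRequirement:
    """An application's demand, e.g. '95% of packets within 30 ms'."""
    bounds: List[Tuple[float, float]]  # (percentile, max_delay_seconds)

    def is_met_by(self, observed: DeltaQ) -> bool:
        return all(observed.fraction_within(d) >= p for p, d in self.bounds)

# Illustrative example: an interactive-voice-like requirement vs. a measurement.
voip_need = QualityRequirement(bounds=[(0.95, 0.030), (0.999, 0.100)])
measured = DeltaQ(delays=[0.012, 0.018, None, 0.025, 0.021,
                          0.019, 0.016, 0.090, 0.014, 0.017])
print("fit for purpose:", voip_need.is_met_by(measured))  # False: loss and tail delay
```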
Strategy 1: Enhance the current telco model
In this approach, we use the new science and engineering to rebuild existing business processes. For instance, we define the performance requirements (demand ceiling and supply floor) on network elements, and monitor them using high-fidelity measures. We can then provide automated isolation of faults in the network.
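As a sketch of what that looks like in practice, the fragment below (again with hypothetical names and made-up bounds) gives each network element a supply floor, compares high-fidelity measurements against it, and flags the elements that breach their bound as fault candidates.

```python
# A minimal, hypothetical sketch of Strategy 1: per-element performance bounds,
# monitored against measurements, with automated isolation of the offending element.

from dataclasses import dataclass
from typing import Dict, List

@dataclass
class SupplyFloor:
    """Worst quality attenuation an element may contribute (illustrative only)."""
    max_mean_delay: float   # seconds
    max_loss_ratio: float

@dataclass
class Measurement:
    """High-fidelity measurement of one element over one interval."""
    mean_delay: float
    loss_ratio: float

def isolate_faults(floors: Dict[str, SupplyFloor],
                   measured: Dict[str, Measurement]) -> List[str]:
    """Return the elements whose measured contribution breaches their floor."""
    suspects = []
    for element, floor in floors.items():
        m = measured.get(element)
        if m is None:
            continue
        if m.mean_delay > floor.max_mean_delay or m.loss_ratio > floor.max_loss_ratio:
            suspects.append(element)
    return suspects

# Example: the access segment breaches its floor and is flagged.
floors = {"core": SupplyFloor(0.005, 0.0001), "access": SupplyFloor(0.015, 0.001)}
measured = {"core": Measurement(0.003, 0.00005), "access": Measurement(0.040, 0.002)}
print(isolate_faults(floors, measured))  # -> ['access']
```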
This strategy retains the present revenue model, which is to sell circuit-like products sized by bandwidth.
Strategy 2: Extend the current telco model
The more ambitious and adventurous operators, perhaps in countries with a strong engineering and science tradition, will use these new enabling technologies to build new products. These will still fundamentally have the same revenue and cost structure as today, but will be qualified by fitness-for-purpose and segmented by quality.
This requires operators not only to measure quality, but also to model the demands of different applications, and to budget that performance across digital supply chains using “quality contracts” at network management boundaries.
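The sketch below illustrates the budgeting step under the same caveat as before: the names and numbers are hypothetical. Each segment in the supply chain gets a quality contract at its boundary, and the contracts must compose to stay inside the end-to-end requirement.

```python
# A minimal, hypothetical sketch of budgeting performance across a digital
# supply chain using per-segment "quality contracts".

from dataclasses import dataclass
from typing import List

@dataclass
class QualityContract:
    """What one segment promises at its management boundary (illustrative)."""
    segment: str
    delay_budget: float   # seconds of delay this segment may add
    loss_budget: float    # loss ratio this segment may add

def composes(end_to_end_delay: float, end_to_end_loss: float,
             contracts: List[QualityContract]) -> bool:
    """Check that the per-segment budgets, taken together, stay inside the
    end-to-end requirement (delays add; small loss ratios roughly add)."""
    total_delay = sum(c.delay_budget for c in contracts)
    total_loss = sum(c.loss_budget for c in contracts)
    return total_delay <= end_to_end_delay and total_loss <= end_to_end_loss

# Example: splitting a 100 ms / 0.1% end-to-end requirement across three segments.
chain = [QualityContract("access", 0.030, 0.0005),
         QualityContract("core",   0.020, 0.0002),
         QualityContract("cloud",  0.040, 0.0002)]
print(composes(0.100, 0.001, chain))  # -> True: 90 ms and 0.09% fit the budget
```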
This strategy requires repurposing existing telco assets into “virtual quality networks” as overlays. This can be done by the telco side advancing towards cloud, or the cloud side arbitraging the existing telco assets.
Strategy 3: Elevate the current telco model
Getting onto the next S-curve of growth involves a fundamental change from “quantity first” (bandwidth) to “quality first” (latency). This is hardly news to anyone in the SaaS or data centre business, and it is slowly seeping into the consciousness of telco leaders and investors.
In this “cloud native” or “made for cloud” model, value is attached to delivering an application performance outcome, and delivering a fit-for-purpose service. It requires new mechanisms, business processes and management methods.
Strategy 3½: Extinguish the current telco model
My gut tells me this may end up the most popular strategy, by default. The canals didn’t become railroads, the railroads didn’t become cars, and breakbulk shipping didn’t become container shipping. The move from computer networking to inter-process communications is a paradigm shift that will leave most of the existing telecoms industry behind and broke.
Rather than advancing towards the cloud, the cloud will acquire and arbitrage the telco services and assets. The arrival of the first assured cloud application access products will cause investment analysts to tweak their financial models of telcos, and there will be a flight of capital away from the sector.
The most likely scenario to me is that Amazon will do to telcos what Apple did to handset makers. Actions speak louder than words, and my personal money is all currently invested in arbitraging the current telco model as a quality poacher. Being a broadband performance gamekeeper was proving too much like hard work.
Can you afford the price of coming?
Wait until you find out the cost of ignorance…
For the latest fresh thinking on telecommunications, please sign up for the free Geddes newsletter.