An update on the current status of the ∆Q ecosystem, and the unfolding quality revolution in networking and cloud.
I have received some ∆Q fanmail art. No, really I have.
The networking and cloud industry has the most fabulous opportunity to reinvent itself around the new science of distributed systems performance. The prize is a “lean quality” revolution that transforms the user experience and its cost of delivery.
The key to this magic kingdom is ∆Q, the breakthrough mathematics of stochastic systems like packet networks, and the practical techniques and technologies that flow from it. Here is a short summary of where things are at, and what’s next.
∆Q market positioning
Whilst in its strictest sense ∆Q is just pure mathematics and applied science, we are slowly learning how to position it in the world of engineering and products. In 2017, a key conceptual breakthrough was to locate it fully within the wider context of quality management systems and existing “lean” management theory.
The role of ∆Q is as a vendor-neutral, universal measure of network quality, which has all the key properties desired: it can be related to QoE, it has a scientific causal model relating QoE to architecture and operations, and it “adds up” in a way that allows quality to be contracted along digital supply chains.
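To make the “adds up” point concrete, here is a minimal sketch (in Haskell, in honour of the toolchain mentioned later) of series composition, using my own toy representation of ∆Q as a discretised delay distribution plus a loss probability; the type and function names are illustrative assumptions, not the real ∆Q library.

```haskell
-- A minimal sketch (not the project's actual toolchain) of why ∆Q "adds up":
-- model each segment's quality attenuation as a discretised delay
-- distribution plus a loss probability, and compose segments in series by
-- convolving the delays and compounding the loss.
module Main where

-- Probability mass per delay bin (e.g. 1 ms bins), plus probability of loss.
data DeltaQ = DeltaQ { delayPMF :: [Double], lossProb :: Double }
  deriving Show

-- Series composition: delays add (discrete convolution), deliveries multiply.
compose :: DeltaQ -> DeltaQ -> DeltaQ
compose (DeltaQ p lp) (DeltaQ q lq) = DeltaQ conv (1 - (1 - lp) * (1 - lq))
  where
    conv = [ sum [ p !! i * q !! (k - i)
                 | i <- [0 .. k], i < length p, (k - i) < length q ]
           | k <- [0 .. length p + length q - 2] ]

main :: IO ()
main = do
  let access   = DeltaQ [0.0, 0.5, 0.3, 0.2] 0.001   -- access segment
      backbone = DeltaQ [0.7, 0.2, 0.1] 0.0005       -- core segment
  print (compose access backbone)                    -- end-to-end ∆Q
```

Because delays convolve and delivery probabilities multiply, the quality attenuation of a whole supply chain is determined by that of its parts, which is what makes quality contractable between them.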
The “unique selling point” of ∆Q is that it lets us focus on the orchestration of information flows, even in saturated networks, rather than the management of capacity. The outcome is a “service of quality” rather than “quality of service”. That is a business transformation, since every telco must shift from traditional circuits to cloud application access.
∆Q users and applications
Until now, ∆Q techniques have really only been used in boutique consulting work for the most intractable problems and extreme projects, like particle supercolliders. Two public case studies are Vodafone and BT, and there are more lurking behind NDAs.
Looking ahead, there are several areas where it increasingly looks like it is “∆Q or bust, baby!”. These include 5G slicing and low-latency communications, VR/AR over WANs, broadband service quality assurance, edge computing on fixed and mobile, VNF placement in NFV networks (telco acronyms are good for you), critical systems and TETRA replacement, and making any kind of safety case for SDN deployment at scale.
If you are working on any of these problems, and you don’t have ∆Q in your engineering toolkit, then you are in trouble. You might not know it yet, and my suggestion is that you stop and reflect on whether your science foundations are solid before rushing ahead.
The “obvious” early application of ∆Q measurement and calibration is with Virtual Quality Networks (VQNs), establishing their fitness-for-purpose for different cloud applications. Within the next 12 months I would expect to see lab and field trials, although likely not public announcements of products.
Core ∆Q technologies
There are three central new technology platforms for ∆Q: measurement, modelling and management.
The high-fidelity ∆Q measurement has been matured over a decade, to the point where it is possible to turn it into a product and industrialise it. There is an alpha version of the measurement system available, and it is already usable in that form for some network performance monitoring. The charting remains relatively rudimentary, being done on a custom basis for each project, since none of the existing monitoring tools are set up for ∆Q metrics and the G/S/V analytics (the decomposition of delay into its geographic, serialisation and variable contention components). There also needs to be a client application for smartphones and tablets, and this looks like a key development area for 2018.
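For readers new to G/S/V, here is a minimal sketch, under my own simplifying assumptions, of how the decomposition can be recovered from raw measurements: fit a line through the minimum delay seen at each packet size (intercept ≈ G, slope ≈ S) and treat each packet’s excess over that structural floor as V. The names and numbers are illustrative, not the production toolchain.

```haskell
-- Toy G/S/V decomposition from (packet size, delay) samples.
module Main where

import Data.Function (on)
import Data.List (groupBy, sortOn)

-- (packet size in octets, observed one-way delay in ms)
samples :: [(Double, Double)]
samples = [(100, 5.1), (100, 5.9), (500, 6.0), (500, 7.2), (1500, 8.3), (1500, 9.9)]

-- Minimum delay observed for each packet size.
minima :: [(Double, Double)]
minima = [ (fst (head g), minimum (map snd g))
         | g <- groupBy ((==) `on` fst) (sortOn fst samples) ]

-- Ordinary least squares through the minima: delay ≈ g + s * size.
fitGS :: [(Double, Double)] -> (Double, Double)
fitGS pts = (my - s * mx, s)
  where
    n  = fromIntegral (length pts)
    mx = sum (map fst pts) / n
    my = sum (map snd pts) / n
    s  = sum [ (x - mx) * (y - my) | (x, y) <- pts ]
       / sum [ (x - mx) ^ (2 :: Int) | (x, _) <- pts ]

main :: IO ()
main = do
  let (g, s) = fitGS minima
  putStrLn ("G (fixed) ~ " ++ show g ++ " ms, S ~ " ++ show s ++ " ms/octet")
  -- V for each sample: the delay above the structural floor.
  mapM_ (\(sz, d) -> putStrLn ("V sample: " ++ show (d - (g + s * sz)))) samples
```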
The ∆Q modelling is available in the Overture testbed. This allows us to do two essential things: establish the performance envelope (“predictable region of operation”) of an application, expressed in ∆Q metrics, and remove deployment pain by accurately simulating in the lab the network environment of any scenario. This is raw, nascent technology that is not yet productised. I expect it to gain a lot more attention in 2018, as people realise it is the answer to a universal problem: developing quantitative network performance demand specifications for cloud applications.
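As a flavour of what “simulating the environment” involves (and making no claim about how Overture itself is built), here is a toy sketch: take a target ∆Q expressed as delay quantiles plus a loss probability, and draw a per-packet outcome from it by inverse-CDF sampling, which is the impairment a lab emulator would then apply to live traffic.

```haskell
-- Toy ∆Q-driven impairment: each packet is either lost or delayed,
-- according to a target quality attenuation. Illustrative names only.
module Main where

-- Target ∆Q: loss probability plus (cumulative probability, delay ms) points.
data DeltaQ = DeltaQ { lossProb :: Double, quantiles :: [(Double, Double)] }

-- Step-function inverse CDF: map a uniform [0,1) draw to a loss or a delay.
outcome :: DeltaQ -> Double -> Maybe Double
outcome (DeltaQ lp qs) u
  | u >= 1 - lp = Nothing                                  -- packet never arrives
  | otherwise   = Just . snd . head $ dropWhile ((< u) . fst) qs ++ [last qs]

-- A tiny deterministic pseudo-random stream, so the sketch needs only base.
uniforms :: Int -> [Double]
uniforms seed = map (\x -> fromIntegral x / 2147483648) (tail (iterate step seed))
  where step x = (1103515245 * x + 12345) `mod` 2147483648

main :: IO ()
main = do
  let path = DeltaQ 0.01 [(0.5, 12), (0.9, 25), (0.99, 60)]  -- scenario to emulate
  mapM_ (print . outcome path) (take 10 (uniforms 42))       -- Just delay, or Nothing
```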
The ∆Q management is a revolutionary new kind of packet scheduler that establishes a new class of network. This “Contention Management” starts with the end performance outcome (expressed in ∆Q metrics, naturally), and works backwards to what the mechanisms need to do in order to deliver it. This is the reverse of all existing packet networks, in which we build stuff and then let the customer find out if there’s any safety margin. The second-generation contention manager is currently being deployed by Just Right Networks Ltd to manufacture “cloud access erlangs” for any SaaS/UC application.
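The contention management mechanism itself is not described in this article, so what follows is only a generic, heavily simplified illustration of my own of what “working backwards from the outcome” can mean: each flow carries a delay budget derived from its ∆Q target, and the scheduler always sends the queued packet with the least remaining slack against that budget.

```haskell
-- A toy "outcome-first" scheduler: least slack against the delay budget wins.
-- This is a stand-in of my own, not the actual contention manager.
module Main where

import Data.List (delete, minimumBy)
import Data.Ord (comparing)

data Packet = Packet
  { flowName    :: String
  , arrivalMs   :: Double
  , delayBudget :: Double   -- ms allowed, taken from the flow's ∆Q target
  } deriving (Eq, Show)

-- Send the packet whose deadline (arrival + budget) is nearest.
next :: Double -> [Packet] -> Maybe (Packet, [Packet])
next _   []    = Nothing
next now queue = Just (p, delete p queue)
  where p = minimumBy (comparing (\q -> arrivalMs q + delayBudget q - now)) queue

main :: IO ()
main = do
  let queue = [ Packet "voice" 10 20, Packet "backup" 5 500, Packet "video" 8 80 ]
  print (next 12 queue)   -- the voice packet wins: it has the tightest slack
```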
∆Q mathematics and science
The central concepts and metrics are all documented at qualityattenuation.science, and in the many presentations and articles linked from there. It’s not yet a simple and easy journey, and taking people on the path to quality enlightenment is definitely a work in progress, not least because it is a paradigm change from how the industry mainstream presently sees digital experience quality.
There is a core training curriculum available in the ∆Q science. This comes in a basic form (suitable for a 1-2 day course, together with a handbook) accessible to anyone with a numerate background, and a deeper, far more technical course (1 week in length) that is closer to PhD level. These have been delivered to multiple clients in Europe, North America and Asia, and refined over time.
In 2018 these materials need organising into an easier-to-follow set of learning modules, ideally with the input of new industry participants from research institutions. A little industry sponsorship money to help disseminate the science would also go a long way. It’s been a tough ride producing public science goods from minuscule private resources.
∆Q intellectual property
If ∆Q metrics are to become an industry standard, then there has to be a reference implementation of the measurement system. Nobody wants to be locked into a proprietary technology stack, as has happened previously in the industry’s development with 3G. (Hello, Qualcomm, we love you really.)
This remains a barrier, and a way has to be found to appropriately reward the inventors and pioneers of ∆Q. Ideally this technology needs to be available on an open source basis with something like an Apache or BSD license, and freely offered to R&D labs and individual/educational users. That will relieve the obscurity issue, as every ambitious 25-year-old developer (with a Haskell compiler…) can get to play.
It is going to take serious time and money to file all the relevant patents, create the intellectual property pool, and disseminate the technology so as to kick off the ∆Q toolchain development gold rush. My hope is that one of the global equipment vendors will seize leadership in 2018, and that the fear of missing out will have everyone else jumping on the bandwagon soon after.
∆Q industry standards
∆Q metrics are to information flow what amperes are to the flow of electric charge, and litres per second to the flow of water. In other words, it’s not just a neat and cool idea, it’s something baked into the universe, and your job is to get with the plan as quickly as you can.
At the moment, there’s no obvious home for standardising ∆Q metrics and the associated “quality SLAs” (called Quantitative Timeliness Agreements, or QTAs). Many candidate organisations have a possible stake, with relevant existing initiatives: IEEE, TM Forum, ETSI, ONAP, GSMA, 3GPP, Broadband Forum, ITU, IETF, and more. If we are lucky, 2018 will see some of these quietly testing the water to find out what the ∆Q fuss is all about.
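To make the QTA idea tangible, here is a minimal sketch, on my own assumptions about how such an agreement might be written down: a set of (percentile, delay bound) pairs plus a delivery floor, with compliance meaning the observed ∆Q meets every bound. The data types and thresholds are illustrative only.

```haskell
-- Toy "quality SLA" check: does a measured ∆Q satisfy a QTA?
module Main where

import Data.List (sort)

data QTA = QTA
  { delayBounds  :: [(Double, Double)]  -- (percentile 0..1, max delay in ms)
  , minDelivered :: Double              -- minimum fraction of packets delivered
  }

-- Observations: Just delay for delivered packets, Nothing for losses.
complies :: QTA -> [Maybe Double] -> Bool
complies (QTA bounds floorFrac) obs = delivered >= floorFrac && all met bounds
  where
    total     = fromIntegral (length obs)
    arrived   = sort [ d | Just d <- obs ]
    delivered = fromIntegral (length arrived) / total
    -- Delay at percentile p, counting lost packets as "slower than anything".
    met (p, bound) =
      let rank = ceiling (p * total) :: Int
      in  rank <= length arrived && arrived !! (rank - 1) <= bound

main :: IO ()
main = do
  let qta = QTA [(0.50, 20), (0.95, 50)] 0.999
      obs = map Just [12, 14, 9, 22, 48, 15, 11, 30, 18, 16] ++ [Nothing]
  print (complies qta obs)   -- False: one loss in eleven breaches the 99.9% floor
```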
A number of vendors have built private libraries of performance requirements for different SaaS/UC apps, all using their own private testbeds and proprietary methods and metrics. The move to ∆Q metrics will allow cloud application and platform vendors to issue network demand specs, just as a piece of equipment in a rack tells you how much current it draws. I hope for some baby steps in 2018, as existing players privately play with ∆Q tools in their labs.
Testing, inspection and certification of quality in digital supply chains is close to non-existent, and ∆Q’s state reflects that problematic reality. I don’t expect this to change in 2018, or indeed before 2020.
∆Q methods and processes
Whilst the scientific management concepts are all out there and proven (just-in-time, Six Sigma, lean, Theory of Constraints, etc.), their application to distributed computing is novel. There are essentially no maps, and few guides. Don’t look for any “packet network lean quality” textbooks or ∆Q university courses, as there aren’t any.
There is a huge amount of work to be done across the whole service lifecycle to reengineer product development, sales & marketing, and in-life service and support for a “quality first” world. If the auto industry managed it 50 years ago, transforming its supply chain and defect nightmare, we can do it now. It’s not going to happen overnight, but 5-10 years from now things could be remarkably different.
The progress I hope for in 2018 is to get a few simple exemplar business processes up and running, like automated fault isolation, or segmented performance of existing wholesale products. The enabling measurement, modelling and management technologies will take years of work to turn into the business value of calibration, coordination and control. Whole new OSS/BSS systems are required, as old ones become obsolete.
∆Q bleeding edge research
The world’s top computer boffins are working on integrating ∆Q with RINA, the only serious candidate to supersede TCP/IP. ∆Q is also being applied to blockchains, which raise a whole new level of distributed systems performance and security issues.
There is a massive backlog of inventions waiting to be turned into innovations once the ∆Q toolchain reaches critical mass of adoption in the R&D community. ∆Q enables new routing architectures that are sensitive to latency and packet size. I expect to see initial exploration of this in 2018, albeit as private R&D.
In 2018, a key goal for me will be to get the ∆Q technology stack into the hands of multiple research institutions and universities. I have had many conversations about this in recent months. The barriers are all surmountable, and I hope to make progress soon.
∆Q public presence and policy
From an analyst perspective, we can expect to hear more about intent-based networking, and the growing role of application-aware and quality managed (if not quite yet assured) connectivity. The delivery troubles of 5G are likely to gain commentator attention, and it would not be surprising to see ∆Q mentioned more in that context.
Regulators are already taking a growing interest in QoE-centric measures, especially as “net neutrality” is now dead in the water both technically and politically. There is a bargain to be made around quality floors. ∆Q is the obvious answer to their “which metric?” problem, and they need to take the initiative to get a common science base agreed.
We may see ∆Q-based “synthetic speed tests” reach market in 2018 in early form. This would be the first step to a new regulatory model anchored in fitness-for-purpose for any application (with a “speed test” merely being one kind of application).
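As an illustration of the underlying idea (and explicitly not the ∆Q method itself), a “synthetic” result can be inferred from quality figures that have already been measured, rather than by flooding the line. The well-known Mathis et al. approximation for bulk TCP throughput, used below purely as a stand-in for a richer ∆Q-based predictor, is one such inference from round-trip time and loss.

```haskell
-- Infer a "speed test" result from quality measures, rather than measuring it
-- directly. Mathis approximation: throughput ≈ (MSS / RTT) * (sqrt 1.5 / sqrt p).
module Main where

-- mssBytes: TCP segment size; rttSec: round-trip time; lossProb: packet loss rate.
mathisMbps :: Double -> Double -> Double -> Double
mathisMbps mssBytes rttSec lossProb =
  (mssBytes * 8 / 1e6) / rttSec * (sqrt 1.5 / sqrt lossProb)

main :: IO ()
main =
  -- e.g. 1460-byte segments, 30 ms round trip, 0.1% loss
  putStrLn ("Predicted bulk TCP rate: " ++ show (mathisMbps 1460 0.030 0.001) ++ " Mbit/s")
```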
The press is catching on to the problems of broadband quality, and the visibility that ∆Q metrics give might drive a few interesting stories in 2018.
The year ahead for ∆Q
In the next year, the focus will mainly be on basic ∆Q measurement. There are basic building blocks that the industry needs, such as standardising the statistical properties of test data streams, and upgrading the probing and data capture for multi-point measurement. This is unglamorous stuff, and there are already multiple vendors poking around the possibilities.
Putting ∆Q metrics into the hands of (tens of) thousands of sysadmins and users is a realistic goal for 2018, repackaging the alpha version of the measurement system into something closer to a productised solution. This will generate much-needed awareness and engagement. A key application is going to be basic fault management: is it my broadband or my WiFi that is making my apps fail?
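To show what answering that question might involve, here is a toy sketch under my own assumptions: measure delay twice, once across the WiFi hop (device to router) and once to a reference point beyond the access line, compare a high quantile of each, and point the finger at whichever segment carries the larger share of the degradation. The thresholds and data are made up for illustration.

```haskell
-- Toy "broadband or WiFi?" fault isolation from two delay samples.
module Main where

import Data.List (sort)

-- Delay at (roughly) the p-th quantile of a sample.
quantile :: Double -> [Double] -> Double
quantile p xs = sorted !! min (length sorted - 1) (floor (p * fromIntegral (length sorted)))
  where sorted = sort xs

-- Attribute the degradation to whichever segment carries most of it.
blame :: [Double] -> [Double] -> String
blame wifi endToEnd
  | quantile 0.99 wifi > 0.5 * quantile 0.99 endToEnd = "Look at the WiFi segment first"
  | otherwise                                         = "Look at the broadband segment first"

main :: IO ()
main = do
  let wifiDelays = [2, 3, 2, 45, 60, 3, 2, 4, 55, 3]          -- ms, device to router
      e2eDelays  = [18, 20, 19, 70, 82, 21, 19, 20, 78, 20]   -- ms, device to reference point
  putStrLn (blame wifiDelays e2eDelays)
```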
We may need a new industry body to promote the science of quality and its enabling technologies. It is possible that there may be headway in 2018 in establishing a Network Quality Foundation or similar. If this interests you (read: you have a budget), then do get in touch. This may have to wait until 2019, as the network quality movement gains more traction and attention.
Whilst we have multiple telcos and equipment providers actively engaged with ∆Q, the people missing from the party at the moment are the cloud services giants like Amazon and Microsoft. Whether it is SaaS web page loading, streaming, or unified comms, there’s a ubiquitous issue in delivering mission-critical applications, especially for the enterprise.
If there’s one thing I hope to change in 2018, it is for the cloud industry players to wake up to the fact that a standard unit for supply and demand for network resources might be a really useful thing.
For the latest fresh thinking on telecommunications, please sign up for the free Geddes newsletter.