I recently attended a professional event where I was given a challenge: “boil down your message to the world about the future of networking into a three-minute talk”.
This is the edited text of that talk. It contains themes that should be familiar to long-time newsletter readers, but this is the first time they have been presented in this form.
I propose that we need to found a new discipline of network science. For this to happen, we need a theory of translocatability.
Every application outcome in our digital world is the result of two complementary data manipulation activities: computation and translocation. Computation transforms data, whereas translocation takes an output from one computation, and turns it into an input for another, without any transformation.
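To make the distinction concrete, here is a minimal sketch in Python. The function names and the bit-flipping “computation” are my own illustrative choices, not part of any formal theory; the point is only that translocation moves an output to become another input without changing it.

```python
def compute(data: bytes) -> bytes:
    """Computation: transforms an input into a different output."""
    return bytes(b ^ 0xFF for b in data)  # e.g. invert every bit

def translocate(data: bytes) -> bytes:
    """Translocation: an output becomes an input elsewhere, bit-for-bit
    identical; only its location (and timing) change."""
    return data  # in a real network: serialise, queue, transmit, reassemble

result = translocate(compute(b"hello"))
assert result == compute(b"hello")  # translocation adds no transformation
```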
The discipline of computability defines what it means ‘to compute’. It offers a strong model of cause and effect for computation. Computation is therefore a scientific activity, and computers behave in a predictable and repeatable manner.
Whilst computation is the abstract function that is concretely performed by computers, translocation is the abstract function that is concretely performed by data networks. Computation is increasingly a distributed activity, performed by multiple dispersed computation processes. That makes the relationship between computation and translocation of increasing interest to those building and using data networks.
In turn, data networking involves the assembly of individual data links into a fixed and finite transmission resource. We have information theory to describe the constraints of communications channels representing single data links. This resource is then shared using packet-based statistical multiplexing. However, we have hitherto lacked a generalised model of translocatability to define the constraints of the multiplexed system as a whole.
As a result, network design and operation is being conducted without a robust and complete model of cause and effect. This incomplete understanding leads to poor design and operational choices, which often cause packet data networks to behave unpredictably. As such, data networking is typically a craft activity, and is not truly the product of science.
This weakness of fundamental understanding is placing mankind in growing conflict with the mathematics of statistical multiplexing. The result is increasing inefficiency and ineffectiveness in our networking technology. This makes it ever-harder to build reliable and affordable higher-order systems. Since the constraints imposed by mathematics are not negotiable, the eventual outcome of our mistaken beliefs is unsustainable behaviour.
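As one hedged illustration of why those constraints are non-negotiable, consider the classical M/M/1 queue (used here purely as a textbook example, not as the general theory the talk calls for). Delay across a statistically multiplexed resource grows without bound as offered load approaches saturation:

```python
def mm1_mean_delay(load: float, service_time: float = 1.0) -> float:
    """Mean time a packet spends in an M/M/1 queue (waiting + service),
    where load = arrival rate / service rate, and must be < 1."""
    if not 0 <= load < 1:
        raise ValueError("queue is unstable once load reaches 1")
    return service_time / (1.0 - load)

for load in (0.5, 0.8, 0.9, 0.95, 0.99):
    print(f"load {load:.2f}: mean delay {mm1_mean_delay(load):6.1f}x service time")
# Delay diverges as load approaches 1 -- a constraint that no amount of
# engineering effort or capacity planning rhetoric can negotiate away.
```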
What is therefore required is a paradigm shift in our understanding: the creation of a theory of translocatability, to complement that of computability. The heart of this theory is a rigorous mathematical definition of what it means ‘to translocate’.
This conceptual advance gives data networking technology solid mathematical foundations. Computer science will then be joined by network science, respectively underpinned by their rigorous fundamental theories.
By exploiting this philosophically and mathematically sound approach, data networks can then enjoy the predictability and efficiency that we have taken for granted for computation since the earliest digital computers. An infrastructure that behaves predictably will enable us to proceed in a more technically, economically and environmentally sustainable manner.
If you want to know what this theory of translocatability looks like, read my Lean Networking presentation. More technical details are in this presentation [PDF].
To keep up to date with the latest fresh thinking on telecommunication, please sign up for the Geddes newsletter.