I don’t like to boast, but…
No, let me start that again.
I absolutely love to boast, and knowing it’s highly unseemly, I try (and sometimes fail) to moderate the urge. Anyhow, 27 years ago this year I started a degree in Mathematics and Computation at the University of Oxford. I even graduated, three years later, with a grade that isn’t worth boasting about.
To be honest, I’m not a particularly good mathematician. That is why today I tell tall tales about mathematicians in action, rather than doing the mathematics myself. Partly it’s a matter of aptitude, but mostly one of motivation: I prefer witty words to elegant equations.
Heaping abstraction upon abstraction made for rather too narrow an intellectual diet for my liking in my late teens. That’s why I spent a large proportion of my undergraduate years reading The Economist, exploring the fringes of the pre-Web Internet on Usenet, and planning trips for the hill-walking club.
Why is all this self-indulgent reminiscing relevant to you and me today? Well, the clue is in the rather unusual degree name, Mathematics and Computation. This isn’t “computer science” as such, but rather the fundamental (mathematical) structures that underpin computing. Plus the philosophy that underpins mathematics, for good measure.
Computation is to computing as theoretical physics is to your everyday atom-smashing in a particle accelerator. I sometimes joke that my degree is in “Maths and yet more maths”, but “theoretical computer science” would be more accurate. Former coursemates have jested that mere “computer science” is the easy option for intellectual lightweights. After all, any Web page coder who has mastered JavaScript’s type system is a “computer scientist” these days, no?
Anyhow, back to the main story. In the 1930s, folk like Turing, Church and von Neumann put in place the theoretical foundations for computing. They asked the basic questions of what it meant “to compute”. They reasoned about this through philosophical and mathematical means using intellectual devices like a Turing machine or the λ-calculus.
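To make that a little less abstract, here’s a toy illustration of my own (nothing from the 1930s papers themselves, and rendering it in Python is a cheerful anachronism): Church numerals encode the natural numbers as nothing but functions, which is exactly the kind of device the λ-calculus uses to pin down what “to compute” means.

```python
# Toy illustration: Church numerals, the λ-calculus encoding of the
# natural numbers as pure functions. A numeral n applies a function f
# to an argument x exactly n times.

zero = lambda f: lambda x: x                         # apply f zero times
succ = lambda n: lambda f: lambda x: f(n(f)(x))      # apply f one more time
add  = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))

def to_int(n):
    """Decode a Church numeral by counting how many times f is applied."""
    return n(lambda k: k + 1)(0)

two = succ(succ(zero))
three = succ(two)
print(to_int(add(two)(three)))  # prints 5: arithmetic built from nothing but functions
```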
This was all done before a practical digital computer was built in the 1940s. The theory preceded the implementation, so when the machines arrived, their construction could be guided within a highly rigorous framework. “Computation” was the abstract framework, “computing” the concrete implementation.
Also in the 1940s, Claude Shannon put in place the theoretical foundations for data transmission. His information theory gave meaning to the idea of “information” as a phenomenon distinct from the physical bearer over which it was exchanged. “Information theory” is the abstract framework, “data transmission” the concrete implementation.
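For the curious, the heart of that abstraction fits on one line each. These are Shannon’s standard results, quoted here as an aside rather than anything from the essay itself: the entropy of a source measures information in bits regardless of the physical medium, and the capacity of a noisy channel bounds how fast those bits can be carried.

```latex
H(X) = -\sum_{x} p(x)\,\log_2 p(x)
\qquad
C = B \log_2\!\left(1 + \tfrac{S}{N}\right)
```

Here p(x) is the probability of symbol x, B is the channel bandwidth, and S/N is the signal-to-noise ratio.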
We’ve been transmitting data for as long as humans have been around, so this was a case of the theory catching up with the reality.
Now, it’s only been a human lifetime or so since the first two computers were networked together, and we’ve poured a huge amount of effort into data networking since. Yet, remarkably, we still don’t have a word for the abstract activity that the network performs as a whole. So let’s steal one!
“Translocation” is a word with the right meaning that has yet to be applied widely in the context of distributed computing. It’s the twin of “computation”: it describes the abstract nature of information replication across a distributed computing system. This is more than transmission over a single link, and it can’t be modelled with Shannon’s information theory alone, because we’ve added new elements like buffers, routing and multiple paths.
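As a hedged sketch of why a single-channel model falls short, consider this toy simulation (the path names, delays and round-robin routing are all my own illustrative assumptions, not a real protocol): a message sprayed across two paths with different delays arrives out of order, and only a reordering buffer at the far end restores it.

```python
import heapq

# Toy translocation sketch: one message, split across two paths with
# different delays, reassembled through a reordering buffer.
# All parameters here are illustrative assumptions.

PATH_DELAYS = {"path_a": 3, "path_b": 7}   # arbitrary units of time

def send(message):
    """Split the message into packets and spray them across both paths.
    Returns a heap of (arrival_time, sequence_number, payload) events."""
    events = []
    paths = list(PATH_DELAYS)
    for seq, chunk in enumerate(message):
        path = paths[seq % len(paths)]          # naive round-robin routing
        heapq.heappush(events, (PATH_DELAYS[path], seq, chunk))
    return events

def receive(events):
    """Buffer out-of-order arrivals and release packets in sequence."""
    buffer, next_seq, output = {}, 0, []
    while events:
        _, seq, chunk = heapq.heappop(events)   # packets arrive by time, not order
        buffer[seq] = chunk
        while next_seq in buffer:               # drain whatever is now contiguous
            output.append(buffer.pop(next_seq))
            next_seq += 1
    return "".join(output)

print(receive(send("translocation")))  # -> "translocation", despite reordering
```

Even this caricature already needs sequence numbers, a routing decision and receiver-side buffering, which is precisely the extra structure that a point-to-point Shannon channel has no vocabulary for.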
This process of separating out a general abstraction from the concrete implementation is the foundation of science and engineering. Hiding irrelevant variability in the world gives us intellectual leverage, as we can reason about the world without becoming mired in unnecessary detail.
It may seem like abstruse arcana to point out that we’re one concept short of a full house in our lexicon. Yet it’s a really important thing: if we don’t even have a word to describe the nature of any abstraction we are making, how can we begin to reflect upon its robustness? How can we formulate predictive models of distributed computing if we only have fundamental theory for the “computing” and not the “distributed”?
Who knows, maybe some day in my lifetime, the University of Oxford might even offer a course in Mathematics and Translocation. Just don’t ask me to teach it!