I have been watching with dismay the commentary and debate following the US Federal Communications Commission’s issuing of its rules on the contentious issue of “net neutrality”. Regrettably, they have proceeded to issue rules without having their science in order first. As a result they have set themselves up to fail. My fear is that other countries may attempt to copy their approach, at a high cost to the global public.
Let’s take a look at the three core rules, and why they are unsuitable.
No blocking
At first sight this seems like an obviously desirable thing. However, it wrongly assumes a known universe of end points to connect to. For example, a decade from now there will be billions of new connected smart devices. Will an ISP have to route to all of them? How will the FCC differentiate between “blocking” and “places our ISP service doesn’t happen to route to”?
This becomes particularly problematic in a future world of virtualised services, which is the logical end point of technologies like SDN and RINA. Every device will potentially experience its own “virtual Internet” (rather like a VPN or VLAN). It may be undesirable to make all end points reachable by everyone, for a variety of cost, performance and security reasons.
An assumption being made with “no blocking” is that all end points should automatically be associated with each other. This is an artefact of the Internet’s primitive prototype design and protocols. In more advanced architectures (such as RINA, and prospectively 5G) association management is an explicit primitive. You can’t route to another point without associating first (and there is a security process to get through, which might say “no”).
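To make the distinction concrete, here is a minimal sketch in Python (hypothetical names, not RINA's actual API) of the difference between the Internet's implicit "anyone can address anyone" model and an architecture where association is an explicit, policy-checked step that can refuse:

```python
# Hypothetical sketch, not any real protocol's API: contrasting the Internet's
# implicit "send to any address" model with an architecture where association
# is an explicit, policy-checked primitive.

class ImplicitNetwork:
    """Internet-style: any endpoint may address any other; policy is bolted on afterwards."""
    def send(self, src, dst, payload):
        return f"{src} -> {dst}: {payload}"   # no prior relationship required

class ExplicitAssociationNetwork:
    """Association-first style: you cannot route to a peer you have not associated with."""
    def __init__(self, policy):
        self.policy = policy          # callable deciding whether an association is allowed
        self.associations = set()

    def associate(self, src, dst):
        if not self.policy(src, dst): # the security process that "might say no"
            raise PermissionError(f"association {src} -> {dst} refused by policy")
        self.associations.add((src, dst))

    def send(self, src, dst, payload):
        if (src, dst) not in self.associations:
            raise RuntimeError("no association exists; cannot route")
        return f"{src} -> {dst}: {payload}"

# Example policy: a smart boiler only associates with its manufacturer's service.
print(ImplicitNetwork().send("anyone", "boiler", "probe"))   # nothing stops this today

allow = {("boiler", "vendor-cloud")}
net = ExplicitAssociationNetwork(policy=lambda s, d: (s, d) in allow)
net.associate("boiler", "vendor-cloud")
print(net.send("boiler", "vendor-cloud", "telemetry"))       # succeeds
# net.associate("boiler", "ad-tracker")                      # would raise PermissionError
```

In the second model, "not reachable" is the default outcome of policy, not evidence of "blocking".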
Furthermore, the idea of “public” IP addresses (being like phone numbers) is an anachronism. The Internet is not actually a true “inter-network”, as it lacks any gateways that hide the implementation of one network from the next. As a result it is more like a global LAN using a global address space, with the resulting security and performance nightmares. “No blocking” is based on a backwards-looking view of technology to the 1970s.
For that matter, why should any ISP be forced to offer access to Netflix? Why can’t an ISP offer a “100% guaranteed Netflix-free!” service at a lower price to users who don’t want to carry the cost of their neighbours’ online video habit? Or an ISP service that doesn’t connect you to web sites with the letter “z” in the domain name? A basic freedom of (non-)association is being lost here.
The real issue is the conjoining of the ISP service and local broadband access, with a market bottleneck for the latter. In the dial-up era you had a choice of ISPs, so this ISP-level issue didn’t matter. To this foreigner, “no blocking” is a competition issue for the FTC and antitrust law, not the FCC.
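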
No throttling
Again, this seems like an obvious “good thing”. I bought a 10Mbit/sec broadband plan, and you’re only delivering me 5, what gives?
Yet this is a naive understanding of broadband. “No throttling” assumes an intentional semantics to network operation that doesn’t exist. In other words, it assumes that the service is supposed to exhibit certain performance behaviours. Yet broadband is a stochastic system whose properties are entirely emergent (and potentially non-deterministic under load). An ISP can, in principle, legitimately exhibit any possible behaviour.
How will a regulator distinguish between “throttling” and mere “unfortunate statistical coincidences leading to bad performance”? How will they define what performance is supposed to be delivered, and to whom? Why should someone who merely demands more resources be given them? Where’s the fairness in that!
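A little queueing arithmetic shows why this question is so awkward. The numbers below are invented, but the shape of the result is standard M/M/1 theory: average delay is an emergent, sharply non-linear function of load, so terrible performance can appear without anyone touching a throttle.

```python
# Illustrative M/M/1 queueing arithmetic (assumed parameters): average delay is an
# emergent, sharply non-linear function of load, so "bad performance" can appear
# without anyone deliberately throttling anything.
service_rate = 1000.0   # packets per second the link can serve (hypothetical)

for utilisation in (0.5, 0.8, 0.9, 0.95, 0.99):
    arrival_rate = utilisation * service_rate
    mean_delay_ms = 1000.0 / (service_rate - arrival_rate)   # mean time in system, M/M/1
    print(f"load {utilisation:.0%}: average delay ≈ {mean_delay_ms:.1f} ms")

# 50% load gives ~2 ms; 99% load gives ~100 ms from exactly the same equipment.
# A regulator looking only at observed delay cannot tell this saturation effect
# apart from deliberate "throttling".
```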
What’s the metric used to determine if “throttling” has taken place? If it’s “speed”, then my evil packet-scheduling friends and I can deliver an ISP service with good speed but terrible quality. Indeed, “speed” encourages ISPs to optimise for long file downloads, not interactivity.
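Here is a toy illustration (the delay figures are made up) of how two services with the same headline “speed” can be worlds apart for interactive use:

```python
# Toy illustration (made-up numbers): two services with the same headline "speed"
# can deliver very different experiences for interactive traffic, because speed
# says nothing about the delay distribution each packet sees.
import statistics

# Per-packet one-way delays in milliseconds for a short interactive exchange.
service_a = [20, 22, 21, 23, 20, 22, 21, 24]        # low, stable delay
service_b = [20, 250, 25, 400, 22, 300, 21, 500]    # same links, bursty queueing

for name, delays in [("A", service_a), ("B", service_b)]:
    print(f"Service {name}: mean delay {statistics.mean(delays):.0f} ms, "
          f"worst {max(delays)} ms, jitter (stdev) {statistics.pstdev(delays):.0f} ms")

# Both services might happily sustain a 10 Mbit/s bulk download ("speed"),
# but a VoIP call or game on Service B is unusable. A "no throttling" test
# built on speed alone cannot tell these two apart.
```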
So what are the proposed metrics for performance and methods for measuring traffic management? What’s the reference performance level for the service? Without these, “no throttling” is meaningless and unenforceable.
The real issue is whether the service performance is good enough to deliver the QoE outcome(s) that the user seeks. How can the user know if the service will be fit for purpose?
No paid prioritisation
This rule raises the bogeyman of “fast lanes”, which conflates two distinct issues. The first is having multiple explicit classes of service (a “polyservice” network); the second is who pays for it (on the retail or the wholesale side).
Inhibiting the very necessary exploitation of traffic scheduling is technical madness. It ensures the non-scalability of the Internet to satisfy growing quality and quantity needs. Thankfully, it’s only a few neutrality extremists who think all packets were created equal and FIFO queues are divine creations. Yet this rule appears to leave us with “no prioritisation” as a proposed future. Are they serious?
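For anyone who doubts that scheduling matters, here is a small sketch (illustrative link rate and packet sizes) of one voice packet stuck behind a bulk download, first in a single FIFO queue and then with a simple strict-priority scheduler:

```python
# Minimal sketch (illustrative parameters only): on one shared link, a strict-
# priority scheduler lets a small interactive packet overtake bulk packets,
# while FIFO makes it wait behind whatever arrived first.
from collections import deque

LINK_RATE = 1_000_000  # bits per second (hypothetical 1 Mbit/s link)

def drain(queues):
    """Serve queues in priority order; return per-packet completion times."""
    clock, finished = 0.0, []
    while any(queues):
        for q in queues:                      # highest-priority non-empty queue first
            if q:
                name, size_bits = q.popleft()
                clock += size_bits / LINK_RATE
                finished.append((name, clock))
                break
    return finished

bulk = [("bulk", 12_000) for _ in range(5)]   # five large download packets
voice = [("voice", 1_600)]                    # one small voice packet arriving behind them

fifo = drain([deque(bulk + voice)])           # single FIFO queue
prio = drain([deque(voice), deque(bulk)])     # voice gets its own higher-priority queue

print(f"FIFO     voice delay: {dict(fifo)['voice'] * 1000:.1f} ms")
print(f"Priority voice delay: {dict(prio)['voice'] * 1000:.1f} ms")
# The bulk transfer finishes at essentially the same time in both cases;
# only the order in which packets leave the link differs.
```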
Determining in advance that the wholesale side cannot pay for assurance simply prevents a rational market pricing for quality. This also dumps a ton of complexity onto end users. Now grandma potentially needs to purchase and provision the right quality assurance for each service or application she uses. I hope she gets the codec right in that drop-down box…
We already have “paid priority”, and nobody died. All CDNs offer de facto priority by placing content closer to the user, so it can out-compete the control loops of content further away. Paid peering is perfectly normal. Indeed, nobody bats an eyelid when Amazon sends you physical goods via a parcel service. So why the panic over digital goods?
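The arithmetic behind that CDN advantage is simple. Using the well-known Mathis back-of-envelope approximation for steady-state TCP throughput (with purely illustrative loss and round-trip figures), the nearby copy of the content wins by roughly the ratio of the round-trip times:

```python
# Back-of-the-envelope illustration using the Mathis approximation for
# steady-state TCP throughput: rate ≈ (MSS / RTT) * (1.22 / sqrt(loss)).
# The numbers are illustrative, but the shape of the result is the point:
# halving the round-trip time roughly doubles what a TCP flow can claim.
from math import sqrt

def tcp_rate_mbps(mss_bytes, rtt_s, loss):
    return (mss_bytes * 8 / rtt_s) * (1.22 / sqrt(loss)) / 1e6

MSS, LOSS = 1460, 0.0001                 # 1460-byte segments, 0.01% loss (assumed)
near = tcp_rate_mbps(MSS, 0.010, LOSS)   # content cached 10 ms away (CDN node)
far  = tcp_rate_mbps(MSS, 0.100, LOSS)   # same content fetched 100 ms away

print(f"Nearby copy: ~{near:.0f} Mbit/s, distant copy: ~{far:.0f} Mbit/s")
# Under contention the nearby flow's control loop reacts faster and grabs a
# larger share of the bottleneck: a paid CDN placement is already a form of
# "priority", with no special packet marking involved.
```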
The real issue is the separation of the immutable delivery cost issues from everything else, and pricing the service appropriately to reflect those costs.
Time for some hard science to inform policy
Both sides of the debate in the US have been fuelled by campaign groups who are often funded by rich corporations and donors. It’s a battle between Big Content and Big Telco over who carries the cost of delivering bulky and quality-demanding services. There’s little of principle at stake. It’s about power and privilege.
A lot of (legal) academics have written on the subject, with some offering reasoning that unsurprisingly aligns with the interests of their sponsors. They consistently make the same technical errors:
- Firstly, they assume a “virtuous circle” of content and users, ignoring the diseconomies of scale: users are not internalising their cost of using a shared medium, and the cost of association is not zero.
- Secondly, they assume circuit-like behaviours of the Internet, with wholly wrong understandings of “QoS”, “congestion” and the network resource trading space.
- Finally, they look backwards to an illusory utopian past of the Internet, rather than planning for the future. (SDN doesn’t appear once in the whole FCC order. QED.)
However, technical reality has the last laugh. If you tried to make spectrum policy rules that broke the laws of physics, you’d be ignored by informed people, and the cosmos wouldn’t bend. Broadband is similarly constrained by “laws of mathematics”. Why don’t we try making rules that fit within those, for a change?
The real issue is abuse of power, not abuse of packets. We need a new regulatory approach, grounded in the science of network performance, that directly constrains market power.
For the latest fresh thinking on telecommunications, please sign up for the free Geddes newsletter.