I have been invited to write on network neutrality for the industry newsletter VA Telekommarknaden. They are covering the European Telecommunications Network Operators meeting #46GA in Stockholm tomorrow (Friday 17th October).
The bottom line? We are “pursuing a technical fantasy [that] is the perfect regulatory folly”.
The first half of the article is reproduced below for your convenience.
There is only one problem with network ‘neutrality’: it doesn’t technically exist.
The underlying political idea is that ‘fair’ packet treatment will produce a ‘fair’ user experience. This is a seductive attempt to apply common carriage principles to broadband. Unfortunately, it is not grounded in technical reality.
The confusion comes from misunderstanding the relationship between packet treatment and application outcomes. Distributed computing systems use statistically-multiplexed resources, and packets are merely arbitrary divisions of data flows: unlike letters or parcels, a packet has no value in isolation, only as part of the flow it belongs to. Broadband is therefore fundamentally different from other forms of common carriage infrastructure.
This has led some regulators (and many policy advisors) to make false assumptions about packet networks:
Assumption #1: It is ‘neutral’ packet treatment that matters. Incorrect! Packets are not people, and you don’t need to be ‘fair’ to them. There is no good karma in being even-handed to fragments of data flows in flight. It is only fairness to people that we care about, which means only end user outcomes matter. Indeed, this focus on ‘fair treatment for packets’ leads to a deeper fallacy: the equivalence of localised packet scheduling treatment does not imply the equivalence of end user outcomes. Any qualified systems engineer will tell you that a collection of local optima does not guarantee a global optimum, which is what we seek both individually and collectively.
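To make this concrete, here is a toy Python simulation of a single bottleneck applying perfectly ‘neutral’ first-come-first-served treatment to every packet. The link speed, flow names and burst sizes are all hypothetical, chosen only to illustrate the mechanism:

```python
from statistics import mean

def voip_delays(bulk_burst: int, ticks: int = 10_000) -> list[int]:
    """One 'neutral' FIFO link servicing 1 packet per tick.

    A bursty bulk flow (bulk_burst packets every 10 ticks) shares the
    queue with a VoIP-like flow (1 small packet every 10 ticks). Every
    packet gets identical treatment; we record the queueing delay the
    latency-sensitive VoIP packets actually experience.
    """
    queue, delays = [], []
    for t in range(ticks):
        if t % 10 == 0:
            queue.extend([("bulk", t)] * bulk_burst)  # burst arrives
        if t % 10 == 5:
            queue.append(("voip", t))                 # steady trickle
        if queue:
            flow, arrived = queue.pop(0)              # FIFO: nobody favoured
            if flow == "voip":
                delays.append(t - arrived)
    return delays

for burst in (0, 4, 8):
    d = voip_delays(burst)
    print(f"bulk burst of {burst}: mean VoIP delay {mean(d):.1f} ticks, worst {max(d)}")
```

The packet treatment never changes, yet the VoIP user’s outcome is entirely determined by what the other flow happens to do. Identical local treatment, divergent global outcomes.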
Assumption #2: You can readily measure and thus regulate ‘neutrality’. Incorrect! The presumption is that by delivering ‘good’ averages of standardised measures you can deliver ‘good’ user outcomes to everyone. However, averages fail to capture what matters to customers and citizens: making bad experiences sufficiently rare. Furthermore, ‘neutrality violations’ cannot be reliably detected in any objective and repeatable way by deployed measurement approaches. For instance, these tools can’t consistently distinguish ‘throttling’ from the emergent effects of packet phasing (or even locally bad WiFi, for that matter).
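Here is a small sketch of why averages mislead. The numbers are invented: two hypothetical access links with near-identical mean latency, one of which stalls badly on 1% of packets:

```python
import random
from statistics import mean, quantiles

random.seed(42)

# Two hypothetical links: same average latency, very different tails.
steady = [50.0] * 10_000                               # always 50 ms
spiky = [500.0 if random.random() < 0.01 else 45.5     # 1% of packets stall
         for _ in range(10_000)]

for name, samples in (("steady", steady), ("spiky", spiky)):
    p99 = quantiles(samples, n=100)[98]                # 99th percentile
    print(f"{name}: mean {mean(samples):5.1f} ms, p99 {p99:5.1f} ms")
```

An average-based regulation would score these two links as equivalent; a VoIP call or an online game would experience them as nothing of the kind.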
Assumption #3: That the end result of a ‘neutral’ network is ‘fair’. Incorrect! All users are competing for a shared resource, and use adaptive protocols that are designed to be aggressive. What is ‘fair’ about continuing to reward selfish behaviour? Some might attempt to finesse this by using legalese words like ‘reasonable’ to mask these technical issues. This assumes an omniscient and benevolent intentionality to the operation of inorganic network equipment. The switches don’t have a soul, and can’t telepathically divine what the ‘reasonable’ thing to do is.
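The reward for aggression is simple arithmetic. Under per-flow ‘fair’ sharing (roughly what competing adaptive flows converge towards), a user’s share grows with the number of flows they open. A sketch with made-up users and flow counts:

```python
LINK_MBPS = 100  # hypothetical bottleneck capacity

def per_user_share(flows_per_user: dict[str, int]) -> dict[str, float]:
    """Per-FLOW fairness: split the link evenly across flows,
    then total up what each USER ends up with."""
    total_flows = sum(flows_per_user.values())
    return {user: LINK_MBPS * n / total_flows
            for user, n in flows_per_user.items()}

# A polite single-connection user versus an app opening 9 parallel connections.
print(per_user_share({"alice": 1, "bob": 9}))
# -> {'alice': 10.0, 'bob': 90.0}
```

The ‘neutral’ network has just handed 90% of the capacity to whoever was prepared to be greediest.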
Nobody will ever be able to define the metric of ‘neutrality’. That is because (at best) ‘neutrality’ is a relative outcome between two or more competing uses. Indeed, a ‘neutral’ network may be completely unsuitable for some uses, seriously disadvantaging some users. This relativity means you cannot regulate ‘neutrality’ into existence: any selected metric will be dismissed by a rational court as arbitrary when advised by suitably-qualified expert witnesses.
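One last sketch shows why any such metric is arbitrary. Suppose (hypothetically) we score two traffic-management configurations against two application types, then collapse the scores into a single ‘neutrality’ ranking. The ranking flips with the application weighting, and choosing that weighting is exactly the political judgement the metric was supposed to avoid:

```python
# Hypothetical quality scores (0-100) per configuration and application type.
scores = {
    "plain_fifo":  {"video": 90, "gaming": 40},
    "small_first": {"video": 70, "gaming": 85},
}

def single_metric(config: str, weights: dict[str, float]) -> float:
    """Collapse per-application scores into one number using a weighting."""
    return sum(weights[app] * s for app, s in scores[config].items())

for weights in ({"video": 0.8, "gaming": 0.2},
                {"video": 0.2, "gaming": 0.8}):
    best = max(scores, key=lambda c: single_metric(c, weights))
    print(f"weights {weights}: most 'neutral' network is {best}")
```

Whichever weighting a regulator picks, the other side’s expert witnesses can legitimately argue for a different one.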
To read the rest of the article, click here and skip to “So if not ‘neutrality’, then what else?”.