Imagine sitting in an air traffic control tower but having no control over air traffic. Instead, planes from all over the world independently choose which runway to land on and when. It would, of course, lead to chaos and congestion.
Now think of our core network, directing and landing internet traffic to customers. It’s a similar scenario. But unlike in aviation, we have no control over the volume of traffic our network has to handle at any one time. Rather than being able to space things out in an ordered way, we’re forced to build excess capacity – think of it as hundreds of extra terminals and runways, if you like – to support the remote possibility of high peaks of traffic wanting to land at any given moment.
And last night saw the highest peak yet, with 25.5 Tbps flowing over our fixed network. That was around 12% higher than the previous peak set in December last year. It was driven by the popularity of the midweek Premier League fixtures, with six games simultaneously streaming online.
Of course, we invest to ensure our networks can cope with this, but as demand grows further this decade we can see potential problems coming down the line. That’s why we’ve taken a stance on the need to review the rules that govern the approach to the internet, commonly referred to as ‘net neutrality’. These rules, first introduced earlier this century, were designed to prevent any discrimination of traffic – in particular, discrimination that might limit smaller players unable to compete with the largest.
Now, almost 20 years on from when the term was first coined, there has never been more of a need for a fair, transparent and open internet. But the principles that were established to deliver that are no longer working for everyone in the ecosystem. The net is not neutral and becomes less so every year; a quick glance at our own network data suggests that at peak times up to 80% of traffic – and therefore capacity on our network – comes from just a handful of companies, some using models that can find and consume every last bit of space. It’s hard to argue that this doesn’t unfairly impact other users of the internet.
We currently build huge excess capacity to support these inefficient processes. Capacity for content is not infinite, and the exponential growth of data will, in the future, outstrip what we can reasonably be expected to build – or indeed expect consumers to pay for. What are occasional management issues now will become much bigger and more frequent challenges later in the decade, affecting everyone.
Identifying this fast-approaching challenge is the easy bit. Tackling it is far harder. Even raising the issue causes some detractors to claim that these are old arguments rehashed by network operators, and that we haven’t moved on. I don’t see it that way at all. Faced with the realities of an internet that is carrying ever greater quantities of (and higher-resolution) traffic, and becoming squeezed on capacity as a result, we’re trying to find solutions.
Ironically, it’s the defenders of the status quo who are resurrecting arguments from the past, ignoring the reality of internet economics and the problems that lie ahead. Equally, it’s not in the interests of any ISP – however new – to operate inefficient traffic management; its investors should be asking serious questions if it did.
We are actively trying to plan for the future. The internet has evolved – from an era of simple webpages to an ultra-complex system of streams, casts, on-demands and now a promised metaverse. It is critical national infrastructure. The rules that once signalled fairness are out of date and serve only to support the concentration of services, not their diversity.
These are big questions that we need to face up to. The last thing we want to see is old principles holding back modern technologies. I believe we can move forward without overriding the core principles of net neutrality: openness and transparency. In fact, I believe we need to in order to protect them.