Your smartphone can morph from egg timer to real-time language translator to augmented-reality game because it’s a programmable piece of glass. Your wildest ideas can become “an app for that.”
Of all the modern computing infrastructure that helps make your phone such a rich and capable device — cloud data centers, the internet and cellular networks, and edge computing — there is still one part that’s not yet fully programmable, not yet open for the wildest ideas. That is the network itself.
The reason? When the internet was taking off in the 1990s and 2000s, it became bogged down with too many standards, too many regulatory stakeholders and a networking industry a bit too invested in the status quo. When it should have been open and simple, fast-moving and agile, it instead ossified, moving at a glacial pace. It certainly got faster; but because its behavior was locked into standards and silicon, it was hard to make it more reliable, more secure and more useful.
I’ve had a burning desire for more than 15 years to fix this, to improve the internet, to make it faster and make it evolve more quickly. And when I say internet, I mean networking broadly defined: in our homes, in the cellular networks, in Wi-Fi, in enterprises, in the public internet, as well as inside cloud data centers.
Before I joined Intel last year, this desire drove me to become a professor at Stanford and start a number of successful networking companies. My goal has always been to challenge the networking industry to think more in terms of software to drive the infrastructure.
In the past, all the functions of networks were locked down and determined by standards and equipment manufacturers who had very little incentive to change. The thinking was that this was the only way for networks to achieve the desired performance, cost and power efficiency.
But that’s no longer the case.
One example is live in Japan right now. A recent study of 5G networks found that the fastest download speeds, more than 40% faster than any other provider’s, were on a network built by Rakuten, an Intel customer. (Rakuten’s virtualized network runs on Intel® Xeon® processors using our FlexRAN software.)
What’s remarkable is that Rakuten Mobile is not a telecommunications company; it’s an e-commerce and internet services company with 1.5 billion members worldwide. Yet Rakuten was able to build a 5G network with software on the same infrastructure it uses to offer its dozens of online services.
Many companies with warehouse-size data centers — Google, Amazon, Facebook and Microsoft, for instance — are shifting to programmable networks, too. But in their case, the shift is driven by a need for speed, along with the flexibility that programmability brings.
Let’s dig into their voracious appetite for speed. Imagine cutting a line vertically through the United States and adding up all the public internet traffic crossing it in either direction. This is the bisection bandwidth of the internet: essentially, how much capacity the internet has crossing the country. That total is less than the traffic flowing between a couple of hundred servers inside a modern data center, and a single data center often contains tens of thousands or hundreds of thousands of servers. The sheer scale is huge (hence these companies often being known as “hyperscalers”).
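As a rough back-of-the-envelope illustration of that comparison, the figures below are assumptions chosen purely for illustration (a hypothetical NIC speed and a hypothetical cross-country bisection bandwidth), not measured values:

```python
# Back-of-the-envelope comparison: east-west capacity of a few hundred
# servers versus an assumed cross-US internet bisection bandwidth.
# Every figure here is an illustrative assumption, not a measurement.

servers = 200                   # "a couple of hundred servers"
nic_gbps = 100                  # assumed NIC speed per server, in Gb/s
dc_capacity_tbps = servers * nic_gbps / 1_000    # aggregate capacity, Tb/s

internet_bisection_tbps = 15    # assumed cross-US bisection bandwidth, Tb/s

print(f"Data-center east-west capacity: {dc_capacity_tbps:.0f} Tb/s")
print(f"Assumed internet bisection bandwidth: {internet_bisection_tbps} Tb/s")
print(f"Ratio: {dc_capacity_tbps / internet_bisection_tbps:.1f}x")
```

Even with these made-up inputs, the point stands: a small slice of one data center can carry more traffic than an entire country’s share of the public internet.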
If one of these companies wants a faster, more reliable and more secure data center than its competitors, it cannot just go buy the same old fixed-function networking boxes. To introduce new ideas and differentiate from the competition, each company will need to program these devices itself.
The natural next step is to take the chips that were themselves fixed and make them programmable, allowing for that differentiation. I have been involved in developing networking chips for exactly this purpose. Over the past decade I’ve observed that when companies get more control over how individual packets are processed, they do interesting, innovative, sometimes wild things that I would never have thought of, and that their competitors wouldn’t think of either. Today, the networks inside different data centers work in different ways, as each operator introduces its own secret sauce to get a leg up on the competition.
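To make that idea concrete, here is a minimal, purely illustrative sketch in Python of the match-action model that programmable packet-processing chips expose: the operator, not the chip vendor, decides which header fields to match and what to do with each packet. Real switch chips are programmed with domain-specific languages such as P4; the class and field names below are invented for this toy example.

```python
# Toy illustration of the match-action model behind programmable
# packet processing. The operator installs the table entries and
# defines the actions; the "hardware" only applies them.

from dataclasses import dataclass

@dataclass
class Packet:
    dst_ip: str
    dscp: int = 0
    out_port: int = -1   # -1 means "drop" in this toy model

class MatchActionTable:
    """Operator-installed entries mapping a match key to an action."""
    def __init__(self):
        self.entries = {}

    def add_entry(self, key, action):
        self.entries[key] = action

    def apply(self, pkt):
        action = self.entries.get(pkt.dst_ip)
        return action(pkt) if action else pkt   # miss: out_port stays -1 (drop)

def forward(port):
    # Action: send the packet out of a given port.
    def action(pkt):
        pkt.out_port = port
        return pkt
    return action

def prioritize_and_forward(port, dscp):
    # Action: mark the packet with a DSCP value, then forward it.
    def action(pkt):
        pkt.dscp, pkt.out_port = dscp, port
        return pkt
    return action

# The operator's "program": these entries, not the chip vendor, decide behavior.
table = MatchActionTable()
table.add_entry("10.0.0.1", forward(port=3))
table.add_entry("10.0.0.2", prioritize_and_forward(port=7, dscp=46))

print(table.apply(Packet(dst_ip="10.0.0.2")))   # matched: port 7, DSCP 46
print(table.apply(Packet(dst_ip="192.0.2.9")))  # miss: dropped
```

The differentiation comes from what each operator chooses to put in those tables and actions, which is exactly the part that used to be frozen into fixed-function silicon.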
If you step back a little bit, the entire system — the computers, the storage, the network — is becoming one big distributed system that you can program to do exactly what you want.
Our job at Intel is to provide our customers, especially their software developers, with the best programmable platforms in the world. As this infrastructure moves to software in the cloud, through the internet and 5G networks, and all the way out to the intelligent edge, our job is to make it as easy as possible to develop their new ideas on our hardware.
By lifting functions previously baked into hardware up into software, our customers and developers can improve them faster than ever before. If a function is baked into hardware, into fixed-function silicon, then innovation not only moves more slowly but is also limited to the imagination of those who build the hardware.
However, if you move it to software, you open it up to a much bigger population, a universe of developers who can bring their creative ideas and try them out. What’s more, you have handed the keys from those who build the hardware to those who own and operate big networked systems for a living. Only they know how to operate at such scale; only they can write the software to determine how their systems should work.
As the last piece of the world’s computing fabric finally goes programmable, it’s going to change everything — it’s going to open the floodgates for a massive amount of innovation.
For example, my colleague Raja Koduri recently pointed out that the metaverse may be the next major platform in computing after the world wide web and mobile. To realize this vision, we need orders of magnitude more powerful computing and communication capability, accessible at much lower latencies across a multitude of device form factors. All of this is far more achievable with a more composable and programmable infrastructure.
A fully programmable infrastructure will also lead to wider distribution of intelligence. For instance, it brings the ability to process data closer to where that data is produced or consumed, at what we call the edge.
Our customers are already deploying a lot of AI inferencing on their premises at the edge of the network, analyzing video as it streams from cameras to monitor inventory, measure foot traffic and identify manufacturing anomalies. Already big, the use of inferencing will grow rapidly, and we’ll see a massive transformation in factories, retail stores and hospitals. As AI inferencing grows, developers will demand open programming models so they can run their creative new applications on any target they wish, without being locked into a single solution. For this reason, we are seeing rapid growth in our very successful OpenVINO™ inference platform. Pair it with 5G and we see edge inferencing as the next killer app.
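For a flavor of what that looks like in practice, here is a minimal sketch using OpenVINO’s Python runtime (the 2022-era openvino.runtime API). The model file, device choice and input shape are placeholders, so treat this as an outline rather than a drop-in example:

```python
# Minimal sketch of one inference with the OpenVINO Python runtime.
# The model path and input shape are placeholders for illustration.
import numpy as np
from openvino.runtime import Core

core = Core()
model = core.read_model("model.xml")          # a previously exported IR model (placeholder path)
compiled = core.compile_model(model, "CPU")   # target an Intel CPU at the edge
output_layer = compiled.output(0)

frame = np.random.rand(1, 3, 224, 224).astype(np.float32)  # stand-in for a camera frame
result = compiled([frame])[output_layer]      # run inference on one frame
print(result.shape)
```

The same model, written once, can be compiled for different targets by changing the device name, which is the kind of openness developers are asking for.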
I can’t wait to see what new ideas emerge next. Especially the wild ones.
Nick McKeown is senior vice president and general manager of the Network and Edge Group (NEX) at Intel Corporation.