Business and application requirements have pushed web titans like Facebook, Google, Microsoft, and Amazon toward innovative next-gen data center designs that can meet the processing, networking, and storage capacity required to serve millions to billions of users. At the networking layer, in addition to network link speeds increasing from 10 to 40 to 100 Gbps, we are also seeing significant topology design changes. Large data centers are moving from traditional L2-based fat-tree architectures with three tiers of access, aggregation, and core layers to L3 ECMP-based multi-stage Clos networks that maximize link utilization and simplify application workload placement.
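To make the Clos design concrete, the following sketch sizes a simple two-tier leaf-spine fabric, the most common instance of the multi-stage Clos topology described above. All port counts and the example switch sizes are illustrative assumptions, not figures from any particular deployment:

```python
# Illustrative sketch: basic capacity math for a two-tier leaf-spine
# Clos fabric. Assumes every link runs at the same speed and that each
# leaf connects once to every spine (the standard leaf-spine wiring).

def leaf_spine_capacity(leaf_ports: int, uplinks_per_leaf: int,
                        spine_ports: int) -> dict:
    """Return capacity figures for an assumed leaf-spine fabric.

    leaf_ports:       total ports on each leaf switch
    uplinks_per_leaf: leaf ports reserved as uplinks (one per spine)
    spine_ports:      ports on each spine switch (one per leaf)
    """
    spines = uplinks_per_leaf                # one uplink per spine
    max_leaves = spine_ports                 # each spine port serves one leaf
    servers_per_leaf = leaf_ports - uplinks_per_leaf
    return {
        "spines": spines,
        "max_leaves": max_leaves,
        "max_servers": max_leaves * servers_per_leaf,
        # Every pair of leaves is connected through every spine, so the
        # number of equal-cost (ECMP) paths equals the number of spines.
        "ecmp_paths": spines,
        # Ratio of server-facing bandwidth to uplink bandwidth per leaf.
        "oversubscription": servers_per_leaf / uplinks_per_leaf,
    }

# Hypothetical example: 48-port leaves with 16 uplinks, 32-port spines.
fabric = leaf_spine_capacity(leaf_ports=48, uplinks_per_leaf=16,
                             spine_ports=32)
print(fabric)
```

With these assumed numbers the fabric supports 1,024 servers, with 16 equal-cost paths between any two leaves at 2:1 oversubscription; because every spine carries traffic, link utilization is maximized rather than half the fabric sitting idle as in an STP-protected design.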
Along with that shift, next-gen data center networks focus more on software-centric controls (SDN). In addition, there is a shift toward disaggregation of networking devices (separating hardware from software), with merchant silicon and white boxes gaining traction in select verticals. Key success factors for building next-gen data center networks include modularization and standardization of the fabric, support for virtualization and containers, programmability and automation, and visibility that facilitates troubleshooting.
On the data center networking stack, the next-gen data center network is no longer confined to the physical fabric but expands into cross-data center interconnects (DCI) and into accelerator NICs and virtual switches on the servers, with innovation happening across all these domains.
Finally, the efforts of the web titans are expected to drive much of the innovation, both by publishing their own learnings and through organizations like the Open Compute Project (OCP) and the Linux Foundation, which push open-source hardware and software designs into the wider ecosystem. Networking vendors will have to find a way to continue to differentiate their offerings while participating in efforts that, ironically, could cannibalize their own businesses. To succeed, these vendors will have to innovate and engage the markets in new ways that add value over and above hardware platforms that are being commoditized.
Cloud computing, software-defined networking (SDN), and network virtualization have radically changed the way data centers are networked. Network equipment needs to be designed and built to handle the massive scale of today's web and cloud applications, with flexible software controls that can dynamically react to changing bandwidth needs as well as the complex North-South and East-West data patterns inside data centers.
Switching now extends to servers, with the rise of virtual switches that can accommodate tens to hundreds of virtual machines (VMs) or thousands of Linux containers. And in some deployments, specialized network interface cards (NICs) help offload switching from these virtual switches to improve overall system performance.
Moving from servers to the network, whether a new proprietary switch and software management platform, or a white-box networking platform using SDN standards, many networking vendors are responding to the need for scale and software agility by unveiling scalable switching platforms with dynamic software control and programmable APIs. These platforms also integrate SDN with traditional Layer 2 and Layer 3 technologies.
At the same time, inter-data center interconnects are evolving, with a growing trend toward white-box optical platforms under SDN control that provide flexible and cost-effective connectivity. Likewise, optical modules that connect networks across data centers are being slotted directly into data center switches, obviating the need for separate DCI boxes. And data centers are no longer restricted to large buildings of thousands or tens of thousands of servers but are also pushing into edge locations with initiatives like CORD (Central Office Re-architected as a Data center), blurring the lines around the definition of a data center.
Key Business Drivers for the Next-Gen Data Center
As use of cloud and Internet technology grows, with more people, devices, and applications coming online, the data centers that host and power the applications on which consumers, businesses, and governments alike have come to depend must continue to evolve.
Some key business drivers are propelling enterprises and communication service providers to evolve their data centers. These include increased competitiveness driving agility, cost savings, and differentiation in IT; increased consumption of video and media-rich content; the dominance of cloud and mobile applications; and the growing importance of data, spanning big data, IoT, and analytics.
At the same time, with IoT adoption, the number of devices that are sending data will continue to grow. In sensor and industrial applications, we will see both large data set transfers (think streaming data from airplane wing models in wind tunnels or even during actual flights), as well as millions of streams of small data sets (gas, electric, water meters). These all contribute to both North-South and East-West data use on data center networks.
Next-Gen Data Center Networking Requirements
With the business and application needs described above driving data center architectures, a new wave of data center networking requirements is emerging. The key requirements can be summarized as simplification; standardization and modularity; support for virtualization and containers; programmability, automation, and DevOps/NetOps; improved visibility and troubleshooting; and open hardware platforms.
Next-Gen Data Center Networking Evolution
To meet the new requirements for the next-gen data center, we are seeing major data center networking evolution, starting with the major cloud providers, but with trickle-down impact on regular enterprises and service providers as well.
Historically, the data center network consisted purely of physical routers and switches, with an active-passive (or, in more advanced designs, active-active) approach to traffic flow through the data center. Most traffic was north-south: external requests came into the data center to clusters of load-balanced application servers, which would process the data and send information back to the external requester.
Furthermore, in this type of architecture (sometimes termed fat tree because of the high-bandwidth requirement at the core), half of the network was active while the other half often sat on standby, ready to take over if a link or network device failed. Spanning Tree Protocol (STP) was used to ensure there were no loops in the overall switching infrastructure. And historically, the access layer tended to be L2-centric, with L3 routing taking place only at the core.
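The contrast with the newer L3 ECMP approach can be sketched in a few lines. Where STP avoids loops by blocking redundant links outright, ECMP hashes each flow's 5-tuple to pick one of several equal-cost next hops, so every uplink carries traffic while packets within a flow stay in order. The hash function and spine names below are illustrative assumptions; real switches use vendor-specific hardware hashes:

```python
# Illustrative sketch of ECMP flow placement: hash the 5-tuple that
# identifies a flow, then map it onto one of N equal-cost next hops.
import hashlib

def ecmp_next_hop(src_ip: str, dst_ip: str, proto: str,
                  src_port: int, dst_port: int,
                  next_hops: list[str]) -> str:
    """Pick a next hop for a flow; the same flow always maps to the
    same hop, which preserves per-flow packet ordering."""
    key = f"{src_ip}|{dst_ip}|{proto}|{src_port}|{dst_port}".encode()
    digest = hashlib.sha256(key).digest()
    return next_hops[int.from_bytes(digest[:4], "big") % len(next_hops)]

# Hypothetical fabric with four spine switches as equal-cost next hops.
uplinks = ["spine-1", "spine-2", "spine-3", "spine-4"]

# The same flow hashes to the same uplink every time...
a = ecmp_next_hop("10.0.0.5", "10.0.1.9", "tcp", 49152, 443, uplinks)
b = ecmp_next_hop("10.0.0.5", "10.0.1.9", "tcp", 49152, 443, uplinks)
assert a == b

# ...while many distinct flows spread across all uplinks, unlike STP,
# which would block three of the four redundant paths entirely.
flows = {ecmp_next_hop("10.0.0.5", "10.0.1.9", "tcp", p, 443, uplinks)
         for p in range(49152, 49252)}
print(sorted(flows))
```

The design choice here is the essential one behind the Clos migration: load balancing is done per flow, not per packet, trading perfectly even utilization for in-order delivery without reordering buffers.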
In the new cloud-centric world, and taking into account SDN and NFV evolution, we are seeing the definition of boundaries around networking expand, along with the expansion of data centers into new locations (such as remote points-of-presence like telco central offices). The NGDCN now looks more like the following, starting at the virtual switch within the servers themselves, and going out to the core switches and the data center interconnects (DCI) between geographically disparate data centers.
Moving forward, the next-gen data center network will continue to evolve rapidly over the next few years. With web titans like Google driving new generations of data centers, and Amazon, Facebook, LinkedIn, and Microsoft continuing to push aggressively for scale, rapid innovation and evolution can be expected.
For enterprises and service providers looking to adopt the designs wrought by these web titans, it will be hard to keep up with each and every generation of their NGDCN evolution. Nevertheless, they should focus on the key tenets of the next-gen data center network: IPv4/v6 L3 ECMP-based Clos fabrics, modularity and standardization, support for virtualization and containers, automation and DevOps support, improved visibility for better optimization, and security. As long as enterprise and service provider data center designers stick with these fundamental elements, they will be well positioned to adapt to new learnings and innovations from the large data center players.
Furthermore, enterprise and service provider organizations will find it prudent to closely track the innovation coming from OCP and the Telecom Infra Project (TIP), and to explore the value that these open platforms can provide. While enterprises and service providers may choose to purchase proprietary platforms from network incumbents, the next generation of network devices will be heavily influenced by the work of these open-source organizations.

SDxCentral