The global digital transformation of communications networks will continue to demand capabilities that support exponential growth in data traffic. Services and applications that are already data-intensive today will see even more demand as technologies such as virtual reality, ultra-high-definition video streaming, and artificial intelligence (AI) with advanced analytics mature. Coupled with applications that rely on high-speed networking – such as IoT, cloud computing, and edge computing – network operators are challenged to keep pace.
In response, massive investment in networks that can cope with this amount of data has already begun, and provisions for 400/800G speeds and feeds are being deployed. Let us take a look at the challenges, capabilities, and optimization of 400G network deployment.
How fast is 400G?
The actual line rate of a 400G Ethernet link is 425 Gbit/s, once encoding and error-correction overhead are included. Making this possible is four-level pulse amplitude modulation, or PAM4, which moves away from sending one binary bit at a time to sending symbols that each carry two bits.
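The two-bits-per-symbol idea can be sketched in a few lines of Python. The Gray-coded level mapping below is the conventional one for PAM4, but the specific amplitude values are illustrative:

```python
# Sketch: PAM4 maps each pair of bits to one of four amplitude levels.
# Gray coding (assumed here) means adjacent levels differ by one bit,
# so a small amplitude error corrupts only a single bit.
GRAY_PAM4 = {(0, 0): -3, (0, 1): -1, (1, 1): +1, (1, 0): +3}

def pam4_encode(bits):
    """Encode a flat bit sequence (even length) into PAM4 symbols."""
    assert len(bits) % 2 == 0, "PAM4 consumes bits two at a time"
    return [GRAY_PAM4[(bits[i], bits[i + 1])] for i in range(0, len(bits), 2)]

symbols = pam4_encode([0, 0, 0, 1, 1, 1, 1, 0])
print(symbols)       # -> [-3, -1, 1, 3]
print(len(symbols))  # -> 4 symbols carrying 8 bits
```

Because each symbol carries two bits, the same symbol rate delivers twice the data rate of binary NRZ signaling.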
The challenges of 400G networks
There are many challenges when deploying 400G networks, including:
Optical transceivers, which are one of the most critical parts of a 400G rollout and perhaps the biggest stumbling block right now. A wide range of transceivers is available on the market, at different levels of technology maturity. This is a challenge for all players along the value chain: multiple formats need to be supported and validated, and the right solution chosen for each application.
Coherent transceivers and connections, which are being deployed and represent another major novelty in the current digital transformation. Communications service providers (CSPs), operators, and data centers want to take advantage of the substantial benefits this technology brings, such as enhanced capacity, flexibility, and longer reach. However, it also introduces another level of complexity, which requires the appropriate tools to diagnose and demystify.
Port densification, which is necessary in large data centers and requires plugging more cables into server racks to meet the bandwidth, speed, and latency requirements of new networks. Problems that are easy to overlook must be solved here, such as making smaller, more densely packed cables easier to manipulate, manage, and troubleshoot.
Testing and monitoring
Network testing and monitoring has always been important, but even more so with the advent of 400G. To support growing next-generation network applications, operators and their end customers demand minimal latency from their service providers, while other key metrics, such as data throughput, frame loss, and latency variation, need to be measured and optimized, making testing vital. 400G brings additional issues that only make testing more important.
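As a rough sketch of what such measurements look like, the helper functions below compute frame loss ratio, throughput, and a simple latency-variation figure from a test run's raw counters. The names and inputs are illustrative, loosely following RFC 2544-style benchmarking metrics, not a real test tool:

```python
# Sketch: RFC 2544-style metrics from raw counters and per-frame
# latency samples. Inputs and function names are illustrative.
from statistics import pstdev

def frame_loss_ratio(tx_frames, rx_frames):
    """Fraction of transmitted frames that never arrived."""
    return (tx_frames - rx_frames) / tx_frames

def throughput_bps(rx_frames, frame_size_bytes, duration_s):
    """Delivered bit rate over the measurement window."""
    return rx_frames * frame_size_bytes * 8 / duration_s

def latency_variation(latencies_us):
    """Simple jitter figure: standard deviation of latency samples."""
    return pstdev(latencies_us)

print(frame_loss_ratio(1_000_000, 999_990))  # -> 1e-05
print(throughput_bps(1000, 64, 1.0))         # -> 512000.0 bit/s
```

In practice these figures come from dedicated test instruments, but the underlying arithmetic is no more than this.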
The shift from non-return-to-zero (NRZ) modulation, which transmits one binary digit at a time, to PAM4, which transmits two bits per symbol, requires more checks to make sure signals are transmitted correctly. The advantage of digital transmission has always been that errors are unlikely, but PAM4's four closely spaced amplitude levels leave far less noise margin than NRZ's two. That is why the 400G standard mandates a forward error correction (FEC) mechanism that not only detects transmission errors but also corrects them – and this FEC overhead is included in the 425 Gbit/s line rate.
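The arithmetic behind the 425 Gbit/s figure can be sketched as follows, assuming the 256b/257b transcoding and RS(544,514) FEC overheads defined for 400GBASE-R (stated here from memory; verify against IEEE 802.3):

```python
# Sketch of the 400GBASE-R line-rate arithmetic.
payload = 400e9          # MAC data rate, bit/s
transcoding = (257, 256) # 256b/257b transcoding overhead
fec = (544, 514)         # RS(544,514) "KP4" FEC overhead

line_rate = payload * transcoding[0] * fec[0] / (transcoding[1] * fec[1])
print(line_rate / 1e9)   # -> 425.0 Gbit/s
```

The overheads multiply out exactly: 400 x (257/256) x (544/514) = 425 Gbit/s.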
Encoding two bits per symbol also introduces more complexity when it comes to testing. The spacing between encoding levels is tighter, making the signal more susceptible to noise. This means more parameters need to be checked, including optical power at transmit and receive, electrical power consumption, and even temperature – and these need to be tested before, during, and after networks are installed. Before installation, carriers need to thoroughly evaluate and stress-test new components in their own labs.
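A lab-style margin check against a transceiver's FEC counters might look like the following sketch. The 2.4e-4 pre-FEC bit error ratio threshold is the commonly cited limit for KP4 (RS(544,514)) FEC, but treat it as an assumption to confirm against the component datasheet:

```python
# Sketch: pre-FEC BER estimate from FEC error counters.
# The 2.4e-4 limit is an assumed, commonly cited KP4 threshold.
KP4_PRE_FEC_BER_LIMIT = 2.4e-4

def pre_fec_ber(corrected_bits, total_bits):
    """Estimate pre-FEC bit error ratio from corrected-error counters."""
    return corrected_bits / total_bits

def link_margin_ok(corrected_bits, total_bits, limit=KP4_PRE_FEC_BER_LIMIT):
    """True if the FEC still has headroom to correct all errors."""
    return pre_fec_ber(corrected_bits, total_bits) < limit

# e.g. 1e9 corrected bit errors over 60 s on a 425 Gbit/s line:
total = int(425e9 * 60)
print(link_margin_ok(1e9, total))  # BER ~3.9e-5, below the limit -> True
```

A link can report zero post-FEC errors while its pre-FEC BER creeps toward the limit, which is why monitoring these counters over time matters.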
Throughout installation, quality of service needs to be monitored. This means examining not only layer 2 Ethernet performance but also layer 3 packet efficiency.
Enabling further digital evolution
The shift to 400G enables the next step in technological change, but the new paradigm is far from simple and will require testing and monitoring at every step of deployment. While it could be tempting to wait until some of the technology has matured, the pressure to increase network capacity and deliver new services is forcing network operators to move forward, bringing new challenges. Because of those challenges, today’s networks require innovative approaches to make way for even more digital transformation.