AI Getting In On The Ground Floor Of Networking

AI is coming to enterprise networks in a number of ways, but the most fundamental shift will likely be on hardware.

As can be expected, AI workloads are quite large and quite complicated, incorporating sophisticated algorithms, abundant compute cycles and highly dynamic data management. Already, leading organizations are finding it difficult to handle all of this on traditional CPUs, GPUs and FPGAs, which is why the processor industry is working overtime to produce new generations of AI-ready silicon.

According to Allied Market Research, the AI chip sector is on pace to top $91 billion by 2025, a roughly twenty-fold increase from today’s already hefty $4.5 billion valuation. The technology has seen a steady flow of capital investment over the past year, as talk of smart cities and smart homes has ramped up and research has shown the increasing viability of quantum computing. On the downside, however, managing and utilizing smart chips differs vastly from current practice, which is likely to create a shortage of skilled workers in the coming years.

For networking in particular, the first vendor out of the gate with an AI-driven switch is Huawei, which recently announced the CloudEngine 16800, featuring an embedded AI chip and a 48-port 400GbE line card per slot. The Huawei chip supports the iLossless algorithm to provide auto-sensing and auto-optimization of traffic latency and packet loss, both of which are detrimental to AI workloads. The device also incorporates the FabricInsight network analyzer, which locates and identifies faults in support of autonomous networking.

Meanwhile, Xilinx is out with the new Versal chips (the name is a blend of “versatile” and “universal”), which it says are optimized for AI workloads of all kinds. The chip comes in a number of configurations, with anywhere from 128 to 400 AI engines backed by dual-core Arm Cortex-A72 application processors and dual-core Cortex-R5 real-time processors, along with more than 1,900 DSP engines for high-precision floating-point operations. The aim is to hit the throughput, latency and power-efficiency benchmarks needed to satisfy AI’s demanding inference and signal-processing requirements.

The key to implementing AI on silicon, however, is not to produce overwhelming performance right out of the box, but to ensure that the system can learn and adapt to its new environment without human intervention. As Pure Storage Chief Architect Robert Lee explained to Inside Big Data recently, the more iterations an AI system can perform, the greater its ability to train and refine itself as it consumes more data. This, in turn, will lead to better performance and, ultimately, a more balanced computing environment, because the system will be able to identify and correct inhibiting factors across the entire architecture.
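The iterate-and-refine dynamic Lee describes can be sketched in a few lines. The example below is purely illustrative (not any vendor's actual training code): a toy model repeatedly nudges an estimate toward a target, and the residual error shrinks as the iteration count grows, which is the sense in which more iterations mean better self-refinement.

```python
def refine(estimate: float, target: float, iterations: int, rate: float = 0.1) -> float:
    """Illustrative training loop: nudge `estimate` toward `target`.

    Each pass closes a fraction (`rate`) of the remaining gap, so the
    error decays geometrically with the number of iterations.
    """
    for _ in range(iterations):
        estimate += rate * (target - estimate)
    return estimate

# More iterations leave a smaller residual error.
error_after_5 = abs(1.0 - refine(0.0, 1.0, iterations=5))
error_after_50 = abs(1.0 - refine(0.0, 1.0, iterations=50))
print(error_after_50 < error_after_5)
```

The same intuition scales up: each additional pass over the data gives the system another chance to correct the factors holding back its own performance.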

While AI-driven networking will certainly find a home in the data center, the real action will likely take place on the IoT edge, where all manner of traffic patterns and data use cases will emerge. Much of this edge infrastructure will be unmanned, of course, so the enterprise will be in desperate need of technology that can make decisions regarding the optimal means to achieve desired results.

The whole world is getting smarter, and it has reached its current stage primarily through advances in networking. It only stands to reason that the network itself will become smarter as well, right down to its hardware roots.―Enterprise Networking Planet 

Copyright © 2024 Communications Today