

Embracing digital transformation

While 2020 was a year of IT fear, confusion, and upheaval, 2021 brings a new era of IT progress, innovation, and efficiency, due in no small part to the growing adoption of virtualization technologies in enterprise networking operations.

The enterprise networking industry in 2020 is not the same as it was in 2019, nor any year prior for that matter. In 2020, enterprise networking was tested in unprecedented ways, with the pandemic forcing organizations to fundamentally change the way they operate and adapt for a long-term remote workforce.

The global pandemic has changed the role of the network and networking technology from being an enabler for enterprises to being the foundation on which all businesses, institutions, and homes run. Networking professionals have absorbed and adopted wave after wave of new technologies over the last few years — most recently software-defined networking and its closely related cousin, network virtualization.

But there is no time to rest up because network cloudification is set to explode in popularity. As the impacts of the COVID-19 pandemic continue to affect industries around the world, new technologies and market developments emerge at a rapid pace, presenting both advantages to capitalize on and major challenges to overcome.

The growing role of SD-WAN

The SD-WAN market has exploded in 2020, driven by the need to support increasingly remote workforces, the emergence of the AI-driven WAN and the requirement to deliver branch performance that matches campus networks. As enterprises look at the year ahead, support for the distributed workforce remains a top priority, and SD-WAN is a crucial technology that businesses need to invest in to thrive in the new normal.

There has been a lot of hype around SD-WAN and it is a technology that is still in its first generation. Over the next few years, there will be a second generation of SD-WAN technology that provides a more integrated approach that combines security, networking, access, and even artificial intelligence-powered insights.

As SD-WAN evolves over the coming years, it is expected to focus more on enabling end-to-end application delivery and experiences. That is an idea with a lot of value, given the extremely distributed nature of application deployments and microservices today. The last few years have seen considerable consolidation in the SD-WAN space, and there will be further consolidation in the years ahead. Ultimately, only five or six important SD-WAN vendors will remain in the market.

SD-WANs marked a major advancement in network cost, performance, and security by giving adopters easily manageable links to branch offices and other remote parties for data, voice, or video communication. Unfortunately, without the assistance of third-party applications, SD-WANs lack important security attributes, such as virtual private network (VPN) protection and web gateways.

Though there are notable barriers to SD-WAN adoption, there are also solutions for each one. If enterprises can overcome these barriers, implementing SD-WAN has the potential to improve scale, bring agility, unify policy and boost security efficacy through a collapsed and integrated AI-driven network. Deploying SD-WAN can be a significant investment for companies, but in the long run, the technology is an important step in moving enterprises into the next generation of networking.

Reshaping the data industry
Reports of the imminent death of the data center have been greatly exaggerated. That is not to say that data centers can remain static, resembling the ones of 10 or even 5 years ago. Thanks to the explosive growth in cloud computing, and the increasing numbers of enterprises adopting a cloud-first strategy, many organizations are having to fundamentally rethink how they architect and use their on-premises data centers. That is paving the way for a transformation in the infrastructure within them, and also a transformation in the way that they are managed.

To get an idea of what data centers are up against, Gartner forecasts that spending on public cloud services worldwide will grow 18.4 percent in 2021 to total USD 304.9 billion, up from USD 257.5 billion in 2020. As a proportion of total IT spending, cloud is expected to make up 14.2 percent of total global enterprise IT spending in 2024, up from 9.1 percent in 2020.

But these figures mask much greater migrations to the cloud. Many companies are allocating 40 percent or more of their IT budgets to cloud and cloud-related services. It is not uncommon to find large enterprises moving 60 percent of their workloads out of the data center and into the public cloud. And, there are instances of enterprises that have moved a higher proportion still.

Data-center networking was already changing before the technology challenges brought on by the COVID-19 pandemic, and few areas of the enterprise will be more affected by those changes in the years ahead than the data center itself.

That is because myriad technologies are driving changes in the data center: heavy demand for higher-speed networking, support for a remote workforce, increased security, tighter management, and, perhaps the biggest shift of all, the prolific growth of cloud services.

One of the early tenets of cloud computing was "buy the base, rent the peak," a principle intended to guard against overprovisioning, since a server farm full of idle computers costs money. In today's world, trends, services, networks, websites, and apps change and move faster than analysts and ICT buyers can keep abreast of.
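The "buy the base, rent the peak" principle can be made concrete with a small cost model. The sketch below is illustrative only: the per-hour prices and the demand curve are made-up assumptions, not figures from any real provider.

```python
# Illustrative "buy the base, rent the peak" cost model.
# All prices and the demand curve below are assumed for the sketch.

OWNED_COST_PER_SERVER_HOUR = 0.04   # amortized on-prem cost (assumed)
CLOUD_COST_PER_SERVER_HOUR = 0.10   # on-demand cloud cost (assumed)

def hybrid_cost(demand, base):
    """Own `base` servers outright; rent anything above that per hour."""
    owned = len(demand) * base * OWNED_COST_PER_SERVER_HOUR
    rented = sum(max(d - base, 0) for d in demand) * CLOUD_COST_PER_SERVER_HOUR
    return owned + rented

# A day of hourly demand: a steady 40 servers with a 4-hour peak of 100.
demand = [40] * 18 + [100] * 4 + [40] * 2

all_owned = hybrid_cost(demand, max(demand))   # provision for the peak
all_cloud = hybrid_cost(demand, 0)             # rent everything
best_base = min(range(max(demand) + 1), key=lambda b: hybrid_cost(demand, b))

print(f"own the peak : ${all_owned:.2f}/day")
print(f"rent it all  : ${all_cloud:.2f}/day")
print(f"base={best_base} : ${hybrid_cost(demand, best_base):.2f}/day")
```

Under these assumed prices, owning the steady base of 40 servers and renting only the 4-hour peak beats both extremes, which is exactly what the principle predicts.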

Provisioning infrastructure and networks is the linchpin by which online services rise and fall, and there is simply too much work to be done as we do more online. Data center automation will (many say must) become standard operating procedure; it is just a matter of when.

The biggest motivation behind data center automation today is agility. A computer is a blank slate, but the processes and applications it will perform are endless, so the reconfiguration it needs for different uses is endless too. Scale that up to a data center and one can see how critical it is to automate the provisioning, connecting, and maintaining of servers and networks.

Application developers work with a huge variety of operating systems and APIs, and they need delivery of standardized (occasionally specialized) infrastructure builds at a moment’s notice to program, test, and deploy. Whether that is bringing a new rack online or spinning up thousands of virtual desktops with the necessary software programming tools already installed, automation can deliver faster than an army of data center staff.
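The core pattern behind that kind of automation is desired-state reconciliation: declare what the fleet should look like, compare it with what actually exists, and compute the actions that close the gap. The sketch below is a minimal illustration of the idea; the hostnames, specs, and in-memory "inventory" are all assumptions, whereas real tools drive hardware and cloud APIs.

```python
# Minimal sketch of desired-state reconciliation, the pattern behind
# data-center automation tools. Hostnames and specs are assumed.

desired = {
    "web-01": {"os": "ubuntu-22.04", "role": "web"},
    "web-02": {"os": "ubuntu-22.04", "role": "web"},
    "db-01":  {"os": "ubuntu-22.04", "role": "database"},
}

actual = {
    "web-01": {"os": "ubuntu-22.04", "role": "web"},      # already correct
    "db-01":  {"os": "ubuntu-20.04", "role": "database"}, # needs an upgrade
    "old-99": {"os": "centos-7", "role": "legacy"},       # should be retired
}

def reconcile(desired, actual):
    """Return the actions needed to make `actual` match `desired`."""
    actions = []
    for host, spec in desired.items():
        if host not in actual:
            actions.append(("provision", host, spec))
        elif actual[host] != spec:
            actions.append(("reconfigure", host, spec))
    for host in actual:
        if host not in desired:
            actions.append(("decommission", host, None))
    return actions

for action, host, spec in reconcile(desired, actual):
    print(action, host, spec or "")
```

Because the plan is recomputed from the declared state each run, the same loop handles bringing a new rack online and tearing one down, with no hand-written runbook per change.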

But like everything computer-driven, data center automation is not perfect. There is a real threat of alarm fatigue and false positives — so many alerts demanding attention that staff tend to tune them out and potentially miss a legitimate problem. Even so, there might be too many priorities needing human intervention for staff to keep on top of them all. And if a process does fail, or an errant command is issued automatically, it can propagate heaven knows where and cause a cascade of negative effects.
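One common defense against alarm fatigue is deduplication: collapsing repeated alerts for the same source and symptom into a single incident within a suppression window. The sketch below illustrates the idea; the five-minute window, the alert fields, and the sample alerts are assumptions for illustration.

```python
# Sketch of alert deduplication to fight alarm fatigue: collapse repeats
# of the same (source, kind) alert inside a suppression window.
# The window length and alert tuples below are assumed for the sketch.

SUPPRESS_SECONDS = 300  # treat repeats within 5 minutes as one incident

def dedup(alerts):
    """Yield only the first alert per (source, kind) in each window."""
    last_seen = {}
    for ts, source, kind in sorted(alerts):
        key = (source, kind)
        if key not in last_seen or ts - last_seen[key] > SUPPRESS_SECONDS:
            yield ts, source, kind
        last_seen[key] = ts  # repeats keep the suppression window open

alerts = [
    (0,   "rack-12", "fan-speed"),
    (45,  "rack-12", "fan-speed"),   # repeat, suppressed
    (90,  "rack-12", "fan-speed"),   # repeat, suppressed
    (60,  "rack-07", "link-down"),   # different source, kept
    (900, "rack-12", "fan-speed"),   # window expired, kept as new incident
]

for alert in dedup(alerts):
    print(alert)
```

Five raw alerts collapse to three incidents, which is the point: operators see each problem once, not once per polling interval.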

Mostly however, there is the emotional hurdle. Many IT professionals might see data center automation as a means to make their skills and employability redundant, and there is an inherent trust barrier to putting our faith in automated systems to such an extent, especially when million-dollar cloud applications or websites depend on data centers running smoothly.

The software-defined infrastructure
Following closely on the heels of the software-defined data center (SDDC) is software-defined infrastructure (SDI), which moves network features and functions into an entirely software-based model. By decoupling resource provisioning and management from the underlying infrastructure components, SDI enables adopters to accelerate their move to hybrid cloud.

With SDI, the entire data center infrastructure is controlled by software, demanding little or no human involvement. SDI unites an array of data center infrastructure elements, such as SDN, software-defined storage (SDS), software-defined compute (SDC), and network function virtualization (NFV), seamlessly interconnecting an enterprise’s public and private cloud resources.

Sindhu Bhaskaran
Senior Director
Intelligent Automation, AI Practitioner, AI Solutions, Business Impact through AI, Capgemini

“One of the key findings of recent research we conducted indicates that organizations that deployed AI at scale are the ones that realized quantifiable benefits from their AI deployments. Hence, it is especially important to pick the right AIOps use cases that not only provide the required scale but also deliver those benefits.”

SDI also allows human administrators to define the enterprise’s application and operational policies. Orchestration software, meanwhile, automates infrastructure configuration and provisioning to comply with established policies. In 2021, more enterprises will turn to an SDI-based architecture to address the challenges associated with the traditional hardware-centric model.

Container technology
Over the past few years, containerization has emerged as a major advancement in software development and is likely to gain added momentum during 2021. At this point it might be sensible to think about virtualization and another of its closely related cousins, container technology. Traditional enterprise networking hardware was like a dedicated server, while enterprise network virtualization is analogous to a number of virtual machines, each with their own OS, running on top of a hypervisor on a host server.

Containers are smaller and nimbler, consisting of their own application and as much operating system as they need, sharing the container host’s operating system with other containers on the host. The result is a container app, or a cloud-based microservice, which can easily be moved around and controlled using an orchestration tool such as the open source Kubernetes platform.


It is worth noting that just as existing legacy software from legacy networking appliances can be virtualized, it can also be containerized. But most people take cloudification to be something more: the development of new code that has specifically been written or rewritten to run in the cloud, in a lightweight microservice container.

So cloudification usually (but does not have to) involve the containerization of networking microservices, and providing them from open public clouds, or indeed private corporate clouds as well. That way these abstracted networking microservices can be set up and deployed as needed across any cloud environment.

AIOps will mean the end of human network management
When early humans domesticated wheat, it changed the way they lived. The plentiful food produced by these pioneering farmers allowed populations to grow rapidly, and that meant there could be no going back to the nomadic hunter-gatherer lifestyle of the past. There were simply too many people for that.

Fast forward 10,000 years to today and network professionals are about to make a change to the way they work, which, once made, there can also be no turning back from. But this time it is not about food production – it is about artificial intelligence for IT operations, AIOps. In fact, AIOps is a bit of a misnomer because it is really about the use of machine learning (ML) rather than artificial intelligence (AI).

The point is that enterprise networks, the networks they are connected to, the applications which run on those networks and in the cloud, and all the supporting infrastructure that goes with that, have now become supremely complex when viewed as one gigantic entity. One can forget about understanding what is going on in these systems: it is at the very limit of human ability just to manage them and fix them when they go wrong.

That is why networking teams around the world are looking at AIOps platforms to help them handle the vast volumes of data generated by these IT systems, networks, and applications and to analyze events, metrics, network flow data, streaming telemetry data, and so on.

At the moment the trend to AIOps has only just started in earnest, although it has been talked about for several years. But Gartner predicts that by 2023, 40 percent of DevOps teams will augment application and infrastructure monitoring tools with AIOps platform capabilities. There can be little doubt that a few years after this, AIOps will be the norm rather than the exception in virtually every large enterprise.

AIOps will mean that humans no longer manage networks, and once that has happened, there can be no going back. The main reason is that, freed from the constraints of what the human brain can cope with, network and systems complexity can go through the roof. It will no longer be possible for humans to manage such networks and systems, but as long as machine learning systems can watch over it all with loving grace, that would not matter. These systems will analyze the entire system’s functioning, detect anomalies, fix problems before they occur, avoid outages, and detect and prevent cybersecurity incidents.
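At the heart of such anomaly detection is a simple statistical idea: flag a metric sample that deviates sharply from its recent baseline. The sketch below shows one minimal version using a rolling mean and standard deviation; the telemetry values, window size, and threshold are all assumptions for illustration, and production AIOps platforms use far richer models.

```python
# Sketch of the anomaly-detection core of an AIOps pipeline: flag metric
# samples far outside a rolling baseline. Telemetry values, window size,
# and threshold below are assumed for illustration.

import statistics

def detect_anomalies(samples, window=10, threshold=3.0):
    """Return indices of samples more than `threshold` standard
    deviations from the mean of the preceding `window` samples."""
    anomalies = []
    for i in range(window, len(samples)):
        baseline = samples[i - window:i]
        mean = statistics.mean(baseline)
        stdev = statistics.pstdev(baseline)
        if stdev > 0 and abs(samples[i] - mean) / stdev > threshold:
            anomalies.append(i)
    return anomalies

# Simulated link-latency telemetry (ms): steady around 20 ms, one spike.
latency = [20, 21, 19, 20, 22, 20, 19, 21, 20, 20, 95, 21, 20]
print(detect_anomalies(latency))  # → [10], the spike
```

The same loop applied to millions of event, flow, and telemetry streams is where the machine surpasses the operator: not in cleverness per sample, but in the sheer volume it can baseline at once.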

That at least is the theory. They may prove not to be perfect, but that is not the point. What is important is that they will be able to perform these tasks better than humans possibly could, and that is because the industry will have long passed the point where humans could perform these tasks at all.

And there, in a nutshell, is the big potential problem with AIOps. If it does not end up being as capable as expected, nothing can be done about that. Because, like becoming an agrarian society, once we go down the path of AIOps, we will pass a point of no return.

Looking ahead
There was a time when the nuts and bolts, the so-called plumbing of the network, was what enterprise networking was all about. Networking conversations revolved around protocols and configuration, and professionals had the job of learning all their nuances. The network today is more of a given, a foundation, and networking is judged by what it enables. It is about the quality of experience. And that will keep pressure on networking vendors to continue evolving regardless of what the future of work looks like.

COVID-19 constraints, factory shutdowns, and rapid economic contraction and expansion were key themes of last year’s results, but that is changing. Semiconductor shortages, increased lead times for data-center products, and the timing of workers returning to the office will shape the rebound in 2021.

While big changes are certainly afoot, the impact of the pandemic in 2020 did slow some implementations. For example, network spending by large enterprises paused in the first half of the year due to the uncertainty and the lack of business confidence created by the pandemic, which favored pursuit of an OpEx model (public cloud) versus CapEx model (on-prem).

As for small enterprises, private data centers have already been in a steady decline even prior to the pandemic. This is because it is significantly less expensive for enterprises of this size to lease capacity in the public cloud as opposed to building their own data centers. The pandemic has further accelerated this trend.

There appears to be no end to the number of ways virtualization can be applied to enterprise networking operations. While 2020 was a year of IT fear, confusion, and upheaval, 2021 brings the hope of a new era of IT progress, innovation, and efficiency, due in no small part to the growing adoption of virtualization technologies.

Ryan Perera
Vice President & Country Head

Ciena India

“In the past year, new digital consumption models and remote working have catalysed the move toward digital transformation. Specifically, these changes have accelerated enterprise migration to hybrid multi cloud computing environments. Connectivity to support these has never been more important. Improved coupling of connect, storage and compute is becoming essential to cater to the insatiable hunger for cloud computing.”


Copyright © 2022 Communications Today
