The network is becoming more critical than ever as connectivity demands skyrocket with mobile workforces, the rise of the Internet of Things (IoT), and the proliferation of cloud applications. Companies are ramping up their networking infrastructure, focusing on adding bandwidth, investigating ways to modernize their networks with software, and expanding their wireless networking capabilities.
As we move into 2018, experts across the networking industry, covering enterprise switching, routing, virtualization, SD-WAN, mobility, and collaboration, have offered predictions on what to expect and what will be needed.
Some of them include:
IoT will move from evaluation to full-scale deployment. The growth of IoT data will put even greater demands on DNS infrastructure, and legacy DNS solutions based on BIND and its derivatives will be unable to keep up with the real-time requirements of IoT applications, which rely on high-velocity, real-time traffic management to enable edge computing strategies that slash latency. In this distributed architecture model, enterprises will move computing and data centers closer to the IoT devices at the edge, and will increasingly rely on DNS technologies with intelligent traffic management to direct workloads across such highly distributed edge architectures.
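As a toy illustration of the kind of intelligent traffic management described above, a DNS-style traffic manager might steer each request to the healthy edge site with the lowest measured latency. The site names and latency figures below are entirely hypothetical.

```python
# Hypothetical sketch: latency-aware selection of an edge site,
# the kind of decision an intelligent DNS traffic manager makes.

def pick_edge_site(sites):
    """Return the name of the healthy site with the lowest latency (ms)."""
    healthy = [s for s in sites if s["healthy"]]
    if not healthy:
        raise RuntimeError("no healthy edge sites available")
    return min(healthy, key=lambda s: s["latency_ms"])["name"]

# Example inventory of edge sites (illustrative values only).
edge_sites = [
    {"name": "edge-east", "latency_ms": 42, "healthy": True},
    {"name": "edge-west", "latency_ms": 17, "healthy": True},
    {"name": "edge-eu",   "latency_ms": 9,  "healthy": False},  # failed health check
]

print(pick_edge_site(edge_sites))  # -> edge-west
```

A real traffic manager would refresh the latency and health data continuously from probes rather than hold them in a static list.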
There will be an increase in network segmentation projects. Network segmentation splits networks into isolated subnetworks. The advantage of this approach is that it can increase network performance and overall network security. Using network segmentation, critical data and infrastructure can be isolated in one network segment, while employees are isolated in another. Employees and data can be micro-segmented into even smaller groups. This trend will continue to gain momentum in 2018 as part of a broader movement toward intent-based networking.
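The isolation logic behind segmentation can be sketched in a few lines: classify each host into a segment by address range, then allow only an explicit list of segment-to-segment flows. The segment names, CIDR ranges, and allow-list here are made up for illustration.

```python
import ipaddress

# Hypothetical segments: critical data/infrastructure isolated from employees.
SEGMENTS = {
    "critical":  ipaddress.ip_network("10.0.10.0/24"),
    "employees": ipaddress.ip_network("10.0.20.0/24"),
}

# Which segment-to-segment flows are permitted; everything else is denied.
ALLOWED_FLOWS = {("employees", "employees"), ("critical", "critical")}

def segment_of(host):
    """Return the name of the segment containing this host, or None."""
    addr = ipaddress.ip_address(host)
    for name, net in SEGMENTS.items():
        if addr in net:
            return name
    return None

def flow_allowed(src, dst):
    """Default-deny: a flow passes only if its segment pair is allow-listed."""
    return (segment_of(src), segment_of(dst)) in ALLOWED_FLOWS

print(flow_allowed("10.0.20.5", "10.0.20.9"))  # employee -> employee: True
print(flow_allowed("10.0.20.5", "10.0.10.3"))  # employee -> critical: False
```

Micro-segmentation follows the same pattern with smaller groups and a finer-grained allow-list; in production the policy would live in firewalls or an SDN controller rather than a Python dict.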
Networking and security personnel will work as a team. Today, network performance monitoring and security monitoring are separate, siloed jobs. But when these teams are siloed, an outage can end in finger-pointing while network problems with massive consequences go undetected or take longer to isolate. Companies will begin to acknowledge that networking and security personnel work better as a team.
Network security will be driven by machine learning and artificial intelligence. Today's network security systems are largely administered and maintained by humans, who take comfort in little more than an updated signature database, a well-configured firewall, and a patched OpenSSL. Machine learning and artificial intelligence at the security layer promise far more dependable sentinels.
Arthur Cole, an expert with more than 25 years' experience covering enterprise IT and telecommunications, provides valuable insights on how the industry will evolve and the challenges it will need to address over the next couple of years.
The networking bots are coming. Bots have already infiltrated social media, ecommerce, and a host of other digital functions, so there is no reason to expect they will steer clear of the enterprise network much longer. But as with most technologies, success is usually a matter of proper execution, which in turn requires a clear understanding of the goals and objectives to be met.
Enterprise infrastructure has reached a point where virtually every upgrade must be made as part of a holistic, strategic vision. The last thing any organization needs is an army of disjointed, uncoordinated bots running amok with the keys to network infrastructure.
Accelerating the Fight Against Network Latency
Network latency is a perennial challenge that, despite innovations in abstract networking and advanced fabric architectures, will likely remain at the top of the enterprise list of pet peeves for some time. At the moment, the focus of many applications and services is turning toward real-time performance. Given the state of predictive analytics, it probably will not be long before we start to see better-than-real-time functions as well.
But tackling network latency is not an easy task. Data virtualization, better network design, and hardware improvements could help reduce latency.
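Latency work is usually measured against percentile targets rather than averages, since a handful of slow requests can dominate user experience. A minimal sketch, with hypothetical round-trip samples:

```python
# Minimal sketch: summarizing latency samples into the percentiles
# (p50/p99) that latency-reduction efforts are typically measured against.

def percentile(samples, p):
    """Nearest-rank percentile of a list of latency samples."""
    ordered = sorted(samples)
    rank = max(1, round(p / 100 * len(ordered)))
    return ordered[rank - 1]

# Hypothetical round-trip times in milliseconds.
samples_ms = [12, 9, 11, 10, 95, 10, 11, 9, 10, 12]

print("p50:", percentile(samples_ms, 50))  # typical request -> 10
print("p99:", percentile(samples_ms, 99))  # tail outliers   -> 95
```

The gap between p50 and p99 here is exactly the kind of tail latency that design and hardware improvements aim to close.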
The speed at which data is generated, distributed, and consumed is growing at a record pace and shows no sign of slowing down any time soon. As data infrastructure becomes increasingly intelligent, we can expect machine-driven processes to wring tremendous productivity gains out of sub-millisecond improvements in latency.
This puts the entire data industry under the gun to drive all inefficiency out of network and data architectures as quickly as possible. The world has already come to expect data anywhere, anytime on any device, and the tolerance for even the slightest delay is getting lower every day. In this day and age, if users cannot get what they want when they want it from one provider, they can easily get it from someone else.
Building Uptime into the SD-WAN
The software-defined wide area network (SD-WAN) is intended to provide flexibility and dynamic provisioning, while lowering the cost of connecting the data center to the cloud and branch office infrastructure. But it also stands to improve another crucial piece of the emerging data connectivity formula: uptime.
SD-WAN can not only improve overall performance, but achieve virtually 100 percent uptime for increasingly complex network architectures. This can be accomplished through four key capabilities: application prioritization, broadband aggregation, dynamic bandwidth management, and network firewall virtualization. With these functions built into the SD-WAN operational stack as core elements, the enterprise gains greater network optimization and manageability, as well as improved security, better compliance, and faster implementation of new network architectures – all of which reduces downtime to near zero even as network scope and complexity increase.
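Two of the capabilities above, broadband aggregation and dynamic bandwidth management with application prioritization, can be sketched as pooling link capacity and splitting it by priority weight. Link speeds, application classes, and weights below are all invented for illustration.

```python
# Hedged sketch: aggregate several broadband links into one capacity pool,
# then allocate it across application classes by priority weight.

LINKS_MBPS = {"fiber": 500, "cable": 200, "lte": 50}

# Higher weight = higher priority (e.g. voice/video for UC traffic).
APP_WEIGHTS = {"voice": 5, "video": 3, "bulk": 1}

def allocate_bandwidth(links, weights):
    """Split the aggregated capacity across apps in proportion to weight."""
    total_mbps = sum(links.values())
    total_weight = sum(weights.values())
    return {app: total_mbps * w / total_weight for app, w in weights.items()}

alloc = allocate_bandwidth(LINKS_MBPS, APP_WEIGHTS)
for app, mbps in alloc.items():
    print(f"{app}: {mbps:.0f} Mbps")
```

A real SD-WAN controller recomputes this continuously as links degrade or fail, which is what lets it route around outages and approach the near-100-percent uptime described above.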
Leading SD-WAN platforms are starting to place uptime on an equal footing with flexibility and agility. Wide area connectivity is not limited to just text or graphical data but will also accommodate the plethora of voice and video services of unified communications (UC) architectures, which have stringent uptime requirements of their own.
Nobody is happy when the network goes down, but as the data environment starts to push past the enterprise and the cloud into the realm of the IoT, failure to maintain connectivity crosses the line from mere inconvenience to outright disruption of the core business model.
The ability to grow, expand, and dynamically adjust network architectures will prove critical going forward, but it will amount to nothing if users cannot count on you to provide reliable access to your data.
For Composable Infrastructure, You Will Need Fabric Networking
Enterprises are attracted to the idea of easy scalability, but do they have the right networking technology to support this new infrastructure?
The lure of composable infrastructure is that it allows the enterprise to scale hardware footprints up and out even as it supports the added flexibility of fully software-defined architectures. But all of this adding and removing of compute/storage modules can only happen if the system is anchored by a flexible network fabric.
This represents a vastly different operational paradigm from today’s fixed network topologies.
Fabric networking is nothing new, of course, but fabric networks have typically been limited to processor interconnects and, more recently, rack-level and storage area networking. Under the composable paradigm, however, the data center itself essentially becomes a large, distributed mainframe. The basic PCIe or Infiniband interconnect can now produce fabric-style connectivity between compute and storage modules, and those fabrics should be able to integrate pretty cleanly with the connectivity solutions between individual cores within those modules.
Fabric-based composable infrastructure is also expected to dovetail nicely with containerized workflows, giving enterprises an unprecedented level of flexibility in crafting next-generation data services.
It is, of course, possible to build composable infrastructure around standard, nonfabric topologies, but for all practical purposes an integrated, interconnected networking environment is the way to go. Part of the appeal of a composable solution is the ability to plug in new modules so as to add resources to available pools quickly and easily. And you cannot very well do that using traditional provisioning and fixed connectivity patterns.
Going forward, data traffic will require not only high speed but broad flexibility to meet the demands of an increasingly digital-facing economy. For the moment, the only way to get there is through fabric-style networking.
Intelligent Networks Need More Visibility
Automating network infrastructure requires a lot of trust: trust in the systems you have deployed, trust in the policies you have established, and trust in your ability to reassert manual control should things go wrong. But just as nations employ the "trust, but verify" mantra when dealing with critical issues, so too should the enterprise when it comes to critical applications and services.
Verifying network performance, however, has grown a lot more complicated in the past decade. The advent of software-defined architectures and scale-out cloud and IoT infrastructure, as well as the speed at which workflows and virtual network deployments take place these days, makes it all the more imperative that organizations adopt increasingly intelligent management stacks. So in a way, intelligence tends to feed off of itself. The smarter our systems and devices become, the smarter the network must be in order to maintain acceptable service levels.
This is causing some experts to look past the software-defined network (SDN) and even the emerging intent-based network (IBN) toward the data-driven network. Companies like Cisco, Arista, and Veriflow are already implementing remote data collectors in their networking solutions as a means to move beyond mere traffic monitoring to enable deep-dive analysis of multiple operating metrics.
But how well can technologies like machine learning actually deal with the challenges of modern network management? If the aim is to improve on detection-prevention-analysis-response (DPAR) models, the outlook is pretty good, but only if a few best practices are employed. For one thing, intelligent automation stacks require broad visibility across the entire IT spectrum – everything from bandwidth consumption and disk array performance to database actions and web server connectivity. Also, the human operators of these systems (yes, they will still be necessary) will need more training in the data sciences and the DevOps model of IT management.
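The "detection" step of a DPAR-style pipeline can be illustrated with something as simple as flagging metric samples that spike far above a trailing baseline. The readings and the 3x threshold below are made-up examples; real systems use far richer models over many metrics at once.

```python
# Illustrative sketch of the detection step in a DPAR-style pipeline:
# flag metric samples that deviate sharply from a trailing baseline.

from collections import deque

def detect_anomalies(samples, window=5, factor=3.0):
    """Return indices of samples more than `factor` x the trailing mean."""
    history = deque(maxlen=window)
    anomalies = []
    for i, value in enumerate(samples):
        if len(history) == window and value > factor * (sum(history) / window):
            anomalies.append(i)
        history.append(value)
    return anomalies

# Hypothetical bandwidth-consumption readings with one obvious spike.
readings = [10, 11, 9, 10, 10, 12, 11, 95, 10, 11]

print(detect_anomalies(readings))  # -> [7]
```

The point of the broad visibility argued for above is that a detector like this is only as good as the metrics feeding it; machine learning replaces the fixed threshold with models learned from the data itself.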
One of the best things about intelligent networking is that even though it represents a paradigm shift in IT management, it can be implemented on legacy infrastructure relatively easily. Once it has learned how to improve today’s environment, it has the capacity to change processes on a more fundamental level going forward, all while adapting to new topologies, new service requirements, and new business models.
By empowering these systems with the visibility tools to acquire the proper data to assess network operations, the biggest challenge will not be managing the network but figuring out how it can be leveraged to produce the biggest gain in data productivity.
Network Development in a Time of Digital Transformation
Enterprises embarking on the complex and error-prone process known as digital transformation are finding that networking is the most difficult piece of legacy infrastructure to transform. Not only was it the last hardware-centric component to go virtual, but it also requires a highly coordinated plan to upgrade key network components without affecting critical functions.
In the old days, data was mostly transactional and came from internal sources such as ERP and CRM solutions. In the new world, most data will be unstructured and will come from a variety of sources, including the cloud and the rising legions of customer devices. This means networks will have to become increasingly fast. They must dynamically support both broadband and narrowband connectivity and extend far beyond the data center or even the cloud provider, all while accommodating increasingly bursty workloads.
This is expected to create a multibillion dollar global network transformation market by the next decade. Research and Markets estimates that demand for digital-facing systems, solutions, and services will climb more than tenfold from today's USD 6 billion to nearly USD 67 billion by 2022, representing compound annual growth on the order of 62 percent. Much of this will be driven by increased deployment of IT-as-a-service offerings and a growing dependence on virtual infrastructure. It will also lead to greater collaboration between network vendors and providers in order to craft increasingly optimized solutions for key industry verticals.
Already, traditional networking firms are reworking their portfolios in order to accommodate digitally transformed enterprises. The key driver in digital transformation, of course, is the fear of being disrupted by a start-up with a smartphone app. At the moment, only one in five organizations has recast its networking strategy around digital transformation, even though firms that have taken this step are already seeing twice the rate of revenue growth. With hybrid, multicloud architectures quickly becoming the norm, firms that do not tailor their networks to the new reality have virtually no chance of succeeding in a digital economy. After all, disruption is only a problem for the disruptee, not the disruptor.
Managing the Gap Between Old Networking and New
Few enterprises can make the leap from traditional data architectures to nimble, virtualized operations all at once. And many organizations are still struggling to maintain effective performance on bare-metal hardware.
Networking is particularly troublesome in this regard because most equipment has a longer lifecycle than either servers or storage. Plus, the connected nature of infrastructure makes it difficult to simply swap out components when they have served their purpose.
Recent surveys suggest that it is networking more than anything else that is hampering the conversion to advanced architectures. One of the biggest hurdles in network conversion is the continued reliance on CapEx models for network upgrades rather than the more nimble OpEx approach of software-based service platforms. But just as SaaS has remade the productivity suite, so too can it bring the network more in line with today's flexible, mobile data environment. With networking as a service (NaaS), the enterprise is able to lower its upfront costs and more accurately tie the consumption of network resources to revenue-generating operations. At the same time, it provides ready access to the latest technological developments because NaaS providers are constantly trying to outdo one another in order to gain market share.
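The CapEx-versus-OpEx trade-off above comes down to simple arithmetic over a hardware lifecycle. Every figure in this toy comparison is invented purely for illustration:

```python
# Toy comparison of the CapEx vs. OpEx framing: upfront purchase plus
# support contract versus a NaaS subscription. All figures are invented.

capex_hardware = 120_000        # upfront switch/router purchase
capex_annual_support = 15_000   # yearly maintenance contract
naas_monthly_fee = 3_500        # NaaS subscription per month
years = 5

capex_total = capex_hardware + capex_annual_support * years
naas_total = naas_monthly_fee * 12 * years

print(f"CapEx over {years}y: ${capex_total:,}")  # $195,000
print(f"NaaS  over {years}y: ${naas_total:,}")   # $210,000
```

In this made-up case the subscription costs slightly more over five years, which is precisely why the argument for NaaS rests on the other factors named above: lower upfront cost, consumption that tracks revenue, and continuous access to newer technology.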
Implementation of advanced network architectures is certainly much easier in software than in hardware, but what is the best way to integrate a partially deployed SDN into a legacy network? The key challenge is designing a mixed control plane that functions smoothly with existing network management systems. This is a difficult undertaking because it must incorporate the vastly different ways that the two sides handle things like deployment, reconfiguration, and measurement/monitoring. Without further research in this area, however, enterprises can expect only marginal results from their SDN deployment until nearly the entire network ecosystem has been upgraded.
Going from hardware- to software-defined networking is kind of like making the jump to light-speed in sci-fi: it is best to do it quickly, but carefully. True, enterprises that are slow to implement agile networking run the risk of losing out in the service-driven economy to come, but those that jeopardize current workflows face an even greater risk in the economy we have right now.
Enterprises across all industry verticals are waking up to the opportunity and threat posed by digital disruption. The year 2018 is likely to see incumbent firms shore up their digital infrastructure, enabling them to become leaner, increasingly flexible, and better placed to adapt to heightened unpredictability. Speed of deployment of new technologies will take precedence over ROI. A shift in focus from technology architectures to service architectures is in the offing as businesses seek to standardize the way they use multiple different services. System integration skills are paramount for legacy companies hoping to compete strongly against new market entrants. Edge computing will leap to the forefront in 2018.
“It’s the beginning of a new era. There are 8.4 billion connected things on the Internet today, there are probably 100,000 in this room right here. 3.1 billion of those things are already used by enterprises to change their business model. Every organization on the planet will want to move with great speed to take advantage of what is possible. As more devices join the Internet, complexity is the enemy. Business users no longer want to struggle with understanding different protocols and minutiae, they just want the network and all of its connected device intelligence to work.”
“Digital transformation is gathering pace. The competitive pressures from early adopters are starting to force others to begin transformational efforts, no matter where they are based geographically. These transformational efforts will accelerate in all verticals worldwide over the next several years driven by the increasing need to face off the global competition.”
Senior Research Analyst, IDC Insights
The telecom industry has traversed a long path from analog to IP technology. With numerous enterprises realizing the importance of using the latest technology, there has been a drastic change in the way networks are deployed. Digital transformation is at the doorstep today. Artificial intelligence (AI), machine learning, IoT, and wearables are walking the same progressive path and are ready to disrupt present enterprise IT infrastructure for good. I am sure that businesses will find it tough to adopt such complex concepts, but since these technologies complement today's mobile ecosystem, the change is inevitable. The time has come when enterprises will be keen to invest in solutions that let employees handle their calls, email, social media, and calendar from the same screen. Security will become ubiquitous across every business process, sealing the points through which hackers could gain access to confidential data and misuse it.
Manager – Product Marketing,
– CT Bureau