Predictions 2018

IT pros will have their hands full with technologies that have been hyped and are now ripe for adoption. Here are some of our selected enterprise picks.

SD-WAN in a Takeoff Stage

Five years ago, the term SD-WAN barely existed. Five years from now, SD-WAN will be a USD 8 billion-a-year business, according to IDC.

Software-Defined WAN (SD-WAN) uses public Internet links to replace costlier private MPLS links for an organization's WAN connectivity. IDC now forecasts that SD-WAN will bring in USD 8.05 billion in revenue in 2021, giving the nascent networking segment a compound annual growth rate (CAGR) of 69.6 percent.

A year ago, IDC had predicted that SD-WAN revenues would grow to USD 6 billion by 2020 and that 70 percent of enterprises expected to use SD-WAN by the middle of 2019.
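For readers who want to check figures like these, the compound-growth arithmetic is a one-liner. The base-year revenue and the four-year horizon below are illustrative assumptions, not IDC data:

```python
def cagr(start_value: float, end_value: float, years: int) -> float:
    """Compound annual growth rate: the constant yearly growth that
    takes start_value to end_value over the given number of years."""
    return (end_value / start_value) ** (1 / years) - 1

# Hypothetical base-year figure, used only to show the calculation.
base_revenue = 0.83e9      # assumed starting SD-WAN revenue, USD
forecast_2021 = 8.05e9     # IDC's 2021 forecast from the article
print(f"Implied CAGR: {cagr(base_revenue, forecast_2021, 4):.1%}")
```

Running the function against known endpoints is also a quick way to check whether a quoted growth rate and a revenue forecast are mutually consistent.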

Networking giant Cisco is also seeing the potential for SD-WAN. In May, Cisco announced the USD 610 million acquisition of privately-held SD-WAN vendor Viptela.

IoT Applications to Look Out for in 2018

What IoT holds for this year is beyond anyone’s imagination. Shipments of smart home devices – safety and security systems such as alarms, cameras, and sensors, as well as smart appliances such as coffee makers, washing machines, dryers, and energy equipment – are expected to double by 2020. That number is likely to swell further as AI becomes the new UI and UX through hyper-personalization.

Many verticals still have business operations that involve manual observation of equipment status, inventory levels, and other key metrics. Where there is currently manual observation, there may be a great opportunity for a high-ROI project involving IoT. Some verticals that have a lot of manual observations are oil and gas, energy distribution, supply chain, and telecommunications.

Part of what is going to shape growth, according to a set of predictions from Forrester, is the change that is coming to IoT services in 2018. There is a wide range of offerings out there, providing different capabilities – to design, integrate, and operate IoT systems – to different users. Those users all have different needs, and those needs will drive realignment among the providers of those platforms, which include AWS IoT, Azure IoT, and GE Predix. However, security remains an issue.

IoT creates opportunities for micro data centers. IoT endpoints are expected to grow at a 33 percent compound annual growth rate from 2015 through 2020, reaching an installed base of 20.4 billion units, according to a Gartner study – growth that is making micro data centers increasingly relevant for many enterprises. Upcoming 5G implementations will further promote edge deployments of data center infrastructure, as they will enable deployments that were previously impractical due to bandwidth constraints. To support the distributed nature of IoT workloads, new types of data center infrastructure are emerging (such as edge computing), along with new data center architectures that handle more flexible and scalable use cases.

A micro data center (MDC) is modular or containerized and smaller than a computer room – typically one rack of equipment or less. All required IT functionality, such as uninterruptible power supply (UPS), servers, storage, networking, and cooling, is contained in the MDC, which is designed to handle specific needs (for example, accumulating sensor data or supporting a small remote office) at distributed locations and is typically managed remotely from a large data center.

The figure illustrates how edge data centers reside between IoT endpoints and regular data centers. The need to analyze large volumes of data close to the source to achieve low latency is driving demand for edge and MDC deployments. This deployment model works particularly well in remote locations with limited bandwidth and/or space (such as warehouses, retail locations, or cargo ships).

Increased IoT workloads are boosting demand for micro data centers. MDCs, usually housing no more than a rack or two of equipment, are increasingly used for IoT workloads and digital business transformation. Although these workloads are growing rapidly, existing data center facilities are hard to change in a short time. The rack-level, containerized solution an MDC provides lets organizations implement IoT applications quickly, save floor space, and reduce power consumption. This is an interesting counter-trend: while data centers have been consolidating into hyperscale facilities, the number of smaller data centers will go up due to the rise of IoT workloads. These smaller data centers are expected to augment traditional data centers.

IoT workloads are different from traditional data center workloads because they can involve massive datasets (such as data from sensors) that often need to be processed locally because they are latency-sensitive.
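As a rough sketch of what "processing locally" can mean in practice, an edge node might reduce raw sensor readings to compact summaries and forward only those upstream. The names, threshold, and data below are invented for illustration, not drawn from any specific IoT platform:

```python
from statistics import mean

# Hypothetical alert threshold (e.g., degrees Celsius).
ALERT_THRESHOLD = 90.0

def summarize_window(samples: list[float]) -> dict:
    """Reduce a window of raw sensor readings to a compact summary
    plus an alert flag, so only a small payload leaves the edge."""
    return {
        "count": len(samples),
        "mean": mean(samples),
        "max": max(samples),
        "alert": max(samples) > ALERT_THRESHOLD,
    }

window = [71.2, 70.8, 93.5, 72.0]   # raw readings stay on the edge node
payload = summarize_window(window)  # only this summary goes upstream
print(payload)
```

The raw samples never cross the WAN; the central data center sees only aggregates and alerts, which is what makes constrained links workable.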

Many enterprises increasingly need to deal with IoT workloads, although this depends largely on the nature of their business. To handle these emerging tasks, they are asking for preintegrated modular data centers that let them deploy and deliver data center environments without the need for a dedicated building or server room. These data centers must be easy to move to, and operate at, any location.

Micro data centers have evolved from modular data center solutions: standard, repeatable designs that deliver compute, storage, and networking capabilities to remote sites, extended to larger form factors including data center containers and larger modules. They provide simplified management and high levels of security and reliability through standardization and factory testing before delivery. Whether standardized or completely engineered to order, most MDC infrastructure solutions include the physical enclosure, UPS, power distribution unit (PDU), cooling, software, environmental monitoring, and security.

MDCs are designed to house data center infrastructure in remote locations and to facilitate its management, control, and efficiency. This location flexibility allows MDCs to be placed where they reduce latency and to provide the small infrastructure footprint required in situations such as remote and branch offices (ROBOs).

As carriers move to 5G implementations and growing trends such as IoT push for lower latencies, edge deployments of data center infrastructure will become more prominent. These edge deployments will require efficient ways to securely house and manage the data center infrastructure used in these situations. MDCs offer an effective solution to meet those edge infrastructure needs.

As integrated systems and hyperconverged integrated systems (HCISs) continue to grow in remote and edge implementations, those systems will find their way into more MDCs. MDCs are likely to evolve to provide increasing efficiency in space, power, cooling, and management for specific integrated systems and HCIS offerings to optimize their cost-efficiencies and ROI for end users. Both physical packaging and central management capability are critical for MDCs in IoT use cases.

Moving Toward Hyperconverged Hardware

If current trends hold, it will not be long before data center infrastructure consists largely of hyperconverged hardware. And while there will undoubtedly be many ways to configure this technology, it will generally include ultra-dense compute/storage modules outfitted with solid-state memory and connected by advanced network fabrics.

Enterprises too are shifting storage investments from legacy architectures to software-defined systems in an effort to achieve greater agility, easier provisioning, and lower administrative costs. Hyperconverged systems – which combine storage, compute, and network functionality in a single virtualized solution – are on their radars.

The largest segment of software-defined storage is hyperconverged infrastructure (HCI), which boasts a five-year CAGR of 26.6 percent, and revenues that are forecast to hit USD 7.15 billion in 2021, according to research firm IDC.

“HCI is the fastest growing market of all the multi-billion-dollar storage segments,” says Eric Burgener, research director for storage at IDC.

Ease of expansion is a key driver of HCI adoption. “When your business grows and it’s time to expand, you just buy an x86 server with some additional storage in it, you connect it to the rest of the hyperconverged infrastructure, and the software handles all of the load balancing,” Burgener says. “It is very easy to do that, and it is a single purchase.”

HCI systems were initially targeted at virtual desktop infrastructure (VDI) and other general-purpose workloads with fairly predictable resource requirements. Over time they have grown from being specialty solutions for VDI into generally scalable platforms for databases, commercial applications, collaboration, file and print services, and more.

Small and midsize enterprises have driven most of the adoption of hyperconverged systems, but that may be changing as the technology matures. One development that’s getting the attention of large enterprises is the ability to independently scale the compute and storage capacity, Burgener says.

“One of the disadvantages of hyperconverged infrastructure, because you buy it all as a single node, is that you really cannot adjust the amount of performance you need versus the amount of capacity,” he says. In a smaller environment, such a mismatch often does not matter. But in a large environment, a company might wind up spending heavily on processing capacity it does not need, just to get the storage capacity it does.
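Burgener's point about coupled scaling can be illustrated with a toy purchasing model. The node sizes and workload figures below are invented, not drawn from any vendor:

```python
import math

# Hypothetical per-node resources in a coupled HCI model.
NODE_TB = 20      # usable storage per node, in TB
NODE_VCPUS = 64   # compute per node, in vCPUs

def nodes_needed(capacity_tb: float, vcpus: float) -> int:
    """In a coupled model you must buy enough whole nodes to satisfy
    whichever dimension - storage or compute - demands more of them."""
    return max(math.ceil(capacity_tb / NODE_TB),
               math.ceil(vcpus / NODE_VCPUS))

# A storage-heavy workload: 400 TB of capacity but only 128 vCPUs of compute.
n = nodes_needed(400, 128)
print(n, "nodes ->", n * NODE_VCPUS, "vCPUs purchased to meet a 128-vCPU need")
```

With these assumed figures, storage dictates 20 nodes, so the buyer ends up with 1,280 vCPUs for a 128-vCPU workload; a disaggregated model avoids exactly this kind of over-purchase.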

The solution is to allow companies to shift their HCI deployment to a disaggregated model, without having to do a data migration, as workloads require it.

“In larger environments, it is very attractive to be able to independently scale the compute and storage capacity,” Burgener says. With a disaggregated model, “if you have a workload that needs a lot more storage but doesn’t need a lot more performance, then you don’t end up paying for CPUs to get the storage capacity that you need.”

“One of things you’re going to see from vendors in 2018 is that they will allow customers to configure their hyperconverged plays either as a true hyperconverged model or as a disaggregated storage model,” he says. “As customers grow larger, they don’t want to lose those guys.”

NVMe over fabrics. A second big development in the HCI world is the ability to create a hyperconverged solution using NVMe over fabrics. Most HCI systems today connect the cluster nodes over Ethernet, which creates a data locality issue as enterprises try to grow their HCI environments. “This is one reason why people don’t buy hyperconverged: When the data set is too big to fit in a single node, and you have to go out to another node to access data, that introduces pretty significant latency,” Burgener says.

Looking ahead, the low latency and high throughput of NVMe over fabrics could vastly improve that issue.

Two reasons why large enterprises shied away from HCI in the past are now being addressed by the disaggregated option and NVMe over fabrics, which means larger data set environments could actually run on this architecture more effectively.

Hybrid Cloud Services Set to Surge in 2018

Cloud vendors’ services partners are likely to play an increasingly critical role in the successful implementation of hybrid solutions and, more broadly, hybrid cloud environments for end customers in the coming year, according to analyst firm, Technology Business Research (TBR).

According to Allan Krans, practice manager of TBR's cloud and software practices, this expected increase in reliance on partners is largely due to the anticipation that hybrid cloud and hybrid IT management tools will take on greater importance in 2018.

“Although cloud has simplified much of the technical complexity of traditional IT for customers, hybrid implementations return it,” Krans said in a paper prepared for TBR’s 2018 Predictions series.

“Management headaches related to cloud implementations have been growing as the scale and scope of solutions expands, and integration across clouds and on-premises environments undoubtedly magnifies these challenges. So, while the desire for solutions that can be portable, fully integrated and flexibly delivered has never been higher, the management of workloads being implemented into sprawling hybrid environments will remain the bottleneck for how much additional hybrid adoption will occur in 2018,” he said.

According to Krans, in addition to driving innovation at the tools and platforms level, hybrid infrastructure will also increase demand for services engagements.

This is where opportunities for partners arise, Krans suggests.

“The skills needed to deploy and manage hybrid solutions, from technology and complexity perspectives, are distinct issues that customers need to address as part of their hybrid implementations,” he said.

“Customers not only have to grapple with how to manage and control cloud solutions within their organizations, but they also have to manage cloud solutions at scale and during integration with other IT assets. For these reasons, cloud vendors, and more importantly, their services partners, will play a critical role in the successful implementations of hybrid solutions and broader hybrid environments for their joint end customers,” he said.

In addition to a surging demand for hybrid cloud services from partners next year, Krans flagged a number of other trends that are likely to hit the IT landscape in 2018.

Among his predictions is that the changing profile of the end cloud buyer will impact the types of services delivery methods and vendors selected for cloud engagements.

Specifically, much larger and more diverse sets of decision makers will be required to realize cloud business value, rather than just technical or financial benefit, Krans suggested. This, in turn, is likely to see integration across functions, delivery methods, and stakeholders determine cloud success in 2018.

“Like all market trend changes, the one that will play out is grounded and driven by the customers who commit real dollars into products and services being offered in the market,” Krans said.

“The shift toward a business focus for cloud investments will be illustrated by changes in who is involved in decisions, the process for making decisions, and the objectives for the ultimate solutions. Essentially, expect changes in nearly every dynamic within cloud buying in 2018,” he said.

Meanwhile, Krans also forecasts further consolidation in the cloud vendor landscape next year, albeit as the cloud market itself sees more fragmented areas.

According to the analyst, this dual dynamic will be driven by a shift from cloud delivery itself to management and applications that add value.

It is likely to result in fewer vendors delivering core cloud services, while more will add vertical functional or management skills on top of solutions offered by leading cloud providers.

“There will be interesting developments in areas of cloud that have more room for differentiation, cloud applications, and professional services for cloud and hybrid environments,” Krans said.

“As platform vendors such as Google look to make cloud services available in environments outside traditionally hosted offerings and other platform leaders win market share on stark functionality differences, we expect to see applications vendors embrace a multi-platform partner approach, like Apttus has done with Salesforce and Microsoft,” he said.

Trends in Data Storage 2018

Hot data storage technology trends for 2018 include predictive storage analytics, ransomware protection, converged secondary storage, multi-cloud, and NVMe over Fabrics.

Predictive storage analytics has morphed from being a specialized feature to a red-hot storage technology. Fueling its rise is the prominence of all-flash arrays and growing demand for real-time intelligence about storage capacity and performance.

It transcends traditional hierarchical storage management and resource monitoring. The goal is to harness vast amounts of data and parlay them into operational analytics that guide strategic decision-making.

Predictive analytics lets storage and network-monitoring vendors continuously capture millions of data points in the cloud from customer arrays deployed in the field. They correlate the storage metrics to monitor the behavior of virtual storage running on physical targets.

Typically, predictive analytics can pinpoint potential problems, such as defective cables, drives, and network cards. If hardware issues are detected, the software sends alerts and recommends troubleshooting. An at-a-glance console provides an integrated view across the infrastructure stack, letting customers apply recommendations with a single click.
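A minimal sketch of the underlying idea, using a simple z-score test in place of whatever proprietary models vendors actually run; the data and threshold here are invented:

```python
from statistics import mean, stdev

def is_anomalous(history: list[float], sample: float,
                 z_limit: float = 3.0) -> bool:
    """Flag a metric sample that deviates sharply from its recent
    history: more than z_limit standard deviations from the mean."""
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return sample != mu  # flat history: any change is notable
    return abs(sample - mu) / sigma > z_limit

# Hypothetical per-drive latency samples, in milliseconds.
latencies_ms = [1.1, 1.0, 1.2, 0.9, 1.1, 1.0, 1.2, 1.1]
print(is_anomalous(latencies_ms, 9.5))   # a spike worth alerting on
print(is_anomalous(latencies_ms, 1.15))  # within normal variation
```

Real products correlate many such signals across a fleet of arrays, but the core pattern is the same: learn a baseline from telemetry, then alert on deviations before they become outages.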

Aside from monitoring hardware, array-based analytics tools have matured to provide cache, CPU, and storage-sizing recommendations based on preselected policies.

The importance of predictive storage analytics won’t wane anytime soon. Big data deployments are no longer a curiosity, having matured to the point that companies in almost every industry could deploy a DevOps model. The ability to rapidly compute massive data sets at the network edge is credited with helping organizations wring more business value from flash storage infrastructure.

Ransomware has made big news in the last couple of years. While global attacks – such as WannaCry in May and Petya and NotPetya in June – garner the most coverage, smaller-scale ransomware can be just as crippling for victims. Many organizations refuse to pay the ransom, but the resulting downtime can cost even more than a payment would. Fortunately, ransomware protection from backup and recovery vendors is now a hot technology and one of the data storage trends to watch in 2018.

Converged secondary storage. The expansion of hyper-convergence into secondary storage is a natural next step for the technology. Secondary storage’s rise started gradually, with a handful of vendors taking notice. Now we expect to see converged secondary storage taking up more space and getting more buzz in the coming year.

Many organizations are putting greater emphasis on secondary storage to reclaim much-needed primary storage capacity. Secondary storage frees up primary storage, while leaving the data more accessible than archive storage. It also lets organizations continue to gain value from older data or data that isn’t mission-critical.

With more vendors getting in on the action and more emphasis than ever on secondary storage, converged secondary storage should have a big year as one of the key trends in data storage for 2018.

Multi-cloud storage is one of the latest amorphous technology terms to capture the imaginations of industry experts. It is poised to become one of the hot technology trends in 2018 as more enterprises that have adopted the cloud – whether in a hybrid or pure public configuration – are demanding the cloud provide true IT services capabilities.

The benefits of multi-cloud storage are hard to ignore. There’s data portability among heterogeneous clouds, easier lifting and shifting of applications among multiple cloud environments, better data availability and disaster recovery, and the ability to bridge data services between private and public clouds. Also, you can set enterprise data services more consistently and colocate them with applications and compute resources.

However, multi-cloud storage still has its share of challenges. Moving data in and out of clouds is more complicated than moving it across on-premises systems, and managing data stored in different clouds requires a new approach.

Several vendors already offer a genuine multi-cloud primary storage concept based on software-defined storage (SDS). These include Hedvig, Qumulo, Scality, SoftNAS, and SwiftStack. Scality’s multi-cloud software is built on object storage with Amazon S3 compatibility, while also offering some file capabilities. SDS offerings from SoftNAS and Qumulo are focused on cloud file, while Hedvig provides block, file, and object storage. SwiftStack is object storage only.

NVMe over fabrics. Performance-boosting, latency-lowering nonvolatile memory express is already one of the hot technology trends in SSDs that use a host computer’s PCI Express bus. Moving into 2018, the revenue stream for NVMe over Fabrics (NVMe-oF) should start to grow, making it one of the significant trends in data storage. Significant deployments are expected to follow in 2019 and beyond, according to industry analysts.

G2M predicted the NVMe market will hit USD 60 billion by 2021, with revenue from SSDs; adapters; enterprise storage arrays and appliances; and enterprise servers, including some loaded with SDS designed for use with NVMe. G2M’s research projected most enterprise servers will be NVMe-enabled by 2019, and more than 70 percent of all-flash arrays will be NVMe-based by 2020. Shipments of NVMe-oF adapters will surpass 1.5 million units by 2021, with 10 percent of them “accelerated,” according to G2M.

The main use case for early NVMe-oF-based products has been real-time big data analytics applications. IDC predicted that 60 percent to 70 percent of Fortune 2000 organizations will have at least one real-time, big data analytics workload by 2020. Certain high-end databases requiring low latency could also generate interest in NVMe-oF, as could vendors pushing denser workload consolidation.

Data Center Cooling Market Set to Explode over the Next Couple of Years

Operators want greener, cheaper, more efficient systems, but they do not want to pay for them.

The data center cooling market is set to reach USD 20 billion by 2024, according to research by Global Market Insights.

Hot and cold aisle containment, blanking panels (placed in unused rack spaces to prevent recirculation of hot air), and close-coupled cooling (which provides precise, modular cooling at rack level) are recognized means of improving efficiency in the data center, where approximately 40 percent of energy consumption is attributed to cooling systems.
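As a quick illustration of why that 40 percent figure matters, it maps directly onto Power Usage Effectiveness (PUE), the industry's standard efficiency metric. The load split below is assumed purely for the arithmetic, not measured data:

```python
def pue(it_kw: float, cooling_kw: float, other_kw: float) -> float:
    """Power Usage Effectiveness = total facility power / IT power.
    A PUE of 1.0 would mean every watt goes to IT equipment."""
    return (it_kw + cooling_kw + other_kw) / it_kw

# Hypothetical 1,000 kW facility: if cooling takes ~40% of the total
# and IT gear draws 500 kW, the remaining 100 kW covers losses, lighting, etc.
print(round(pue(it_kw=500, cooling_kw=400, other_kw=100), 2))  # -> 2.0
```

Every kilowatt shaved off cooling lowers the numerator directly, which is why containment and close-coupled designs translate straight into operating-cost savings.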

As well as requiring more cost-effective systems, operators increasingly wish to take visible steps to reduce data centers’ environmental impact, something the study said will be a significant growth driver in the sector. On the other hand, the cost of manufacturing, installing, and maintaining equipment is expected to hinder sales of new systems.

The North America region led the way for investment in new cooling technologies in 2016, the study said, driven by hyperscalers such as Google and Facebook, as well as new ASHRAE temperature and humidity guidelines, which saw operators replacing legacy systems in favor of compliant ones. This trend will continue into the 2020s, it stated.

Schneider Electric, Black Box, Nortek Air Solutions, LLC, Airedale International Air Conditioning, Rittal, and AdaptivCOOL are expected to be at the forefront of the cooling market between now and 2024.


Copyright © 2024 Communications Today
