

Maximizing the shift to public cloud

Cloud computing is going to grow because of the pandemic and lockdown. There's no question about that, but how much it will grow, and what it will grow into, are harder to assess. The issues that bear on those questions range from security/compliance to simple cost, and enterprise planners are grappling with how to come to terms with them, and with the future cloud that resolving those issues will create.

Ten years ago, I did some modeling on cost points and concluded that "moving to the cloud" would be economically feasible for only about 24% of applications. The problem is that hosting economies, meaning resource-pool efficiencies, don't keep improving as the pool grows. Instead, you reach a point where further server density doesn't improve your ability to support new applications or your operations efficiency. My calculations showed that larger enterprises would reach a high enough level of resource efficiency that a public cloud provider's cost, plus its profit margin, would exceed the enterprises' own resource costs.
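To make that break-even argument concrete, here is a minimal sketch in Python. All the numbers (the cost floor, pool sizes, and the 40% margin) are illustrative assumptions, not figures from the original modeling; the point is the shape of the curve, a per-unit cost that falls toward a floor as the pool grows.

```python
# Illustrative break-even sketch. All parameters are assumptions for
# illustration only; they model the claim that hosting economies flatten,
# so a large enterprise's own unit cost can undercut cloud price plus margin.

def unit_cost(pool_size, base=100.0, floor=20.0, scale=500.0):
    """Per-application hosting cost that declines toward a floor as the pool grows."""
    return floor + (base - floor) * scale / (scale + pool_size)

def cloud_price(provider_pool, margin=0.40):
    """Cloud provider's price: its own (lower) unit cost plus a profit margin."""
    return unit_cost(provider_pool) * (1 + margin)

# A large enterprise with 5,000 apps vs. a provider pooling 50,000:
enterprise = unit_cost(5_000)
provider = cloud_price(50_000)
print(f"enterprise: {enterprise:.2f}, cloud: {provider:.2f}")
# When enterprise < provider, migration raises costs for that enterprise,
# even though the provider's raw unit cost is lower.
```

Under these assumed numbers the enterprise lands around 27 per unit while the provider's marked-up price lands around 29: the margin eats the provider's efficiency edge, which is the mechanism behind the 24% figure.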

The other complication is security and governance. Companies are very reluctant to expose their critical information to a public cloud provider. I won an online debate on this topic a decade ago, and little has changed since. It's possible to reduce the security and compliance risks through measures like on-the-fly disk encryption, but planners still consider the cost and performance impact too high.

You might wonder how the cloud is growing at all, given this, and how it could be considered an effective response to the pandemic. The answer is that for about a year, enterprises have recognized that using public cloud resources for the front-end GUI or "presentation" interface is smart. By ceding as much of the user experience as possible to the public cloud, quality of experience (QoE) improves and the application's overall performance can be raised without adding data center resources. This is because the user interface has a lot of think-time associated with it, and some non-critical editing and even database work can be offloaded to further improve front-end response.

Planners have been considering how to do more. About 22% of enterprises (multi-site businesses with at least 20 locations) believe they could cede more applications to the cloud if they could overcome executive objections on security/compliance grounds. Since most of these applications involve database access during transaction processing, planners' tentative conclusion is that if cloud-hosted applications could dip into data center storage for access and updates, more applications could be made cloud-resident. Executives, it turns out, have issues with this approach too.

One of the issues is spurious, IMHO. They’re concerned that because the database access has to be exercised across a network boundary with the cloud provider, they’re losing performance relative to doing it locally. The reason I think this is spurious is that transactions already have to cross that same boundary to reach the core back-end portion of the applications. There might be a slightly larger data payload moving across if you pushed the main logic of an application into the cloud and then hit the database across the cloud/data-center boundary, but not necessarily a huge increase.

The second issue is that cloud providers usually charge for transiting the cloud-network boundary, meaning ingress and egress traffic is chargeable. This issue is also spurious, though less so than the performance issue just noted. The larger payloads might not make a huge difference in QoE, but they could easily run up costs. If cloud providers want to maximize the number of applications or application components transferred to the cloud, they'll have to revisit their traffic-charging policies.
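A back-of-the-envelope sketch of why egress charges bite harder than latency. The per-GB rate and payload sizes below are assumptions for illustration; real cloud pricing is tiered and varies by provider and region.

```python
# Rough monthly egress-cost sketch. The flat $0.09/GB rate and the
# transaction volumes/payload sizes are illustrative assumptions only.

EGRESS_PER_GB = 0.09  # assumed flat rate in dollars per GB

def monthly_egress_cost(tx_per_day, kb_per_tx, days=30):
    """Dollars per month for traffic crossing the cloud boundary."""
    gb = tx_per_day * kb_per_tx * days / 1_000_000  # KB -> GB (decimal)
    return gb * EGRESS_PER_GB

# Front end only in the cloud: small rendered responses cross the boundary.
front_end = monthly_egress_cost(tx_per_day=1_000_000, kb_per_tx=5)
# Main logic in the cloud reaching back to the data-center database:
# the same transactions now drag larger payloads across the boundary.
full_offload = monthly_egress_cost(tx_per_day=1_000_000, kb_per_tx=50)
print(front_end, full_offload)
# The tenfold payload growth multiplies the egress bill tenfold, even
# though the user-visible QoE difference may be small.
```

The latency of one extra boundary crossing is a fixed, small penalty per transaction; the traffic charge scales linearly with payload, which is why the cost concern is less spurious than the performance one.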

The third issue, also spurious, is that management has been spooked by reports of public cloud outages. They realize it's possible in theory to back up one cloud provider with another, but that raises costs and discourages further cloud migration. What management forgets is that they're already depending on public cloud availability with any cloud front-end. There's no further risk, or at least no significant risk, in depending more on the public cloud. This problem, then, is largely one of public relations.

The final issue is the only one that’s not a red herring. Core applications simply aren’t designed to be distributed in a public cloud that way. Rewriting them to make them cloud-distributable with databases at home in the data center is considered a significant burden, and there are multiple reasons for that, too.

The first reason is that third-party software can’t be rewritten by enterprises, and most software companies are telling enterprises that there are no plans to make that sort of change in the near term. This impacts just short of a fifth of the enterprise applications, according to the planners.

The second reason, which some might say is related, is that software licenses can either hinder optimizing scaling and redeployment, or downright prevent it. Software is often licensed based on the number of instances being run, and so scaling components could introduce additional charges—if the software even permits scaling. This impacts about ten percent of applications.

The third reason relates to recent stories on state unemployment systems. Many enterprises are running applications written in obsolete programming languages. In some cases, the languages themselves may introduce barriers to modern software design, and in other cases there’s simply insufficient development resources familiar with the languages. This impacts twelve percent of applications.

Which leaves the biggest reason, the issue that impacts over half of applications. The time and cost required to make the changes, and the need to freeze changes while the applications are being redone, is prohibitive. This is the main reason why we aren't going to see the predictions of some pundits on the universal rush to the cloud come true.

Is there no solution? There are two, in fact, but one won't be palatable for many. In time, applications will be redone and redesigned to meet cloud-ready qualifications. Yes, it will take years to happen, and some applications might require a decade or two, but inevitably the new hybrid-cloud model will succeed. Many of those who promote the cloud may have retired by that time, but hey, nature is a force.

The more palatable option is middleware. Applications are written to access resources, whether hardware, database, or platform, through middleware APIs. If the middleware is changed to a form that’s more cloud-friendly, that transformation could reduce or eliminate many of the issues associated with a broader public cloud mission. But it has to be done right.

Database is a good example. You can view a database access as "logical" or "physical". A logical access means something like an RDBMS query. Whether that query goes to a local database or emerges from the cloud aimed at an on-premises database, it's the same logical operation. If it's possible to intercept logical DBMS access, it would be possible to move a relational database (or any structured database with high-level access semantics) away from the application, so the database could reside in the data center while the accessing application components move to the cloud.
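A minimal sketch of what such a middleware seam could look like, assuming a hypothetical QueryBackend abstraction (the names and the transport are illustrative, not any vendor's API). Application code issues logical queries against the middleware; swapping the backend relocates the physical database without touching that code.

```python
# Sketch of intercepting "logical" DBMS access at a middleware seam.
# QueryBackend, LocalBackend, and RemoteBackend are hypothetical names
# invented for illustration; sqlite3 stands in for any local RDBMS.

from abc import ABC, abstractmethod
import sqlite3

class QueryBackend(ABC):
    """Middleware API: applications see only logical queries."""
    @abstractmethod
    def query(self, sql: str, params=()):
        ...

class LocalBackend(QueryBackend):
    """Runs queries against a database co-resident with the application."""
    def __init__(self, path=":memory:"):
        self.conn = sqlite3.connect(path)
    def query(self, sql, params=()):
        return self.conn.execute(sql, params).fetchall()

class RemoteBackend(QueryBackend):
    """Would forward the same logical query across the cloud boundary to an
    on-premises database (stubbed; the transport is an assumption)."""
    def query(self, sql, params=()):
        raise NotImplementedError("e.g. an HTTPS call to a data-center gateway")

# Application code is written against the middleware API only:
db: QueryBackend = LocalBackend()
db.query("CREATE TABLE orders (id INTEGER, total REAL)")
db.query("INSERT INTO orders VALUES (?, ?)", (1, 9.99))
print(db.query("SELECT total FROM orders WHERE id = ?", (1,)))
# Swapping in RemoteBackend would keep the data home in the data center
# while this application code moves, unchanged, to the cloud.
```

The seam works precisely because the access is logical: a SQL string and parameters serialize cleanly across a network boundary, which is what the physical-access case in the next paragraph lacks.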

A physical access, in contrast, means device-level I/O is what’s being done via the application’s API to the data world. If that’s the case, there’s nothing that can make it efficient or cost-effective if the database remains in the data center while some or all of the application moves to the cloud. There’s no easy way to get a handle on how many applications could hit this wall, but planners estimate the number to be between 15 and 20 percent.

We still need to look at the way the cloud prices data in- and out-flows. We still need to look at how we could optimize cloud benefits for applications that aren't easily rewritten. The rewards could be great, though. The current trend toward enhanced cloud front-ends would roughly double the cloud's potential revenue, and we could triple it if we could offload even half the potential components of mission-critical applications that enterprises would be willing to cede to the cloud in a post-pandemic world. For cloud providers, this is the real light at the end of the tunnel.

―Blog authored by Tom Nolle, President of CIMI Corp.


Copyright © 2024 Communications Today
