Google Cloud leads in AI innovation, AWS prioritizes cost-efficiency

New Omdia research reveals that major cloud providers are engaged in a close competitive race to deliver AI at scale, with Google Cloud Platform (GCP) leading in cutting-edge technology, while Amazon Web Services (AWS) focuses on offering cost-efficient solutions.

GCP benefits from Google’s status as a powerhouse of fundamental AI research, while AWS benefits both from the enormous scale of its existing business and its excellence in day-to-day operations. Customers looking to adopt the latest technology will be best served by GCP, while those focused on price will be best served by AWS. Microsoft Azure, meanwhile, appears to be concentrating on satisfying OpenAI’s appetite for capacity.

The research recently published in Omdia’s AI Inference Products & Strategies of the Hyperscale Cloud Providers report examines how the major vendors of cloud infrastructure serve inference – the process of generating content or answers from an AI model once its training is complete. By definition, inference is required when an AI application goes into production, with demand driven by end user needs. As a result, it represents the intersection of AI projects and practical reality. As more and more AI applications go into production, Omdia anticipates inference will account for a growing share of overall AI computing demand.

“The competition in this sector is intense. Google has an edge related to its strength in fundamental AI research, while AWS excels in operational efficiency, but both players have impressive custom silicon,” said Alexander Harrowell, Omdia’s Principal Analyst for Advanced Computing. “Microsoft took a very different path by concentrating on FPGAs initially but is now pivoting urgently to custom chips. However, both Google and Microsoft are considerably behind AWS in CPU inference. AWS’ Graviton 3 and 4 chips were clearly designed to offer a strong AI inference option, and a CPU-focused approach has the advantage of simplifying projects.”

Hyperscalers are crucial providers of computing services to the majority of the AI industry and are likely to be the first point of contact for those establishing an AI model inference infrastructure to serve users. Omdia’s report, available through Omdia’s Advanced Computing Intelligence Service, is designed to give enterprises key recommendations for selecting an appropriate provider and weighing the options available. The study analyzes the pricing and availability of custom AI silicon, such as Google TPUs; of flagship, mid-range, and entry-level GPUs; and of the CPU options that hyperscalers recommend for AI inference.


Copyright © 2024 Communications Today
