
Sales Of Semiconductors Performing AI Functions Set To Triple By 2025

Posted by IHS Markit

The global market for memory and processing semiconductors used in artificial intelligence (AI) applications will soar to $128.9 billion in 2025, three times the $42.8 billion total in 2019, according to IHS Markit | Technology, now a part of Informa Tech.

Within the AI segment, worldwide revenue from memory devices in AI applications will increase to $60.4 billion in 2025, up from $20.6 billion in 2019. The processor segment will expand slightly faster, growing to $68.5 billion in 2025, up from $22.2 billion in 2019.

These totals track sales of memory and processing semiconductors within systems capable of running AI applications.

AI chips are used widely across markets, including automotive, communication, computer, consumer electronics, industrial and healthcare. The largest single market for memory devices in AI applications is the computer segment, with sales rising to $65.9 billion in 2025, up from $27.5 billion in 2019, a 15.7 percent compound annual growth rate (CAGR). However, other segments will grow faster, including the communication, consumer electronics, industrial and healthcare sectors.
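The reported figures are internally consistent. A quick sketch (dollar values in billions, taken from the article) checks the computer-segment CAGR and confirms that the memory and processor segments sum to the headline totals:

```python
def cagr(start, end, years):
    """Compound annual growth rate between two values."""
    return (end / start) ** (1 / years) - 1

# Computer-segment memory sales: $27.5B (2019) -> $65.9B (2025)
rate = cagr(27.5, 65.9, 2025 - 2019)
print(f"{rate:.1%}")  # -> 15.7%

# Memory + processor segments should match the headline market totals
print(f"{20.6 + 22.2:.1f}")  # 2019 total: 42.8
print(f"{60.4 + 68.5:.1f}")  # 2025 total: 128.9
```

The 2025 total of $128.9 billion is almost exactly three times the 2019 total of $42.8 billion, matching the headline's "triple" claim.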

“Semiconductors represent the foundation of the AI supply chain, providing the essential processing and memory capabilities required for every artificial intelligence application on earth,” said Luca De Ambroggi, senior research director for AI at IHS Markit | Technology. “AI is already propelling massive demand growth for microchips. However, the technology also is changing the shape of the chip market, redefining traditional processor architectures and memory interfaces to suit new performance demands.”

AI-driven processor architectures emerge

Several startups are now aiming to offer completely new architectures that will challenge the market supremacy of traditional devices used for AI processing, such as graphics processing units (GPUs), field programmable gate arrays (FPGAs), microprocessors (MPUs), microcontrollers (MCUs) and digital signal processors (DSPs). These new architectures include capabilities such as integrated vector processing, which can accelerate deep-learning tasks.

Moreover, the introduction of AI-related capabilities into various devices means that these traditional classes of processors are evolving to the point where they are no longer recognizable as distinct categories.

“The old definitions of what makes an MPU, DSP or MCU are beginning to blur in the AI era, as each type of device adds cores with different core functions,” De Ambroggi said. “Increasingly, designers of AI-enabled systems are using highly integrated heterogenous processing solutions, such as application-specific integrated circuits (ASICs) and system-on-chip (SoC) solutions. With processor makers offering turnkey, heterogenous processing solutions using these ASICs and SoCs, it makes less difference to system designers whether their AI algorithm is executed on a GPU, CPU or DSP.”

―CT Bureau


Copyright © 2024 Communications Today
