HBM IP Filing Trends: Need, Major Players, and Patent Filings
High Bandwidth Memory (HBM), a newer class of memory for CPUs and GPUs, delivers data-transfer rates that enable previously unattainable levels of compute performance. Because it serves a broad range of applications across many sectors and industries, the HBM market is growing quickly: it is projected to expand at a CAGR of 32.9%, from USD 1.8 billion in 2021 to USD 4.9 billion by 2031. Technology giants such as Samsung and Intel are investing heavily in the sector, sparking a wave of innovation.
This article covers at length the demand for this technology, market forecasts, the major HBM IP holders, and HBM's prospects for the future.
Introduction to 3D High Bandwidth Memory (HBM)
HBM is a 3D-stacked DRAM optimized for high-bandwidth operation that consumes less power than GDDR memory. To lower interconnect impedance, and hence overall power consumption, HBM vertically stacks DRAM dies on a high-speed logic die, connecting them with the vertical interconnect technology known as TSV (through-silicon via). An HBM stack typically comprises four, eight, or twelve DRAM dies joined by TSVs, and a silicon interposer is usually used to connect the memory stack to the compute die.
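The "wide bus at a modest clock" idea behind HBM can be made concrete with some back-of-the-envelope arithmetic. The sketch below (illustrative figures only, assuming a first-generation 1,024-bit HBM stack at 1 Gb/s per pin versus a single 32-bit GDDR5 chip at 7 Gb/s per pin) shows why one HBM stack matches several GDDR5 chips:

```python
# Peak bandwidth of a DRAM interface: bus width (bits) x per-pin rate (Gb/s) / 8.
# Figures are illustrative, not from any specific datasheet.

def peak_bandwidth_gbps(bus_width_bits: int, pin_rate_gbit_s: float) -> float:
    """Peak bandwidth in GB/s for a DRAM interface."""
    return bus_width_bits * pin_rate_gbit_s / 8

hbm_stack = peak_bandwidth_gbps(1024, 1.0)   # wide bus, modest clock
gddr5_chip = peak_bandwidth_gbps(32, 7.0)    # narrow bus, high clock

print(f"HBM1 stack: {hbm_stack:.0f} GB/s")   # 128 GB/s
print(f"GDDR5 chip: {gddr5_chip:.0f} GB/s")  # 28 GB/s
```

Despite running its pins seven times slower, the HBM stack delivers over four times the bandwidth of the GDDR5 chip, and the lower signalling rate is a large part of why it also burns less power per bit.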
What is the need for HBM?
Because general-purpose processors (CPUs) and graphics processors (GPUs) increasingly demand high operating frequencies, limited memory bandwidth caps a system's achievable performance. Earlier memory technologies such as Graphics Double Data Rate 5 (GDDR5) could not overcome rising power consumption, which restrains graphics-performance scaling, or the larger form factors needed to meet GPU/CPU bandwidth requirements (multiple chips, each with its own power circuitry, are needed to reach the required bandwidth). Nor do those technologies address the complexity of on-chip integration.
High-performance computing (HPC) systems, high-performance memory, large-scale processing in data centres, and AI applications all require high-bandwidth memory. Memory-constrained GPU and FPGA accelerators on the market today demonstrate the need for greater memory capacity at very high bandwidth. This is where High Bandwidth Memory comes in: by bringing the DRAM as close as feasible to the logic die, HBM reduces the overall footprint and enables the use of exceptionally wide data buses to achieve the necessary levels of performance.
Evolution of HBM
JEDEC adopted High Bandwidth Memory as an industry standard in October 2013. The first generation of HBM stacked four dies, each with two 128-bit channels, for a 1,024-bit interface per stack. Four stacks provide access to 4,096 bits of width, eight times as wide as a 512-bit GDDR5 memory interface, and 4 GB of total memory. Compared with 32-bit GDDR5 chips, a four-die HBM stack operating at 500 MHz (1 Gb/s per pin with double data rate) can produce more than 100 GB/sec of bandwidth per stack.
JEDEC approved the second-generation HBM2 standard in January 2016. Keeping the same 1,024-bit stack width, it raises the per-pin signalling rate to 2 Gb/sec, so a package can drive 256 GB/sec per stack; with a maximum capacity of 8 GB per stack, total capacity can reach 64 GB. In HBM2E, an improved version of HBM2, the signalling rate was increased to 2.4 Gb/sec per pin, for up to 307 GB/sec of bandwidth per stack.
JEDEC formally introduced the third-generation HBM3 standard on January 27, 2022. HBM3 can stack DRAM dies up to 16 high, and per-die capacity grows again to 4 GB, which equates to 64 GB per stack and, according to manufacturer SK Hynix, 256 GB of capacity with at least 2.66 TB/sec of aggregate bandwidth. HBM3 was anticipated in systems in 2022, signalling at a rate of over 5.2 Gb/sec per pin and enabling more than 665 GB/sec per stack.
The next section outlines our forecast for the HBM market in memory-stacking technology.
HBM Market Predictions
The market for high bandwidth memory is growing significantly. Its outlook is bright thanks to several factors, including the broad availability of cloud technologies, advances in cloud-based quantum computing, and the proliferation of GPUs and FPGAs. The HBM market is anticipated to grow dramatically in the coming years, reaching an estimated $4,088.5 million by 2026.
Major Players in the Global HBM Market
The following are some of the leading suppliers in the HBM market:
1. Samsung Electronics: The South Korean company created HBM-PIM, the industry's first High Bandwidth Memory (HBM) combined with AI processing capability. The new processing-in-memory (PIM) architecture places powerful AI compute inside high-performance memory to accelerate large-scale processing in data centres, high-performance computing (HPC) systems, and AI-enabled mobile applications. Samsung has also developed packaging technologies such as I-Cube and H-Cube specifically to connect HBM with processing chips.
2. SK Hynix: This South Korean manufacturer of dynamic RAM and flash memory chips has unveiled HBM3, a fourth-generation HBM product for data centres. With a maximum data-processing speed of 819 gigabytes per second, HBM3 can transfer the equivalent of 163 full-HD movies in a single second, a 78% improvement over HBM2E.
3. Intel Corporation: Intel unveiled the Sapphire Rapids Xeon SP CPUs with HBM for AI, data-intensive applications, and bandwidth-sensitive workloads. The processors are anticipated to adopt the AMX 64-bit programming architecture to accelerate tile operations.
4. Micron: According to the company, NVIDIA's GeForce RTX 3090 uses Micron's GDDR6X technology to deliver 1 TB/sec of memory bandwidth; Micron bills GDDR6X as the world's fastest graphics memory.
5. Advanced Micro Devices: AMD introduced the new AMD Radeon Pro 5000 series Graphics Processing Units (GPUs) for the iMac platform, fabricated on a 7nm process. These new GPUs carry 16 GB of fast GDDR6 memory, enough to handle a variety of graphically demanding tasks and applications.
6. Xilinx: Xilinx's Versal adaptive compute acceleration platform (ACAP) portfolio incorporates HBM to provide rapid compute acceleration for large, connected data sets with fewer, less expensive servers. The most recent Versal HBM series includes cutting-edge HBM2E DRAM.
To help you grasp the industry's expansion and the technology's range of applications, the following section covers HBM IP filing trends over the past five years.
HBM IP (Patent) Filing Trends by Assignees and Jurisdiction (last 5 years)
Since 2017, more than half of all HBM IP (patent) filings have come from the US, more than any other major filing jurisdiction in the sector. A quick analysis of the HBM IP filing trends of the top patent assignees over the past five years reveals Samsung's dominance over other market players; Intel and Micron have also been among the leading filers in the HBM market since 2017.
1. AMD and Xilinx are major HBM players with robust IP portfolios. Given the technology's role in high-performance computing applications, AMD's recent acquisition of Xilinx will only strengthen AMD's HBM IP portfolio.
2. Samsung intends to maintain a stronghold in this industry for a long time, as its current HBM IP portfolio and its integration of AI with HBM through HBM-PIM make clear. HBM-PIM has been tested in the Xilinx Virtex UltraScale+ (Alveo) AI accelerator, and Xilinx and Samsung have been collaborating on the Virtex UltraScale+ HBM family and Versal HBM series products to deliver high-performance solutions for data-centre, networking, and real-time signal-processing applications.
3. Nvidia picked Samsung's HBM2 Flarebolt for its Tesla P100 accelerators, aimed at data centres in need of performance boosts. AMD chose HBM2 Flarebolt for its high-end graphics cards and its Radeon Instinct accelerators for the data centre. Intel has adopted the technology as well, using HBM2 Flarebolt to release high-performance, low-power graphics solutions for portable PCs. Rambus and Northwest Logic have introduced HBM2-compatible memory-controller and physical-layer (PHY) technology for high-performance networking chips, and Mobiveil, eSilicon, Open-Silicon, and Wave Computing are among the other businesses creating products around HBM2 storage and networking features.
4. SK Hynix seeks to maintain its dominance in the HBM space with the first-mover advantage of selling HBM3 to major customers such as Nvidia, for integration with the NVIDIA H100 Tensor Core GPU.
The HBM market is growing quickly, so let's look at some recent market developments and forecasts for the future of this business.
Recent Developments and Future Prospects of HBM
Currently, high bandwidth memory solutions tailored for data-centre applications span machine learning (ML), data preprocessing and buffering, database acceleration, and compute acceleration. Major HBM IP owners and producers are improving by...
To get more information, read the entire article about HBM IP.