Today’s enterprise-level artificial intelligence (AI) models are growing in both size and sophistication at an unprecedented rate. Unfortunately, the traditional architectures found in data centers cannot satisfy the evolving demands of these AI systems. An emerging solution is to employ the composability capabilities provided by disaggregated system architectures. However, the full potential of disaggregated architectures can be achieved only by means of advanced in-package optical input/output (I/O) technology.
Disaggregation and composability
A modern data center can contain hundreds or thousands of cabinets or frames called racks. Each rack can contain anywhere from 10 to 40+ bays or shelves. In a traditional aggregated architecture, each shelf contains a server (in this context, “aggregated” is taken to mean “collected,” “gathered,” or “grouped”). In turn, each server features a motherboard containing one or more XPUs (CPUs, GPUs, FPGAs, ASICs), memory in the form of DDR DRAM modules, and storage in the form of solid-state drives (SSDs).
As computational workloads for tasks like AI and high-performance computing (HPC) have evolved to be larger, more complex, and more diverse than ever before, this traditional architecture has struggled to keep pace. For example, some AI tasks may be GPU-centric and require thousands of GPUs, leaving CPUs and memory underutilized. Conversely, other AI tasks may demand vast quantities of memory relative to the number of XPUs, leaving large numbers of CPUs and GPUs sitting idle.
Figure 1. Traditional shelf (upper left), disaggregated shelves (upper right and lower left), and disaggregated racks (lower right).
The alternative is to use a disaggregated architecture, in which each shelf in the rack specializes in only one type of component (CPUs, GPUs, memory, SSDs, and so on), an approach referred to as "resource pooling" (Figure 1). This form of disaggregation leads to the concept of composability, in which virtualized resources are automatically composed in near real time to meet the computational and memory requirements of each task on an application-specific basis. Once an application has completed its task, its resources are released back into their respective pools, at which point they can be provisioned to future applications in different ratios.
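The allocate-compose-release cycle described above can be sketched in a few lines of code. This is a minimal illustration of the concept only; the class names, resource types, and pool sizes are hypothetical and do not represent any real orchestration API.

```python
# Sketch of composability: resources are drawn from per-type pools,
# composed into an application-specific virtual node, then released.
# All names and capacities below are hypothetical illustrations.

class ResourcePool:
    def __init__(self, kind, capacity):
        self.kind = kind      # e.g. "gpu", "cpu", "memory_gb"
        self.free = capacity  # units currently available

    def allocate(self, count):
        if count > self.free:
            raise RuntimeError(f"pool '{self.kind}' exhausted")
        self.free -= count
        return count

    def release(self, count):
        self.free += count


class ComposedNode:
    """A virtual server assembled from disaggregated resource pools."""
    def __init__(self, pools, request):
        self.pools = pools
        self.held = {kind: pools[kind].allocate(n) for kind, n in request.items()}

    def release(self):
        # Return every held resource to its pool for re-composition.
        for kind, count in self.held.items():
            self.pools[kind].release(count)
        self.held = {}


pools = {"gpu": ResourcePool("gpu", 32),
         "memory_gb": ResourcePool("memory_gb", 4096)}

# A memory-heavy job composes few GPUs but a large slice of memory...
job = ComposedNode(pools, {"gpu": 2, "memory_gb": 2048})
print(pools["gpu"].free, pools["memory_gb"].free)  # 30 2048

# ...and on completion its resources flow back into the pools.
job.release()
print(pools["gpu"].free, pools["memory_gb"].free)  # 32 4096
```

The key property is that nothing is permanently bound to a motherboard: the same pools can be re-partitioned in different ratios for the next application.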
For disaggregation and composability to work, data must be able to pass from shelf to shelf and rack to rack at very high bandwidth and with extremely low latency. Unfortunately, these requirements far exceed the capabilities of traditional copper-based electrical interconnect technologies.
The solution is to use optical-based interconnect, but it is not sufficient to simply take existing devices (CPU, GPU, memory, etc.) and augment them with external optical interconnects. To achieve the necessary transmission speeds and bandwidths, the optical interconnect must be incorporated inside the device packages.
As just one example, a common use case is deep learning recommender models (DLRMs), which are typically run on GPUs and which have a large memory footprint coupled with relatively small compute requirements. Since their data is stored in high-bandwidth memory (HBM) on the GPUs, these models require many GPUs simply to hold the data, leaving the GPUs' compute resources underutilized. Disaggregation helps by balancing the memory and compute resources appropriately (Figure 2).
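A quick back-of-the-envelope calculation makes the imbalance concrete. The numbers below (model footprint, HBM capacity per GPU, compute actually needed) are hypothetical illustrations, not figures from the article:

```python
# Illustrative arithmetic: why a memory-bound model leaves GPU compute
# idle in an aggregated design. All numbers are assumed for illustration.

model_memory_gb = 2000   # assumed embedding-table footprint of the model
hbm_per_gpu_gb = 80      # assumed HBM capacity of one GPU
compute_needed_gpus = 4  # assumed GPUs sufficient for the math alone

# GPUs that must be provisioned just to hold the model in HBM
# (ceiling division without importing math.ceil).
gpus_for_memory = -(-model_memory_gb // hbm_per_gpu_gb)

# Fraction of the provisioned compute that is actually used.
utilization = compute_needed_gpus / gpus_for_memory

print(gpus_for_memory)       # 25 GPUs provisioned purely for capacity
print(f"{utilization:.0%}")  # 16% of their compute is needed
```

Under these assumed numbers, 25 GPUs are bought and powered for their memory while only 4 GPUs' worth of compute is used; pooling memory separately lets the remaining compute be composed into other jobs.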
TeraPHY™ and SuperNova™-based in-package optical I/O
Fortunately, in-package optical I/O is now possible due to two recent developments from Ayar Labs in the form of TeraPHY™ optical I/O chiplets and SuperNova™ advanced light sources (Figure 3).
Figure 3. TeraPHY optical I/O chiplets and SuperNova light sources unleash the potential of disaggregated architectures.
Chiplets are small integrated circuits, each containing a well-defined subset of functionality. Multiple chiplets can be combined with the main silicon die by means of a base layer called an interposer or an organic substrate, with everything being presented as a single package to the outside world. By employing silicon photonics built on standard CMOS manufacturing processes, TeraPHY optical I/O chiplets allow the core device to communicate with the outside world at the speed of light.
The remaining piece of the puzzle is provided by the SuperNova laser light source. Thanks to patented micro-ring photonic modulator technology, each SuperNova is physically small and frugal in its power consumption. The combination of TeraPHY chiplets and SuperNova laser light sources is set to disrupt the traditional performance, cost, and efficiency curves of the semiconductor and computing industries by delivering up to 1000x bandwidth density improvements at one tenth the power of traditional copper-based electrical interconnect technologies.
With optical I/O, it’s now possible for disaggregated racks of CPUs, GPUs, memory, and storage to be located tens, hundreds, or even thousands of meters from each other.
The future is closer than you think
Leading technology companies, including cloud providers and XPU manufacturers, are aligned regarding the need for next-generation disaggregated system architectures. Ayar Labs' in-package optical I/O solution is the breakthrough technology needed to allow innovative, next-generation cloud and data center disaggregated architectures to meet the exploding demands of AI and HPC workloads. Early adopters are planning to release products based on Ayar Labs' technology in 2023, with widespread deployment anticipated in 2024-2026.
- Ayar Labs’ Optical I/O Enables Disaggregated Architectures for Cloud, AI, and HPC / video
- Unlocking the True Potential of AI with In-Package Optical I/O / solution brief
- Disaggregating System Architectures for Future HPC and AI Workloads / solution brief
- Disaggregated System Architectures for Next-Generation HPC and AI Workloads / webinar