High Performance Computing
Breaking network bandwidth barriers: delivering unprecedented throughput and system scale-out with optical I/O
As high-performance computing (HPC) and artificial intelligence (AI) workloads escalate, improving performance, memory capacity, and latency in system architectures is essential to reduce power consumption and costs and expedite critical processing tasks. Optical I/O technology offers the best way forward for enhancing off-package interconnects, supporting system performance, and complementing advancements like chiplet technology.
Power
Optical I/O is 8x more power efficient
Power efficiency has a direct impact on heat and reliability. Current 112 Gbps long-reach electrical I/O, which carries electrical signals and pluggable optics across systems, racks, and data centers, consumes 6-10 pJ/b. TeraPHY™ optical I/O chiplets are far more efficient, consuming less than 5 pJ/b (roughly 10 watts at the chiplet's full per-direction bandwidth).
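The power figures above can be sanity-checked with simple arithmetic: link power is energy per bit times bit rate. The sketch below assumes the 2.048 Tbps per-direction bandwidth quoted later on this page; the pJ/b figures come from the text.

```python
def link_power_watts(energy_pj_per_bit: float, rate_gbps: float) -> float:
    """Power (W) = energy per bit (J/b) * bit rate (b/s)."""
    return energy_pj_per_bit * 1e-12 * rate_gbps * 1e9

# A 2.048 Tbps (per-direction) link at each efficiency point:
electrical_w = link_power_watts(10, 2048)  # 10 pJ/b -> 20.48 W
optical_w = link_power_watts(5, 2048)      # 5 pJ/b  -> 10.24 W
```

At the quoted efficiencies, the optical link consumes roughly half the power of the 10 pJ/b electrical case for the same bandwidth.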
Latency
Optical I/O delivers 10x lower latency
Latency limits the size and number of interconnected components in a system. Electrical I/O above 50 Gbps requires forward error correction (FEC), which adds roughly 100 ns of latency, an overhead the distributed computing systems used for HPC cannot tolerate. TeraPHY™ optical I/O chiplets add only 5 ns of latency per chiplet plus time of flight, with no FEC required.
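The latency comparison above can be sketched as a simple per-hop budget. The ~5 ns/m fiber propagation figure is an assumption (light in glass with a refractive index of roughly 1.5); the 100 ns FEC and 5 ns chiplet figures come from the text.

```python
FIBER_NS_PER_M = 5.0  # approx. time of flight in optical fiber

def optical_hop_ns(distance_m: float, chiplet_ns: float = 5.0) -> float:
    # One chiplet traversal plus fiber time of flight; no FEC needed.
    return chiplet_ns + distance_m * FIBER_NS_PER_M

def electrical_hop_ns(fec_ns: float = 100.0) -> float:
    # FEC latency alone, before any propagation delay is counted.
    return fec_ns

# Over a 2 m in-rack link: ~15 ns optical vs >= 100 ns of FEC alone.
```

Even at rack scale, the optical hop stays well under the FEC overhead of the electrical link.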
Bandwidth
Optical I/O has 5x higher data rates
AI models with trillions of parameters and advanced HPC designs need significantly more bandwidth. Ayar Labs' optical I/O solution delivers a total bandwidth of 2.048 Tbps in each direction (4.096 Tbps bidirectional), enabling next-generation models and designs.
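One way the 2.048 Tbps per-direction figure can decompose is across parallel optical ports, each carrying multiple wavelengths (the multi-wavelength light source is described later on this page). The specific port/wavelength/line-rate split below is an illustrative assumption, not a published specification.

```python
# Assumed decomposition: 8 optical ports x 8 wavelengths x 32 Gbps each.
PORTS = 8
WAVELENGTHS_PER_PORT = 8
GBPS_PER_WAVELENGTH = 32

per_direction_tbps = PORTS * WAVELENGTHS_PER_PORT * GBPS_PER_WAVELENGTH / 1000
bidirectional_tbps = 2 * per_direction_tbps
# per_direction_tbps = 2.048, bidirectional_tbps = 4.096
```

Scaling bandwidth by adding wavelengths per port, rather than pushing a single serial lane faster, is what lets wavelength-division multiplexed optical I/O outpace electrical SerDes.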
Breaking the I/O Bottleneck
HPC is a key factor in many innovative research projects across industries. However, many HPC applications are running into the physical limitations of copper, which make it impossible to transmit data farther than a few centimeters without significant signal loss.
Given space constraints and the limitations of current materials, gains from engineering are harder to come by; input/output (I/O) throughput has stagnated for years, preventing CPUs and HPC systems from running at their full potential. Photonics is necessary to overcome electrical I/O bottlenecks in chip-to-chip communication while improving power efficiency, latency, reach, and speed.
Ayar Labs’ Optical I/O Enables Disaggregated Architectures for HPC, AI, and Cloud
Disaggregated architectures are shifting the paradigm of innovation by enabling resource composability for more efficient utilization in high performance computing, AI, and cloud.
“Today, we know what technologies are necessary for the first and second generation of exascale platforms in the 2022 to 2023 timeframe, but after that a crossover to optical I/O based solutions will be needed.”
– Matt Leininger, Senior Principal HPC Strategist Advanced Technology, Lawrence Livermore National Laboratory
Introducing In-Package Optical I/O
Ayar Labs has developed a universal optical I/O solution that uses standard silicon fabrication techniques to replace electrical-based I/O with high-speed, high-density, low-power optical chiplets and disaggregated multiwavelength lasers. Ayar Labs’ in-package optical I/O technology is the first solution to enable direct optical communications between key components in HPC systems, such as CPUs, GPUs, APUs, high bandwidth memory, and pooled memory. Our electro-optical approach eliminates bandwidth issues, providing a 5x improvement in interconnect bandwidth density, as well as lower latency and power requirements. By combining the TeraPHY™ in-package optical I/O chiplet with the SuperNova™ multi-wavelength light source, Ayar Labs is building a bridge to new, flexible system architectures that are redefining what’s possible in HPC.
Disaggregated System Architectures
Composable computing combines multiple resources into a single, unified system through a software layer, making it easy to provision, configure, and manage those resources. Disaggregated system architectures break system components into smaller, independent units that can be managed, replaced, and upgraded separately, allowing for greater flexibility and scalability. The combination of these two concepts is the evolutionary path forward for deploying dynamically configurable high performance computing infrastructures that are matched precisely to their workloads. Optical I/O solves the bandwidth and power consumption limitations of electrical serializer/deserializer (SerDes) links, providing faster and more efficient transmission over the longer distances required by the new, innovative, disaggregated architectures that will make composable computing a reality.
Meeting the Bandwidth Demands of Next-Gen HPC & AI System Architectures
In this panel discussion, leading experts explore innovative system designs and solutions for AI/HPC that address the performance and efficiency losses caused by memory capacity limits, network bottlenecks, and stranded resources, enabling systems to handle ever-growing, performance-intensive workloads.
Advanced Memory Architectures to Overcome Bandwidth Bottlenecks for the Exascale Era of Computing
Central to enabling new flexible system architectures will be high-bandwidth, low-latency optical interconnects. In this webinar, we’ll walk through examples of potential new architectures, technologies needed to enable them, and the ecosystem required to make them a reality.
Shared Memory Pools
Rapidly changing workloads and increasing memory intensity on HPC systems are making it difficult to scale memory capacity to meet resource needs. Recent studies of overall memory utilization show that, on average, a node uses less than 25 percent of its memory capacity 75 percent of the time. In-package optical I/O addresses this problem by providing high-bandwidth, low-power data transfer within and between XPUs, ASICs, FPGAs, memory, and storage over distances ranging from millimeters to kilometers. This democratizes access to memory and improves overall performance.
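A back-of-envelope calculation shows how much capacity the utilization statistic above implies is stranded on a fixed-memory node. The 512 GB node size is an illustrative assumption; the 25%/75% split comes from the text, and the bound assumes full utilization during the remaining 25% of the time (a worst case for the pooling argument).

```python
# Upper bound on time-averaged utilization: under 25% of capacity used
# 75% of the time, and at most 100% used the remaining 25% of the time.
node_gb = 512  # assumed node memory size, for illustration only

upper_bound_util = 0.75 * 0.25 + 0.25 * 1.0  # = 0.4375
stranded_gb = node_gb * (1 - upper_bound_util)  # >= 288 GB idle on average
```

Even under this generous bound, more than half of the node's memory sits idle on average, which is the capacity a shared, optically connected memory pool can reclaim.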
Ayar Labs’ Optical I/O Solution
As the semiconductor industry embraces the chiplet revolution, Ayar Labs’ in-package optical I/O solution is redefining I/O capabilities. Our groundbreaking TeraPHY in-package optical I/O chiplet and CW-WDM MSA-compliant SuperNova light source combine to deliver an I/O solution that obliterates traditional I/O bottlenecks and overcomes process constraints, unlocking revolutionary architectures for artificial intelligence/machine learning (AI/ML), disaggregated data centers, 6G, phased array sensor systems, and more.
Scalable and Sustainable AI: Rethinking Hardware and System Architecture
In this webinar panelists discuss the challenges of scaling up AI workloads on existing architectures and the emerging solutions that can dramatically improve performance, efficiency, and scalability. Moderated by EE Times, the webinar features panelists from Ayar Labs, Google, Lawrence Berkeley National Laboratory, NVIDIA, and Tenstorrent.
Strong Market Sentiment for Optical I/O Connectivity
Explore key findings from a recent Hyperion study on market readiness and expectations for optical I/O from HPC/AI users and vendors, regarding:
- System issues to be addressed in future architectures
- High-impact future technologies
- Two-year and four-to-six-year time horizons
- And more
Disaggregation and Optical Interconnect in AI/HPC Networks
This webinar discusses the push toward disaggregated resources and optical interconnect technologies to enable the next generation of high performance computing (HPC) and artificial intelligence (AI) infrastructure. Partners throughout the ecosystem will provide their perspective on the latest advances, from foundry to optical interconnects, to systems and cloud services. In addition, a test and measurement perspective will be provided to support this new wave of interconnect technology.
Partners include Ayar Labs, GlobalFoundries, Hyperion Research, Lightwave, Microsoft, NVIDIA, and Quantifi Photonics.
Paradigm Change: Reinventing HPC Architecture with In-Package Optical I/O
Learn how Ayar Labs' in-package optical I/O technologies are reinventing HPC architectures, enabling HPC centers to keep pushing the boundaries of HPC and AI.