Artificial Intelligence

Rethinking Generative AI Architectures with Optical I/O

The growing complexity and size of AI models, especially generative AI models, are introducing huge scaling and power challenges. Communication bottlenecks are a significant drag on efficiency. Optical I/O technology changes the game by enhancing link density within and across nodes. By eliminating the bottlenecks caused by traditional interconnects (electrical I/O plus bulky, expensive pluggable optics), optical I/O enables nodes to connect at scale, allowing them to effectively function as a single, giant GPU.


5x higher data rates

Trillion-parameter AI models and advanced HPC designs require ever-increasing bandwidth. Ayar Labs’ optical I/O solution offers a total bidirectional bandwidth of 4 Tbps, opening up new possibilities for generative AI architectures.


10x lower latency

Latency limits the size and number of interconnected components in a system. Electrical I/O above 50 Gbps requires forward error correction (FEC), which adds tens of nanoseconds of latency that the distributed computing systems used for HPC and AI cannot tolerate. Ayar Labs' optical I/O solution has a latency of 5 ns per chiplet plus time of flight (TOF), with no FEC required.
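The latency budget above can be sketched with simple arithmetic. This is an illustrative model, not an Ayar Labs specification: the ~4.9 ns/m fiber time-of-flight figure is a standard property of silica fiber, the two-chiplet-per-link assumption and the electrical SerDes/FEC numbers are assumptions for comparison.

```python
# Sketch: per-hop link latency using the figures quoted above.
# Assumptions (not vendor specs): one 5 ns chiplet at each end of the
# link, ~4.9 ns/m time of flight in silica fiber (refractive index ~1.47),
# and an illustrative 10 ns SerDes + 30 ns FEC for the electrical case.

FIBER_TOF_NS_PER_M = 4.9  # light in silica fiber travels ~0.204 m/ns

def electrical_link_latency_ns(serdes_ns: float, fec_ns: float) -> float:
    """Electrical I/O above 50 Gbps: SerDes latency plus mandatory FEC."""
    return serdes_ns + fec_ns

def optical_link_latency_ns(distance_m: float, chiplet_ns: float = 5.0) -> float:
    """Optical I/O: one chiplet at each end plus fiber TOF, no FEC."""
    return 2 * chiplet_ns + distance_m * FIBER_TOF_NS_PER_M

# A 2 m node-to-node hop:
print(optical_link_latency_ns(2.0))            # ~19.8 ns
print(electrical_link_latency_ns(10.0, 30.0))  # 40 ns before the wire
```

The point of the sketch: at short reach, the FEC overhead alone can exceed the entire optical link budget.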


8x more power efficient

Power efficiency has a direct impact on heat and reliability. Traversing systems, racks, and data centers with electrical I/O and pluggable optics is costly: 112 Gbps long-reach electrical I/O consumes 6-10 pJ/b. Ayar Labs' optical I/O solution consumes less than 5 pJ/b (10 watts).
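Energy per bit converts directly to link power once a data rate is fixed. A minimal sketch, assuming the 10 W figure corresponds to 5 pJ/b sustained over 2 Tbps (one direction of the 4 Tbps bidirectional chiplet); the 8 pJ/b electrical comparison point is the midpoint of the 6-10 pJ/b range quoted above:

```python
# Sketch: converting energy per bit (pJ/b) to link power in watts.
# power_W = (pJ/b * 1e-12 J/b) * (Gbps * 1e9 b/s)

def link_power_watts(pj_per_bit: float, gbps: float) -> float:
    """Power dissipated moving `gbps` gigabits per second at `pj_per_bit`."""
    return pj_per_bit * 1e-12 * gbps * 1e9

# Optical I/O at 5 pJ/b moving 2 Tbps (assumed: one direction of a
# 4 Tbps bidirectional chiplet) dissipates 10 W:
print(link_power_watts(5, 2000))  # 10.0 W

# 112G long-reach electrical I/O at 8 pJ/b for the same traffic:
print(link_power_watts(8, 2000))  # 16.0 W
```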

Revolutionizing Generative AI Performance

Large-scale generative AI workloads require robust communications and parallel processing, but traditional I/O creates bottlenecks. In-package optical I/O revolutionizes data transfer efficiency and bandwidth by connecting nodes so they effectively function as a single, giant GPU. It redefines generative AI architecture, enabling the pooling of accelerators, processors, memory, and storage across compute nodes for more efficient model training and inference.

Optical interconnects enable data to be transmitted at significantly higher throughput within each node and across nodes. More efficient communication increases GPU utilization so AI tasks complete faster. Fewer GPUs and switches are needed, reducing power and slashing CapEx and OpEx for today’s AI needs, and more efficiently scaling infrastructure for tomorrow’s AI needs.
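The "fewer GPUs" claim follows from simple sizing arithmetic: the GPU count needed for a target sustained throughput scales inversely with utilization. A hypothetical sketch with illustrative numbers (the 1 PFLOPS-peak GPU and the utilization values are assumptions, not vendor figures):

```python
import math

# Sketch: how interconnect-driven GPU utilization changes cluster sizing.
# All numbers below are illustrative assumptions.

def gpus_needed(target_pflops: float, peak_pflops_per_gpu: float,
                utilization: float) -> int:
    """GPUs required to deliver a target sustained compute rate."""
    return math.ceil(target_pflops / (peak_pflops_per_gpu * utilization))

# Delivering a sustained 100 PFLOPS with 1 PFLOPS-peak GPUs:
print(gpus_needed(100, 1.0, 0.30))  # 334 GPUs at 30% utilization
print(gpus_needed(100, 1.0, 0.60))  # 167 GPUs if utilization doubles
```

Doubling utilization halves the GPU count (and the attendant switches, power, and floor space) for the same delivered work.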

Generative AI model complexity is growing exponentially. Traditional interconnects create a bottleneck for data transfer, forcing GPUs to remain idle. Optical I/O connects nodes at scale so they work like one giant GPU.

“Optical connectivity will be important to scale accelerated computing clusters to meet the fast-growing demands of AI and HPC workloads. Ayar Labs has unique optical I/O technology that meets the needs of scaling next-generation silicon photonics-based architectures for AI.”

– Bill Dally, Chief Scientist & Senior VP of Research, NVIDIA

Ayar Labs’ Optical I/O Solution

As the semiconductor industry embraces the chiplet revolution, Ayar Labs’ in-package optical I/O solution is redefining I/O capabilities. Our groundbreaking TeraPHY in-package optical I/O chiplet and CW-WDM MSA-compliant SuperNova light source combine to deliver an I/O solution that obliterates traditional I/O bottlenecks and overcomes process constraints, unlocking revolutionary architectures for artificial intelligence/machine learning (AI/ML), disaggregated data centers, 6G, phased array sensor systems, and more.


Optical I/O Use Cases for Generative AI

Generative AI Scale Out

Scaling LLMs requires spreading computation across multiple GPUs, which creates bottlenecks from serialized communication. Delays cascade between network layers, so as GPUs and chassis are added, more time is spent on communication than on computation. In next-gen LLMs requiring entire clusters, the problem intensifies: a 256-GPU configuration may reach only 30 percent compute efficiency, versus 80 percent for a single GPU, yielding diminishing returns on increasing investment. Optical I/O addresses this issue head on by eliminating communication lags. It supports seamless scaling and more effective use of GPU resources, paving the way for more powerful large-scale generative AI architectures without runaway costs.
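The diminishing-returns arithmetic in that example is worth making explicit: effective compute is GPU count times per-GPU efficiency. The 80 percent and 30 percent figures come from the text above; everything else is an assumption for illustration.

```python
# Sketch of the diminishing-returns arithmetic: effective compute is
# GPU count times efficiency. The 80%/30% figures are from the example
# in the text; treating efficiency as a flat multiplier is an assumption.

def effective_gpus(n_gpus: int, efficiency: float) -> float:
    """GPU-equivalents of useful compute actually delivered."""
    return n_gpus * efficiency

single = effective_gpus(1, 0.80)     # 0.8 effective GPUs
cluster = effective_gpus(256, 0.30)  # 76.8 effective GPUs

# 256x the hardware buys only ~96x the single-GPU effective throughput:
print(cluster / single)  # 96.0
```

In other words, under these figures roughly 70 percent of the cluster's purchased compute is lost to communication stalls, which is the gap optical I/O targets.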

This webinar discusses the push toward disaggregated resources and optical interconnect technologies to enable the next generation of high performance computing (HPC) and artificial intelligence (AI) infrastructure.

Disaggregated Architectures for AI and HPC

Disaggregated architectures enable resource composability for more efficient utilization in high performance computing, AI, and cloud.

Disaggregated architectures decouple memory from processors, accelerators, and storage to enable flexible, dynamic resource allocation based on the tasks currently assigned to the data center. A shift to disaggregated architectures is expected to increase flexibility and performance, enabling quick, dynamic construction of customized node configurations so that I/O-heavy or compute-light work can be offloaded.

A transition to photonics (optical I/O) enables memory to be pooled with low latency and high performance. Faster, photonic interconnects between memory and XPUs (CPUs, GPUs, FPGAs, ASICs, and accelerators) will dramatically improve performance and throughput.

Scalable and Sustainable AI: Rethinking Hardware and System Architecture

In this webinar, panelists discuss the challenges of scaling up AI workloads on existing architectures and the emerging solutions that can dramatically improve performance, efficiency, and scalability. Moderated by EE Times, the webinar features panelists from Ayar Labs, Google, Lawrence Berkeley National Laboratory, NVIDIA, and Tenstorrent.

AI Resources


Unlocking the True Potential of AI with In-Package Optical I/O

This brief explores:

  • The current technology and model-related trends that are leading to interconnect bottlenecks that will stifle AI and ML system scalability.
  • How leading vertically integrated vendors are approaching the problem.
  • Why Ayar Labs’ innovative in-package interconnects provide the most practical and efficient solutions for massive AI and ML model scaling.

Optical I/O Chiplets Eliminate Bottlenecks to Unleash Innovation

This technical brief examines the evolution of optical communications in computing systems and the transition to ‘Phase Two’ of Moore’s Law through in-package optical I/O (OIO).

  • Trends Driving In-Package Optical I/O Chiplets
  • Electrical I/O Barriers to High-Performance Architectures
  • Ayar Labs Optical I/O Chiplet
  • Applications of Optical I/O

Meeting the Bandwidth Demands of Next-Gen HPC & AI System Architectures

In this panel discussion, leading experts explore innovative system designs and solutions that address the scaling challenges of AI and HPC and unlock greater value from research, science, and business initiatives. Join this webinar to learn what these pioneers are discovering as they define and develop new architectures and drive computing to the next level of performance.
