SC 2023

November 12–17, 2023, Denver, CO

Join Ayar Labs in booth #228 at Supercomputing 2023, the international conference for high performance computing, networking, storage, and analysis.

Denver is the place to be this fall as the high performance computing community convenes for an exhilarating week of sessions, speakers, and networking at its finest. SC23 is an unparalleled mix of thousands of scientists, engineers, researchers, educators, programmers, and developers who intermingle to learn, share, and grow.

Ayar Labs at SC23

Ayar Labs 4 Tbps Optical I/O Demo

Come by booth #228 to see our optical I/O solution: TeraPHY™ chiplets powered by the SuperNova™ remote light source. Learn how optical interconnect will transform next-generation HPC and AI architectures.

In-package optical I/O delivers up to 5x higher data rates and 10x lower latency, with up to 8x the power efficiency of traditional interconnects (electrical I/O plus pluggable optics).


SC23 Panel Discussions

Chiplet Ecosystem in High Performance Computing, AI/ML, and Data Acceleration

Wednesday, November 15, 2023
3:30–5:00 PM MST
Chiplets have become a compelling approach to incorporating specialization and massive bandwidth into the compute and memory devices used in HPC, but many challenges remain in realizing the vision of affordable, modular HPC built on advanced packaging technology. We bring together a diverse panel of experts to discuss whether an ecosystem or marketplace of chiplets will emerge for system developers to build next-generation devices, and to weigh the pros and cons of off-the-shelf versus custom-designed chiplets. Chiplets could be processors, GPUs, networking interfaces, optical engines, memory controllers, or FPGAs.

Panelists include LK Bhupathi, VP of products, strategy, and ecosystem at Ayar Labs, as well as representatives from AMD Research, Columbia University, Achronix, and Lawrence Berkeley National Laboratory.

Scalable and Adaptable Architectures for AI/HPC Advancement

Thursday, November 16, 2023
1:30–3:00 PM MST

AI/machine learning usage is exploding in both application breadth and model size. Predictive analytics, physics, modeling, and new use cases for generative AI/ML are increasing model sizes by 10x every 18 months. The custom processors and accelerators used for AI/ML require ever-higher I/O bandwidth to keep pace with this model growth. How, then, does one deploy a high-performance architecture that is scalable and adaptable over time? The panel will discuss the architectures, I/O, and large-scale system topologies needed to grow well beyond 200 billion parameters. You will gain insights into system concepts, scaled across workload sizes, that are cost-effective from a configurability perspective and focused on energy efficiency. Is there a new "billion parameters per watt" metric?

Panelists include Vladimir Stojanovic, professor at University of California, Berkeley, and chief architect and co-founder at Ayar Labs, as well as representatives from NVIDIA, Intel, and more.

On-Demand AI Webinar Video

Our Scalable and Sustainable AI: Rethinking Hardware and System Architecture webinar is now available on demand. Hosted by Ayar Labs and moderated by EE Times, the webinar brings together industry experts from Ayar Labs, Google, Lawrence Berkeley National Laboratory, NVIDIA, and Tenstorrent to discuss the challenges of scaling up AI workloads on existing architectures and the emerging solutions that can dramatically improve performance, efficiency, and scalability.

Dig Deeper into Optical I/O for HPC

Hyperion Research: Strong Market Sentiment for Optical I/O Connectivity

Explore key findings from Hyperion’s study on market readiness and expectations for optical I/O based on HPC/AI user and vendor input.

Ayar Labs Optical I/O: Shattering the Barriers to AI at Scale

Generative AI model complexity is growing exponentially. Traditional interconnects create a bottleneck for data transfer, forcing GPUs to remain idle. Optical I/O connects nodes at scale so they work like one giant GPU.

Optical I/O Enables Disaggregated Architectures for HPC, AI, and Cloud

Disaggregated architectures, made possible by optical I/O, enable resource composability for more efficient utilization in cloud, AI, and HPC.
