With SC23 in the rearview mirror we wanted to take a moment to collect and share our top three thoughts and observations on the show itself and the broader HPC market.
SC23 was Brilliant
This was the first time we’ve both exhibited and spoken at Supercomputing, and the results from our participation did not disappoint. On the floor we were showcasing our 4 Tbps OFPGA solution (more on that below), and crowds of attendees interested in the reality of co-packaged optics had the booth humming throughout the event. Our team participated in session discussions on scalable architectures for the future of AI/HPC, as well as the development of a chiplet ecosystem for next-generation devices. Enthusiasm for optical I/O was high and brought about productive discussions on technology and design integration — in particular around implementation of UCIe. It’s clear that the market’s future depends on a standardized approach for integration of optical I/O versus proprietary designs. There was a reason we had LK Bhupathi, Ayar Labs’ VP of products, strategy and ecosystem, speaking on the importance of standards in the chiplet ecosystem, and we’ll have a lot more to say on this front in the future.
Overall, everyone we spoke to agreed there was palpable energy and enthusiasm from exhibitors and attendees alike.
Why the excitement at SC23? We’ll offer two perspectives.
The first is that, after a years-long pandemic pause, industry tradeshows are back. Virtual meetings, while convenient, just can’t hold a candle to a focused, in-person gathering of an entire industry. We believe this year has marked a turning point for that pillar of marketing calendars: the conference.
The second factor leads into our next observation: there is a new driver in the HPC space, and it’s accelerating hard…
HPC or AI?
During a meeting with Ian Cutress of More than Moore, he posed an interesting question: is SC23 about HPC… or AI? There’s probably a larger philosophical debate to be had about the convergence of HPC and AI, but in the context of SC23 it’s a distinction without a difference: AI is unquestionably the new catalyst for much of the tech industry, HPC included.
Most of the exhibitors and many of the sessions at the show touched on AI in one way or another, almost to the point of AI-washing (the AI equivalent of greenwashing). From data platforms to chip vendors, everyone was touting their solution for use in AI.
Which highlights another interesting truth: Supercomputing is unique in how it is less about today and more about tomorrow. Companies and researchers are showcasing product roadmaps and concepts that are years in the making. This level of in-depth forward thinking and partnership across the community is essential given the pressure AI is putting on contemporary infrastructure.
On that note, the current generations of large language models (LLMs) are pushing the limits of this infrastructure, and they’re growing at a pace that’s faster than the underlying hardware can support. Given today’s trajectories, within one or two generations the number of parameters required for AI training and inference will be prohibitive, even for state-of-the-art HPC architectures.
We believe one of the things that will make or break the industry is energy: specifically, the picojoules per bit required to move data chip-to-chip and node-to-node. This will limit system scalability as data interconnects become the weak link between next-generation chips; per Amdahl’s Law, time spent moving data caps the speedup that adding more compute can deliver. Just as important, the heat generated by these electrical connections as system density increases will soon prove prohibitive for future supercomputing designs.
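To see why data movement caps scaling, here is a minimal sketch of Amdahl’s Law. The 5% serial fraction and node count below are illustrative assumptions of ours, not figures from any vendor.

```python
def amdahl_speedup(parallel_fraction: float, n: int) -> float:
    """Amdahl's Law: speedup on n processors when only
    parallel_fraction of the work actually scales with n."""
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / n)

# Illustrative: if interconnect-bound data movement pins just 5% of
# runtime as effectively serial, 1024 nodes deliver under a 20x speedup.
print(round(amdahl_speedup(0.95, 1024), 1))
```

However fast the compute gets, the serial (data-movement) fraction sets the ceiling, which is why per-bit interconnect efficiency matters so much.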
Obviously, Ayar Labs and our many investors and partners are hard at work solving these issues. At SC23 we showcased our latest 4 Tbps optically enabled Intel FPGA design on the floor, which offers 5x current industry bandwidth at 5x lower power and 20x lower latency, all packaged in a common PCIe form factor. We’re able to transfer data bidirectionally at 4 Tbps at less than 5 pJ/b with 5 ns latency per chiplet plus time of flight (TOF), all critical factors for the future of high-performance compute fabrics and next-generation disaggregated architectures.
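As a back-of-the-envelope illustration of why pJ/bit matters (our own arithmetic, not a vendor spec sheet): at a fixed bandwidth, link power is simply bits per second times joules per bit, so every pJ/bit at 4 Tbps costs 4 W of continuous power.

```python
def link_power_watts(bandwidth_tbps: float, energy_pj_per_bit: float) -> float:
    """Continuous link power: (bits/s) * (joules/bit).
    1 Tbps = 1e12 b/s and 1 pJ = 1e-12 J, so the exponents cancel."""
    return bandwidth_tbps * 1e12 * energy_pj_per_bit * 1e-12

# 4 Tbps at 5 pJ/b works out to 20 W per link; multiply by the number
# of links in a dense system and the thermal stakes become clear.
print(link_power_watts(4, 5))
```

The same arithmetic shows why shaving even a few pJ/bit compounds quickly across thousands of links in a disaggregated system.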
Rise of the AI Chip Vendors
Speaking of Ayar Labs’ investors/partners, NVIDIA did not have a booth at the show, yet it simultaneously felt like the company was everywhere. Booths across the show floor were plastered with the company logo, it was heavily represented in topics and presentations during the SC23 sessions, and we hear through the grapevine that the company had upwards of 200 people attending the show.
While NVIDIA is certainly the 800-pound gorilla in the space, competitors were also highly active. Intel (another Ayar Labs investor/partner) was heavily represented with a large booth and lots of partner activity across the floor. AMD had a strong presence, and we were also interested to see smaller AI-focused chip vendors such as Cerebras and Tenstorrent tout their latest accomplishments.
Keep an eye on this space: AI is driving some very interesting innovation and competition.
Bonus Observation: Liquid Cooling a Hot Topic
We’ve been in this space long enough to remember when liquid cooling was a fringe curiosity for all but the most power-hungry supercomputers.
While mixing liquid and electricity still sounds like a bad idea, there’s no denying that liquid cooling is rapidly moving into the mainstream. A number of cooling vendors had a large presence at SC23, and their booths seemed generally busy. To our minds, this reaffirms that heat and energy consumption are becoming real issues for the HPC space.
It’s a problem we look forward to addressing.
Overall, SC23 was an exciting opportunity to witness the vibrancy of the HPC space and see first-hand the industry’s enthusiasm over AI. Speaking for ourselves, while the show was awesome, even more exciting things are happening behind the scenes at Ayar Labs. We can’t wait to show you what we have planned for 2024.