Call for Participation

PDF version will be uploaded here (as of 2025-03-12).

Keynote Presentations

The Challenges of Delivering Power to and Cooling the Cerebras Wafer-Scale Engine

Jean-Philippe Fricker (Cerebras Systems, Inc.)

Abstract: As AI workloads push the boundaries of computational power, traditional chip architectures struggle to keep pace. Nowhere is this more evident than in wafer-scale computing, where delivering power and managing thermals become critical engineering challenges. This keynote explores the evolving landscape of high-performance computing and the infrastructure bottlenecks limiting future progress.

We begin by examining the ever-growing compute demands and why traditional datacenter infrastructure is struggling to keep up. We’ll analyze the impact of system density on both power delivery and cooling, highlighting the inefficiencies of conventional approaches. Next, we’ll look at how past innovations—such as water cooling—are making a resurgence as viable solutions.

Finally, we’ll explore how Cerebras has tackled these challenges head-on, leveraging novel architectural and cooling innovations to unlock unprecedented performance. We’ll compare this approach with conventional solutions to understand why wafer-scale integration represents a paradigm shift in AI computing. Join us for an in-depth look at the future of high-performance computing and what it takes to meet the growing demands of AI inference and training while overcoming power and cooling challenges.

Jean-Philippe (J.P.) Fricker is Chief System Architect and Co-Founder at Cerebras Systems. Before co-founding Cerebras, J.P. was Senior Hardware Architect at rack-scale flash array startup DSSD (acquired by EMC). Prior to DSSD, J.P. was Lead System Architect at SeaMicro where he designed three generations of fabric-based computer systems. Earlier in his career, J.P. was Director of Hardware Engineering at Alcatel-Lucent and Director of Hardware Engineering at Riverstone Networks. He holds an MS in Electrical Engineering from École Polytechnique Fédérale de Lausanne, Switzerland, and has authored 42 patents.


“A Content-Addressable Engine for Associative Processing”

José Martínez (Cornell University)

Abstract: TBD

José Martínez is the Lee Teng-hui Professor of Engineering at Cornell University. He has been very fortunate to work with some extraordinary people, and as a result his research has received a number of awards over the years, among them two IEEE Micro Top Picks papers, an HPCA Best Paper award, MICRO and HPCA Best Paper nominations, an NSF CAREER Award, two IBM Faculty Awards, two Qualcomm Faculty Awards, and a Distinguished Educator Award from the University of Illinois’ Computer Science Department (his graduate alma mater). On the teaching side, he has been recognized with two Kenneth A. Goldman ’71 and one Dorothy and Fred Chau MS’74 College of Engineering teaching awards, a Ruth and Joel Spira Award for Teaching Excellence, three times as the most influential college educator of a Merrill Presidential Scholar (Andrew Tibbits ’07, Gulnar Mirza ’16, and Angela Jin ’21), and as the student-elected 2011 Tau Beta Pi Professor of the Year in the College of Engineering. He is an IEEE Fellow and Vice Chair of ACM SIGARCH.


Specialized Hardware and Open-Source Tools for Scientific Computing and Instruments

Kazutomo Yoshii (Argonne National Laboratory)

Abstract: High-performance computing (HPC) faces critical challenges as transistor scaling slows, limiting further gains in computational performance and energy efficiency. Scientific instrumentation, meanwhile, faces a different obstacle: rapidly increasing data rates. Instruments such as advanced X-ray detectors generate terabytes of data per second, making it impractical to transmit raw data downstream. On-chip data processing and reduction at the source are now essential to address this bottleneck.

Data movement, rather than computation, has become the dominant factor limiting system performance across both HPC and scientific instruments. The performance gap between processors and memory exacerbates inefficiencies, leaving many data-intensive workloads unable to fully utilize processing capabilities. To optimize system performance, strategies such as data compression and reduction within hardware are increasingly necessary. Data flow and streaming computing paradigms can significantly improve data handling in both HPC and scientific instruments by facilitating efficient, continuous data transfer.

Specialized hardware accelerators offer a promising solution by enhancing both performance and energy efficiency across scientific domains. These accelerators also have the potential to revolutionize scientific instruments by enabling real-time data handling at the source. However, hardware specialization requires expertise in design, verification, and integration, as well as shareable resources such as open-source hardware libraries — all of which remain scarce globally.

Open-source ecosystems, featuring tools such as Chisel, Verilator, FireSim, Chipyard, Mosaic, and OpenROAD, stimulate collaboration and community-driven prototyping by enabling the sharing of research ideas and innovations. These tools, along with open standards like RISC-V, make hardware innovation more accessible, particularly to professionals from software backgrounds. Cultivating strong open-source collaborations could pave the way for advances in both scientific computing and instrumentation.
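To give a flavor of how approachable these tools are for software-oriented developers, here is a minimal, hypothetical Chisel sketch (illustrative only, not drawn from the talk) of the kind of at-the-source data reduction described above: a zero-suppression filter that forwards a detector sample downstream only when it exceeds a threshold. The module name, signal names, and threshold scheme are all assumptions made for this example.

```scala
import chisel3._
import chisel3.util._

// Hypothetical zero-suppression filter: a detector sample is forwarded
// downstream only when it exceeds a programmable threshold, reducing
// the data volume at the source.
class ZeroSuppress(width: Int) extends Module {
  val io = IO(new Bundle {
    val threshold = Input(UInt(width.W))
    val in        = Flipped(Decoupled(UInt(width.W))) // upstream sample stream
    val out       = Decoupled(UInt(width.W))          // reduced output stream
  })

  val keep = io.in.bits > io.threshold

  io.out.bits  := io.in.bits
  io.out.valid := io.in.valid && keep
  // Consume suppressed samples immediately; forward kept samples only
  // when the downstream consumer is ready.
  io.in.ready  := io.out.ready || !keep
}
```

Chisel elaborates a module like this into synthesizable Verilog, which can then be simulated with an open-source simulator such as Verilator or carried toward layout with the OpenROAD flow.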

Kazutomo Yoshii is a Principal Experimental Systems Specialist at Argonne National Laboratory. He earned an M.S. in Computer Science from Toyohashi University of Technology, Japan, in 1994. His career began at Hitachi’s research facility in Japan, where he developed medical imaging analysis software for functional MRI data. In 1998, he joined Turbolinux, contributing to the Linux operating system in both Japan and Santa Fe, New Mexico. In 2002, he transitioned to Mountain View Data, focusing on dynamic provisioning systems for cluster environments. Since December 2004, he has been with Argonne, actively engaging in co-design activities for supercomputers and scientific experimental systems. His recent work focuses on custom accelerator designs and streaming near-sensor processing architectures. His research interests include high-performance computing, power-aware computing, reconfigurable dataflow computing, hardware development tools, and hardware specialization.


Title: TBD

Bora Baloglu (Intel)

Abstract: TBD

Bio: TBA


Title: TBD

Jim Keller (Tenstorrent)

Abstract: TBD

Bio: TBA


Invited Presentation

Title: TBD

Sangbum Kim (Seoul National University)

Abstract: TBD

Bio: TBA


Panel Discussion

Topic: “Sustainable AI: Emerging Architectures, Devices, and Quantum Computing Towards Future Computing”

Organizer and Moderator: Tohru Ishihara (Nagoya Univ.)

Panelists: TBD

Abstract: TBD


Special Sessions (Invited Lectures)

“Next-Generation Quantum Computing: A Computer Architect’s Perspective”

Jangwoo Kim (Seoul National University / MangoBoost Inc.)

Abstract: Quantum computing is the next-generation computing paradigm, and many researchers are actively working across its various domains (e.g., qubit manufacturing, qubit-interfacing control processors, programming and compilers, and applications). As real-world quantum applications will require millions of qubits, the field is moving from Noisy Intermediate-Scale Quantum (NISQ) devices to fault-tolerant quantum computers (FTQC). In this talk, I will first introduce the key challenges in developing a scalable and reliable quantum computer in the FTQC era. Next, I will present my research covering quantum computer modeling, quantum control processors, quantum interface methods, distributed quantum computing, and reliable quantum computing. By integrating these outcomes, we have been contributing to the realization of real-world quantum computers in the FTQC era.

Jangwoo Kim is a full professor in the Department of Electrical and Computer Engineering at Seoul National University. He is also the CEO and founder of MangoBoost Inc., which provides next-generation HW/SW solutions to maximize the efficiency of datacenters. He earned his PhD degree from Carnegie Mellon University, and his BS and MEng degrees from Cornell University. Prior to his academic career, he contributed to developing UltraSPARC T4 CPUs and servers at Sun Microsystems and Oracle Corporation. His current research interests lie in server and system architecture, cryogenic and quantum computer architecture, and AI/neuromorphic computing.


“Radiation-Hardened Circuit Design for Space Applications”

Alex Orailoglu (UC San Diego)

Abstract: TBD

Bio: TBA