Date and time:

Location: Seminar room OMZ (U013, INF 350, floor -1)

Title: Differentiable Astrophysical Simulations on Modern Accelerators with JAX

Abstract: State-of-the-art astrophysical simulation codes such as AREPO, ATHENA, or GIZMO have been developed over decades around CPU-centric, MPI-parallel paradigms. These codes routinely scale to hundreds of thousands of CPU cores and enable multi-physics simulations combining magnetohydrodynamics, self-gravity, radiation, and chemistry. However, the rapidly evolving hardware landscape, characterized by large GPU clusters and heterogeneous, accelerator-dominated systems such as NVIDIA Grace Hopper (GH)-based supercomputers, increasingly challenges this traditional design philosophy. At the same time, machine learning and differentiable programming are becoming integral to modern astrophysics, enabling hybrid physics–AI models, data-driven subgrid physics, and gradient-based inference directly through simulations. These developments demand simulation frameworks that are both accelerator-native and performance-portable, while remaining sufficiently high-level to support rapid physics development and close integration with machine learning workflows.

In this talk, I will review the current state of astrophysical simulation codes, highlighting both their scientific achievements and their structural limitations with respect to modern heterogeneous hardware and differentiable computing. I will then present a new generation of astrophysical simulation tools developed by our group, including astronomix, ODISSEO, and raytrax, alongside differentiable extensions for radiation transport and chemical reaction networks. These codes are built on JAX, which provides automatic differentiation, just-in-time compilation, and native execution on CPUs, GPUs, and TPUs from a single high-level code base. I will discuss how XLA-based compilation and SPMD execution in JAX map naturally onto modern accelerator architectures, in contrast to the kernel-level offloading approaches of legacy MPI codes.

I will argue that JAX-based simulation frameworks offer a compelling alternative to traditional HPC approaches, enabling performance-portable execution on emerging accelerator architectures while naturally unifying classical numerical solvers with machine learning models. This paradigm lowers the barrier to prototyping new physical models, facilitates hybrid physics–AI simulations, and opens a path toward fully differentiable astrophysical simulation pipelines without sacrificing performance.
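To make the programming model in the abstract concrete, the following is a minimal sketch of a differentiable, JIT-compiled simulation in JAX. It is an illustrative toy example, not code from astronomix, ODISSEO, or raytrax: a leapfrog orbit integrator in a point-mass potential (with G·M = 1 in code units), through which a gradient of the final state with respect to the initial velocity is computed directly through the time loop.

```python
import jax
import jax.numpy as jnp

def acceleration(pos):
    # Point-mass gravity with G*M = 1 (code units).
    r = jnp.linalg.norm(pos)
    return -pos / r**3

def integrate(v0, n_steps=100, dt=0.01):
    # Leapfrog (kick-drift-kick) integration from a fixed initial position.
    def step(state, _):
        pos, vel = state
        vel = vel + 0.5 * dt * acceleration(pos)
        pos = pos + dt * vel
        vel = vel + 0.5 * dt * acceleration(pos)
        return (pos, vel), None

    x0 = jnp.array([1.0, 0.0])
    (pos, vel), _ = jax.lax.scan(step, (x0, v0), None, length=n_steps)
    return pos

# JIT-compile the integrator and differentiate the final x-coordinate
# with respect to the initial velocity: autodiff flows through the
# entire time loop, with no hand-written adjoint code.
final_x = jax.jit(lambda v0: integrate(v0)[0])
grad_fn = jax.jit(jax.grad(final_x))

v0 = jnp.array([0.0, 1.0])
print(final_x(v0))
print(grad_fn(v0))
```

The same code runs unchanged on CPU, GPU, or TPU; swapping the toy potential for a full physics solver is what turns this pattern into a gradient-based inference pipeline.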

CV: Dr. Tobias Buck is a Junior Group Leader at Heidelberg University and head of the Astrophysics and Machine Learning Group (AstroAI-Lab). He earned his Ph.D. in Physics from Heidelberg University, where his research focused on the formation of the Milky Way in a cosmological context using numerical simulations. Before founding AstroAI-Lab, he worked as a postdoctoral researcher at the Leibniz Institute for Astrophysics Potsdam (AIP) and held visiting positions abroad. His research lies at the intersection of computational astrophysics, high-performance computing, and machine learning, with a strong focus on simulation-based inference, differentiable simulations, and galaxy formation. AstroAI-Lab is jointly based at the Interdisciplinary Center for Scientific Computing (IWR) and the Institute for Theoretical Astrophysics (ITA) at Heidelberg University and is funded by the Carl Zeiss Foundation (Nexus Program). The group develops next-generation, ML-enabled simulation frameworks to study the formation and evolution of galaxies.

Updated: