Research on the intersection of Machine Learning, High-Performance Computing and Hardware

The Computing Systems Group (CSG) at the Institute of Computer Engineering at Ruprecht-Karls University of Heidelberg focuses on vertically integrated research (thus considering the complete computing system) that bridges demanding applications such as deep neural networks (DNNs), high-performance computing (HPC), and high-performance data analytics (HPDA) with various forms of specialized computer hardware.

[Group photo: ZITI in Neuenheimer Feld 368]

Today, research in computing systems is most concerned with specialized forms of computing in combination with seamless integration into existing systems. Specialized computing, for instance based on GPUs (as known from gaming), FPGAs (field-programmable gate arrays), or ASICs (not the shoe brand but “application-specific integrated circuits”), is motivated by diminishing returns from CMOS technology scaling and hard power constraints. Notably, for a given fixed power budget, energy efficiency defines performance:

performance [operations/s] = power [W] × energy efficiency [operations/J]
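As a quick numeric sketch of this relation (all numbers are assumed for illustration, not measured figures for any particular device):

```python
# Under a fixed power budget, peak performance is the product of
# power and energy efficiency:
#   performance [FLOP/s] = power [W] * energy_efficiency [FLOP/J]
power_w = 300.0                  # assumed accelerator power budget in watts
efficiency_flop_per_j = 50e9     # assumed efficiency: 50 GFLOP per joule
performance_flops = power_w * efficiency_flop_per_j
print(performance_flops / 1e12)  # peak performance in TFLOP/s -> 15.0
```

Doubling energy efficiency at the same power budget doubles attainable performance, which is why specialization pays off under hard power constraints.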

As energy efficiency is usually improved by specialized architectures (processor, memory, network), our research is geared toward bringing emerging technologies and architectures to demanding applications.

Particular research fields include

  • Embedded Machine Learning, including bringing state-of-the-art DNNs to resource-constrained embedded devices, as well as embedding DNNs in the real world, which requires a treatment of uncertainty
  • Advanced hardware architecture and technology, including specialized forms such as GPU and FPGA accelerators, analog electrical and photonic processors, as well as resistive memory

To close the semantic gap between demanding applications and the various specializations of hardware, we are most concerned with creating abstractions, models, and associated tools that facilitate reasoning about various optimizations and decisions. Overall, this results in vertically integrated approaches to fast and efficient ML, HPC, and HPDA.

We gratefully acknowledge the generous support we receive. Current and recent sponsors include DFG, Carl-Zeiss Stiftung, FWF, SAP, Helmholtz, BMBF, NVIDIA, and XILINX.

Please find on this website information about our team members, research projects, publications, teaching and tools. For administrative questions, please contact Andrea Seeger, and for research and teaching questions Holger Fröning.

Latest news

TRETS article on GraphScale in combination with HBM published!

Congrats to Jonas for a post-PhD-defense publication on “GraphScale: Scalable Processing on FPGAs for HBM and Large Graphs”, an extension of his FPL2023 paper! Read more here: link

Article on measuring random telegraph noise of resistive memory published!

Kudos to the colleagues from the Khunjerab project on a published article on “Random telegraph noise characteristics of nonvolatile resistive random access memories based on optical interference principle”! Read more here: link

ITEM 2024 to take place with ECML-PKDD2024!

The 5th Workshop on IoT, Edge, and Mobile for Embedded Machine Learning (ITEM) has been accepted to take place in conjunction with ECML-PKDD2024! CfP: www.item-workshop.org

Public talk by Dr. Georg Hager, Erlangen National High Performance Computing Center (NHR@FAU)

ZITI is very happy to welcome Dr. Georg Hager for a visit on February 5, 2024. Dr. Hager will give a public talk on “Performance Engineering with Resource-Based Metrics” at 4 pm in the ZITI lecture room (INF 350, room U014). Read more

AccML paper on activation pruning accepted!

Happy to announce that Daniel got his first paper accepted: “Compressing the Backward Pass of Large-Scale Neural Architectures by Structured Activation Pruning”, at the AccML workshop, co-located with the HiPEAC 2024 conference. Read the preprint here: link

Older news can be found in the News Archive.