ZITI is very happy to announce the talk ‘Efficient Hardware for Neural Networks’ by Prof. Dr. Grace Li Zhang (TU Darmstadt) on July 29, 2024. The talk will be given at 4 pm in room U014 of the OMZ (INF 350, floor -1).

Title: Efficient Hardware for Neural Networks

Abstract: The last decade has witnessed significant breakthroughs of deep neural networks (DNNs) in many fields. These breakthroughs have come at extremely high computation and memory cost, and the growing complexity of DNNs has accordingly led to a quest for efficient hardware platforms. In this talk, class-aware pruning is first presented to reduce the number of multiply-accumulate (MAC) operations in DNNs. Class-exclusion early-exit is then examined to reveal the target class before the last layer is reached. To accelerate DNNs, digital accelerators such as Google's systolic array can be used. Such an accelerator is composed of an array of processing elements that execute MAC operations efficiently in parallel, but it suffers from high energy consumption. To reduce the energy consumption of MAC operations, we select quantized weight values with good power and timing characteristics. To reduce the energy consumption incurred by data movement, logic design of neural networks is presented. Finally, ongoing research topics and future research plans will be summarized.
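The systolic-array execution model mentioned in the abstract can be illustrated with a short sketch. The plain-Python simulation below is not from the talk itself: the function name systolic_matmul and the weight-stationary dataflow are illustrative assumptions. It shows, under those assumptions, how a grid of processing elements (PEs), each holding one stationary weight, performs one MAC per element per time step as activations stream through.

    import numpy as np

    def systolic_matmul(A, W):
        """Simulate a weight-stationary systolic array computing A @ W.

        Each (i, j) position models a PE that holds one weight; partial
        sums accumulate as activations stream through. Illustrative only:
        real arrays pipeline these steps in hardware.
        """
        m, k = A.shape
        k2, n = W.shape
        assert k == k2, "inner dimensions must match"
        acc = np.zeros((m, n))
        # At time step t, activation A[i, t] enters row i and meets the
        # stationary weight W[t, j] in column j; the PE performs one
        # multiply-accumulate (MAC) on its running partial sum.
        for t in range(k):            # one input wavefront per time step
            for i in range(m):        # rows of the PE array
                for j in range(n):    # columns of the PE array
                    acc[i, j] += A[i, t] * W[t, j]  # one MAC per PE
        return acc

    # Usage: the simulated array matches a direct matrix product.
    A = np.arange(6, dtype=float).reshape(2, 3)
    W = np.arange(12, dtype=float).reshape(3, 4)
    assert np.allclose(systolic_matmul(A, W), A @ W)

A real accelerator executes all PEs in a wavefront concurrently in hardware, which is the source of both the parallel speed-up and the energy cost of moving data into and out of the array that the talk addresses.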

CV: Grace Li Zhang received the Dr.-Ing. degree from the Technical University of Munich (TUM) in 2018. She joined TU Darmstadt in 2022 as a Tenure Track Assistant Professor, where she leads the Hardware for Artificial Intelligence Group. Her research focuses on efficient hardware acceleration of AI algorithms and systems, AI computing with emerging devices (e.g., RRAM and optical components), circuit and system design methodologies for AI, explainability of AI, and neuromorphic computing.