Specialization: Application Specific Computing

The specialization "Application Specific Computing" covers a range of approaches for solving highly compute-intensive tasks. Parallel, distributed compute systems and their interconnection networks are treated in detail at the hardware level, along with the software approaches needed to use this hardware efficiently. Special-purpose hardware accelerators, such as graphics cards (GPUs) and configurable co-processors (FPGAs and others), are covered in depth with practical exercises. Efficient mathematical and numerical methods are also presented.

3 compulsory modules:

  • GPU Computing (GPU hardware, communication, memory management, CUDA, OpenCL)
  • Reconfigurable Embedded Systems (embedded HW, FPGAs, VHDL tutorial, HW/SW co-design)
  • Parallel Algorithm Design (compute vs. communication, locality vs. parallelism, parallel design patterns)

2 elective modules from:

  • Accelerator Practice (general, sparse and dense algebra, specialized libraries, GPU clusters)
  • Advanced Parallel Algorithms (advanced transformation for parallelism and locality, hierarchical algorithms)
  • Advanced Parallel Computing (communication, synchronization, cache coherence, multi-threading) 
  • C++ Practice (effective C++11: constexpr, move references and constructors, initializer lists, lambdas, variadic templates)
  • Electronics (components, operational amplifiers, oscillators, PLL, power supplies, ADC,...)
  • FPGA Coprocessors (FPGAs in detail, compute models, data paths, optimizations, high level synthesis)
  • High Performance Interconnection Networks (topologies, flow control, error tolerance, on-chip networks) 
  • Introduction to High Performance Computing (HW, SW and challenges of HPC, practical problems)
  • Microcontroller Based Embedded Systems (microcontrollers, circuit design, board manufacturing) 

Model study plans:

The specialization can focus on software design or on hardware design.
