Advanced Topics in Parallel Programming

  • type: Seminar (S)
  • semester: SS 2016
  • time: weekly, Mondays 15:45 - 17:15, Building 20.21, Room 314
    Dates: 2016-04-25, 2016-05-02, 2016-05-09, 2016-05-23, 2016-05-30,
    2016-06-06, 2016-06-13, 2016-06-20, 2016-06-27, 2016-07-04,
    2016-07-11, 2016-07-18


  • lecturer: Prof. Dr. Achim Streit
    Elizaveta Dorofeeva
  • lv-no.: 2400023
Notes

Book credit under Informatik Seminar 1, 2 or 3.

Advance registration via ILIAS is not mandatory and is non-binding, but it is desirable so that enough topics and topic supervisors can be organised ahead of time.

Registration for the seminar and the distribution of topics will take place during the first session on Monday, 25 April, 15:45 - 17:15, Building 20.21, Room 314. If several students are interested in the same topic, it will be assigned by random draw.

Description

Efficient use of high-end supercomputing resources for simulations of phenomena from physics, chemistry, biology, financial modelling, neural networks or signal processing is only possible if the corresponding applications are designed with modern, advanced computational methods in parallel programming. Often an application's ability to exploit the newest computing hardware, such as accelerators or high-speed interconnect technology, plays a central role in being granted access to large supercomputers.

Furthermore, improving the existing algorithms of simulation codes with advanced parallelization techniques can yield crucial efficiency gains: in time, by simply speeding up the generation of results, or even in energy, when the optimised application can produce the same results by redistributing the main computation onto low-energy-consuming parts of the computer such as graphics co-processors, local disks or caches.

Students attending this seminar will be assigned topics related to up-to-date technology in the field of advanced parallel programming for distributed and shared memory systems, using MPI, OpenMP, CUDA, OpenCL and OpenACC. Tools for analysing the scalability, efficiency and time consumption of an application will also be studied, and topics in parallel file systems and high-speed communication may be investigated.

The following topics can be chosen.

Parallel programming on shared memory systems and hardware accelerators (graphics/mathematical co-processors):

  • Parallel programming and optimisation with Intel Xeon Phi Coprocessors
  • SIMD Programming: intrinsic or framework?
  • Task-based parallelism with Intel Threading Building Blocks
  • Parallel programming with Java Multithreading
  • Comparing parallel accelerators for high-throughput image processing
  • Efficient matrix multiplication on massively parallel architectures

Parallel programming for distributed memory systems:

  • Parallel data manipulation and access with MPI-IO
  • One-sided communication in MPI

Hybrid parallel programming models:

  • Hybrid models on clusters of SMP nodes
  • Programming models for GPU clusters

Performance analysis and optimisation of parallel simulation codes:

  • Performance Analysis Tools for Parallel Applications

Programming with use of parallel file systems, networks and tools:

  • InfiniBand network for Parallel Computing

Numerical libraries and APIs for parallel solving:

  • Parallel programming with linear algebra packages
Aim

Students investigate, conceive and analyse their chosen state-of-the-art methods and technologies in the field of parallel computing. They learn to present their work to their course mates, answer questions and engage in discussion of the corresponding topic.