The ZIH colloquia take place regularly on the 4th Thursday
of each month at 3 p.m. in Willers-Bau A317.
For additional or unscheduled events, time and location
are stated explicitly.
Past events 2012
Additional colloquium - 6 December 2012: Christian
Pflüger (University of Utah, Huntsman Cancer Institute)
"Hacking the Epigenome: Reprogramming at its best."
(slides)
Human stem cells are thought to be one of the most promising
tools in medicine. They can be used to treat people with
detrimental diseases that require new healthy tissue in order
to stop the symptoms or even to cure them (e.g. Parkinson's
disease). Recent advances in stem cell biology have created
opportunities to investigate stem cell formation from adult
differentiated tissue such as skin (fibroblasts) or blood.
These new methods enable biologists to shorten the time period
for induced stem cell formation from months to mere days with
high efficiency. This in turn enables high throughput analysis
using next-generation sequencing (NGS) of transcripts (e.g.
mRNA) and various epigenetic modifications such as DNA
methylation and demethylation. The resulting massive
amounts of information and data require sophisticated
bioinformatic analysis to understand key regulatory elements
and regions during iPS formation. This talk aims to point
out current state-of-the-art bioinformatic analysis methods
and offers a wish list for informatics people from a
biologist's perspective.
22 November 2012: Ruedi Seiler (TU Berlin) "Das
Q-Sanov Theorem, eine Krippe fundamentaler Resultate der
Informationstheorie"
The quantum Sanov theorem and the concepts used in it are
introduced, and three fundamental asymptotic results of
information theory are derived from it and discussed:
- Shannon's theorem
- The theorem of Kaltchenko and Yang
- Stein's lemma
Fundamental to all of these results is the notion of typical
sets, as we know it from the coin-toss random experiment: the
sequence of outcomes contains roughly as many heads as tails.
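For readers unfamiliar with the term, the standard textbook definition of the (weakly) typical set is given here as a reminder; it is not taken from the talk, only from classical information theory:

\[
  A_\varepsilon^{(n)} \;=\; \left\{\, x^n \in \mathcal{X}^n \;:\; \left| -\tfrac{1}{n}\log_2 p(x^n) - H(X) \right| \le \varepsilon \,\right\}
\]

For a fair coin, H(X) = 1 bit, so the typical sequences of length n are exactly those with roughly as many heads as tails; by the asymptotic equipartition property they carry almost all of the probability as n grows.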
25 October 2012: Torsten Höfler (ETH Zürich) "New
Features in MPI-3.0 in the Context of Exascale
Computing" (slides)
The new MPI-3.0 standard sets out to address the changes in
technology that have happened in recent years. This
modernization of MPI also addresses architectures of future
large-scale computing systems and changed user needs. The MPI Forum
introduced new collective communication interfaces, such as
nonblocking and neighborhood collectives, better support for
topology mapping and multicore computers, and a largely extended
one-sided access model. This talk will provide an overview of
these new features in the context of the changing environments
and address strengths and remaining weaknesses of MPI. It can
be expected that all of these features will quickly become
available in many MPI implementations and thus influence the
state of the art in parallel programming.
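As an illustration of one of these new features (a minimal C sketch, not taken from the talk or the slides), the MPI-3.0 nonblocking collective MPI_Iallreduce lets a program start a global reduction, overlap it with independent local work, and complete it later:

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    double local = (double)rank;   /* each rank contributes its own value */
    double sum   = 0.0;
    MPI_Request req;

    /* MPI-3.0 nonblocking collective: start the reduction immediately ... */
    MPI_Iallreduce(&local, &sum, 1, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD, &req);

    /* ... independent local work could overlap with the communication here ... */

    /* complete the collective before using the result */
    MPI_Wait(&req, MPI_STATUS_IGNORE);
    if (rank == 0)
        printf("sum of all ranks = %.0f\n", sum);

    MPI_Finalize();
    return 0;
}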
27 September 2012: Andre Brinkmann (Johannes
Gutenberg-Universität Mainz) "HPC Storage: Challenges
and Opportunities of new Storage" - cancelled due to illness -
will be made up on 24 January 2013
2 August 2012: additional colloquium - Robert
Henschel (Indiana University): "Die IU/ZIH Kollaboration - ein
Überblick" (slides)
The collaboration between Research Technologies (RT), a
division of Indiana University, and the Zentrum für
Informationsdienste und Hochleistungsrechnen (ZIH), a central
scientific unit of TU Dresden, was initiated in 2006 by Craig
Stewart and formalized in 2008 with the signing of a memorandum
of understanding (MoU). The MoU defines four focus areas for
the cooperation of the two institutes: data-intensive
computing, high performance computing in bioinformatics, file
systems over wide-area networks, and performance analysis of
scientific applications. The talk covers the joint work within
the FutureGrid project as well as the partnership in TU
Dresden's 100 Gbit project, and gives an insight into this
intercontinental collaboration.
26 July 2012: Lucas Schnorr (CNRS
Grenoble/France): "Data Aggregation and Alternative
Visualization Techniques for Parallel and Distributed Program
Analysis" (slides)
One of the main visualization techniques used for the analysis
of parallel program behavior is the space-time view, or
Gantt-chart. The visualization scalability of this technique is
usually limited by the screen size and the number of processes
that can be represented on it. This talk presents a combination
of data aggregation techniques and different ways of
visualizing trace information, achieving better visualization
scalability of the analysis. It details the spatial and
temporal data aggregation performed on the traces, and then
presents two interactive aggregation-enabled visualization
techniques: the squarified treemap and the hierarchical graph.
Application scenarios are used to illustrate these techniques
in practice.
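To make the idea of temporal aggregation concrete, here is a small self-contained C sketch (not the speaker's code; the event structure, process count, and number of time slices are assumptions chosen for illustration). It bins per-process event durations into fixed time slices, so that a trace with millions of events reduces to one value per process and slice:

#include <stdio.h>

#define NPROCS  4    /* assumed number of processes in the trace        */
#define NSLICES 10   /* assumed number of time slices to aggregate into */

/* a simplified trace event: owning process plus start/end timestamps */
typedef struct { int proc; double start, end; } event_t;

/* add the event's duration to every time slice it overlaps */
static void aggregate(double busy[NPROCS][NSLICES], const event_t *e,
                      double t_min, double slice_len) {
    for (int s = 0; s < NSLICES; s++) {
        double s_start = t_min + s * slice_len;
        double s_end   = s_start + slice_len;
        double lo = e->start > s_start ? e->start : s_start;
        double hi = e->end   < s_end   ? e->end   : s_end;
        if (hi > lo)
            busy[e->proc][s] += hi - lo;
    }
}

int main(void) {
    /* tiny synthetic trace, purely illustrative */
    event_t trace[] = { {0, 0.1, 2.5}, {1, 0.0, 1.0}, {1, 3.0, 9.5}, {3, 5.0, 10.0} };
    double busy[NPROCS][NSLICES] = {{0.0}};
    double t_min = 0.0, t_max = 10.0;
    double slice_len = (t_max - t_min) / NSLICES;

    for (unsigned i = 0; i < sizeof trace / sizeof trace[0]; i++)
        aggregate(busy, &trace[i], t_min, slice_len);

    /* the aggregated matrix is what a treemap or Gantt-like view would render */
    for (int p = 0; p < NPROCS; p++) {
        for (int s = 0; s < NSLICES; s++)
            printf("%4.1f ", busy[p][s]);
        printf("\n");
    }
    return 0;
}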
19 July 2012, 2 p.m., WIL A317: additional
colloquium with Thomas Lippert (FZ Jülich): Das
europäische Exascale-Projekt DEEP - Auf dem Weg zur "Dynamical
Exascale Entry Platform"
Since the beginning of 2012, a consortium of 16 partners from 8
countries led by the Jülich Supercomputing Centre, among them 5
industrial partners, has been engaged in developing the novel
hybrid supercomputing system DEEP. The DEEP project is funded
by the European Community under FP7-ICT-2011-7 as an
Integrated Project, with co-funding by the partners. The
DEEP concept foresees a standard cluster computer component
complemented by a cluster of accelerator cards,
called booster. DEEP is an experiment that aims to
adapt the hardware architecture to the hierarchy of
different concurrency levels of application codes.
Thanks to the cluster-booster concept, cluster as well as
booster resources can be assigned dynamically to different
parts of a given code, thus optimizing
scalability. This is achieved through an adaptation of
the cluster operating software ParaStation (ParTec) along with
the parallel programming environment OmpSs (BSC). The major
challenge for the concept is to achieve proper and highly
efficient interaction between cluster and booster while
minimizing the communication between both parts. Moreover,
it is the combination of Intel's Many Integrated Core
architecture (MIC, Intel Braunschweig) and the EXTOLL
communication system (Uni Heidelberg) that makes it possible to
boot the booster cards without an additional processor. DEEP
promises unprecedented performance, scalability, and
energy efficiency of the booster system. Energy
efficiency is further improved through hot-water cooling
technology (LRZ, EuroTech). Six European
partners contribute by porting their applications, all of
which exhibit several concurrency levels and are expected to
require exascale performance in the future.
28 June 2012: Bertil Schmidt (Uni Mainz) "Parallel
Algorithms and Tools for Bioinformatics on GPUs" (slides)
High-throughput techniques for DNA
sequencing have led to a rapid growth in the amount of
digital biological data. The current state-of-the-art
technology produces 600 billion nucleotides per machine run.
Furthermore, the speed and yield of NGS (Next-generation
sequencing) instruments continue to increase at a rate
beyond Moore's Law, with updates in 2012 enabling 1 trillion
nucleotides per run. Correspondingly, sequencing costs (per
sequenced nucleotide) continue to fall rapidly, from several
billion dollars for the first human genome in 2000 to a
forecast US$1000 per genome by the end of 2012. However, to
be effective, the usage of NGS for medical treatment will
require algorithms and tools for sequence analysis that can
scale to billions of short reads. In this talk I will
demonstrate how parallel computing platforms based on
CUDA-enabled GPUs, multi-core CPUs, and heterogeneous
CPU/GPU clusters can be used as efficient computational
platforms to design and implement scalable tools for
sequence analysis. I will present solutions for classical
sequence alignment problems (such as pairwise sequence
alignment, BLAST, multiple sequence analysis, motif finding)
as well as for NGS algorithms (such as short-read error
correction, short-read mapping, short-read clustering).
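As an illustration of the first class of problems mentioned above, the following plain sequential C sketch computes the Smith-Waterman local pairwise alignment score; the scoring parameters are arbitrary assumptions, and the speaker's CUDA and multi-core implementations are of course far more elaborate:

#include <stdio.h>
#include <string.h>

#define MATCH     2   /* assumed scoring parameters */
#define MISMATCH -1
#define GAP      -1

static int max3(int a, int b, int c) {
    int m = a > b ? a : b;
    return m > c ? m : c;
}

/* Smith-Waterman local alignment score of sequences a and b */
static int sw_score(const char *a, const char *b) {
    int la = (int)strlen(a), lb = (int)strlen(b);
    int H[la + 1][lb + 1];            /* DP matrix (VLA, fine for short reads) */
    int best = 0;

    for (int i = 0; i <= la; i++) H[i][0] = 0;
    for (int j = 0; j <= lb; j++) H[0][j] = 0;

    for (int i = 1; i <= la; i++) {
        for (int j = 1; j <= lb; j++) {
            int diag = H[i-1][j-1] + (a[i-1] == b[j-1] ? MATCH : MISMATCH);
            int up   = H[i-1][j] + GAP;
            int left = H[i][j-1] + GAP;
            int h    = max3(diag, up, left);
            H[i][j]  = h > 0 ? h : 0;   /* local alignment never drops below 0 */
            if (H[i][j] > best) best = H[i][j];
        }
    }
    return best;
}

int main(void) {
    printf("local alignment score = %d\n", sw_score("ACACACTA", "AGCACACA"));
    return 0;
}

On GPUs, many such matrices (one per read pair) are typically filled in parallel, or the anti-diagonals of a single large matrix are computed concurrently.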
24 May 2012: Marc Casas Guix (LLNL) "Automatic Phase
Detection and Structure Extraction of Parallel
Applications" (slides)
Tracing is an accepted and well-known approach to understand
and improve the performance of high performance computing
applications. However, generating and analyzing trace-files
obtained from large scale executions can be really problematic
due to the large amount of data generated by such massively
parallel executions. Thus, automatic methodologies should be
applied to reduce the size of the data, ruling out its
non-significant or redundant parts and keeping the fundamental
ones. In this talk, a solution based on signal processing
techniques, Wavelet and Fourier transforms, will be presented.
By analyzing the spectrum of frequencies that appear in
applications’ executions, the approach is able to detect the
internal structure of parallel executions and to rule out
redundant information, reducing by one or two orders of
magnitude the data that should be analyzed. Finally, more
general considerations regarding high performance computing and
the challenges that exascale computing brings will also be
made.
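As a rough sketch of the underlying idea (not the presented tool; the signal and its period are invented for illustration), the following C program computes a naive discrete Fourier transform of an activity-per-time-bin signal sampled from a trace; a dominant peak in the magnitude spectrum reveals the period of the application's iterative structure:

#include <stdio.h>
#include <math.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

#define N 64   /* assumed number of time bins sampled from the trace */

int main(void) {
    double signal[N], re[N], im[N];

    /* synthetic "activity per time bin" with a period of 8 bins */
    for (int n = 0; n < N; n++)
        signal[n] = 1.0 + cos(2.0 * M_PI * n / 8.0);

    /* naive O(N^2) DFT; a real tool would use an FFT or wavelet transform */
    for (int k = 0; k < N; k++) {
        re[k] = im[k] = 0.0;
        for (int n = 0; n < N; n++) {
            double phi = -2.0 * M_PI * k * n / N;
            re[k] += signal[n] * cos(phi);
            im[k] += signal[n] * sin(phi);
        }
    }

    /* the dominant peak at k = N/8 = 8 corresponds to the 8-bin period */
    for (int k = 1; k < N / 2; k++)
        printf("k=%2d  |X(k)| = %8.2f\n", k, sqrt(re[k]*re[k] + im[k]*im[k]));
    return 0;
}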
26 April 2012: Thomas Cowan (Director of the
Institute of Radiation Physics at Helmholtz-Zentrum
Dresden-Rossendorf (HZDR)) "Beschleunigung der
Beschleunigung - Lasergetriebene Strahlungsquellen und ihre
Anwendungen"
Novel radiation sources enable not only new insights into
ultrafast processes in matter but also their control. Such
control requires precise knowledge of the generation of
electromagnetic as well as particle radiation and of their
interaction with matter at the atomic level. Expanding this
knowledge requires a close connection of experimental
measurements, data analysis, and simulation.
The talk introduces novel radiation sources and discusses their
significance for fundamental research as well as their future
applications, for example in cancer therapy. Examples from
current research are the acceleration of particle beams by
means of lasers and in-vivo dosimetry in cancer treatment with
ion beams. Both research areas benefit from the acceleration of
complex computations by GPUs, both in simulation and in data
analysis.
22 March 2012: Josef Weidendorfer (TU München):
"Architecture Simulation for Programmers"
To study performance bottlenecks of (parallel) programs,
analysis tools usually take advantage of a mix of hardware
performance counters and application instrumentation as event
source. Real hardware properties are measured, showing details
about the symptoms of any performance problem. However, this
real view of the hardware can be tricky: for the tool, as
instrumentation overhead can invalidate the measurement; and
for the user, as event types can be difficult to interpret.
Architecture simulation can overcome these obstacles and
provide more abstract metrics not measurable in legacy
processor hardware.
This talk will focus on using cache simulation for detailed
analysis of memory access behavior of programs, and show the
benefits of this approach, such as better abstract metrics than
just hit/miss ratios for cache exploitation. In this regard,
upcoming extensions to the tool suite Callgrind/KCachegrind are
shown, as well as research on keeping the simulation slowdown
small.
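To give an impression of what such a simulation does at its core (a toy sketch, not Callgrind's actual cache model; line size, cache size, and direct mapping are assumptions), the following C program classifies every simulated memory access as a hit or a miss by maintaining a tag array:

#include <stdio.h>
#include <stdint.h>

#define LINE_SIZE 64     /* bytes per cache line (assumed)           */
#define NUM_LINES 1024   /* direct-mapped, 64 KiB in total (assumed) */

static uint64_t tags[NUM_LINES];
static int      valid[NUM_LINES];
static unsigned long hits, misses;

/* simulate one memory access to the given address */
static void sim_access(uint64_t addr) {
    uint64_t line  = addr / LINE_SIZE;
    unsigned index = (unsigned)(line % NUM_LINES);
    uint64_t tag   = line / NUM_LINES;

    if (valid[index] && tags[index] == tag) {
        hits++;
    } else {             /* miss: fetch and install the line */
        misses++;
        valid[index] = 1;
        tags[index]  = tag;
    }
}

int main(void) {
    /* walk a 32 KiB array twice; the second pass hits because it fits in cache */
    for (int pass = 0; pass < 2; pass++)
        for (uint64_t addr = 0; addr < 32 * 1024; addr += 8)
            sim_access(addr);

    printf("hits = %lu, misses = %lu\n", hits, misses);
    return 0;
}

An instrumentation framework such as Valgrind feeds the real addresses of a program's loads and stores into a (far more elaborate) model of this kind, which is how metrics beyond simple hit/miss ratios can be derived.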
23 February 2012: Martin Hofmann-Apitius (Fraunhofer
SCAI) "Large-Scale Information Extraction for Biomedical
Modelling and Simulation" (slides)
Unstructured text is a huge resource of scientific
information. This is particularly true for sciences with a
strong empirical background, such as biology, pharmaceutical
chemistry, or medicine. In my talk, I will give an overview of
our work, which aims at making scientific information available
that is "hidden" in scientific publications (including patents)
and medical narratives (electronic patient records). The
presentation will cover essentials of information extraction
technologies developed in our lab, their implementation in
workflows for large-scale production of relevant information
and the application of our information extraction technologies
in the area of modelling neurodegenerative diseases.
26 January 2012: Michael Hohmuth (AMD, OSRC): "OS and Architecture Research
at the AMD Operating System Research Center: ASF, the
Advanced Synchronization Facility"
In this talk, I will present the Advanced Synchronization
Facility (ASF), an experimental AMD64 architecture extension
aimed at the trend towards parallel computing. ASF is
designed to make parallel programming easier by supporting two
styles of writing parallel programs: lock-free programming and
transactional programming.
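ASF itself is an instruction set extension, so no portable example can show it directly; as a generic illustration of the first style only (standard C11 atomics, not ASF), a lock-free update is typically written as a compare-and-swap retry loop, which is the kind of pattern that ASF and transactional programming aim to make easier, in particular when more than one memory location has to be updated atomically:

#include <stdatomic.h>
#include <stdio.h>

/* shared counter, updated without any lock */
static _Atomic int counter = 0;

/* lock-free increment: read the value, compute the update, and retry the
 * compare-and-swap until no other thread has modified it in between */
static void lockfree_increment(void) {
    int old = atomic_load(&counter);
    while (!atomic_compare_exchange_weak(&counter, &old, old + 1)) {
        /* on failure, 'old' has been refreshed with the current value; retry */
    }
}

int main(void) {
    for (int i = 0; i < 1000; i++)
        lockfree_increment();
    printf("counter = %d\n", atomic_load(&counter));
    return 0;
}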