Here is an overview of HPC-related sessions at today’s Nvidia GTC virtual conference. Note that conference participants can access the recordings of these sessions (registration is free).
Accelerating AI and HPC at scale in the cloud – 2:00 p.m. to 2:40 p.m. ET
This session covers the NVIDIA A100-based Microsoft Azure VM instances for machine learning, deep learning, and HPC. It includes demonstrations of deploying a single node for testing and of calling up an entire cluster of nodes for large-scale training or testing, covering AI workloads such as BERT and HPC workloads such as HPL.
Moderators: Eddie Weill, Data Scientist & Solutions Architect, NVIDIA; Jon Shelley, HPC / AI Benchmarking Team, Principal PM Manager, Azure Compute, Microsoft
Accelerate HPC Applications with Arm and NVIDIA GPUs – 5 p.m. to 7 p.m. ET
This session will focus on how HPC CUDA applications built for x86 can be recompiled to run on Arm. Speakers will use Oak Ridge National Laboratory’s 64-bit Arm-based “Wombat” cluster to demonstrate refactoring an x86-based HPC CUDA application to run on Arm, and will show how HPC applications that are limited by CPU memory bandwidth can achieve additional speedups on the Fujitsu A64FX processor with its high memory bandwidth.
Moderators: Ross Miller, software engineer, Oak Ridge National Laboratory; Robbie Searles, solutions architect, NVIDIA; Max Katz, senior solutions architect, NVIDIA
Advancing Exascale: Faster, Smarter and Greener – recorded earlier today
In this session, Jean-Pierre Panziera, CTO of HPC at Atos, will talk about exascale supercomputers and their reliance on accelerators such as GPUs with high floating-point performance and memory bandwidth. He will also look at how these accelerators enable new AI algorithms to be used in complex workflows to improve data assimilation, data analysis, and computation itself, and to optimize resource use in the HPC data center.
Accelerating health care at Bayer with Science @ Scale and Federated Learning – recorded earlier today
In this session, David Ruau, Head of Global Data Assets & Decision Science at Bayer, explains how a large pharmaceutical company with more than 150 years of history is transforming from a traditional into a digital player, drawing on a cloud strategy and federated learning. Ruau explores how Bayer uses GPUs both in the cloud and on premises for scientific discovery.
Realizing the vision of an AI university – recorded earlier today
This panel will discuss how a vision and mission for AI can be developed at a university. Topics include:
• Why AI deserves a university-wide vision
• How to make sure your university is ready to offer AI as a service
• The benefits and ROI of driving AI at a university
• Focus on interdisciplinary research
Panelists include Arnaud Renard, CEO of the ROMEO Regional Compute Center at Reims Champagne-Ardenne University; Sean McGuire, Higher Education and Research, EMEA, NVIDIA; Marco Aldinucci, Professor at the University of Turin, Italy; Hujun Yin, Professor at the University of Manchester; Wolfgang Nagel, Director, Center for Information Services and High Performance Computing, TU Dresden
Benchmarking of GPU clusters with the Jülich Universal Quantum Computer Simulator – recorded earlier today
This session examines the simulation of quantum computers, a versatile method for benchmarking supercomputers with thousands of GPUs. It discusses quantum computer simulators from a linear-algebra perspective, using the Jülich Universal Quantum Computer Simulator (JUQCS) as an example, and shows how the memory-, network-, and computation-intensive operations of JUQCS can be used to benchmark high-performance computers.
Moderator: Dennis Willsch, postdoc, Forschungszentrum Jülich GmbH
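At its core, state-vector simulation of a quantum computer is dense linear algebra: each gate is a small matrix applied to a state vector of length 2^n, which is exactly why it stresses memory, network, and compute at scale. As a rough illustration of the idea (a minimal sketch in Python/NumPy, not the JUQCS code):

```python
import numpy as np

def apply_gate(state, gate, target, n):
    """Apply a single-qubit gate (2x2 matrix) to qubit `target`
    of an n-qubit state vector of length 2**n."""
    # View the state as an n-dimensional tensor with one axis per qubit,
    # contract the gate against the target axis, then restore axis order.
    state = state.reshape([2] * n)
    state = np.tensordot(gate, state, axes=([1], [target]))
    state = np.moveaxis(state, 0, target)
    return state.reshape(-1)

n = 3                                    # 3 qubits -> state vector of length 8
state = np.zeros(2**n); state[0] = 1.0   # start in |000>
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
for q in range(n):                       # Hadamard on every qubit
    state = apply_gate(state, H, q, n)
# state is now the uniform superposition: every amplitude equals 1/sqrt(8)
```

Memory doubles with every added qubit (the vector has 2^n entries), so simulating a few dozen qubits already fills the aggregate memory of a large GPU cluster — the property the session exploits for benchmarking.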
Grid: A Powerful and Portable Code for Quantum Chromodynamics – recorded earlier today
In this session, a portable, data-parallel, high-level interface for structured grid problems on GPU clusters and other architectures is examined. The C++11 library can handle multidimensional arrays distributed across an entire cluster and can target modern CPUs, CUDA, HIP, and SYCL. It provides library support for creating optimized PDE stencil operators on Cartesian grids and, for convenience, F90-like Cshift constructs that run concurrently across an entire GPU cluster.
Moderator: Peter Boyle, Professor at the University of Edinburgh and Brookhaven National Laboratory
Material design towards exascale: Porting community codes for electronic structures to GPUs – recorded earlier today
This session examines the importance of materials for science and technology and their connections to major societal challenges ranging from energy and the environment to information and communication to manufacturing. Electronic-structure methods have become key to materials simulation, allowing scientists to study and design new materials before performing actual experiments. The MaX Center of Excellence – material design at eXascale – focuses on materials modeling at the limits of HPC architectures. Speakers will discuss the performance and portability of the flagship MaX codes, with a particular focus on GPU accelerators, and the various porting strategies used (all codes are released GPU-ready) to improve both performance and maintainability while keeping the community engaged.
Moderators: Andrea Ferretti, Senior Researcher and Chair of the MaX Executive Committee, CNR – Nanoscience Institute; Ivan Carnimeo, Post Doc Researcher, International School for Advanced Studies (SISSA)