MLPerf HPC Working Group
Mission
Create MLPerf HPC benchmarks based on science applications to run on large-scale supercomputers.
Purpose
The MLPerf HPC benchmark suite includes scientific applications that use ML, especially Deep Learning (DL), at HPC scale. These benchmarks will help project future system performance and assist in the design and specification of future HPC systems. The benchmark suite aims to evaluate behavior unique to HPC applications and improve our understanding across several dimensions. First, we explore model-system interactions. Second, we characterize and optimize deep learning workloads and identify potential bottlenecks. Finally, we quantify the scalability of different deep learning methods, frameworks, and metrics on hardware-diverse HPC systems.
Deliverables
- MLPerf HPC benchmarks with rules and definitions
- Reference implementations of the MLPerf HPC benchmarks
- Release roadmap for future versions
- Annual publication of benchmark results during the Supercomputing conference
Meeting Schedule
Weekly on Mondays, alternating between 8:05-9:00 AM Pacific and 3:05-4:00 PM Pacific.
Related Blog Posts
- New MLPerf Training and HPC Benchmark Results Showcase 49X Performance Gains in 5 Years: New benchmarks, new submitters, performance gains, and new hardware add scale to latest MLCommons MLPerf results
- Latest MLPerf Results Display Gains for All: MLCommons’ benchmark suites demonstrate performance gains up to 5X for systems from microwatts to megawatts, advancing the frontiers of AI
- MLPerf HPC v1.0 results: Introducing a new machine learning metric for supercomputers and a graph neural network benchmark for molecular modeling
How to Join and Access HPC Working Group Resources
- To sign up for the group mailing list, receive the meeting invite, and access shared documents and meeting minutes:
  - Fill out our subscription form and indicate that you’d like to join the MLPerf HPC Working Group.
  - Associate a Google account with your organizational email address.
  - Once your request to join the HPC Working Group is approved, you’ll be able to access the HPC folder in the Public Google Drive.
- To engage in working group discussions, join the working group’s channels on the MLCommons Discord server.
- To access the public GitHub repository:
  - If you want to contribute code, please submit your GitHub ID via our subscription form.
  - Visit the GitHub repository.
HPC Working Group Chairs
To contact all HPC working group chairs email [email protected].
Murali Emani
Murali Emani is a Computer Scientist in the Data Science group at the Argonne Leadership Computing Facility (ALCF) at Argonne National Laboratory. His research interests include scalable machine learning, high performance computing, and emerging HPC and AI architectures. Previously, he was a Postdoctoral Research Staff Member at Lawrence Livermore National Laboratory, US. He obtained his PhD from the University of Edinburgh, UK. He was recently awarded a DoE ASCR grant to develop a framework, ‘HPC-FAIR’, to manage datasets and AI models for analyzing and optimizing scientific applications.
Andreas Prodromou
Dr. Andreas Prodromou is a Senior Deep Learning Architect at NVIDIA, where he specializes in analyzing the requirements of state-of-the-art AI models, frameworks, and hardware accelerators. He holds a Ph.D. in Computer Science from UC San Diego, with a focus on predicting hardware events in real time using deep learning. In addition to his industry experience, Andreas serves as a reviewer for conferences such as ISCA, MICRO, and ASPLOS, and has contributed to MLPerf HPC as his company’s representative for over two years. Beyond his professional pursuits, he is a Second Lieutenant reserve officer in the Greek Cypriot National Guard.