
How AMReX is Influencing the Exascale Landscape

A Q&A with Andrew Myers, core development team member

October 12, 2021

By Kathy Kincade

Contact: cscomms@lbl.gov

Andrew Myers

While the hardware components of exascale are key, so too are the software packages and frameworks that will support the demanding scientific applications that run on these next-generation computing systems, and beyond.

Since arriving on the exascale scene a few years ago, AMReX has blossomed into one of the key software ecosystems for many Exascale Computing Project (ECP) efforts, from WarpX and MFiX-Exa to ExaWind, ExaStar, and ExaSky, among others. AMReX — an open-source framework for performing block-structured adaptive mesh refinement calculations — is the result of a collaboration between Lawrence Berkeley National Laboratory (Berkeley Lab), the National Renewable Energy Laboratory, and Argonne National Laboratory, with development centered at Berkeley Lab. All three labs are part of the ECP's Block-Structured AMR Co-Design Center.

In this Q&A, Andrew Myers — a computer systems engineer in Berkeley Lab’s Center for Computational Sciences and Engineering and a member of the AMReX core development team — looks at how this unique HPC software framework has influenced, and continues to influence, a broad spectrum of scientific applications both within and beyond the ECP program.

How does the AMReX code help researchers at the exascale level, and what makes it unique among adaptive mesh refinement codes?

AMReX is a framework for performing block-structured adaptive mesh refinement calculations. It provides a set of multi-level, distributed data containers for mesh and particle data and handles things like parallel communication, GPU offloading, and inter-level operations so that applications don't have to. It also has support for complex geometry through embedded boundaries and a variety of linear solvers.
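To make that concrete, below is a minimal sketch of what using those distributed containers can look like through AMReX's public C++ interface; the domain size, grid size, and variable names are illustrative choices rather than anything from a specific application.

    #include <AMReX.H>
    #include <AMReX_MultiFab.H>
    #include <AMReX_GpuLaunch.H>

    int main (int argc, char* argv[])
    {
        amrex::Initialize(argc, argv);
        {
            // Describe a 64^3 problem domain and chop it into 32^3 grids.
            amrex::Box domain(amrex::IntVect(0), amrex::IntVect(63));
            amrex::BoxArray ba(domain);
            ba.maxSize(32);
            amrex::DistributionMapping dm(ba);   // assign grids to MPI ranks

            // A distributed mesh-data container: one component, one ghost cell.
            amrex::MultiFab phi(ba, dm, 1, 1);
            phi.setVal(0.0);

            // Loop over the grids owned by this rank; ParallelFor runs the
            // body on the GPU when AMReX is built with GPU support.
            for (amrex::MFIter mfi(phi); mfi.isValid(); ++mfi) {
                const amrex::Box& bx = mfi.validbox();
                auto const& a = phi.array(mfi);
                amrex::ParallelFor(bx, [=] AMREX_GPU_DEVICE (int i, int j, int k) {
                    a(i,j,k) = 1.0;
                });
            }

            // Fill ghost cells from neighboring grids (parallel communication).
            phi.FillBoundary();
        }
        amrex::Finalize();
    }

The same source builds for CPU or GPU backends; AMReX handles the data placement and parallel communication underneath.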

I think a strength of AMReX is our readiness to adapt and to add functionality to support the needs of applications. We are always interested in growing our user base by supporting cool new application codes, and if there is functionality that doesn't exist right now that would be generally useful, we are interested in helping you build it.

What applications is it most widely used for at present, and why?

AMReX (and its predecessor, BoxLib) has a long history with reacting flows in both the low-Mach-number and highly compressible limits, and it supports a lot of research along those lines in combustion applications and in astrophysics. We also have a lot of experience with particle-mesh techniques in various contexts, going back to the Nyx code, which uses particles to model parcels of dark matter in an expanding background. Aside from those staples, there is a growing body of other application areas both inside and outside of ECP, ranging from fluctuating hydrodynamics and multi-phase flow problems to electromagnetics for microelectronic circuit design, wind farm modeling, and more.

What ECP projects currently utilize AMReX? Are some of these projects also part of NERSC’s Exascale Science Applications Program (NESAP)?

There are currently codes in six ECP application projects that "fully" use AMReX: Nyx in the ExaSky project, Castro in the ExaStar project, WarpX, MFiX-Exa, AMR-Wind in the ExaWind project, and PeleLM and PeleC in the combustion project. There is also a seventh project, with an additive manufacturing code called Truchas-AM, that we partially support; it uses only the linear solvers in AMReX. Of these, WarpX is also part of NESAP, and it has benefited greatly from the collaboration, particularly with regard to a load-balancing project led by former NERSC postdoc Michael Rowan.

With the advent of Perlmutter and the first exascale systems, how is AMReX adapting to these new GPU-dominant platforms?

At the start of the project, we knew GPUs were important, but we also figured that one of the eventual exascale machines would be a many-core CPU architecture, kind of like Cori KNL or the ARM-based Fugaku machine in Japan. Events unfolded differently, and we ended up needing to accelerate our GPU efforts to prepare for machines like Summit, Perlmutter, and Frontier. Much of the core AMReX framework has been redesigned to work well on GPU-based machines.

A big part of this process was the decision to move away from Fortran toward a pure C++ codebase. We resisted this at first - most of us were big Fortran fans, and in many ways it is an ideal language for technical computing. But the tools for adding GPU support to a complex framework like AMReX were, in our opinion, better in C++, and with modern compilers that support the __restrict__ keyword, the performance edge Fortran enjoyed for CPU execution was eliminated, so we made the transition.
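For readers who haven't run into it, here is a small, purely illustrative example of the aliasing hint being referred to; the function is made up, and __restrict__ is a compiler extension (supported by GCC, Clang, and others) rather than standard C++.

    // Declaring x and y __restrict__ promises the compiler they do not alias,
    // recovering the vectorization Fortran gets from its stricter aliasing rules.
    void axpy (int n, double a, const double* __restrict__ x, double* __restrict__ y)
    {
        for (int i = 0; i < n; ++i) {
            y[i] += a * x[i];
        }
    }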

Compared to the work needed to prepare for Summit, gearing up for Perlmutter and Frontier has been relatively pain-free. Part of that has certainly been due to the expertise and computing resources made available to us through NERSC and the NESAP program.

How has AMReX influenced the development of WarpX?

AMReX and WarpX have really grown up alongside each other. We have a tightly coupled development model (I personally split my time 50-50 between the two codes), where the needs of WarpX drive AMReX development, and things we develop for WarpX that we think will be useful for other applications end up getting migrated to AMReX. For example, there is a function parser, developed for WarpX by Weiqun Zhang, another AMReX developer, that performs run-time evaluation of mathematical expressions written in plain text by WarpX users, on both CPUs and GPUs, without any need to recompile either WarpX or AMReX. This is really hard to do well, especially with GPUs (my advice on this, by the way, was "don't do that" - I guess I was wrong!). The parser was recently migrated from WarpX to AMReX so that other application codes can take advantage of it.
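As a rough sketch of what that looks like from an application's point of view, the snippet below exercises the amrex::Parser class that this work became; the expression, constants, and variable names are invented here for illustration.

    #include <AMReX.H>
    #include <AMReX_Print.H>
    #include <AMReX_Parser.H>

    int main (int argc, char* argv[])
    {
        amrex::Initialize(argc, argv);
        {
            // The expression string stands in for text a user might supply
            // in an input file at run time; nothing here is recompiled.
            amrex::Parser parser("a*exp(-(x*x + y*y)/w)");
            parser.setConstant("a", 1.0);
            parser.setConstant("w", 0.5);
            parser.registerVariables({"x", "y"});

            // Compile the expression to an executor that can be called on the
            // host or captured by value inside GPU kernels.
            auto f = parser.compile<2>();

            amrex::Print() << "f(0.1, 0.2) = " << f(0.1, 0.2) << "\n";
        }
        amrex::Finalize();
    }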

What are some tangible benefits that have come out of the push to exascale?

I think an underrated success story of the ECP has been the way the technical advances made to the DOE's scientific software stack have had spin-off benefits for smaller-scale scientific computing. The exascale-driven push toward GPU computing has benefited AMReX users who have no intention of ever running on a large fraction of Perlmutter or Summit. A number of academic groups have contacted us wanting to refactor their existing simulation codes to use AMReX, both for its ability to support adaptive mesh refinement and for its performance portability across architectures. More locally, this past summer two undergraduate interns, Amanda Harkleroad and Emily Bogle, working with recent UC Berkeley graduate Victor Zendejas Lopez, helped build an AMReX-based code to model the growth of cancer cells. Thanks to improvements made to AMReX as part of the ECP, they can write code once and it will run on both commodity CPU and GPU hardware. I think that's pretty cool.


About Computing Sciences at Berkeley Lab

High performance computing plays a critical role in scientific discovery. Researchers increasingly rely on advances in computer science, mathematics, computational science, data science, and large-scale computing and networking to increase our understanding of ourselves, our planet, and our universe. Berkeley Lab’s Computing Sciences Area researches, develops, and deploys new foundations, tools, and technologies to meet these needs and to advance research across a broad range of scientific disciplines.