About Me

I am a first-year PhD student in Applied Mathematics. My research interests lie in computational science, especially problems involving the numerical solution of PDEs. The application described below is one such problem that interests me. I hope to take from this class enough knowledge and experience to implement scientific computations on a variety of parallel platforms with a good balance of productivity and efficiency.

Large-Scale Modeling of Type Ia Supernovae

Overview

Type Ia supernovae, the "cosmic candles" used to study cosmic expansion, dark energy, and other cosmological phenomena, are the result of white dwarf stars dying in massive thermonuclear explosions. Because of their cosmological usefulness, understanding Type Ia supernovae is important, but laboratory experiments on such events are impossible.

Numerical simulation is therefore the only way to conduct experiments that can explain the observational data. Although supernova modeling has been around since the 1960s, models with low dimension or resolution have proven inadequate for answering many of the important questions. As a result, supernova modeling has long been an active area of HPC application and the associated use of parallel computation.

This website provides a general overview and assessment of the use of parallelism in one area of supernova research: the MAESTRO and CASTRO codes and associated simulations developed and used by the Center for Computational Sciences and Engineering (CCSE) at LBNL. These codes model the conditions before and after ignition of a Type Ia supernova, and both are designed for use in massively parallel HPC environments.

Use of Parallelism

MAESTRO and CASTRO are designed to scale to massively parallel HPC environments. Their parallelism designs are nearly identical (both are based on the BoxLib framework), so this section describes only MAESTRO. MAESTRO uses two levels of parallelism corresponding to the node/core divisions on most supercomputers. At the coarse level, the computational domain is divided into regions called FABs, which are allocated to the computational nodes. These nodes communicate with each other via MPI. Within a node, OpenMP is used to spawn a thread on each core, and these threads operate on sections of the data from the FAB. A minimal sketch of this two-level scheme follows.
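
The following is a minimal, hypothetical C++ sketch (not actual MAESTRO or BoxLib code) of this coarse/fine decomposition: boxes are assigned round-robin to MPI ranks, and OpenMP threads share the cells of each local box. The box counts, sizes, and the update itself are invented for illustration.

    // Minimal sketch of the two-level parallelism described above: boxes ("FABs")
    // are distributed across MPI ranks, and OpenMP threads work on the cells of
    // each local box. All sizes and the update are hypothetical.
    #include <mpi.h>
    #include <omp.h>
    #include <vector>
    #include <cstdio>

    int main(int argc, char** argv) {
        MPI_Init(&argc, &argv);
        int rank, nranks;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &nranks);

        const int nboxes = 64;            // total boxes in the domain (hypothetical)
        const int cells_per_box = 32768;  // cells per box (hypothetical)

        // Coarse level: round-robin assignment of boxes to MPI ranks.
        std::vector<std::vector<double>> local_boxes;
        for (int b = rank; b < nboxes; b += nranks)
            local_boxes.emplace_back(cells_per_box, 1.0);

        // Fine level: OpenMP threads share the work within each local box.
        for (auto& box : local_boxes) {
            #pragma omp parallel for
            for (long i = 0; i < (long)box.size(); ++i)
                box[i] *= 0.5;            // stand-in for the real stencil update
        }

        if (rank == 0)
            std::printf("%d ranks x %d threads, %zu boxes on rank 0\n",
                        nranks, omp_get_max_threads(), local_boxes.size());
        MPI_Finalize();
        return 0;
    }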

As the numerical methods require some communication between regions, each node contains metadata that allows it to determine the necessary communication patterns. Fortunately, the regions are designed so that communication never needs to extend beyond the nearest-neighbor regions. The precise communication patterns are computed and cached to increase efficiency; a simplified sketch of this exchange appears below. The software currently scales to approximately 100,000 cores.
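
The sketch below illustrates, in a greatly simplified 1-D setting (hypothetical, not BoxLib's actual 3-D implementation), the kind of nearest-neighbor ghost-cell exchange just described: each rank records its neighbors once and then swaps a single layer of ghost cells with them, so no message travels beyond an immediate neighbor.

    // Simplified nearest-neighbor ("ghost cell") exchange in a 1-D decomposition.
    // Each rank caches which neighbors it talks to, then swaps one ghost cell
    // with each of them.
    #include <mpi.h>
    #include <vector>

    int main(int argc, char** argv) {
        MPI_Init(&argc, &argv);
        int rank, nranks;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &nranks);

        const int n = 128;                   // interior cells per rank (hypothetical)
        std::vector<double> u(n + 2, rank);  // u[0] and u[n+1] are ghost cells

        // "Cached" communication metadata: left/right neighbor, or none at the ends.
        int left  = (rank > 0)          ? rank - 1 : MPI_PROC_NULL;
        int right = (rank < nranks - 1) ? rank + 1 : MPI_PROC_NULL;

        // Exchange one ghost cell with each neighbor.
        MPI_Sendrecv(&u[1],     1, MPI_DOUBLE, left,  0,
                     &u[n + 1], 1, MPI_DOUBLE, right, 0,
                     MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        MPI_Sendrecv(&u[n],     1, MPI_DOUBLE, right, 1,
                     &u[0],     1, MPI_DOUBLE, left,  1,
                     MPI_COMM_WORLD, MPI_STATUS_IGNORE);

        MPI_Finalize();
        return 0;
    }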

Implementation

A recent, large-scale use of these simulation codes is described in [2]. This simulation took place on Jaguar, a Cray XT5 at the Oak Ridge Leadership Computing Facility (#3 on the TOP500 list at the time). Consuming roughly 6 million CPU hours, the simulation ran with 1,728 MPI processes on a total of 10,368 cores, i.e., six OpenMP threads per MPI process.

Assessment

Scientific

As a computational experiment, the results of the simulation are significant in both their scale and their accuracy. The simulations in [2] represent advances in computing power and methods over previous simulations by the same group, and these advances permit a finer resolution. Significantly, the fine-resolution results largely agree with the earlier coarse simulations. This suggests that the simulations may have reached a sufficient resolution; that is, resolution may no longer be a limiting factor in faithfully capturing some of the key processes being studied (ignition depth and the general turbulence structure).

Nevertheless, the work is far from complete. Even though Type Ia supernovae are thought to be very similar to one another, there are believed to be several different mechanisms by which they can come about, and these simulations consider only a non-rotating white dwarf. Further development of MAESTRO is required to address the equally important problem of rotating white dwarfs. In addition, it has been suggested that in certain galaxies a large fraction of Type Ia supernovae may be the result of collisions between two white dwarfs rather than of gradual accretion. This too is a possible application of these software techniques.

Computational

As applications of parallel computing, MAESTRO and CASTRO are very successful. Both exhibit near-perfect weak scaling from around one thousand to around one hundred thousand cores; for detailed figures the reader may consult [1]. (A short example of how weak-scaling efficiency is computed follows.) This scaling means that the simulations have substantial "room to grow" in their current computing environment before they stop performing well. Data on their achieved versus peak performance are not currently available.
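
To make the weak-scaling claim concrete, the snippet below computes weak-scaling efficiency: the wall time at the smallest core count divided by the wall time at a larger count, with the problem size growing in proportion to the number of cores, so perfect scaling keeps the time constant. The timings are invented purely to show the calculation; they are not measurements of MAESTRO or CASTRO.

    // Weak-scaling efficiency: with work per core held fixed, perfect scaling
    // keeps the wall time constant, giving an efficiency of 1.0.
    #include <cstdio>

    int main() {
        // Hypothetical wall times (seconds per step) at increasing core counts.
        struct Point { int cores; double seconds; };
        const Point runs[] = { {1000, 100.0}, {10000, 104.0}, {100000, 112.0} };

        const double t_ref = runs[0].seconds;
        for (const Point& p : runs) {
            double efficiency = t_ref / p.seconds;   // 1.0 == perfect weak scaling
            std::printf("%6d cores: %.1f s/step, efficiency %.2f\n",
                        p.cores, p.seconds, efficiency);
        }
        return 0;
    }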

References

  • [1] A. Almgren, J. Bell, D. Kasen, M. Lijewski, A. Nonaka, P. Nugent, C. Rendleman, R. Thomas, M. Zingale, "MAESTRO, CASTRO and SEDONA -- Petascale Codes for Astrophysical Applications," SciDAC 2010, Journal of Physics: Conference Series, Chattanooga, Tennessee, July 2010. [arXiv]
  • [2] A. Nonaka, A. J. Aspden, M. Zingale, A. S. Almgren, J. B. Bell, and S. E. Woosley, "High-resolution simulations of convection preceding ignition in Type Ia supernovae using adaptive mesh refinement," Nov. 2011. [arXiv]