Portable, Extensible Toolkit for Scientific Computation

The Portable, Extensible Toolkit for Scientific Computation (PETSc, pronounced PET-see; the S is silent) is a suite of data structures and routines developed by Argonne National Laboratory for the scalable (parallel) solution of scientific applications modeled by partial differential equations. It employs the Message Passing Interface (MPI) standard for all message-passing communication.

PETSc is among the most widely used parallel numerical software libraries for partial differential equations and sparse matrix computations. It received an R&D 100 Award in 2009, and the PETSc Core Development Group won the SIAM/ACM Prize in Computational Science and Engineering for 2015.

PETSc is intended for use in large-scale application projects, and many ongoing computational science projects are built around the PETSc libraries. Its careful design allows advanced users detailed control over the solution process. PETSc includes a large suite of parallel linear and nonlinear equation solvers that are easily used in application codes written in C, C++, Fortran, and Python. PETSc provides many of the mechanisms needed within parallel application code, such as parallel matrix and vector assembly routines that allow the overlap of communication and computation. In addition, PETSc includes support for parallel distributed arrays useful for finite difference methods.
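As a minimal sketch of the assembly pattern described above, the following program creates an MPI-distributed vector and fills in its locally owned entries; the split into `VecAssemblyBegin`/`VecAssemblyEnd` is what permits overlapping communication with computation. It assumes a recent PETSc installation (the `PetscCall` error-checking macro) built against MPI, and is compiled with PETSc's own makefile flags; it is an illustration, not a complete application.

```c
/* Minimal PETSc vector-assembly sketch.
   Assumes PETSc (with the PetscCall macro) and MPI are installed. */
#include <petscvec.h>

int main(int argc, char **argv)
{
  Vec      x;
  PetscInt i, n = 8, rstart, rend;

  PetscCall(PetscInitialize(&argc, &argv, NULL, NULL));

  /* Create a vector distributed across all MPI ranks */
  PetscCall(VecCreate(PETSC_COMM_WORLD, &x));
  PetscCall(VecSetSizes(x, PETSC_DECIDE, n));
  PetscCall(VecSetFromOptions(x));

  /* Each rank sets its locally owned entries; PETSc routes any
     off-process values to their owners during assembly */
  PetscCall(VecGetOwnershipRange(x, &rstart, &rend));
  for (i = rstart; i < rend; i++)
    PetscCall(VecSetValue(x, i, (PetscScalar)i, INSERT_VALUES));

  /* Other work may be done between Begin and End, overlapping
     the assembly communication with computation */
  PetscCall(VecAssemblyBegin(x));
  PetscCall(VecAssemblyEnd(x));

  PetscCall(VecView(x, PETSC_VIEWER_STDOUT_WORLD));
  PetscCall(VecDestroy(&x));
  PetscCall(PetscFinalize());
  return 0;
}
```

The same Create/Set/AssemblyBegin/AssemblyEnd pattern applies to matrices (`Mat`), which is what makes parallel finite-difference and finite-element assembly codes look nearly identical to their serial counterparts.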

Components


PETSc consists of a set of components, each built around a major class and its supporting infrastructure. Users typically interact with objects of the highest-level classes relevant to their application and with essential lower-level objects such as vectors, and they may customize or extend any other component. All major components of PETSc have an extensible plugin architecture.

Features and modules
PETSc provides many features for parallel computation, organized into several modules:


 * Index sets, including permutations, for indexing into vectors, renumbering, etc.
 * Parallel vectors and matrices (generally sparse)
 * Scatters (which handle communication of ghost-point values) and gathers (the reverse of scatters)
 * Data management for parallel structured and unstructured meshes
 * Several sparse storage formats
 * Scalable parallel preconditioners, including multigrid and sparse direct solvers
 * Krylov subspace methods
 * Parallel nonlinear solvers, such as Newton's method and nonlinear GMRES
 * Parallel time-stepping (ODE and DAE) solvers
 * Parallel optimization solvers, such as BFGS
 * Automatic profiling of floating point and memory usage
 * Consistent interface
 * Intensive error checking
 * Portable to Unix, macOS, and Windows