Further details


You should be able to skip most of this section if you only intend to use the driver routines, described in the Tutorial Page section and in c_pexsi_interface.h.

In that case, C/C++ programmers should include the interface file:

#include  "c_pexsi_interface.h"

For FORTRAN programmers, there is no interface module such as f_pexsi_interface.F90 yet. However, the FORTRAN routines can be used directly. See the Fortran Page for more information.

The remainder of this section is mainly for C++ developers who need more detailed control of the PEXSI package. For C++ usage beyond the driver routines, include the following file:

#include  "ppexsi.hpp"

For developers,

  • For VI/VIM users, the PEXSI source is best viewed with the following indentation settings:

    set tabstop=2
    set shiftwidth=2
    set expandtab

Data type

Basic data type

To enhance the portability of the code, some basic data types and constants are defined in environment.hpp.

The basic data types int and double are redefined as Int and Real, in order to improve compatibility across different architectures, especially 64-bit machines (not implemented yet). The 64-bit integer int64_t is also redefined as LongInt.

The complex arithmetic is treated using the standard C++ <complex> library. The complex data type std::complex<double> is redefined as Complex in the implementation:

typedef    int                   Int;
typedef    int64_t               LongInt;
typedef    double                Real;
typedef    std::complex<double>  Complex;

NumVec, NumMat, NumTns

The design of PEXSI tries to eliminate the direct usage of pointers as much as possible. This helps reduce memory leaks. Commonly used pointers are wrapped into different classes.

The most commonly used are PEXSI::NumVec, PEXSI::NumMat, and PEXSI::NumTns, which are wrappers for 1D arrays (vectors), 2D arrays (matrices), and 3D arrays (tensors), respectively. The arrays are always stored contiguously in memory as a 1D array. Column-major ordering is assumed for arrays of all dimensions, which makes the arrays directly compatible with BLAS/LAPACK libraries.

These wrapper classes can either actually own an array (by specifying owndata_=true) or merely view an array (by specifying owndata_=false). Elements of arrays can be accessed directly in FORTRAN-style notation, such as A(i) (NumVec), A(i,j) (NumMat), and A(i,j,k) (NumTns).

The underlying pointer can be accessed using the member function Data().

Commonly used wrapper classes


typedef NumVec<bool>       BolNumVec;
typedef NumVec<Int>        IntNumVec;
typedef NumVec<Real>       DblNumVec;
typedef NumVec<Complex>    CpxNumVec;


typedef NumMat<bool>       BolNumMat;
typedef NumMat<Int>        IntNumMat;
typedef NumMat<Real>       DblNumMat;
typedef NumMat<Complex>    CpxNumMat;


typedef NumTns<bool>       BolNumTns;
typedef NumTns<Int>        IntNumTns;
typedef NumTns<Real>       DblNumTns;
typedef NumTns<Complex>    CpxNumTns;

Distributed compressed sparse column (CSC) format

We use the compressed sparse column (CSC) format, also known as the compressed column storage (CCS) format, for storing a sparse matrix. See the CSC link for an explanation of the format.

We adopt the following convention for the distributed CSC format for saving a sparse matrix on distributed memory parallel machines. We assume the number of processors is \(P\) and the matrix has \(N\) rows and columns. The class for a distributed-memory CSC matrix is PEXSI::DistSparseMatrix.

  • DistSparseMatrix uses FORTRAN convention (1-based) indices for colptrLocal and rowindLocal, i.e. the first row and the first column indices are 1 instead of 0.
  • mpirank follows the standard C convention, i.e. the mpirank for the first processor is 0.
  • Each processor holds \(\lfloor N/P \rfloor\) consecutive columns, with the exception that the last processor (mpirank == P-1) holds all the remaining \(N - (P-1) \lfloor N/P \rfloor\) columns. The first column held by the i-th processor is \(i \lfloor N/P \rfloor + 1\) (in 1-based indexing). The number of columns on each local processor is usually denoted by numColLocal.
  • colptrLocal, which is an integer array of type IntNumVec of dimension numColLocal + 1, stores the pointers to the nonzero row indices and nonzero values in rowindLocal and nzvalLocal, respectively.
  • rowindLocal, which is an integer array of type IntNumVec of dimension nnzLocal, stores the nonzero row indices in each column.
  • nzvalLocal, which is an array of flexible type (usually Real or Complex) NumVec of dimension nnzLocal, stores the nonzero values in each column.

C/C++ interface

The main interface routines are given in c_pexsi_interface.h. The routines are callable from C/C++.

Note: C++ users also have the option of directly using the subroutines provided in ppexsi.cpp. The usage can be obtained from interface.cpp.

FORTRAN interface

The FORTRAN interface is based on the ISO_C_BINDING feature, available in FORTRAN 2003 and later. The usage of the FORTRAN interface is very similar to that of the C interface given in the Tutorial Page section.

In FORTRAN, the PPEXSIPlan data type is c_intptr_t (or equivalently INTEGER*8). The naming of the subroutines is similar to the C interface as in c_pexsi_interface.h. All FORTRAN interface routines are in f_interface.f90. For instance, the subroutine PPEXSIPlanInitialize (C/C++) corresponds to the subroutine f_ppexsi_plan_initialize (FORTRAN).

Example: Parallel selected inversion for a real symmetric matrix

integer(c_intptr_t)    :: plan
type(f_ppexsi_options) :: options

! Initialize PEXSI.
! PPEXSIPlan is a handle communicating with the C++ internal data structure

! Set the outputFileIndex to be the pole index.
! The first processor for each pole outputs information

if( mod( mpirank, nprow * npcol ) .eq. 0 ) then
  outputFileIndex = mpirank / (nprow * npcol)
else
  outputFileIndex = -1
endif

plan = f_ppexsi_plan_initialize(&
  ! ... other arguments omitted ...
  info )

! Tuning parameters of PEXSI. The default options are a reasonable
! starting point, and the parameters in options can be changed.
call f_ppexsi_set_default_options(&
  options )

! Load the matrix into the internal data structure
call f_ppexsi_load_real_hs_matrix(&
      ! ... other arguments omitted ...
      info )

! Factorize the matrix symbolically
call f_ppexsi_symbolic_factorize_real_symmetric_matrix(&
      ! ... other arguments omitted ...
      info )

! Main routine for computing selected elements and saving them into AinvnzvalLocal
call f_ppexsi_selinv_real_symmetric_matrix(&
      plan,&
      ! ... other arguments omitted ...
      info )

! Post processing step...

! Release the data saved in the plan
call f_ppexsi_plan_finalize( plan, info )

Examples of the FORTRAN interface can be found under the fortran/ directory.