The LAPACK library for linear algebra is integrated in many interactive environments (for example, Mathematica or Matlab), and this is the simplest way to access it. In this chapter, I will demonstrate the use of some LAPACK routines from SciPy in Python. Below, it is assumed that NumPy and SciPy are installed in your Python installation.

The problem to solve is a system of linear equations

`A x = b`

where `A` is a square matrix, `b` is the right-hand side vector, and `x` is the vector to be found. In the general case, one may need to solve such a system with many right-hand sides, which can be expressed as

`A X = B`

where the number of columns in the matrix `B` is equal to the number of right-hand sides to solve. The matrix `X` then contains a solution for each right-hand side in `B`.

Let us start with 01solve.py. In this script, the dimension of the matrix `A` (`dim x dim`) and the number of right-hand sides in `B` (`dim x rhs`) are specified as command line arguments. If `rhs` is not given on the command line, it is assumed to be one. In the script, the matrices `A` and `B` are initialized with random numbers, and then the system of equations is solved by means of the function `solve`. The script reports the time for the linear solve and the relative residual `norm(A X - B)/norm(A)`.
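Since the script itself is not listed here, the following is a sketch of what 01solve.py might look like (the function name `solve_system` and the exact structure are my own; only the behavior described above is taken from the text):

```python
import sys
import time

import numpy as np
from scipy.linalg import solve


def solve_system(dim, rhs=1):
    """Solve A X = B for a random dim x dim matrix A and a dim x rhs matrix B."""
    A = np.random.rand(dim, dim)
    B = np.random.rand(dim, rhs)
    t0 = time.time()
    X = solve(A, B)  # dense LAPACK solver
    elapsed = time.time() - t0
    # relative residual as reported by the script
    residual = np.linalg.norm(A @ X - B) / np.linalg.norm(A)
    return X, elapsed, residual


if __name__ == "__main__" and len(sys.argv) > 1:
    dim = int(sys.argv[1])
    rhs = int(sys.argv[2]) if len(sys.argv) > 2 else 1  # rhs defaults to one
    X, elapsed, residual = solve_system(dim, rhs)
    print("time for solution is", elapsed, "s")
    print("residual", residual)
```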

On my notebook, the script

`$ python 01solve.py 1000 1`

time for solution is 0.136413484783 s

residual 1.11979080018e-014

reports about 0.14 s to solve a linear system with a 1000×1000 matrix (it may be necessary to run the command several times to reach a stable time). For a 2000×2000 matrix

`$ python 01solve.py 2000 1`

time for solution is 0.876963398496 s

residual 1.65035240566e-014

the time increases about 6.5 times, which is close to the theoretical value of 8. The theory estimates the number of operations in this case as `O(N^3)`, and this means that when the dimension of the problem doubles, the number of operations increases 2^3 = 8 times.

Now let us increase the number of right-hand sides.

`$ python 01solve.py 2000 10`

time for solution is 0.895961511278 s

residual 4.86856128506e-014

Interestingly enough, the time to solve ten linear systems with the same matrix is about the same as the time to solve one. This is a feature of so-called direct solvers based on Gaussian elimination. The reason is simple: internally, the matrix `A` is first decomposed into two triangular matrices `L` and `U` (the LU decomposition), and then these two matrices are employed to find `x` during the so-called back substitution. Most of the time is spent on the LU decomposition, while the back substitution is pretty fast. This means that if it is necessary to solve different systems with the same matrix and different right-hand sides, it is best to do it all at once.

Alternatively, one can use two functions separately: one for the LU decomposition and another for the back substitution. This is demonstrated in 02lu.py. Now we can measure the times separately:

`$ python 02lu.py 2000 1`

time for LU is 0.807640075188 s

time for back substitution is 0.0104189885428 s

`$ python 02lu.py 2000 10`

time for LU is 0.808609364841 s

time for back substitution is 0.0233895599714 s

The results show that SciPy has some overhead of its own when calling LAPACK functions: the time with the two separate functions is a bit smaller than with one. Also, the time for ten back substitutions is not ten times bigger, as one might expect. Yet, I hope it still shows you what happens behind the scenes.
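The two-step approach of 02lu.py can be sketched as follows, using SciPy's `lu_factor` and `lu_solve` (the function name `lu_timed` is mine; the timing structure follows the description above):

```python
import time

import numpy as np
from scipy.linalg import lu_factor, lu_solve


def lu_timed(dim, rhs=1):
    """Time the LU decomposition and the back substitution separately."""
    A = np.random.rand(dim, dim)
    B = np.random.rand(dim, rhs)

    t0 = time.time()
    lu, piv = lu_factor(A)       # A = P L U: the expensive O(N^3) step
    t_lu = time.time() - t0

    t0 = time.time()
    X = lu_solve((lu, piv), B)   # back substitution: cheap, O(N^2) per right-hand side
    t_bs = time.time() - t0

    residual = np.linalg.norm(A @ X - B) / np.linalg.norm(A)
    return X, t_lu, t_bs, residual
```

Calling `lu_timed(2000, 10)` would reproduce the experiment above: the factorization dominates, and the back-substitution time grows only slowly with the number of right-hand sides.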

Is it possible to reduce the time that we see with random matrices? Or could it be bigger? In general, the time for the LU decomposition does depend slightly on the matrix. First, there is pivoting, whose cost depends on the matrix; second, when there are many zeros in the matrix, it takes less time to perform operations with them. Yet, it is rather safe to expect that the time to solve a linear system of equations with a dense matrix is a function of the matrix dimension only.

What is more important is the symmetry of the matrix and such a special property as positive definiteness. In these cases, there are other decompositions that need fewer operations than the LU decomposition. For positive definite matrices (they must be symmetric, but not all symmetric matrices are positive definite), there is the Cholesky decomposition, and it is shown in the script 03cholesky.py.

In the script, the function `cho_factor` takes the lower triangular part of `A`. Also, after obtaining the random matrix `A`, its diagonal members are multiplied by some factor to increase the chances that the matrix will be positive definite. If you receive an error that the matrix is not positive definite, please try once more or increase that factor in the script. The Cholesky decomposition

`$ python 03cholesky.py 2000 1`

time for Cholesky is 0.492923034014 s

time for back substitution is 0.0836654586466 s

residual 3.73811410548e-006

is roughly two times faster than the LU decomposition. The time for back substitution is, however, slower. I should say that I do not know whether this is some SciPy overhead or LAPACK related; it would be good to check. In any case, this means that if your matrix is positive definite, it makes sense to use the Cholesky decomposition rather than the LU decomposition. In the case of a symmetric indefinite matrix, there is the `L D L^T` decomposition, which should also be faster than the LU decomposition, but it is not available through the SciPy interface.
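A sketch of what 03cholesky.py might look like with `cho_factor` and `cho_solve`. Note one deviation from the script described above: instead of only multiplying the diagonal by some factor and hoping for positive definiteness, the sketch symmetrizes `A` and makes it diagonally dominant, which guarantees a positive definite matrix:

```python
import time

import numpy as np
from scipy.linalg import cho_factor, cho_solve


def cholesky_timed(dim, rhs=1):
    """Time the Cholesky decomposition and the back substitution separately."""
    A = np.random.rand(dim, dim)
    A = A + A.T                            # symmetrize A
    A[np.diag_indices(dim)] += 2.0 * dim   # diagonal dominance => positive definite
    B = np.random.rand(dim, rhs)

    t0 = time.time()
    c_and_lower = cho_factor(A)   # raises LinAlgError if A is not positive definite
    t_chol = time.time() - t0

    t0 = time.time()
    X = cho_solve(c_and_lower, B)
    t_bs = time.time() - t0

    residual = np.linalg.norm(A @ X - B) / np.linalg.norm(A)
    return X, t_chol, t_bs, residual
```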

Finally, the script 04lu_read.py gives an example of how one could read the matrix and the right-hand side in the Matrix Market format (for example, from the University of Florida Sparse Matrix Collection) and write the answer in the Matrix Market format as well.
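Reading and writing Matrix Market files is handled by `mmread` and `mmwrite` from `scipy.io`. A minimal sketch of what 04lu_read.py might do (the function name and file-name arguments are hypothetical):

```python
import numpy as np
from scipy.io import mmread, mmwrite
from scipy.linalg import solve


def solve_from_mm(matrix_file, rhs_file, out_file):
    """Read A and b in Matrix Market format, solve A x = b, write x back."""
    A = mmread(matrix_file)
    if hasattr(A, "toarray"):  # mmread returns a sparse matrix for coordinate files
        A = A.toarray()        # the dense LAPACK solver needs a dense array
    b = np.asarray(mmread(rhs_file))
    x = solve(A, b)
    mmwrite(out_file, x)       # write the answer in Matrix Market format
    return x
```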


Just a quick question: the same code for the Cholesky decomposition takes about 1.5 seconds on my machine. Is there something wrong with the LAPACK bindings, or does it simply reflect the capabilities of my Core i5 750?

Thanks

Hard to say for sure. Basically, it is necessary to check that an optimized version of BLAS was used when your version of SciPy/NumPy was compiled.

Sorry for the hiccup. I compiled everything from scratch with ATLAS support and correct linking. Everything is nice and fast. Thank you.

Thanks for doing this, i.e., sharing your expertise in numerical linear algebra. This type of fundamental numerical matrix computation knowledge has become absolutely critical in light of the accelerating demand for analytics of web data, but it is very difficult to find competent sources for it. Thanks again!

You mention, "its diagonal members are multiplied with some factor to increase chances that the matrix will be positive definite". A better way to ensure you have positive definite matrices is to multiply a square matrix by its (conjugate) transpose: after you generate a random square `A`, replace it with `A * A.T` (or `A * A.H` if complex). `A * A.T` is positive definite whenever `A` is nonsingular, which a random matrix almost surely is.

You are right. This way would be more robust.