Mastering SciPy

Matrix functions

A matrix function is a function that maps a square matrix to another square matrix via a power series. This should not be confused with vectorization: the application of a given function of one variable to each element of a matrix. For example, computing the square of a square matrix, A.dot(A) (as in In [8] below), is not the same as computing the matrix whose elements are the squares of the elements of A (as in In [5] through In [7]).

Note

To make the proper distinction in notation, we write A^2 to denote the actual square of a square matrix, and A^n to represent its subsequent powers (for any positive integer n).

Constructor                                            Description
scipy.linalg.funm(A, func, disp)                       Extension of the scalar-valued function func to a matrix
scipy.linalg.fractional_matrix_power(A, t)             Fractional matrix power
scipy.linalg.expm(A) or scipy.sparse.linalg.expm(A)    Matrix exponential
scipy.sparse.linalg.expm_multiply(A, B)                Action of the matrix exponential of A on B
scipy.linalg.expm_frechet(A, E)                        Fréchet derivative of the matrix exponential in the direction E
scipy.linalg.cosm(A)                                   Matrix cosine
scipy.linalg.sinm(A)                                   Matrix sine
scipy.linalg.tanm(A)                                   Matrix tangent
scipy.linalg.coshm(A)                                  Hyperbolic matrix cosine
scipy.linalg.sinhm(A)                                  Hyperbolic matrix sine
scipy.linalg.tanhm(A)                                  Hyperbolic matrix tangent
scipy.linalg.signm(A)                                  Matrix sign function
scipy.linalg.sqrtm(A, disp, blocksize)                 Matrix square root
scipy.linalg.logm(A, disp)                             Matrix logarithm
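
Before working through the session below, a quick sanity check on a few of these constructors: because a matrix function is defined through the power series of its scalar counterpart, familiar scalar identities carry over to matrices. The following is a minimal sketch (not part of the book's numbered session) illustrating this with sqrtm, cosm, sinm, expm, and logm on the same Hilbert matrix used below:

import numpy as np
from scipy.linalg import hilbert, sqrtm, cosm, sinm, expm, logm

A = hilbert(4)
# The matrix square root multiplies back to A.
R = sqrtm(A)
print(np.allclose(R.dot(R), A))                     # True
# The Pythagorean identity cos^2 + sin^2 = 1 survives in matrix form.
C, S = cosm(A), sinm(A)
print(np.allclose(C.dot(C) + S.dot(S), np.eye(4)))  # True
# expm and logm act as (numerical) inverses for this well-behaved matrix.
print(np.allclose(expm(logm(A)), A))                # True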

In [1]: import numpy as np, scipy as sp; \
   ...: import scipy.linalg as spla
In [2]: np.set_printoptions(suppress=True, precision=3)
In [3]: def square(x): return x**2
In [4]: A = spla.hilbert(4); print(A)
[[ 1.     0.5    0.333  0.25 ]
 [ 0.5    0.333  0.25   0.2  ]
 [ 0.333  0.25   0.2    0.167]
 [ 0.25   0.2    0.167  0.143]]
In [5]: print(square(A))
[[ 1.     0.25   0.111  0.062]
 [ 0.5    0.333  0.25   0.2  ]
 [ 0.333  0.25   0.2    0.167]
 [ 0.25   0.2    0.167  0.143]]
In [6]: print(A*A)
[[ 1.     0.25   0.111  0.062]
 [ 0.25   0.111  0.062  0.04 ]
 [ 0.111  0.062  0.04   0.028]
 [ 0.062  0.04   0.028  0.02 ]]
In [7]: print(A**2)
[[ 1.     0.25   0.111  0.062]
 [ 0.25   0.111  0.062  0.04 ]
 [ 0.111  0.062  0.04   0.028]
 [ 0.062  0.04   0.028  0.02 ]]
In [8]: print(A.dot(A))
[[ 1.424  0.8    0.567  0.441]
 [ 0.8    0.464  0.333  0.262]
 [ 0.567  0.333  0.241  0.19 ]
 [ 0.441  0.262  0.19   0.151]]

The actual powers A^n of a matrix are the starting point for the definition of any matrix function. The module numpy.linalg provides the routine matrix_power to perform this operation. We can also achieve the same result with the generic function funm or with the function fractional_matrix_power, both of them in the module scipy.linalg.

In [9]: print(np.linalg.matrix_power(A, 2))
[[ 1.424  0.8    0.567  0.441]
 [ 0.8    0.464  0.333  0.262]
 [ 0.567  0.333  0.241  0.19 ]
 [ 0.441  0.262  0.19   0.151]]
In [10]: print(spla.fractional_matrix_power(A, 2))
[[ 1.424  0.8    0.567  0.441]
 [ 0.8    0.464  0.333  0.262]
 [ 0.567  0.333  0.241  0.19 ]
 [ 0.441  0.262  0.19   0.151]]
In [11]: print(spla.funm(A, square))
[[ 1.424  0.8    0.567  0.441]
 [ 0.8    0.464  0.333  0.262]
 [ 0.567  0.333  0.241  0.19 ]
 [ 0.441  0.262  0.19   0.151]]
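
Note that fractional_matrix_power is not restricted to integer exponents. As a minimal sketch (again outside the book's numbered session), we can check that the fractional power t = 0.5 agrees with the dedicated square-root routine:

import numpy as np
from scipy.linalg import hilbert, sqrtm, fractional_matrix_power

A = hilbert(4)
# A non-integer power triggers the Schur-Padé based algorithm
# described in the list further below.
R = fractional_matrix_power(A, 0.5)
print(np.allclose(R, sqrtm(A)))    # True: both compute the principal square root
print(np.allclose(R.dot(R), A))    # True: it multiplies back to A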

To compute any matrix function, in theory, we first express the function as a power series by means of its Taylor expansion, and we then evaluate a truncation of that expansion at the input matrix (since it is impossible to add matrices ad infinitum). Most matrix function computations necessarily carry an approximation error for this reason. In the scipy.linalg module, the matrix functions are coded following this principle.
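
To make this idea concrete, here is a naive sketch (the helper taylor_expm is ours, purely for illustration; as the list below explains, scipy.linalg.expm itself uses a more robust Padé-based algorithm):

import numpy as np
from scipy.linalg import hilbert, expm

def taylor_expm(A, terms=20):
    """Truncated Taylor series of the exponential: sum of A^k / k! for k < terms."""
    result = np.eye(A.shape[0])
    term = np.eye(A.shape[0])
    for k in range(1, terms):
        term = term.dot(A) / k       # builds A^k / k! incrementally
        result = result + term
    return result

A = hilbert(4)
# For this small, well-behaved matrix, twenty terms of the truncation
# already match SciPy's expm to within floating-point tolerance.
print(np.allclose(taylor_expm(A), expm(A)))   # True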

  • Note that three of these functions take an optional Boolean parameter disp. To understand its usage, remember that most matrix functions are computed as approximations, with an error of computation. The parameter disp is set to True by default, in which case a warning is issued if the error of approximation is large. If we set disp to False, no warning is issued; instead, the 1-norm of the estimated error is returned together with the result (see the sketch after this list).
  • The algorithms behind expm, the action of the matrix exponential on another matrix (expm_multiply), and the Fréchet derivative of the matrix exponential (expm_frechet) use Padé approximations instead of Taylor expansions. This allows for more robust and accurate calculations. All the trigonometric and hyperbolic matrix functions base their algorithms on simple computations involving expm.
  • The generic matrix function funm and the square-root function sqrtm apply clever algorithms that exploit the Schur decomposition of the input matrix, together with proper algebraic manipulations of the corresponding eigenvalues. They are still prone to roundoff errors, but they are much faster and more accurate than any algorithm based on Taylor expansions.
  • The matrix sign function signm is initially an application of funm with the appropriate function, but should this approach fail, the algorithm falls back to an iteration that converges to a decent approximation of the solution.
  • The functions logm and fractional_matrix_power (when the latter is applied to non-integer powers) use a very complex combination (and improvement!) of Padé approximations and Schur decompositions.
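
The following minimal sketch illustrates the first two bullets; the exact value of the error estimate will vary from system to system:

import numpy as np
from scipy.linalg import hilbert, sqrtm, expm
from scipy.sparse.linalg import expm_multiply

A = hilbert(4)
# With disp=False, sqrtm returns the estimated approximation error
# along with the result, instead of possibly issuing a warning.
R, errest = sqrtm(A, disp=False)
print(errest)                      # a tiny float, close to machine precision

# expm_multiply applies the matrix exponential of A to B without
# ever forming expm(A) explicitly.
B = np.eye(4)
print(np.allclose(expm_multiply(A, B), expm(A).dot(B)))   # True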

Tip

We will explore Schur decompositions when we deal with matrix factorizations related to eigenvalues. In the meantime, if you are interested in the particulars of these clever algorithms, read their descriptions in Golub and Van Loan, Matrix Computations, 4th edition, Johns Hopkins Studies in the Mathematical Sciences, vol. 3.

For details on the improvements to the Schur-Padé algorithms, as well as on the algorithm behind the Fréchet derivatives of the exponential, refer to:

  • Nicholas J. Higham and Lijing Lin, An Improved Schur-Padé Algorithm for Fractional Powers of a Matrix and Their Fréchet Derivatives
  • Awad H. Al-Mohy and Nicholas J. Higham, Improved Inverse Scaling and Squaring Algorithms for the Matrix Logarithm, SIAM Journal on Scientific Computing, 34(4)