Tuesday, March 27, 2007

Eigenvalues, Matrix Properties, and the Singular Value Decomposition

It's interesting to me that my Numerical Computing class is in some ways an advanced Linear Algebra class. It's appropriate, though, since so many continuous problems are really only solvable on computers if represented as linear systems. Today was an extended discussion of the properties of various types of matrices, and an introduction to the Singular Value Decomposition of a matrix.

Some properties of various types of matrices:
  • A randomly generated real matrix will typically have some complex eigenvalues. Furthermore, the complex eigenvalues of a real matrix always occur in conjugate pairs.

  • A symmetric matrix, on the other hand, will always have all real eigenvalues. If all of those eigenvalues are also positive, the matrix is called Symmetric Positive Definite.

  • Symmetric matrices also have the interesting property that eigenvectors belonging to distinct eigenvalues are orthogonal, so you can always choose a full orthonormal set of eigenvectors.

  • A stochastic matrix is one in which each column consists of nonnegative real values which sum to 1. Note that because of this property each column vector can also be interpreted as a probability vector. The largest eigenvalue of a stochastic matrix is always exactly 1. (A quick numerical check of all of these claims follows this list.)
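
Here's that check, a minimal numpy sketch. The matrices are random examples of my own, not anything from the lecture:

import numpy as np

rng = np.random.default_rng(0)

# A general real matrix: any complex eigenvalues show up in conjugate pairs.
A = rng.standard_normal((4, 4))
print(np.linalg.eigvals(A))            # complex values appear as a +/- bi pairs

# A symmetric matrix: all eigenvalues are real.
S = A + A.T
print(np.linalg.eigvalsh(S))           # eigvalsh exploits symmetry and returns reals

# A column-stochastic matrix: the largest eigenvalue is exactly 1.
M = rng.random((4, 4))
M /= M.sum(axis=0)                     # normalize each column so it sums to 1
print(max(abs(np.linalg.eigvals(M))))  # prints a value very close to 1.0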


The Singular Value Decomposition

Any matrix A can be represented in the following way:

A = U S V^T

U and V are both orthogonal: multiplying either by its own transpose yields the identity. S (which is actually represented with a capital Sigma) is an all-zero matrix except for the diagonal, which holds the singular values s_1 >= s_2 >= ... >= s_n >= 0 in non-increasing order, with s_n > 0 if A is non-singular.
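
Here's a small numpy check of these properties, again on a random matrix of my own:

import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((5, 5))

U, s, Vt = np.linalg.svd(A)  # numpy returns V already transposed, and s as a vector

print(np.allclose(U.T @ U, np.eye(5)))      # True: U^T U = I
print(np.allclose(Vt @ Vt.T, np.eye(5)))    # True: V^T V = I
print(s)                                    # nonnegative, in non-increasing order
print(np.allclose(A, U @ np.diag(s) @ Vt))  # True: A = U S V^T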

This is a little difficult to represent with the text tools available here, but this means that A can be represented in the following way:

A = (sum from i=1 to n) s_i u_i v_i^T

This essentially says that A can be constructed by scaling each pair of column vectors u_i and v_i by the corresponding singular value s_i and adding the resulting rank-one matrices together. If you truncate the summation after r terms, the partial sum, call it B_r, approaches the actual value of A as r grows. B_r has the interesting property that it always has rank r. It can also be proven that B_r is the closest possible matrix to A of rank r, meaning that it minimizes ||A - B|| over all matrices B with rank r.
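
To make that concrete, here's a sketch that builds B_r term by term on a random matrix of my own and watches the error shrink. (The 2-norm of the error after r terms should equal the next singular value, s_(r+1), and drop to zero after the last term.)

import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((6, 6))
U, s, Vt = np.linalg.svd(A)

# Accumulate B_r = s_1 u_1 v_1^T + ... + s_r u_r v_r^T one term at a time.
B = np.zeros_like(A)
for r, s_r in enumerate(s, start=1):
    B += s_r * np.outer(U[:, r - 1], Vt[r - 1, :])
    print(r, np.linalg.norm(A - B, 2))  # error equals the next singular value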
