I will emphasize again that a matrix with repeated eigenvalues MAY BE diagonalizable. One must check whether the geometric multiplicity is equal to the algebraic multiplicity for such eigenvalues. (Since the geometric multiplicity is always at least one, one does not need to do that for eigenvalues of algebraic multiplicity 1.) An n by n real matrix is diagonalizable over R if and only if BOTH of the following conditions hold:

(1) the characteristic polynomial has n real eigenvalues, counting multiplicities (these are the algebraic multiplicities) AND (2) for every eigenvalue c, the geometric multiplicity of c is equal to the algebraic multiplicity of c.

The second condition is automatic when there are n mutually distinct real eigenvalues, but it can hold in other cases. (A diagonal matrix is always diagonalizable no matter how many repetitions of eigenvalues there are: the standard basis is an eigenbasis.)

To get an eigenbasis, for each c, find a basis for Ker(cI - A). (The number of vectors in that basis is the geometric multiplicity of c.) Put these together to get an eigenbasis for R^n. If these are made into the columns of a matrix S, then S^{-1}AS will be diagonal, with the eigenvalues of A on the diagonal, occurring in the same order as the corresponding eigenvectors occur in the basis that makes up the columns of S.
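As an illustration of this procedure (a sketch only, using a hypothetical example matrix that does not come from the assigned problems), one can check the conclusion numerically with numpy:

```python
import numpy as np

# Hypothetical example (not from the problem set): a symmetric matrix with
# eigenvalue 2 of algebraic (and geometric) multiplicity 2 and eigenvalue 4
# of multiplicity 1, so it is diagonalizable despite the repeated eigenvalue.
A = np.array([[2.0, 0.0, 0.0],
              [0.0, 3.0, 1.0],
              [0.0, 1.0, 3.0]])

# numpy returns the eigenvalues and a matrix S whose columns are eigenvectors.
eigenvalues, S = np.linalg.eig(A)

# S^{-1} A S is diagonal, with the eigenvalues on the diagonal in the same
# order as the corresponding eigenvector columns of S.
D = np.linalg.inv(S) @ A @ S
print(np.round(D, 10))
```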

7.2
4. The matrix is A =

```
2 -1
1  0

```
Using x instead of lambda, the characteristic polynomial is x^2 - 2x + 1 = (x - 1)^2, so that 1 is an eigenvalue of multiplicity 2. If I is the 2 by 2 identity matrix, the eigenspace for 1 is Ker (1I - A), which is the kernel of

```
-1 1
-1 1    The RREF is

1 -1
0  0

```
and so there is a one-dimensional eigenspace. Thus, the geometric multiplicity of 1 is strictly smaller than the algebraic multiplicity, which is 2, and there is no eigenbasis. That is, A is NOT diagonalizable. (The eigenspace is spanned by

1
1

but it was not required by the wording of the problem to determine this.)
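As a numerical check (also not required), numpy confirms that the geometric multiplicity of the eigenvalue 1 is only 1:

```python
import numpy as np

A = np.array([[2.0, -1.0],
              [1.0,  0.0]])

# Both eigenvalues are 1 (algebraic multiplicity 2).
eigenvalues = np.linalg.eigvals(A)
print(eigenvalues)  # both approximately 1

# Geometric multiplicity = dim Ker(1I - A) = 2 - rank(1I - A).
geometric_multiplicity = 2 - np.linalg.matrix_rank(np.eye(2) - A)
print(geometric_multiplicity)  # 1, strictly less than 2: A is not diagonalizable
```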

8. The matrix is

1 1 1
1 1 1
1 1 1

The characteristic polynomial is the determinant of xI - A =

```
x-1  -1  -1
-1 x-1  -1
-1  -1 x-1

```
or (x-1)(x^2 - 2x + 1 - 1) - (-1)(-x + 1 - 1) + (-1)(1 - (-1)(x-1)) = (x-1)(x^2 - 2x) - x - x = x[(x-1)(x-2) - 2] = x[(x^2 - 3x + 2) - 2] = x^3 - 3x^2 = x^2(x-3). Thus, 0 is an eigenvalue of algebraic multiplicity 2 and 3 is an eigenvalue of algebraic multiplicity 1. The eigenspace for 0 is the kernel of 0I - A =

-1 -1 -1
-1 -1 -1
-1 -1 -1

The RREF is

1 1 1
0 0 0
0 0 0

and so the general solution (using x, y, z for variables) is

```
  -y-z
{  y    : y, z in R }
   z

```
which is the span of

```

-1         -1
1   and    0
0          1

```
Thus, the geometric multiplicity is also 2, and the two vectors above will be part of an eigenbasis.

The eigenspace for 3 is Ker (3I - A) which is the kernel of

```
2 -1 -1
-1  2 -1
-1 -1  2

```
Dividing the first row by 2 and adding it to the second and third rows we get:

```
1 -1/2 -1/2
0  3/2 -3/2
0 -3/2  3/2

```
Dividing the second row by 3/2, adding 1/2 of that to the first row, and adding 3/2 times it to the last row, we get the RREF:

```
1 0 -1
0 1 -1
0 0  0

```
and so the general solution is

```
  z                         1
{ z  : z in R }   and       1
  z                         1

```
is a basis for this eigenspace. Thus, the columns of S =

```
-1 -1 1
1  0 1
0  1 1

```
are an eigenbasis for A, and it follows that S^{-1} A S = D =

0 0 0
0 0 0
0 0 3,

which is diagonal. Note that since the question only asks for what S and D are, it is not necessary to compute S^{-1} or to carry out the multiplication: the theory tells us that S^{-1} A S will be diagonal, with diagonal entries equal to the eigenvalues of A. One may compute S^{-1} =

```
        -1  2 -1
(1/3)   -1 -1  2
         1  1  1

```
and check by multiplying, but that is not required.
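Though not required, S and D are easy to confirm with numpy (a quick numerical check of the solution above):

```python
import numpy as np

A = np.ones((3, 3))                 # the all-ones matrix of the problem

S = np.array([[-1.0, -1.0, 1.0],    # columns: eigenvectors for 0, 0, 3
              [ 1.0,  0.0, 1.0],
              [ 0.0,  1.0, 1.0]])

# S^{-1} A S should be diag(0, 0, 3)
D = np.linalg.inv(S) @ A @ S
print(np.round(D, 10))
```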

Alternate method of getting the eigenvalues. It is clear that 0 is an eigenvalue: the rows are dependent, and so the kernel is not 0. In fact, it is easy to see that the rank is one, so that the kernel is two-dimensional, and therefore 0 has geometric multiplicity 2. Thus, its algebraic multiplicity is at least 2, i.e., it constitutes at least two of the roots of the characteristic polynomial. Since the trace of A is 1+1+1 = 3, the third root must be 3: the trace is also the sum of the eigenvalues. Thus, the eigenvalues are 0, 0, 3, which implies that the characteristic polynomial is x^2(x-3).
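The facts used in this shortcut are also easy to confirm with numpy (not required):

```python
import numpy as np

A = np.ones((3, 3))

# rank 1, so the kernel (eigenspace for 0) is two-dimensional
print(np.linalg.matrix_rank(A))   # 1

# the trace equals the sum of the eigenvalues, so the remaining eigenvalue is 3
print(np.trace(A))                # 3.0
print(np.sort(np.linalg.eigvals(A)))  # approximately [0, 0, 3]
```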

7.3
6. The matrix is A =

```
0 2  2
2 1  0
2 0 -1

```
and the characteristic polynomial is the determinant of

```
x  -2  -2
-2 x-1   0
-2   0 x+1

```
Subtract the second row from the third to get

```
x   -2  -2
-2  x-1   0
0 -x+1 x+1

```
Expand with respect to the first column to get:

x(x-1)(x+1) - (-2)(-2x-2 - (2x-2)) = x(x^2-1) + 2(-4x) = x^3 - 9x = x(x-3)(x+3). Therefore, the eigenvalues are 0, 3, and -3. To get an orthonormal eigenbasis we need only find an eigenvector for each eigenvalue and divide by its length. (Orthogonality for eigenvectors corresponding to distinct eigenvalues is automatic when the matrix is symmetric.)
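Not required, but the characteristic polynomial and the eigenvalues can be checked with numpy (np.poly applied to a matrix returns the coefficients of its characteristic polynomial):

```python
import numpy as np

A = np.array([[0.0, 2.0,  2.0],
              [2.0, 1.0,  0.0],
              [2.0, 0.0, -1.0]])

# coefficients of x^3 + 0x^2 - 9x + 0, i.e., x^3 - 9x
print(np.round(np.poly(A), 10))

# eigenvalues (A is symmetric, so eigvalsh applies and they are real)
print(np.sort(np.linalg.eigvalsh(A)))  # approximately [-3, 0, 3]
```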

Ker(0I - A) = the kernel of

```
0 -2 -2
-2 -1  0
-2  0  1

```
Row operations on this give

```
1 1/2 0
0   1 1
0   1 1   and then

1 0 -1/2
0 1   1
0 0   0

```
from which the kernel is

```
  z/2
{ -z   : z in R }
   z

```
Thus,

```
 1/2
  -1
   1

```
is an eigenvector for 0. The length is the square root of 1/4 + 1 + 1 = 9/4, or 3/2, and so

```
 1/3
-2/3
 2/3

```
is an eigenvector of length 1. (One may also use its negative.)

Similarly, the eigenspace for 3 is Ker(3I - A), the kernel of

```
 3 -2 -2
-2  2  0
-2  0  4

```
This kernel can be found from the RREF, which is

```
1     0    -2
0     1    -2
0     0     0

```
One then finds that the kernel is

```
  2z                           2
{ 2z  : z in R }    so that    2
   z                           1

```
is an eigenvector, and dividing by the length gives

2/3
2/3
1/3

One may also use the negative of the vector above.

Finally, the eigenspace for -3 is the kernel of -3I - A =

```
-3 -2 -2
-2 -4  0
-2  0 -2

```
whose RREF is

```
1 0   1
0 1 -1/2
0 0   0

```
and the kernel is

```
  -z                          -1
{ z/2  : z in R }   so that   1/2
   z                           1

```
is an eigenvector. Dividing by its length, which is 3/2, gives

```
-2/3
1/3
2/3

```
and one may also use the negative of this vector.

```
 1/3    2/3   -2/3
-2/3    2/3    1/3
 2/3    1/3    2/3

```
is the required orthonormal basis of eigenvectors (orthogonality is automatic, but skeptics can easily check that the dot products are 0). [Not required: if one forms a matrix S from the three columns above, then S^{-1} A S will be the diagonal matrix

```
0 0  0
0 3  0
0 0 -3

```
Moreover, S^{-1} will be easy to compute, because S is orthogonal, and so S^{-1} = S^T.]
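These claims, too, are easy to confirm numerically (not required; a numpy check):

```python
import numpy as np

A = np.array([[0.0, 2.0,  2.0],
              [2.0, 1.0,  0.0],
              [2.0, 0.0, -1.0]])

S = np.array([[ 1/3, 2/3, -2/3],   # columns: unit eigenvectors for 0, 3, -3
              [-2/3, 2/3,  1/3],
              [ 2/3, 1/3,  2/3]])

# S is orthogonal: S^T S = I, so S^{-1} = S^T
print(np.allclose(S.T @ S, np.eye(3)))   # True

# and S^T A S is the diagonal matrix with entries 0, 3, -3
print(np.round(S.T @ A @ S, 10))
```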

8. The matrix A =

```
3  3
3 -5

```
The characteristic polynomial is x^2 + 2x - 24 = (x+6)(x-4), and so the eigenvalues are -6 and 4. -6I - A =
```
-9 -3
-3 -1     whose RREF is

1 1/3
0  0

```
and so
```

-1/3
1

```
is an eigenvector and
```
-1/sqrt{10}
3/sqrt{10}

```
is a unit eigenvector (one may also use its negative).

4I - A =

```
1 -3
-3  9   whose  RREF is

1 -3
0  0

```
from which

3
1

is an eigenvector and

3/sqrt{10}
1/sqrt{10}

is a unit eigenvector (one may also use its negative). Thus, one may take S =

```
-1/sqrt{10} 3/sqrt{10}
3/sqrt{10} 1/sqrt{10}

```
as the orthogonal matrix. Note that S^{-1} = S^T happens to be S (this matrix is a reflection) and

S^{-1}AS = D will be

```
-6 0
0 4

```
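A quick numerical check of S and D (not required):

```python
import numpy as np

A = np.array([[3.0,  3.0],
              [3.0, -5.0]])

s = 1 / np.sqrt(10)
S = np.array([[ -s, 3*s],   # columns: unit eigenvectors for -6 and 4
              [3*s,   s]])

# S is symmetric and orthogonal, so S^{-1} = S^T = S (a reflection)
print(np.allclose(S @ S, np.eye(2)))   # True

# and S A S is the diagonal matrix with entries -6, 4
print(np.round(S @ A @ S, 10))
```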
Note that the problem does not require one to compute S^{-1} and that the theory guarantees that S^{-1}AS will be diagonal, with the eigenvalues of A on the diagonal.

9.1
18. Every polynomial of degree at most n is uniquely a linear combination of the polynomials 1, x, x^2, ..., x^{n}, which therefore form a basis for this linear space; the dimension is consequently n+1.

22. In a typical symmetric matrix A of size n one may prescribe the entries a_{ij} for j at least i (i.e., the entries on or above the diagonal) arbitrarily, while the entries below the diagonal are then determined, since a_{ji} = a_{ij}. Let D_i be the matrix with a one in the i-th spot on the diagonal and 0's elsewhere. Let S_{ij} for i < j be the matrix with ones in the (i,j) and (j,i) spots and 0's elsewhere. Then

```
A = a_{11} D_1 + ... + a_{nn} D_n +
the sum of all the terms  a_{ij} S_{ij}  for  i < j,

```
and the coefficients are uniquely determined by A. It follows that the matrices D_1, ..., D_n and the S_{ij} together are a basis for the symmetric matrices. Counting the lengths of the successive diagonals above (and including) the main diagonal, we see that the total number of elements of this basis is n + (n-1) + (n-2) + ... + 3 + 2 + 1 = n(n+1)/2, which is the dimension of the n by n symmetric matrices.
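To illustrate the count, here is a short numpy sketch (not part of the solution) that constructs the D_i and S_{ij} basis and checks its size and independence:

```python
import numpy as np

def symmetric_basis(n):
    """Return the D_i and S_{ij} basis matrices for the n x n symmetric matrices."""
    basis = []
    for i in range(n):
        D = np.zeros((n, n))
        D[i, i] = 1.0                  # D_i: a single 1 on the diagonal
        basis.append(D)
    for i in range(n):
        for j in range(i + 1, n):
            S = np.zeros((n, n))
            S[i, j] = S[j, i] = 1.0    # S_{ij}: 1's in the (i,j) and (j,i) spots
            basis.append(S)
    return basis

basis = symmetric_basis(3)
print(len(basis))   # 6 = 3*4/2, matching n(n+1)/2

# the flattened basis matrices are linearly independent
M = np.array([B.flatten() for B in basis])
print(np.linalg.matrix_rank(M))   # 6
```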

(E.g., if n = 3, replacing a_{11}, a_{22}, a_{33}, a_{12}, a_{13}, and a_{23} by a, b, c, d, e, f, respectively (to avoid subscripts), we have that the typical symmetric matrix

```
a d e       1 0 0       0 0 0       0 0 0
d b f  = a  0 0 0  + b  0 1 0  + c  0 0 0
e f c       0 0 0       0 0 0       0 0 1

            0 1 0       0 0 1       0 0 0
       + d  1 0 0  + e  0 0 0  + f  0 0 1
            0 0 0       1 0 0       0 1 0

```
and so the six matrices
```
1 0 0   0 0 0   0 0 0   0 1 0   0 0 1   0 0 0
0 0 0 , 0 1 0 , 0 0 0 , 1 0 0 , 0 0 0 , 0 0 1
0 0 0   0 0 0   0 0 1   0 0 0   1 0 0   0 1 0

```
are a basis. These are the same as D_1, D_2, D_3, S_{12}, S_{13}, and S_{23} in the notation above. The number of elements in this basis is 3+2+1 = 6. )