For typographical reasons, brackets around matrices are omitted here. "V_3" means "V with the subscript 3" and "R^3" means "R with superscript 3". The notation u.v is used for the dot product of vectors u and v.

Row operations on linear equations

Solving a system by putting it in reduced form

Leading and non-leading (or free) variables

Exhibiting the solution set by solving for
leading variables in terms of free variables

Recognizing a system that is not consistent

A system has a solution (i.e., is consistent) if and only if its reduced form contains no equation of the form 0 = 1

A system has a unique solution if and only if its reduced form contains no equation of the form 0 = 1 and has no free variables

Matrices.

The matrix of a system of linear equations

The coefficient matrix of a system of linear equations

Solving the system by putting its matrix in reduced
row echelon form (RREF)

A matrix is in RREF if:

nonzero rows precede zero rows

every nonzero row has leading (leftmost) entry 1

the leading 1's move to the right as one moves down the rows

in the column of a leading 1, the other entries are 0

The rank of a matrix is the number of leading 1's (equivalently,
the number of nonzero rows) in the RREF:

to find the rank, first find the RREF
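The row-reduction procedure can be sketched in code. Below is a minimal illustrative routine (the names rref and rank are my own, not from the text); it uses exact fractions so no rounding occurs.

```python
from fractions import Fraction

def rref(M):
    """Return the reduced row echelon form of M (a list of rows)."""
    A = [[Fraction(x) for x in row] for row in M]
    rows, cols = len(A), len(A[0])
    lead = 0                      # column of the next leading 1
    for r in range(rows):
        if lead >= cols:
            break
        # Find a row at or below r with a nonzero entry in column `lead`.
        i = r
        while A[i][lead] == 0:
            i += 1
            if i == rows:         # the rest of this column is zero: move right
                i = r
                lead += 1
                if lead == cols:
                    return A
        A[r], A[i] = A[i], A[r]               # swap the pivot row into place
        A[r] = [x / A[r][lead] for x in A[r]] # make the leading entry 1
        for i in range(rows):                 # clear the rest of the column
            if i != r:
                A[i] = [a - A[i][lead] * b for a, b in zip(A[i], A[r])]
        lead += 1
    return A

def rank(M):
    """The rank is the number of nonzero rows in the RREF."""
    return sum(1 for row in rref(M) if any(x != 0 for x in row))

print(rref([[1, 2], [3, 4]]))        # reduces to the 2 by 2 identity
print(rank([[1, 2, 3], [2, 4, 6]]))  # the rows are proportional: rank 1
```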

An m by n matrix has m rows and n columns

The notion of row vectors and column vectors

Multiplying an m by n matrix A by an n by 1 column
vector C (the result is denoted AC)
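Concretely, the i-th entry of AC is the dot product of the i-th row of A with C. A minimal sketch (mat_vec is an illustrative name, not from the text):

```python
def mat_vec(A, C):
    """Multiply an m by n matrix A by an n by 1 column vector C."""
    return [sum(a * c for a, c in zip(row, C)) for row in A]

# A 2 by 3 matrix times a vector in R^3 gives a vector in R^2.
A = [[1, 2, 3],
     [4, 5, 6]]
C = [1, 0, -1]
print(mat_vec(A, C))   # → [-2, -2]
```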

R^n = the set of all n by 1 column vectors

Addition and subtraction of matrices of the same size

Multiplying a matrix by a scalar.

(In particular, these operations apply to row and column
vectors.)

Writing a system of linear equations in the form
AX = C, where A is an m by n matrix (the coefficient
matrix), X is an n by 1 column vector of variables, and
C is an m by 1 vector of constants.

If the rank of the coefficient matrix is smaller than the number
of variables, the system has either no solutions or infinitely
many (if it is consistent, there will be free variables)
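As a worked example (the system below is hypothetical, chosen for illustration, and is already in reduced form): with rank 2 and 3 variables, one variable is free, and there is one solution for each value of the free variable.

```python
from fractions import Fraction

# Hypothetical system, already in reduced form:
#   x1 + 2 x3 = 1
#   x2 -   x3 = 4
# x1, x2 are leading and x3 is free; rank 2 < 3 variables.
A = [[1, 0, 2],
     [0, 1, -1]]
C = [1, 4]

def solution(t):
    """One solution (x1, x2, x3) for each value t of the free variable x3."""
    t = Fraction(t)
    return [1 - 2 * t, 4 + t, t]

# Check A x = C for several members of the infinite solution family.
for t in [0, 1, Fraction(1, 2)]:
    x = solution(t)
    assert [sum(a * v for a, v in zip(row, x)) for row in A] == C
```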

A linear transformation is a function T from R^n to R^m
that can be described by an m by n matrix. That is,
there is a matrix A such that for all X in R^n, T(X) = AX.
Linear transformations are also characterized as the functions
which preserve vector addition ( T(V + W) = T(V) + T(W) ) and
scalar multiplication ( T(cV) = cT(V) ). Linear
transformations preserve linear combinations in general.
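The two defining properties can be checked numerically for a sample matrix (the matrix A below is an arbitrary illustrative choice):

```python
def mat_vec(A, X):
    """Apply the linear transformation with matrix A: X -> AX."""
    return [sum(a * x for a, x in zip(row, X)) for row in A]

A = [[2, 0],
     [1, 3]]                      # arbitrary illustrative 2 by 2 matrix
V, W, c = [1, 2], [3, -1], 5

lhs = mat_vec(A, [v + w for v, w in zip(V, W)])              # T(V + W)
rhs = [x + y for x, y in zip(mat_vec(A, V), mat_vec(A, W))]  # T(V) + T(W)
assert lhs == rhs                 # vector addition is preserved

# Scalar multiplication is preserved: T(cV) = cT(V).
assert mat_vec(A, [c * v for v in V]) == [c * x for x in mat_vec(A, V)]
```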

The 0 and identity matrices. The vectors e_1, ..., e_n
are the columns of the size n by n identity matrix.

The fact that the values of a linear transformation T with
matrix A on the vectors e_1, ..., e_n are the columns of A.
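A quick numerical check of this fact for a sample 2 by 2 matrix:

```python
def mat_vec(A, X):
    return [sum(a * x for a, x in zip(row, X)) for row in A]

A = [[1, 2],
     [3, 4]]                # arbitrary illustrative matrix
e1, e2 = [1, 0], [0, 1]

# T(e_1) is the first column of A; T(e_2) is the second.
print(mat_vec(A, e1))   # → [1, 3]
print(mat_vec(A, e2))   # → [2, 4]
```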

Special matrices in the plane.

Rotations counter-clockwise through
angle s, which have the form:

cos s  -sin s
sin s   cos s

Dilations, which have the form:

c  0
0  c

for c > 0. Dilation-rotations, which have the form:

a  -b
b   a

for real numbers a, b not both 0. Note that if r is the square root of a^2 + b^2 then the point (a/r, b/r) lies on the unit circle, and so has the form (cos s, sin s). The matrix just above is therefore r times the matrix

cos s  -sin s
sin s   cos s

and may be thought of as a rotation followed by a dilation.
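This decomposition can be verified numerically; r and s below are computed exactly as described (the choice a, b = 3, 4 is arbitrary and illustrative):

```python
import math

a, b = 3.0, 4.0                     # any pair, not both zero
r = math.hypot(a, b)                # r = sqrt(a^2 + b^2) = 5
s = math.atan2(b, a)                # angle with cos s = a/r, sin s = b/r

# The dilation-rotation matrix equals r times the rotation matrix.
M = [[a, -b], [b, a]]
R = [[r * math.cos(s), -r * math.sin(s)],
     [r * math.sin(s),  r * math.cos(s)]]
assert all(math.isclose(M[i][j], R[i][j]) for i in range(2) for j in range(2))
```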

Shears T in R^2 parallel to a line L, which map vectors
on L to themselves and map an arbitrary vector v in such
a way that T(v) - v lies on L (if T fixes L, it suffices
to check this for a single vector v off L)

Shears parallel to the line through e_1 have matrix

1  c
0  1

while shears parallel to the line through e_2 have matrix

1  0
c  1

The image under a shear of a square with one vertex at the origin
and one side (call it the base) on L is a parallelogram with
the same base and the same height (and hence the same area).

The image of the square with sides given by e_1 and e_2 under a
linear transformation T of the plane to itself is either:

a point (if T = 0),

a line segment (if T has rank 1), or

the parallelogram with sides given by T(e_1) and T(e_2) (if T has rank 2).

The dot product of two vectors. The dot product of a vector with entries a_1, a_2, ..., a_n and another with entries b_1, b_2, ..., b_n is the real number a_1b_1 + a_2b_2 + ... + a_nb_n, and this is true whether they are both row vectors, both column vectors, or one is a row and the other is a column. They must have the same size, however. Two nonzero vectors are perpendicular if and only if their dot product is zero. The length of v is the square root of v.v. The dot product u.v of two column vectors may be interpreted as the product of their lengths times the cosine of the angle between them. A vector of length one is called a unit vector. The vector v, divided by its length, is a unit vector with the same direction as v.
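A minimal sketch of these definitions (the helper names dot and length are my own):

```python
import math

def dot(u, v):
    """Dot product of two vectors of the same size."""
    return sum(a * b for a, b in zip(u, v))

def length(v):
    """The length of v is the square root of v.v."""
    return math.sqrt(dot(v, v))

u, v = [3, 4], [4, -3]
print(dot(u, v))     # 0, so u and v are perpendicular
print(length(u))     # 5.0
unit = [x / length(u) for x in u]   # unit vector in the direction of u
print(unit)          # [0.6, 0.8]
```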

The projection of R^n onto the line L in the direction of the unit vector u is given by proj_L (v) = (u.v)u. This projection is a linear transformation whose values are multiples of u. Reflection in this same line L is given by the formula refl_L (v) = 2 proj_L (v) - v, and it too is a linear transformation.
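Both formulas can be sketched directly (proj and refl are illustrative names, not from the text; u must be a unit vector):

```python
from fractions import Fraction

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def proj(u, v):
    """proj_L(v) = (u.v) u, for a unit vector u along L."""
    c = dot(u, v)
    return [c * x for x in u]

def refl(u, v):
    """refl_L(v) = 2 proj_L(v) - v."""
    return [2 * p - w for p, w in zip(proj(u, v), v)]

u = [Fraction(3, 5), Fraction(4, 5)]   # unit: (3/5)^2 + (4/5)^2 = 1
v = [5, 0]
print(proj(u, v))   # (u.v)u = 3u = [9/5, 12/5]
print(refl(u, v))   # [18/5 - 5, 24/5 - 0] = [-7/5, 24/5]
```

Note that reflection preserves length: the dot product of refl(u, v) with itself equals v.v.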

Matrix multiplication. One can multiply an m by n matrix A by an n by q matrix B to get an m by q matrix AB. The entry of AB in the i th row and k th column is the dot product of the i th row of A and the k th column of B. The matrix of a composition of linear transformations is the product of the matrices. Provided that the sizes allow the multiplications and additions, matrix multiplication satisfies A(B+C) = AB + AC and (B+C)A = BA + CA, as well as (cA)B = A(cB) = c(AB) for a scalar c, and it is also associative: A(BC) = (AB)C whenever these are defined. The powers of a square matrix are defined, and they commute with one another. Matrix multiplication in general is NOT commutative: even for square matrices, AB and BA will typically be different.
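The row-times-column rule, and the failure of commutativity, can be seen in a short sketch (mat_mul is an illustrative name; the matrices are arbitrary examples):

```python
def mat_mul(A, B):
    """(AB)[i][k] is the dot product of row i of A with column k of B."""
    return [[sum(A[i][j] * B[j][k] for j in range(len(B)))
             for k in range(len(B[0]))]
            for i in range(len(A))]

A = [[1, 1],
     [0, 1]]
B = [[1, 0],
     [1, 1]]
print(mat_mul(A, B))   # → [[2, 1], [1, 1]]
print(mat_mul(B, A))   # → [[1, 1], [1, 2]]  (AB != BA)
```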

Invertible matrices. An n by n matrix A is invertible if and only
if there is an n by n matrix B such that AB = BA = 1_n (n by n
identity). An n by n matrix A is invertible if and only if
one of the conditions below holds, in which case they all hold:

(1) A has rank n

(2) The reduced row echelon form of A is the identity matrix

(3) The system of linear equations AX = C has a unique solution
for every vector C in R^n

(4) The system of linear equations AX = C has a unique solution
for some vector C in R^n.

If A has inverse B and A' has inverse B', then AA' has inverse B'B.

To find the inverse of the square matrix A, write down A next to a copy of the identity matrix: [A | 1]. Find the reduced row echelon form. If A is invertible, one will get [1 | B], where B is the inverse of A. Here, 1 represents an identity matrix that is the same size as A.
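The [A | 1] procedure can be sketched in code. This is an illustrative implementation (the name inverse is my own) using exact fractions; it returns None when A is not invertible:

```python
from fractions import Fraction

def inverse(A):
    """Row-reduce [A | 1] to [1 | B]; return B, or None if A is singular."""
    n = len(A)
    # Write A next to an identity matrix of the same size.
    M = [[Fraction(x) for x in row] + [Fraction(int(i == j)) for j in range(n)]
         for i, row in enumerate(A)]
    for col in range(n):
        # Find a nonzero pivot at or below the diagonal in this column.
        pivot = next((r for r in range(col, n) if M[r][col] != 0), None)
        if pivot is None:
            return None            # rank < n, so A is not invertible
        M[col], M[pivot] = M[pivot], M[col]
        M[col] = [x / M[col][col] for x in M[col]]
        for r in range(n):
            if r != col:
                M[r] = [a - M[r][col] * b for a, b in zip(M[r], M[col])]
    return [row[n:] for row in M]

A = [[2, 1],
     [1, 1]]
B = inverse(A)
print([[int(x) for x in row] for row in B])   # → [[1, -1], [-1, 2]]
```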

Partitioned matrices. Given two partitioned matrices they may be multiplied by treating the blocks (or submatrices) indicated by the partition as though they were scalars: the rule for multiplying is "formally" the same as for multiplying matrices of scalars. The answer is obtained as a partitioned matrix.
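One can spot-check the block rule numerically: for a 4 by 4 product partitioned into 2 by 2 blocks, the top-left block of AB should equal A11 B11 + A12 B21, just as if the blocks were scalars. A sketch (helper names are my own; the random matrices are arbitrary test data):

```python
import random

def mat_mul(A, B):
    return [[sum(A[i][j] * B[j][k] for j in range(len(B)))
             for k in range(len(B[0]))]
            for i in range(len(A))]

def mat_add(A, B):
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def block(M, rs, cs):
    """The submatrix of M with rows rs and columns cs."""
    return [[M[i][j] for j in cs] for i in rs]

random.seed(0)
A = [[random.randint(0, 9) for _ in range(4)] for _ in range(4)]
B = [[random.randint(0, 9) for _ in range(4)] for _ in range(4)]
top, bot = range(0, 2), range(2, 4)

# Top-left block of AB = A11 B11 + A12 B21, treating blocks like scalars.
lhs = block(mat_mul(A, B), top, top)
rhs = mat_add(mat_mul(block(A, top, top), block(B, top, top)),
              mat_mul(block(A, top, bot), block(B, bot, top)))
assert lhs == rhs
```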

Here are additional practice problems on the material for Sections 1.1 - 2.4:

1.1. 11, 15

1.2. 1, 9

1.3. 1, 3, 7, 13

2.1. 11, 15, 25, 27

2.2. 1, 2, 37, 41

2.3. 5, 29, 37, 43

2.4. 7, 13, 17, 19, 27, 29