Linear Algebra Study Guide

From systems of equations to eigenvalues. Everything you need for your university linear algebra course, whether it's MATA22, MATH 136, MATH 221, or any intro linear algebra class.

Contents
  1. Systems of Linear Equations
  2. Vectors and Vector Operations
  3. Matrices and Matrix Operations
  4. Determinants
  5. Vector Spaces and Subspaces
  6. Linear Independence, Basis, and Dimension
  7. Linear Transformations
  8. Eigenvalues and Eigenvectors
  9. Orthogonality and Least Squares
  10. Diagonalization
  11. Common Mistakes
  12. Exam Strategy
  13. FAQ

1. Systems of Linear Equations

This is where every linear algebra course starts. A system of linear equations is a collection of equations like:

2x + 3y - z  = 7
 x -  y + 4z = 3
3x + 2y + z  = 10

Augmented Matrix Form

Write the system as an augmented matrix [A | b] and use row operations to solve:

[ 2  3 -1 |  7 ]       [ 1 0 0 | a ]
[ 1 -1  4 |  3 ]  →    [ 0 1 0 | b ]
[ 3  2  1 | 10 ]       [ 0 0 1 | c ]

Row Reduction (Gaussian Elimination)

The three elementary row operations:

  1. Swap two rows
  2. Multiply a row by a nonzero scalar
  3. Add a multiple of one row to another row

Goal: get the matrix into row echelon form (REF) or reduced row echelon form (RREF).

REF vs RREF
REF: leading entries descend left-to-right, with zeros below each pivot. Good enough for back-substitution.
RREF: each pivot is 1, with zeros above AND below. Gives the solution directly - no back-substitution needed.

Solution Types

Pivot in every column of A → unique solution. Example (RREF): [1 0 | 3] [0 1 | 5]
Free variables (non-pivot columns) → infinitely many solutions. Example (RREF): [1 2 | 3] [0 0 | 0]
Row like [0 0 ... 0 | k] with k ≠ 0 → no solution (inconsistent). Example (RREF): [1 0 | 2] [0 0 | 5]
Common mistake: Forgetting to check for inconsistent rows. Always look for a row of the form [0 0 ... 0 | nonzero] before declaring a solution exists.
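The row-reduction procedure above can be sketched in plain Python. This is a minimal illustration using exact `Fraction` arithmetic (the function name `rref` is ours, not from any library), applied to the system at the start of this section:

```python
from fractions import Fraction

def rref(M):
    """Reduce a matrix (given as a list of rows) to reduced row echelon form."""
    A = [[Fraction(x) for x in row] for row in M]
    rows, cols = len(A), len(A[0])
    r = 0  # index of the next pivot row
    for col in range(cols):
        # Find a row at or below r with a nonzero entry in this column
        piv = next((i for i in range(r, rows) if A[i][col] != 0), None)
        if piv is None:
            continue  # no pivot in this column (free variable)
        A[r], A[piv] = A[piv], A[r]                    # op 1: swap rows
        scale = A[r][col]
        A[r] = [x / scale for x in A[r]]               # op 2: scale pivot to 1
        for i in range(rows):                          # op 3: clear above and below
            if i != r and A[i][col] != 0:
                factor = A[i][col]
                A[i] = [a - factor * b for a, b in zip(A[i], A[r])]
        r += 1
    return A

# Solve the system from above via its augmented matrix [A | b]
aug = [[2, 3, -1, 7], [1, -1, 4, 3], [3, 2, 1, 10]]
solution = [row[-1] for row in rref(aug)]  # x = 16/5, y = 1/5, z = 0
```

Because there is a pivot in every column of the coefficient matrix, the last column of the RREF is the unique solution.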

2. Vectors and Vector Operations

Vectors are ordered lists of numbers. In R^n, a vector has n components.

Key Operations

Addition: u + v = (u1+v1, u2+v2, ...) → vector
Scalar multiplication: cu = (cu1, cu2, ...) → vector
Dot product: u · v = u1v1 + u2v2 + ... → scalar
Cross product (R^3 only): u × v = (u2v3-u3v2, u3v1-u1v3, u1v2-u2v1) → vector
Length (norm): ||u|| = sqrt(u · u) → scalar
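These operations translate directly into code. A small pure-Python sketch (the helper names are ours, chosen for illustration):

```python
import math

def dot(u, v):
    """Dot product: sum of componentwise products."""
    return sum(a * b for a, b in zip(u, v))

def norm(u):
    """Length of a vector: sqrt(u . u)."""
    return math.sqrt(dot(u, u))

def cross(u, v):
    """Cross product, defined only in R^3."""
    return (u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0])

u, v = (1, 2, 3), (4, 5, 6)
d = dot(u, v)      # 1*4 + 2*5 + 3*6 = 32
w = cross(u, v)    # (-3, 6, -3); orthogonal to both u and v
```

Note that the cross product of two vectors is always orthogonal to each of them, which you can confirm with `dot(w, u) == 0`.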

Linear Combinations

A linear combination of vectors v1, v2, ..., vk is any expression of the form:

c1·v1 + c2·v2 + ... + ck·vk

where c1, c2, ..., ck are scalars. This is the single most important concept in linear algebra - almost everything else builds on it.

Span

The span of a set of vectors is the set of all their linear combinations. If vectors v1, ..., vk span R^n, then every vector in R^n can be written as a linear combination of them.

Geometric intuition: One vector in R^3 spans a line. Two non-parallel vectors span a plane. Three non-coplanar vectors span all of R^3. The span is the "reachable" space.

3. Matrices and Matrix Operations

A matrix is a rectangular array of numbers. An m×n matrix has m rows and n columns.

Matrix Multiplication

For AB to be defined, the number of columns of A must equal the number of rows of B.

A is m×n, B is n×p → AB is m×p
(AB)_ij = row i of A · column j of B

Matrix multiplication is NOT commutative: AB ≠ BA in general. This is one of the biggest sources of errors. Always check the order.

Special Matrices

Identity: AI = IA = A for all A. Notation: I or I_n
Diagonal: all off-diagonal entries are 0. Notation: diag(d1, d2, ...)
Symmetric: A = A^T
Triangular (upper): all entries below the diagonal are 0
Triangular (lower): all entries above the diagonal are 0
Inverse: AA^(-1) = A^(-1)A = I. Notation: A^(-1)

Finding the Inverse

To find A^(-1), augment A with I and row reduce:

[A | I] → row reduce → [I | A^(-1)]

If you can't reduce A to I (you get a row of zeros on the left), then A is not invertible (singular).

Properties of Inverses

(AB)^(-1) = B^(-1) A^(-1)   ← NOTE: reverse order!
(A^T)^(-1) = (A^(-1))^T
(cA)^(-1) = (1/c) A^(-1)
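The reverse-order rule is easy to check numerically. A minimal 2×2 sketch using the adjugate formula for the inverse (helper names `inv2` and `matmul2` are ours, assuming integer entries):

```python
from fractions import Fraction

def inv2(M):
    """Inverse of a 2x2 integer matrix [[a, b], [c, d]] via the adjugate formula."""
    (a, b), (c, d) = M
    det = a * d - b * c
    if det == 0:
        raise ValueError("matrix is singular (det = 0)")
    return [[Fraction(d, det), Fraction(-b, det)],
            [Fraction(-c, det), Fraction(a, det)]]

def matmul2(X, Y):
    """Product of two 2x2 matrices."""
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[2, 1], [1, 1]]
B = [[1, 2], [3, 4]]
I = [[1, 0], [0, 1]]
# (AB)^(-1) equals B^(-1) A^(-1) -- note the reversed order
lhs = inv2(matmul2(A, B))
rhs = matmul2(inv2(B), inv2(A))
```

Using `Fraction` keeps the arithmetic exact, so the two sides match entry for entry rather than merely to floating-point tolerance.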

4. Determinants

The determinant is a single number computed from a square matrix. It tells you whether the matrix is invertible and how it scales volume.

2×2 Determinant

det([a b; c d]) = ad - bc

3×3 and Beyond: Cofactor Expansion

Expand along any row or column. For a 3×3 matrix expanding along row 1:

det(A) = a11·C11 + a12·C12 + a13·C13
where Cij = (-1)^(i+j) · det(Mij) and Mij is the matrix obtained by deleting row i and column j

Shortcut for computation: Expand along the row or column with the most zeros - this minimizes the number of cofactors you need to calculate.

Key Determinant Properties

  1. det(AB) = det(A)·det(B)
  2. det(A^T) = det(A)
  3. det(A^(-1)) = 1/det(A)
  4. det(cA) = c^n·det(A) for an n×n matrix
  5. Row operations: swapping two rows flips the sign, scaling a row by c scales the determinant by c, and adding a multiple of one row to another leaves it unchanged
  6. A is invertible if and only if det(A) ≠ 0
  7. The determinant of a triangular matrix is the product of its diagonal entries

Common mistake: det(A + B) ≠ det(A) + det(B) in general. The determinant is multiplicative, not additive.
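Cofactor expansion translates directly into a short recursive function. A sketch in plain Python (fine for small matrices; real libraries use LU factorization instead), which also demonstrates the multiplicative-not-additive property:

```python
def det(M):
    """Determinant by cofactor expansion along the first row."""
    n = len(M)
    if n == 1:
        return M[0][0]
    total = 0
    for j in range(n):
        minor = [row[:j] + row[j + 1:] for row in M[1:]]  # delete row 1, column j
        total += (-1) ** j * M[0][j] * det(minor)         # sign (-1)^(1+j), 0-indexed
    return total

def matmul(X, Y):
    """Matrix product, used here to check det(AB) = det(A) det(B)."""
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*Y)] for row in X]

A = [[2, 3, -1], [1, -1, 4], [3, 2, 1]]
B = [[1, 0, 2], [0, 1, 1], [1, 1, 0]]
S = [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]  # A + B

# det(A) = 10, det(B) = -3, so det(AB) = -30, but det(A + B) is NOT 10 + (-3)
```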

5. Vector Spaces and Subspaces

A vector space is a set V with addition and scalar multiplication that satisfies 10 axioms (closure, associativity, commutativity, identity, inverses, etc.). The main examples:

  1. R^n - vectors with n real components
  2. P_n - polynomials of degree at most n
  3. M_(m×n) - all m×n matrices
  4. Function spaces, such as the continuous functions on an interval

Subspaces

A subspace of V is a subset W that is itself a vector space. To verify W is a subspace, check three things:

  1. Zero vector: 0 ∈ W
  2. Closed under addition: if u, w ∈ W, then u + w ∈ W
  3. Closed under scalar multiplication: if w ∈ W and c is a scalar, then cw ∈ W

Four Fundamental Subspaces

For any m×n matrix A:

Column space Col(A): {Ax : x ∈ R^n}, the span of the columns. Dimension: rank(A) = r
Row space Row(A): the span of the rows = Col(A^T). Dimension: rank(A) = r
Null space Nul(A): {x : Ax = 0}. Dimension: n - r (the nullity)
Left null space Nul(A^T): {y : A^T y = 0}. Dimension: m - r
Rank-Nullity Theorem: rank(A) + nullity(A) = n (the number of columns). This is one of the most useful theorems for solving problems - if you know the rank, you automatically know the dimension of the null space.
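The theorem is easy to check computationally: the rank is the number of pivots found during forward elimination, and the nullity is whatever is left over. A minimal sketch (the `rank` helper is ours, using exact `Fraction` arithmetic):

```python
from fractions import Fraction

def rank(M):
    """Rank = number of pivots found during forward elimination."""
    A = [[Fraction(x) for x in row] for row in M]
    rows, cols = len(A), len(A[0])
    r = 0
    for col in range(cols):
        piv = next((i for i in range(r, rows) if A[i][col] != 0), None)
        if piv is None:
            continue  # no pivot in this column
        A[r], A[piv] = A[piv], A[r]
        for i in range(r + 1, rows):
            factor = A[i][col] / A[r][col]
            A[i] = [a - factor * b for a, b in zip(A[i], A[r])]
        r += 1
    return r

# A 3x4 matrix whose third row is the sum of the first two: rank 2
A = [[1, 2, 0, 1],
     [0, 1, 1, 0],
     [1, 3, 1, 1]]
n = len(A[0])
nullity = n - rank(A)  # 4 - 2 = 2 free variables
```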

6. Linear Independence, Basis, and Dimension

Linear Independence

Vectors v1, v2, ..., vk are linearly independent if the only solution to:

c1·v1 + c2·v2 + ... + ck·vk = 0

is c1 = c2 = ... = ck = 0. If any other solution exists, the vectors are linearly dependent - meaning at least one vector is a linear combination of the others.

How to Check Independence

Form a matrix with the vectors as columns. Row reduce. If every column has a pivot, they're independent. If any column lacks a pivot, they're dependent.
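In the special case of n vectors in R^n, the matrix is square, so the determinant gives a quick test: nonzero determinant ⟺ independent. A small sketch for R^3 (helper names are ours; for non-square cases, compare the rank to the number of vectors instead):

```python
def det3(M):
    """3x3 determinant by cofactor expansion along the first row."""
    (a, b, c), (d, e, f), (g, h, i) = M
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

def independent3(v1, v2, v3):
    """True if three vectors in R^3 are linearly independent."""
    M = [[v1[i], v2[i], v3[i]] for i in range(3)]  # vectors as columns
    return det3(M) != 0

dep = independent3((1, 0, 2), (0, 1, 1), (1, 1, 3))   # False: v3 = v1 + v2
ind = independent3((1, 0, 2), (0, 1, 1), (0, 0, 1))   # True
```

In the dependent example, the third vector is exactly the sum of the first two, so some nonzero combination (v1 + v2 - v3) gives the zero vector.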

Basis

A basis for a vector space V is a set of vectors that is:

  1. Linearly independent
  2. Spans V (every vector in V is a linear combination of the basis vectors)

Every basis for the same space has the same number of vectors - this number is the dimension of the space.

Standard Basis for R^3

e1 = (1,0,0), e2 = (0,1,0), e3 = (0,0,1). Three vectors, so dim(R^3) = 3.

Basis for P_2

{1, t, t^2}. Three basis elements, so dim(P_2) = 3. Note: P_2 and R^3 have the same dimension.

Finding a Basis for Col(A)

Row reduce A. The columns of the original matrix A that correspond to pivot columns in the RREF form a basis for Col(A). Don't use the RREF columns themselves.

Finding a Basis for Nul(A)

Solve Ax = 0 by row reducing [A | 0]. Write the general solution in parametric vector form. The vectors multiplied by the free variables form a basis for Nul(A).

7. Linear Transformations

A function T: R^n → R^m is a linear transformation if:

  1. T(u + v) = T(u) + T(v) for all u, v
  2. T(cu) = cT(u) for all u and scalars c

Every linear transformation from R^n to R^m can be represented as multiplication by an m×n matrix A: T(x) = Ax.

Standard Matrix

To find the matrix for a transformation T:

A = [T(e1) | T(e2) | ... | T(en)]

Apply T to each standard basis vector and make those the columns.

Common Geometric Transformations in R^2

Rotation by θ: [cos θ -sin θ; sin θ cos θ]
Reflection across x-axis: [1 0; 0 -1]
Reflection across y-axis: [-1 0; 0 1]
Reflection across y = x: [0 1; 1 0]
Scaling by k: [k 0; 0 k]
Projection onto x-axis: [1 0; 0 0]
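The recipe for the standard matrix - apply T to each standard basis vector and take the results as columns - is completely mechanical. A sketch (the helper `standard_matrix` is ours) using a 90° counterclockwise rotation, whose matrix should match the rotation entry above at θ = 90°:

```python
def standard_matrix(T, n):
    """Build the n x n standard matrix of T: column j is T(e_j)."""
    images = []
    for j in range(n):
        e = [0] * n
        e[j] = 1                      # standard basis vector e_j
        images.append(T(e))
    # images[j] is T(e_j); transpose so each image becomes a column
    return [[images[j][i] for j in range(n)] for i in range(n)]

def rotate90(v):
    """Rotation by 90 degrees counterclockwise in R^2: (x, y) -> (-y, x)."""
    x, y = v
    return [-y, x]

A = standard_matrix(rotate90, 2)
# A = [[0, -1], [1, 0]], i.e. [cos 90  -sin 90; sin 90  cos 90]
```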

Key Properties

Injective vs Surjective
One-to-one (injective): columns are linearly independent → n ≤ m.
Onto (surjective): columns span R^m → n ≥ m.
Bijective (invertible): both → must have n = m (square matrix).

8. Eigenvalues and Eigenvectors

An eigenvector of a matrix A is a nonzero vector v such that:

Av = λv

The scalar λ is the eigenvalue. Geometrically: A doesn't change the direction of v, only scales it by λ.

Finding Eigenvalues

  1. Write the characteristic equation: det(A - λI) = 0
  2. Expand and solve the polynomial for λ

For a 2×2 matrix [a b; c d]:

det(A - λI) = (a-λ)(d-λ) - bc = 0
λ^2 - (a+d)λ + (ad-bc) = 0
λ^2 - trace(A)·λ + det(A) = 0

Finding Eigenvectors

For each eigenvalue λ:

  1. Compute A - λI
  2. Row reduce A - λI
  3. Solve (A - λI)x = 0
  4. The nonzero solutions are the eigenvectors for λ

Common mistake: The zero vector is NEVER an eigenvector, even though A·0 = λ·0 is technically true. Eigenvectors must be nonzero by definition.
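For 2×2 matrices the whole procedure fits in a few lines, using the trace/determinant form of the characteristic equation. A sketch (helper names are ours) that assumes real eigenvalues, i.e. a nonnegative discriminant:

```python
import math

def eig2(M):
    """Eigenvalues of a 2x2 matrix via lambda^2 - trace*lambda + det = 0."""
    (a, b), (c, d) = M
    tr, det = a + d, a * d - b * c
    disc = tr * tr - 4 * det          # assumes disc >= 0 (real eigenvalues)
    root = math.sqrt(disc)
    return (tr + root) / 2, (tr - root) / 2

def eigvec2(M, lam):
    """A nonzero solution of (A - lam*I)x = 0 for a 2x2 matrix."""
    (a, b), (c, d) = M
    # Row (a - lam, b) of A - lam*I gives the relation (a - lam)x + b*y = 0
    if b != 0:
        return (b, lam - a)
    if c != 0:
        return (lam - d, c)
    return (1, 0) if a == lam else (0, 1)  # A - lam*I is diagonal here

A = [[4, 1], [2, 3]]
l1, l2 = eig2(A)        # trace 7, det 10, so lambda = 5 and 2
v1 = eigvec2(A, l1)     # (1, 1): check A(1,1) = (5,5) = 5*(1,1)
```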

Key Properties

  1. The sum of the eigenvalues (counted with multiplicity) equals trace(A)
  2. The product of the eigenvalues (counted with multiplicity) equals det(A)
  3. The eigenvalues of a triangular matrix are its diagonal entries
  4. A is invertible if and only if 0 is not an eigenvalue
  5. Eigenvectors corresponding to distinct eigenvalues are linearly independent

Algebraic vs Geometric Multiplicity

Algebraic multiplicity a_i: the power of (λ - λ_i) in the characteristic polynomial
Geometric multiplicity g_i: dim(eigenspace) = dim(Nul(A - λ_i·I))

Always: 1 ≤ g_i ≤ a_i. If g_i = a_i for every eigenvalue, the matrix is diagonalizable.

9. Orthogonality and Least Squares

Orthogonal Vectors

Vectors u and v are orthogonal if u · v = 0. An orthogonal set is a set of pairwise orthogonal nonzero vectors. An orthonormal set is an orthogonal set where every vector has length 1.

Why orthogonal bases are powerful: To express a vector x in terms of an orthogonal basis {u1, ..., uk}, the coordinates are just dot products: x = (x·u1 / u1·u1)u1 + ... + (x·uk / uk·uk)uk. No row reduction needed.

Gram-Schmidt Process

Convert any basis {x1, x2, ..., xk} into an orthogonal basis {v1, v2, ..., vk}:

v1 = x1
v2 = x2 - (x2·v1 / v1·v1)·v1
v3 = x3 - (x3·v1 / v1·v1)·v1 - (x3·v2 / v2·v2)·v2
...

To get an orthonormal basis, normalize each vector: u_i = v_i / ||v_i||.
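The process translates directly into code. A plain-Python sketch of classical Gram-Schmidt exactly as written above (for floating-point work, the "modified" variant is preferred for numerical stability):

```python
def dot(u, v):
    """Dot product of two vectors."""
    return sum(a * b for a, b in zip(u, v))

def gram_schmidt(vectors):
    """Orthogonalize a list of linearly independent vectors."""
    ortho = []
    for x in vectors:
        v = list(x)
        for u in ortho:  # subtract the projection onto EVERY previous vector
            coeff = dot(x, u) / dot(u, u)
            v = [vi - coeff * ui for vi, ui in zip(v, u)]
        ortho.append(v)
    return ortho

basis = [(1, 1, 0), (1, 0, 1), (0, 1, 1)]
Q = gram_schmidt(basis)  # pairwise orthogonal; normalize each to get orthonormal
```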

Orthogonal Projection

The projection of vector y onto a subspace W with orthogonal basis {u1, ..., uk}:

proj_W(y) = (y·u1 / u1·u1)·u1 + ... + (y·uk / uk·uk)·uk

Least Squares

When Ax = b has no solution (inconsistent system), the least squares solution minimizes ||Ax - b||:

A^T A x̂ = A^T b

Solve this system (the "normal equations") for x̂. This is the foundation of linear regression.
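The normal equations make least squares concrete. A sketch that fits a line y = c0 + c1·t to four data points (hand-rolled helpers and a Cramer's-rule 2×2 solver, all illustrative; in practice you would use a library's least-squares routine):

```python
def transpose(M):
    """Swap rows and columns."""
    return [list(col) for col in zip(*M)]

def matmul(X, Y):
    """Matrix product."""
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*Y)] for row in X]

def solve2(M, rhs):
    """Solve a 2x2 linear system by Cramer's rule."""
    (a, b), (c, d) = M
    det = a * d - b * c
    return [(rhs[0] * d - b * rhs[1]) / det,
            (a * rhs[1] - rhs[0] * c) / det]

# Fit y = c0 + c1*t to four points (overdetermined: no exact solution)
ts = [0, 1, 2, 3]
ys = [1, 3, 4, 4]
A = [[1, t] for t in ts]                                # design matrix
At = transpose(A)
AtA = matmul(At, A)                                     # A^T A (2x2)
Atb = [sum(r * y for r, y in zip(row, ys)) for row in At]  # A^T b
c0, c1 = solve2(AtA, Atb)                               # normal equations
```

For this data the best-fit line is y = 1.5 + 1·t, exactly the x̂ that minimizes ||Ax - b||.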

10. Diagonalization

A matrix A is diagonalizable if there exists an invertible matrix P and a diagonal matrix D such that:

A = PDP^(-1)
where D = diag(λ1, λ2, ..., λn) (eigenvalues on the diagonal)
and P = [v1 | v2 | ... | vn] (eigenvectors as columns)

When is a Matrix Diagonalizable?

An n×n matrix A is diagonalizable if and only if it has n linearly independent eigenvectors - equivalently, if g_i = a_i for every eigenvalue. Two useful sufficient conditions:

  1. If A has n distinct eigenvalues, it is diagonalizable.
  2. If A is symmetric, it is diagonalizable (in fact, orthogonally diagonalizable).

Why Diagonalize?

Diagonalization makes matrix powers trivial:

A^k = P D^k P^(-1)
D^k = diag(λ1^k, λ2^k, ..., λn^k)

This is essential for solving systems of differential equations, computing Markov chain steady states, and analyzing dynamical systems.

Diagonalization checklist

  1. Find the eigenvalues (characteristic equation).
  2. For each eigenvalue, find a basis for its eigenspace.
  3. Check: do you have n linearly independent eigenvectors in total? If yes → diagonalizable.
  4. P = matrix of eigenvectors (as columns, matching the order of eigenvalues in D).
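The power formula is easy to verify on a small example. A sketch using a 2×2 matrix with eigenvalues 5 and 2 (eigenvectors found by hand; `Fraction` keeps P^(-1) exact):

```python
from fractions import Fraction

def matmul(X, Y):
    """Matrix product."""
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*Y)] for row in X]

# A has eigenvalues 5 and 2 with eigenvectors (1, 1) and (1, -2)
A = [[4, 1], [2, 3]]
P = [[1, 1], [1, -2]]                               # eigenvectors as columns
D = [[5, 0], [0, 2]]                                # matching eigenvalue order
detP = P[0][0] * P[1][1] - P[0][1] * P[1][0]        # det(P) = -3
Pinv = [[Fraction(P[1][1], detP), Fraction(-P[0][1], detP)],
        [Fraction(-P[1][0], detP), Fraction(P[0][0], detP)]]

k = 3
Dk = [[D[0][0] ** k, 0], [0, D[1][1] ** k]]         # D^k: just power the diagonal
Ak = matmul(matmul(P, Dk), Pinv)                    # A^k = P D^k P^(-1)

A3 = matmul(matmul(A, A), A)                        # direct computation, for comparison
```

Powering the diagonal is O(n) per step, versus a full matrix multiply per step for the direct route - the gap grows quickly for large k.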

11. Common Mistakes

  1. Forgetting AB ≠ BA. Matrix multiplication order matters. If T1 has matrix A and T2 has matrix B, then T2 ∘ T1 has matrix B·A, not A·B.
  2. Using RREF columns for Col(A) basis. The basis for Col(A) comes from the original matrix A, using the pivot column positions from RREF.
  3. Row reducing the augmented matrix for det. Don't augment - determinant only applies to square matrices. And track sign changes from row swaps.
  4. det(A + B) = det(A) + det(B). This is false. Determinant is multiplicative (det(AB) = det(A)det(B)), not additive.
  5. Confusing eigenvalues of A and A^(-1). If λ is an eigenvalue of A, then 1/λ is an eigenvalue of A^(-1) (same eigenvector).
  6. Declaring "no solution" when there's a free variable. Free variables mean infinitely many solutions, not no solution. No solution only occurs with an inconsistent row.
  7. Writing 0 as an eigenvector. Eigenvectors must be nonzero. The eigenspace includes 0 but the eigenvectors don't.
  8. Forgetting to check subspace conditions. To prove W is a subspace, you need all three conditions (zero vector, closure under addition, closure under scalar multiplication).
  9. Mixing up rank and nullity. Rank = number of pivot columns. Nullity = number of free variables = n - rank. They add up to n (columns), not m (rows).
  10. Gram-Schmidt order errors. Subtract projections onto ALL previous vectors, not just the first one. Each step removes the component along every vector computed so far.

12. Exam Strategy

Before the Exam

During the Exam

The Invertible Matrix Theorem (your best friend)

For an n×n matrix A, the following are ALL equivalent:

  1. A is invertible
  2. det(A) ≠ 0
  3. rank(A) = n
  4. Nul(A) = {0}
  5. The columns of A are linearly independent
  6. The columns of A span R^n
  7. Ax = b has a unique solution for every b
  8. A has n pivots
  9. 0 is NOT an eigenvalue of A

If you can show ANY one of these, you get all of them.

13. Frequently Asked Questions

What is linear algebra used for?
Linear algebra is foundational to computer science (graphics, machine learning, search engines), engineering (signal processing, control systems), physics (quantum mechanics, relativity), economics (input-output models, optimization), and data science (PCA, regression, neural networks). It's one of the most applied branches of mathematics.
Is linear algebra harder than calculus?
They're hard in different ways. Calculus is procedural - learn the rules and apply them. Linear algebra is more abstract - you need to think about spaces, transformations, and properties rather than just computing answers. Many students find the abstraction of vector spaces and proofs harder than calculus computations, but the actual calculations in linear algebra are often simpler.
How do I find eigenvalues and eigenvectors?
To find eigenvalues: solve det(A - λI) = 0 (the characteristic equation). This gives you the eigenvalues λ. To find eigenvectors: for each eigenvalue λ, solve (A - λI)x = 0 by row reducing A - λI. The non-trivial solutions form the eigenspace for that eigenvalue.
What's the difference between span and basis?
The span of a set of vectors is all possible linear combinations of those vectors - the set of all vectors you can "reach" by scaling and adding them. A basis is a spanning set that is also linearly independent - it spans the space with no redundant vectors. Every basis for a given space has the same number of vectors, which is the dimension of the space.
What should I study first in linear algebra?
Start with systems of linear equations and row reduction (Gaussian elimination) - this is the computational backbone. Then learn vectors, matrix operations, and determinants. Once you're comfortable with computation, move to abstract concepts: vector spaces, linear independence, basis, dimension. Finally tackle linear transformations and eigenvalues. Each concept builds on the previous ones.
