From systems of equations to eigenvalues. Everything you need for your university linear algebra course, whether it's MATA22, MATH 136, MATH 221, or any intro linear algebra class.
1. Systems of Linear Equations and Row Reduction
The three elementary row operations (each preserves the solution set):
Swap two rows (R_i ↔ R_j)
Multiply a row by a nonzero scalar (kR_i → R_i, k ≠ 0)
Replace a row by itself plus a multiple of another (R_i + kR_j → R_i)
Goal: get the matrix into row echelon form (REF) or reduced row echelon form (RREF).
REF vs RREF
REF: each row's leading entry sits strictly to the right of the leading entry in the row above, with zeros below each pivot. Good enough for back-substitution.
RREF: each pivot is 1, with zeros above AND below. Gives the solution directly - no back-substitution needed.
Solution Types
Pivot in every column of A → unique solution. Example (RREF): [1 0 | 3], [0 1 | 5]
Free variables (non-pivot columns) → infinitely many solutions. Example (RREF): [1 2 | 3], [0 0 | 0]
A row of the form [0 0 ... 0 | k] with k ≠ 0 → no solution (inconsistent). Example (RREF): [1 0 | 2], [0 0 | 5]
Common mistake
Forgetting to check for inconsistent rows. Always look for a row of the form [0 0 ... 0 | nonzero] before declaring a solution exists.
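The row-reduction procedure above can be sketched in a few lines of NumPy. This is an illustrative implementation (the function name `rref` and the pivot tolerance are my own choices), not a library routine you would use in production:

```python
import numpy as np

def rref(M, tol=1e-12):
    """Reduce a matrix to reduced row echelon form using the three row operations."""
    A = M.astype(float).copy()
    rows, cols = A.shape
    r = 0
    for c in range(cols):
        # pick the largest pivot candidate in column c, at or below row r
        p = r + np.argmax(np.abs(A[r:, c]))
        if abs(A[p, c]) < tol:
            continue                      # no pivot here -> free variable
        A[[r, p]] = A[[p, r]]             # op 1: swap rows
        A[r] /= A[r, c]                   # op 2: scale pivot row so pivot = 1
        for i in range(rows):
            if i != r:
                A[i] -= A[i, c] * A[r]    # op 3: clear the rest of the column
        r += 1
        if r == rows:
            break
    return A

# Augmented matrix for x + y = 8, x - y = 2
print(rref(np.array([[1, 1, 8], [1, -1, 2]])))   # reads off x = 5, y = 3
```

Running it on a dependent system such as [[1, 2, 3], [2, 4, 6]] produces a zero row and a non-pivot column, matching the "infinitely many solutions" case in the table above.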
2. Vectors and Vector Operations
Vectors are ordered lists of numbers. In R^n, a vector has n components.
Key Operations
Addition (result: vector): u + v = (u1+v1, u2+v2, ...)
Scalar multiplication (result: vector): cu = (cu1, cu2, ...)
Dot product (result: scalar): u · v = u1v1 + u2v2 + ...
Cross product, R^3 only (result: vector): u × v = (u2v3-u3v2, u3v1-u1v3, u1v2-u2v1)
Length/norm (result: scalar): ||u|| = sqrt(u · u)
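A quick NumPy sanity check of these operations (the vectors are made up for illustration):

```python
import numpy as np

u = np.array([1.0, 2.0, 2.0])
v = np.array([2.0, 0.0, 1.0])

print(u + v)              # addition: [3. 2. 3.]
print(3 * u)              # scalar multiplication: [3. 6. 6.]
print(np.dot(u, v))       # dot product (a scalar): 4.0
print(np.cross(u, v))     # cross product, R^3 only: [ 2.  3. -4.]
print(np.linalg.norm(u))  # length: sqrt(1 + 4 + 4) = 3.0
```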
Linear Combinations
A linear combination of vectors v1, v2, ..., vk is any expression of the form:
c1·v1 + c2·v2 + ... + ck·vk
where c1, c2, ..., ck are scalars. This is the single most important concept in linear algebra - almost everything else builds on it.
Span
The span of a set of vectors is the set of all their linear combinations. If vectors v1, ..., vk span R^n, then every vector in R^n can be written as a linear combination of them.
Geometric intuition
One vector in R^3 spans a line. Two non-parallel vectors span a plane. Three non-coplanar vectors span all of R^3. The span is the "reachable" space.
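Asking whether a vector b lies in a span is the same as asking whether a linear system is consistent. Here is a small NumPy sketch using least squares (the vectors are made up; if the residual of the least-squares fit is zero, b is in the span):

```python
import numpy as np

# Is b in span{v1, v2}?  Equivalent: does Ax = b have a solution,
# where the columns of A are v1 and v2?
v1 = np.array([1.0, 0.0, 1.0])
v2 = np.array([0.0, 1.0, 1.0])
b = np.array([2.0, 3.0, 5.0])          # equals 2*v1 + 3*v2

A = np.column_stack([v1, v2])
x, *_ = np.linalg.lstsq(A, b, rcond=None)
print(np.allclose(A @ x, b))           # True -> b is in the span
print(x)                               # the coefficients: [2. 3.]
```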
3. Matrices and Matrix Operations
A matrix is a rectangular array of numbers. An m×n matrix has m rows and n columns.
Matrix Multiplication
For AB to be defined, the number of columns of A must equal the number of rows of B.
A is m×n, B is n×p → AB is m×p
(AB)_ij = row i of A · column j of B
Matrix multiplication is NOT commutative
AB ≠ BA in general. This is one of the biggest sources of errors. Always check the order.
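A two-line NumPy check makes the warning concrete (B here is a permutation matrix, chosen for illustration):

```python
import numpy as np

A = np.array([[1, 2], [3, 4]])
B = np.array([[0, 1], [1, 0]])   # permutation matrix

print(A @ B)   # [[2 1] [4 3]] -- multiplying on the right swaps A's COLUMNS
print(B @ A)   # [[3 4] [1 2]] -- multiplying on the left swaps A's ROWS
```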
Special Matrices
Identity (I or I_n): AI = IA = A for all A
Diagonal (diag(d1, d2, ...)): all off-diagonal entries are 0
Symmetric: A = A^T
Upper triangular: all entries below the diagonal are 0
Lower triangular: all entries above the diagonal are 0
Inverse (A^(-1)): AA^(-1) = A^(-1)A = I
Finding the Inverse
To find A^(-1), augment A with I and row reduce:
[A | I] → row reduce → [I | A^(-1)]
If you can't reduce A to I (you get a row of zeros on the left), then A is not invertible (singular).
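In practice you would let a library do the row reduction. A NumPy sketch that also verifies AA^(-1) = I (the matrix is made up):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [5.0, 3.0]])                # det = 2*3 - 1*5 = 1, so invertible
Ainv = np.linalg.inv(A)
print(Ainv)                               # [[ 3. -1.] [-5.  2.]]
print(np.allclose(A @ Ainv, np.eye(2)))   # True: AA^-1 = I
```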
Common mistake
det(A + B) ≠ det(A) + det(B). The determinant is multiplicative, not additive.
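A numeric counterexample, using identity matrices for simplicity:

```python
import numpy as np

A = np.eye(2)
B = np.eye(2)
print(np.linalg.det(A + B))                  # det(2I) = 4.0
print(np.linalg.det(A) + np.linalg.det(B))   # 2.0 -- NOT equal: det is not additive
# But det IS multiplicative:
print(np.isclose(np.linalg.det(A @ B),
                 np.linalg.det(A) * np.linalg.det(B)))   # True
```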
5. Vector Spaces and Subspaces
A vector space is a set V with addition and scalar multiplication that satisfies 10 axioms (closure, associativity, commutativity, identity, inverses, etc.). The main examples:
R^n - the space of all n-tuples of real numbers
M_{m×n} - the space of all m×n matrices
P_n - the space of all polynomials of degree ≤ n
C[a,b] - the space of all continuous functions on [a,b]
Subspaces
A subspace of V is a subset W that is itself a vector space. To verify W is a subspace, check three things:
Zero vector: 0 ∈ W
Closed under addition: if u, w ∈ W, then u + w ∈ W
Closed under scalar multiplication: if w ∈ W and c is a scalar, then cw ∈ W
Four Fundamental Subspaces
For any m×n matrix A:
Column space Col(A) = {Ax : x ∈ R^n}, the span of the columns. Dimension: rank(A) = r
Row space Row(A) = span of the rows = Col(A^T). Dimension: rank(A) = r
Null space Nul(A) = {x : Ax = 0}. Dimension: n - r (the nullity)
Left null space Nul(A^T) = {y : A^T y = 0}. Dimension: m - r
Rank-Nullity Theorem
rank(A) + nullity(A) = n (number of columns). This is one of the most useful theorems for solving problems - if you know the rank, you automatically know the dimension of the null space.
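The theorem is easy to confirm numerically with NumPy's `matrix_rank` (the example matrix is made up):

```python
import numpy as np

A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0],     # = 2 * row 1, so the rows are dependent
              [1.0, 0.0, 1.0]])
rank = np.linalg.matrix_rank(A)
nullity = A.shape[1] - rank        # rank-nullity: nullity = n - rank
print(rank, nullity)               # 2 1
```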
6. Linear Independence, Basis, and Dimension
Linear Independence
Vectors v1, v2, ..., vk are linearly independent if the only solution to:
c1·v1 + c2·v2 + ... + ck·vk = 0
is c1 = c2 = ... = ck = 0. If any other solution exists, the vectors are linearly dependent - meaning at least one vector is a linear combination of the others.
How to Check Independence
Form a matrix with the vectors as columns. Row reduce. If every column has a pivot, they're independent. If any column lacks a pivot, they're dependent.
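The pivot test translates directly to a rank check in NumPy, since rank = number of pivot columns (the vectors here are made up):

```python
import numpy as np

v1 = np.array([1.0, 0.0, 1.0])
v2 = np.array([0.0, 1.0, 1.0])
v3 = np.array([1.0, 1.0, 2.0])       # = v1 + v2, so the set is dependent

M = np.column_stack([v1, v2, v3])    # vectors as columns
independent = np.linalg.matrix_rank(M) == M.shape[1]   # pivot in every column?
print(independent)                   # False
```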
Basis
A basis for a vector space V is a set of vectors that is:
Linearly independent
Spans V (every vector in V is a linear combination of the basis vectors)
Every basis for the same space has the same number of vectors - this number is the dimension of the space.
Standard Basis for R^3
e1 = (1,0,0), e2 = (0,1,0), e3 = (0,0,1). Three vectors, so dim(R^3) = 3.
Basis for P_2
{1, t, t^2}. Three basis elements, so dim(P_2) = 3. Note: P_2 and R^3 have the same dimension.
Finding a Basis for Col(A)
Row reduce A. The columns of the original matrix A that correspond to pivot columns in the RREF form a basis for Col(A). Don't use the RREF columns themselves.
Finding a Basis for Nul(A)
Solve Ax = 0 by row reducing [A | 0]. Write the general solution in parametric vector form. The vectors multiplied by the free variables form a basis for Nul(A).
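If SciPy is available, `scipy.linalg.null_space` returns an orthonormal basis for Nul(A). It won't match the parametric-form basis entry for entry, but it spans the same subspace:

```python
import numpy as np
from scipy.linalg import null_space

A = np.array([[1.0, 2.0],
              [2.0, 4.0]])    # rank 1, so nullity = 2 - 1 = 1
N = null_space(A)             # columns form an orthonormal basis for Nul(A)
print(N.shape)                # (2, 1) -- one basis vector
print(np.allclose(A @ N, 0))  # True: every basis vector satisfies Ax = 0
```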
7. Linear Transformations
A function T: R^n → R^m is a linear transformation if:
T(u + v) = T(u) + T(v) for all u, v
T(cu) = cT(u) for all u and scalars c
Every linear transformation from R^n to R^m can be represented as multiplication by an m×n matrix A: T(x) = Ax.
Standard Matrix
To find the matrix for a transformation T:
A = [T(e1) | T(e2) | ... | T(en)]
Apply T to each standard basis vector and make those the columns.
Common Geometric Transformations in R^2
Rotation by θ: [cos θ  -sin θ; sin θ  cos θ]
Reflection across the x-axis: [1 0; 0 -1]
Reflection across the y-axis: [-1 0; 0 1]
Reflection across y = x: [0 1; 1 0]
Scaling by k: [k 0; 0 k]
Projection onto the x-axis: [1 0; 0 0]
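As a sanity check on the rotation matrix, rotating e1 by 90° should land on e2 (the angle is chosen for illustration):

```python
import numpy as np

theta = np.pi / 2                              # 90 degrees counterclockwise
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
e1 = np.array([1.0, 0.0])
print(np.allclose(R @ e1, [0.0, 1.0]))         # True: e1 rotates onto e2
```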
Key Properties
Kernel (null space) of T: all vectors x where T(x) = 0. If kernel = {0}, T is one-to-one.
Range (column space) of the matrix: all vectors that T can produce. If range = R^m, T is onto.
Composition: T2 ∘ T1 (apply T1 first, then T2) corresponds to the matrix product B·A, where A is the matrix of T1 and B is the matrix of T2. Read compositions right to left.
Injective vs Surjective
One-to-one (injective): columns are linearly independent → n ≤ m.
Onto (surjective): columns span R^m → n ≥ m.
Bijective (invertible): both → must be n = m (square matrix).
8. Eigenvalues and Eigenvectors
An eigenvector of a matrix A is a nonzero vector v such that:
Av = λv
The scalar λ is the eigenvalue. Geometrically: A doesn't change the direction of v, only scales it by λ.
Finding Eigenvalues
Write the characteristic equation: det(A - λI) = 0
Expand and solve the polynomial for λ
For a 2×2 matrix [a b; c d]:
det(A - λI) = (a-λ)(d-λ) - bc = 0
λ^2 - (a+d)λ + (ad-bc) = 0
λ^2 - trace(A)·λ + det(A) = 0
Finding Eigenvectors
For each eigenvalue λ:
Compute A - λI
Row reduce A - λI
Solve (A - λI)x = 0
The nonzero solutions are the eigenvectors for λ
Common mistake
The zero vector is NEVER an eigenvector, even though A·0 = λ·0 is technically true. Eigenvectors must be nonzero by definition.
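NumPy's `eig` carries out both steps at once; it's worth verifying Av = λv afterward (the matrix is made up for illustration):

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])           # char. poly: l^2 - 7l + 10 -> l = 5, 2
eigvals, eigvecs = np.linalg.eig(A)  # columns of eigvecs are the eigenvectors
for lam, v in zip(eigvals, eigvecs.T):
    print(np.allclose(A @ v, lam * v))   # True for each pair: Av = lambda v
print(sorted(eigvals.real))              # approximately [2.0, 5.0]
```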
Key Properties
The eigenspace for λ is Nul(A - λI) - it's always a subspace
trace(A) = sum of eigenvalues (with multiplicity)
det(A) = product of eigenvalues
If A is triangular, the eigenvalues are the diagonal entries
Eigenvectors for distinct eigenvalues are always linearly independent
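Several of these properties are easy to confirm numerically; here is a sketch with a made-up triangular matrix:

```python
import numpy as np

T = np.array([[2.0, 1.0],
              [0.0, 3.0]])                               # upper triangular
vals = np.linalg.eigvals(T)
print(np.allclose(sorted(vals.real), [2.0, 3.0]))        # True: the diagonal entries
print(np.isclose(np.sum(vals).real, np.trace(T)))        # True: trace = sum of eigenvalues
print(np.isclose(np.prod(vals).real, np.linalg.det(T)))  # True: det = product of eigenvalues
```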
Algebraic vs Geometric Multiplicity
Algebraic multiplicity a_i: the power of (λ - λ_i) in the characteristic polynomial
Geometric multiplicity g_i: dim(eigenspace for λ_i) = dim(Nul(A - λ_i I))
Always: 1 ≤ g_i ≤ a_i. If g_i = a_i for every eigenvalue, the matrix is diagonalizable.
9. Orthogonality and Least Squares
Orthogonal Vectors
Vectors u and v are orthogonal if u · v = 0. An orthogonal set is a set of pairwise orthogonal nonzero vectors. An orthonormal set is an orthogonal set where every vector has length 1.
Why orthogonal bases are powerful
To express a vector x in terms of an orthogonal basis {u1, ..., uk}, the coordinates are just dot products:
x = (x·u1 / u1·u1)u1 + ... + (x·uk / uk·uk)uk.
No row reduction needed.
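A quick check of the coordinate formula, using a made-up orthogonal (not orthonormal) basis of R^2:

```python
import numpy as np

u1 = np.array([1.0, 1.0])
u2 = np.array([1.0, -1.0])           # u1 . u2 = 0: orthogonal basis
x = np.array([3.0, 1.0])

c1 = np.dot(x, u1) / np.dot(u1, u1)  # 4/2 = 2
c2 = np.dot(x, u2) / np.dot(u2, u2)  # 2/2 = 1
print(np.allclose(c1 * u1 + c2 * u2, x))   # True -- coordinates from dot products alone
```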
Gram-Schmidt Process
Convert any basis {x1, x2, ..., xk} into an orthogonal basis {v1, v2, ..., vk} by subtracting, at each step, the projections onto all previously computed vectors:
v1 = x1
v2 = x2 - (x2·v1 / v1·v1)v1
v3 = x3 - (x3·v1 / v1·v1)v1 - (x3·v2 / v2·v2)v2
...and so on. Normalize each v_i (divide by its length) if you want an orthonormal basis.
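A minimal NumPy sketch of the Gram-Schmidt process (the modified variant, which subtracts projections one at a time; the function name is my own):

```python
import numpy as np

def gram_schmidt(X):
    """Orthogonalize the columns of X, subtracting projections onto ALL previous v's."""
    V = []
    for x in X.T:
        v = x.astype(float).copy()
        for u in V:
            v -= (v @ u) / (u @ u) * u   # remove the component along u
        V.append(v)
    return np.column_stack(V)

X = np.array([[1.0, 1.0],
              [1.0, 0.0]])
V = gram_schmidt(X)
print(V)                                 # columns: (1, 1) and (0.5, -0.5)
print(np.isclose(V[:, 0] @ V[:, 1], 0))  # True: the columns are orthogonal
```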
10. Diagonalization
A square matrix A is diagonalizable if it can be written A = PDP^(-1), where D is diagonal (the eigenvalues) and P is invertible (the corresponding eigenvectors as columns). This is essential for solving systems of differential equations, computing Markov chain steady states, and analyzing dynamical systems.
Diagonalization checklist
1. Find eigenvalues (characteristic equation).
2. For each eigenvalue, find the eigenspace (basis of eigenvectors).
3. Check: do you have n total linearly independent eigenvectors? If yes → diagonalizable.
4. P = matrix of eigenvectors (as columns, matching order of eigenvalues in D).
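The checklist above, in NumPy form: `eig` returns D's diagonal entries and P's columns in matching order, and a diagonalizable A satisfies A = PDP^(-1). The matrix is made up; its distinct eigenvalues guarantee diagonalizability here:

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])            # distinct eigenvalues 5 and 2
D_vals, P = np.linalg.eig(A)          # columns of P = eigenvectors, in order
D = np.diag(D_vals)
print(np.allclose(P @ D @ np.linalg.inv(P), A))   # True: A = P D P^-1
# Powers become trivial: A^5 = P D^5 P^-1
print(np.allclose(P @ np.diag(D_vals**5) @ np.linalg.inv(P),
                  np.linalg.matrix_power(A, 5)))  # True
```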
11. Common Mistakes
Forgetting AB ≠ BA. Matrix multiplication order matters. T2 ∘ T1 = B·A, not A·B.
Using RREF columns for Col(A) basis. The basis for Col(A) comes from the original matrix A, using the pivot column positions from RREF.
Row reducing the augmented matrix for det. Don't augment - determinant only applies to square matrices. And track sign changes from row swaps.
det(A + B) = det(A) + det(B). This is false. Determinant is multiplicative (det(AB) = det(A)det(B)), not additive.
Confusing eigenvalues of A and A^(-1). If λ is an eigenvalue of A, then 1/λ is an eigenvalue of A^(-1) (same eigenvector).
Declaring "no solution" when there's a free variable. Free variables mean infinitely many solutions, not no solution. No solution only occurs with an inconsistent row.
Writing 0 as an eigenvector. Eigenvectors must be nonzero. The eigenspace includes 0 but the eigenvectors don't.
Forgetting to check subspace conditions. To prove W is a subspace, you need all three conditions (zero vector, closure under addition, closure under scalar multiplication).
Mixing up rank and nullity. Rank = number of pivot columns. Nullity = number of free variables = n - rank. They add up to n (columns), not m (rows).
Gram-Schmidt order errors. Subtract projections onto ALL previous vectors, not just the first one. Each step removes the component along every vector computed so far.
12. Exam Strategy
Before the Exam
Practice row reduction until it's automatic - most problems reduce to row reduction at some point
Know the big equivalences: A is invertible ↔ det(A) ≠ 0 ↔ rank(A) = n ↔ Nul(A) = {0} ↔ columns are independent ↔ rows are independent ↔ 0 is not an eigenvalue
Do full past exams under time pressure - linear algebra exams are usually time-constrained
During the Exam
Read the whole exam first. Start with problems you know how to do. Save the tricky proofs for last.
Check dimensions. If you're multiplying matrices, verify the dimensions work out. If you're finding eigenvectors of a 3×3 matrix, you should get vectors in R^3.
Use the Invertible Matrix Theorem. If any part of a problem establishes that A is invertible (det ≠ 0, full rank, independent columns, etc.), you get ALL the equivalent conditions for free.
Verify by substitution. Found an eigenvalue/eigenvector? Multiply Av and check that it equals λv. Found a basis for Nul(A)? Multiply Ax and check you get 0.
Partial credit on proofs. State what you'd need to show, write the definitions, set up the structure - even if you can't finish.
The Invertible Matrix Theorem (your best friend)
For an n×n matrix A, the following are ALL equivalent: A is invertible, det(A) ≠ 0, rank(A) = n, Nul(A) = {0}, columns of A are linearly independent, columns span R^n, Ax = b has a unique solution for every b, A has n pivots, 0 is NOT an eigenvalue of A. If you can show ANY one of these, you get all of them.
13. Frequently Asked Questions
What is linear algebra used for?
Linear algebra is foundational to computer science (graphics, machine learning, search engines), engineering (signal processing, control systems), physics (quantum mechanics, relativity), economics (input-output models, optimization), and data science (PCA, regression, neural networks). It's one of the most applied branches of mathematics.
Is linear algebra harder than calculus?
They're hard in different ways. Calculus is procedural - learn the rules and apply them. Linear algebra is more abstract - you need to think about spaces, transformations, and properties rather than just computing answers. Many students find the abstraction of vector spaces and proofs harder than calculus computations, but the actual calculations in linear algebra are often simpler.
How do I find eigenvalues and eigenvectors?
To find eigenvalues: solve det(A - λI) = 0 (the characteristic equation). This gives you the eigenvalues λ. To find eigenvectors: for each eigenvalue λ, solve (A - λI)x = 0 by row reducing A - λI. The non-trivial solutions form the eigenspace for that eigenvalue.
What's the difference between span and basis?
The span of a set of vectors is all possible linear combinations of those vectors - the set of all vectors you can "reach" by scaling and adding them. A basis is a spanning set that is also linearly independent - it spans the space with no redundant vectors. Every basis for a given space has the same number of vectors, which is the dimension of the space.
What should I study first in linear algebra?
Start with systems of linear equations and row reduction (Gaussian elimination) - this is the computational backbone. Then learn vectors, matrix operations, and determinants. Once you're comfortable with computation, move to abstract concepts: vector spaces, linear independence, basis, dimension. Finally tackle linear transformations and eigenvalues. Each concept builds on the previous ones.
Struggling with linear algebra?
Koa's AI tutor walks you through matrix operations, proofs, and eigenvalue problems step by step - with practice questions along the way.