🧠 AI Computer Institute
Content is AI-generated for educational purposes. Verify critical information independently. A bharath.ai initiative.

Linear Algebra Cheat Sheet

Math · Grades 10–12 · 5 sections

Visual Overview: Vector Operations

[Figure: vector addition (parallelogram rule, A + B), dot product u · v = |u| |v| cos(θ) with perpendicularity u · v = 0, and matrix multiplication A (2×2) × B (2×1) = C (2×1)]

Operation Complexity & Properties:

Operation            Complexity    Property                        Key Use
Dot Product          O(n)          Commutative: u·v = v·u          Similarity, angle
Matrix Mult          O(n³) naive   Non-commutative: AB ≠ BA        Linear transforms
Vector Addition      O(n)          Commutative: u+v = v+u          Superposition
Cross Product (3D)   O(1)          Anti-commutative: u×v = -v×u    Perpendicular vector
Transpose            O(n²)         (A^T)^T = A                     Swap rows/columns

Core vector and matrix operations: addition (parallelogram), dot product (angle), and matrix multiplication

Vectors & Matrices

// Vectors: 1D array of numbers
v = [1, 2, 3] (column vector, 3×1)
u = [4, 5, 6]

// Magnitude (length)
|v| = √(1² + 2² + 3²) = √14

// Dot product (scalar)
u · v = 1×4 + 2×5 + 3×6 = 32
Geometric: |u| |v| cos(θ)
Orthogonal: u · v = 0 (perpendicular, θ = 90°)

// Cross product (3D only, vector)
u × v = perpendicular vector to both
Magnitude: |u × v| = |u| |v| sin(θ)
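The vector operations above map directly onto NumPy. A minimal sketch (array values follow the example above; the library calls are standard `numpy`):

```python
import numpy as np

v = np.array([1, 2, 3])
u = np.array([4, 5, 6])

# Magnitude: |v| = sqrt(1² + 2² + 3²) = sqrt(14)
mag_v = np.linalg.norm(v)

# Dot product: 1*4 + 2*5 + 3*6 = 32
dot_uv = np.dot(u, v)

# Angle from u · v = |u| |v| cos(θ)
cos_theta = dot_uv / (np.linalg.norm(u) * mag_v)
theta = np.arccos(cos_theta)  # radians

# Cross product: a vector perpendicular to both u and v
cross_uv = np.cross(u, v)
# Check: (u × v) · u = 0 and (u × v) · v = 0
```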

// Matrices: 2D array (m rows, n columns)
A = [1 2 3]   (2×3 matrix)
    [4 5 6]

A^T = [1 4]   (transpose, swap rows/columns)
      [2 5]
      [3 6]

// Matrix operations
A + B: Element-wise addition
A × B: Matrix multiplication (m×n × n×p = m×p, inner dimensions must match)
C[i,j] = Σ A[i,k] × B[k,j]
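The shape rule and the C[i,j] sum can be checked in NumPy. A sketch (the matrix B here is an illustrative choice, not from the text):

```python
import numpy as np

A = np.array([[1, 2, 3],
              [4, 5, 6]])      # 2×3, from the example above
B = np.array([[1, 0],
              [0, 1],
              [1, 1]])         # 3×2, chosen for illustration

C = A @ B                      # (2×3)(3×2) = 2×2
# Each entry is C[i,j] = Σ_k A[i,k] * B[k,j]
A_T = A.T                      # transpose: 3×2, and (A^T)^T = A
```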

// Identity matrix (I)
I = [1 0 0]
    [0 1 0]
    [0 0 1]
A × I = A

// Inverse (A^-1)
A × A^-1 = I
Only square, non-singular matrices have an inverse
Singular: determinant = 0 (no inverse exists)
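Inverses are rarely computed by hand; `np.linalg` handles it. A sketch with an illustrative 2×2 matrix (values are an assumption, not from the text):

```python
import numpy as np

A = np.array([[4.0, 7.0],
              [2.0, 6.0]])
det_A = np.linalg.det(A)       # 4*6 - 7*2 = 10, nonzero → invertible
A_inv = np.linalg.inv(A)

# A × A^-1 ≈ I, up to floating-point rounding
identity = A @ A_inv
```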

// Determinant (2×2)
det [a b] = ad - bc
    [c d]

// Rank: Number of independent rows/columns
rank(A) ≤ min(m, n)
Full rank: rank = min(m, n)
Singular: rank < min(m, n)
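Rank is computed numerically via SVD under the hood. A sketch with a matrix whose second row is a multiple of the first (an illustrative example):

```python
import numpy as np

A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0],   # 2 × row 0 → not independent
              [0.0, 1.0, 1.0]])
r = np.linalg.matrix_rank(A)     # 2 < min(3, 3) → singular
```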

Eigenvalues & Eigenvectors

// Eigenvector & Eigenvalue
A × v = λ × v
v: Eigenvector (special direction)
λ: Eigenvalue (scaling factor)

Intuition: Multiplying by A scales v by λ

// Finding eigenvalues
det(A - λI) = 0  (Characteristic equation)
Solve for λ

// Finding eigenvectors
For each λ:
(A - λI) × v = 0
Solve for v (null space of A - λI)

// Properties
Symmetric matrix: Real eigenvalues, orthogonal eigenvectors
Determinant: Product of eigenvalues
Trace: Sum of eigenvalues
Matrix power: A^n easy with eigenvalues
Diagonal form: A = P × D × P^-1
  Where D is diagonal (eigenvalues)
  And P columns are eigenvectors
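All of these properties can be verified numerically. A sketch using `np.linalg.eig` on an illustrative 2×2 matrix (values assumed for the example):

```python
import numpy as np

A = np.array([[3.0, 1.0],
              [0.0, 2.0]])
eigvals, P = np.linalg.eig(A)    # columns of P are eigenvectors

# Check A v = λ v for each eigenpair
for lam, v in zip(eigvals, P.T):
    assert np.allclose(A @ v, lam * v)

# Diagonal form: A = P D P^-1 with D diagonal
D = np.diag(eigvals)
A_rebuilt = P @ D @ np.linalg.inv(P)

# Determinant = product of eigenvalues, trace = sum of eigenvalues
```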

// Principal Component Analysis (PCA)
Find eigenvectors of covariance matrix
Largest eigenvalues = most variance directions
Use for: Dimensionality reduction, visualization
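A minimal PCA sketch along those lines, using synthetic toy data (the data-generating step is an assumption for illustration; the eigendecomposition of the covariance matrix is the method described above):

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy data: 200 points, stretched much more along one axis
X = rng.normal(size=(200, 2)) @ np.array([[3.0, 0.0],
                                          [0.0, 0.5]])
X = X - X.mean(axis=0)                   # center the data

cov = np.cov(X, rowvar=False)            # 2×2 covariance matrix
eigvals, eigvecs = np.linalg.eigh(cov)   # eigh: for symmetric matrices

# Sort descending: largest eigenvalue = direction of most variance
order = np.argsort(eigvals)[::-1]
components = eigvecs[:, order]

# Dimensionality reduction: project onto the top component (2D → 1D)
X_reduced = X @ components[:, :1]
```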

// Example: 2×2
A = [4 1]
    [1 4]

det(A - λI) = (4-λ)² - 1 = 0
λ² - 8λ + 15 = 0
λ = 5 or λ = 3

Eigenvector for λ=5: [1/√2, 1/√2]
Eigenvector for λ=3: [1/√2, -1/√2]
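The worked example checks out numerically:

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [1.0, 4.0]])
# A is symmetric, so eigh applies; it returns eigenvalues in ascending order
eigvals, eigvecs = np.linalg.eigh(A)     # → λ = 3, then λ = 5

# Eigenvector for λ = 5 is [1/√2, 1/√2] up to sign
v5 = eigvecs[:, 1]
```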

Linear Transformations

// Linear transformation: T(v) = Av
Preserves: Additivity T(u+v) = T(u) + T(v)
           Homogeneity T(cv) = c×T(v)

// Common 2D transformations
Scaling: [s  0] (scale by s)
         [0  s]

Rotation by θ: [cos θ  -sin θ]
               [sin θ   cos θ]

Shear: [1  k] (horizontal shear)
       [0  1]

Reflection over x-axis: [1   0]
                        [0  -1]

Projection onto x-axis: [1 0]
                        [0 0]

// Composition: Apply multiple transformations
T1(T2(v)) = A1 × A2 × v
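A sketch of the rotation matrix and of composition by matrix product (the test vector and angle are illustrative choices):

```python
import numpy as np

theta = np.pi / 2                          # rotate 90° counterclockwise
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
S = np.array([[2.0, 0.0],
              [0.0, 2.0]])                 # uniform scale by 2

v = np.array([1.0, 0.0])

# Composition: scale first, then rotate → single matrix R @ S
w = R @ S @ v                              # [1, 0] → [2, 0] → [0, 2]

# Linearity check: T(u + v) = T(u) + T(v)
u = np.array([0.0, 1.0])
assert np.allclose(R @ (u + v), R @ u + R @ v)
```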

// Rank-Nullity Theorem
rank(A) + nullity(A) = n  (n = number of columns)
rank: Dimension of image (column space)
nullity: Dimension of null space (solutions to Av = 0)
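The theorem can be illustrated by extracting a null-space basis from the SVD (the matrix is an assumed example with dependent rows):

```python
import numpy as np

A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0]])            # rows dependent → rank 1
n = A.shape[1]                             # n = 3 columns
rank = np.linalg.matrix_rank(A)

# Null-space basis: right-singular vectors beyond the rank
_, s, Vt = np.linalg.svd(A)
null_basis = Vt[rank:]                     # shape (nullity, n)

nullity = null_basis.shape[0]              # rank + nullity = n
```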

// Determinant & volume
|det(A)| = factor by which area/volume scales
det(A) = 0: Transformation collapses dimension (non-invertible)
det(A) < 0: Transformation reflects
det(A) > 0: Transformation preserves orientation

Vector Spaces

// Vector space: Set of vectors with operations
Must satisfy:
- Closure (u + v in space)
- Associativity, commutativity
- Identity element (zero vector)
- Inverse elements
- Scalar multiplication

// Span: All linear combinations
span{v1, v2, ...} = {c1×v1 + c2×v2 + ...}
All vectors reachable as linear combinations of these vectors

// Linear dependence
Linearly dependent: One vector is combination of others
Linearly independent: No vector is combination

Example:
v1 = [1, 0], v2 = [2, 0], v3 = [3, 0]
v3 = 3×v1, so dependent
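Dependence can be detected by rank: stack the vectors and compare the rank to the number of vectors. A sketch using the example above:

```python
import numpy as np

v1 = np.array([1.0, 0.0])
v2 = np.array([2.0, 0.0])
v3 = np.array([3.0, 0.0])

# Stack as rows; rank < number of vectors ⇒ linearly dependent
M = np.vstack([v1, v2, v3])
dependent = np.linalg.matrix_rank(M) < M.shape[0]
```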

// Basis: Minimal spanning set
Linearly independent
Spans entire space
Standard basis: {[1,0,0], [0,1,0], [0,0,1]} for R³

// Dimension: Size of basis
R² has dimension 2
R³ has dimension 3
Dimensionality reduction: Project to lower dimension

// Inner product spaces
Define: ⟨u, v⟩ (generalizes dot product)
Enable: Orthogonality, projections, angles

Orthogonal: ⟨u, v⟩ = 0 (perpendicular)
Orthonormal: Orthogonal + unit length (|v| = 1)

// Gram-Schmidt (orthonormalization)
Convert basis to orthonormal basis
u1 = v1 / |v1|
u2 = (v2 - ⟨v2, u1⟩×u1) / |v2 - ⟨v2, u1⟩×u1|
...
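The recursion above (normalize, then subtract each projection) translates to a short function. A classical Gram-Schmidt sketch, assuming the input vectors are linearly independent (the two input vectors are an illustrative choice):

```python
import numpy as np

def gram_schmidt(vectors):
    """Classical Gram-Schmidt: independent vectors → orthonormal set."""
    basis = []
    for v in vectors:
        w = v.astype(float)
        for u in basis:
            w = w - np.dot(v, u) * u       # subtract projection ⟨v, u⟩ u
        basis.append(w / np.linalg.norm(w))
    return np.array(basis)

Q = gram_schmidt([np.array([1.0, 1.0, 0.0]),
                  np.array([1.0, 0.0, 1.0])])
# Rows of Q are orthonormal: Q @ Q.T ≈ I
```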

// Useful in ML
Orthonormal transformation: Preserves distances and angles
Useful for: Rotation, reflection
Computational benefits: Numerically stable (condition number 1)
