Mnemosyne

Linear Algebra

The language of data and transformations — vectors, matrices, decompositions, and the geometry of machine learning.

Vectors

A vector is an ordered list of numbers that represents both a position in space and a direction. Dot products, norms, and projections are the three operations that power similarity search, attention, and regression in AI.
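A minimal NumPy sketch of all three operations, using toy two-dimensional vectors (the values are illustrative, not from the text):

```python
import numpy as np

a = np.array([3.0, 4.0])
b = np.array([4.0, 3.0])

dot = a @ b                       # dot product: 3*4 + 4*3 = 24
norm_a = np.linalg.norm(a)        # Euclidean norm: sqrt(9 + 16) = 5

# cosine similarity: the dot product normalized by both norms, in [-1, 1]
cosine = dot / (np.linalg.norm(a) * np.linalg.norm(b))

# projection of a onto b: the component of a pointing along b
proj = (dot / (b @ b)) * b
```

Cosine similarity over embedding vectors is exactly this computation at scale, which is why the dot product dominates similarity search and attention.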

Matrices

A matrix is a rectangular grid of numbers. Matrix multiplication composes transformations, and the transpose flips rows into columns. These two operations are the foundation of every neural network layer.
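A small NumPy sketch of composition and the transpose, using a rotation and a scaling as the example transformations (my choice of matrices, purely illustrative):

```python
import numpy as np

R = np.array([[0.0, -1.0],
              [1.0,  0.0]])   # rotate 90 degrees counterclockwise
S = np.array([[2.0, 0.0],
              [0.0, 2.0]])    # scale everything by 2

v = np.array([1.0, 0.0])

# Matrix multiplication composes transformations: RS applies S first, then R
RS = R @ S
composed = RS @ v              # identical to R @ (S @ v)

# Transposing a product flips rows into columns and reverses the order
flipped = S.T @ R.T            # equals (R @ S).T
```

The order-reversal under transpose, (RS)ᵀ = SᵀRᵀ, is the same identity that shows up in backpropagation, where gradients flow through a layer via the transposed weight matrix.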

Matrix-Vector Multiplication

Multiplying a matrix by a vector produces a new vector. The matrix is a transformation — it rotates, scales, or projects the input into a new space. Every fully connected neural network layer is this operation, followed by a bias and a nonlinearity.
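A quick NumPy sketch: a 3×2 matrix maps a 2-D input to a 3-D output, and each output component is a dot product between one row of the matrix and the input (toy values, chosen only for illustration):

```python
import numpy as np

W = np.array([[1.0,  2.0],
              [0.0,  1.0],
              [3.0, -1.0]])   # 3x2: transforms R^2 into R^3
x = np.array([2.0, 1.0])

y = W @ x
# row by row: [1*2 + 2*1, 0*2 + 1*1, 3*2 - 1*1] = [4, 1, 5]
```

In a neural network, W would be the layer's weight matrix and x the activations from the previous layer.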

Linear Combinations and Span

A linear combination scales and adds vectors together. The span is all the points reachable by those combinations — the entire space those vectors can fill. This defines what a model can and cannot represent.
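One way to make span concrete is to ask whether a target vector is reachable as a combination of given vectors; a least-squares solve answers this (a sketch with made-up vectors, assuming NumPy):

```python
import numpy as np

v1 = np.array([1.0, 0.0, 1.0])
v2 = np.array([0.0, 1.0, 1.0])
A = np.column_stack([v1, v2])      # columns span a 2-D plane inside R^3

target = np.array([2.0, 3.0, 5.0]) # happens to equal 2*v1 + 3*v2

# lstsq finds the best coefficients; a near-zero residual means
# the target lies in the span of v1 and v2
coeffs, residual, *_ = np.linalg.lstsq(A, target, rcond=None)
```

If the residual were large instead, the target would lie outside the plane the two vectors can fill — the geometric meaning of "cannot represent."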

Basis and Dimensionality

A basis is the minimal set of independent vectors that spans a space — a coordinate system. Dimensionality is how many basis vectors are needed. These concepts determine how much information a representation can hold.
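The rank of a matrix counts how many independent directions its rows actually contribute, which makes dependence easy to demonstrate (toy vectors of my choosing, assuming NumPy):

```python
import numpy as np

v1 = np.array([1.0, 0.0, 0.0])
v2 = np.array([0.0, 1.0, 0.0])
v3 = v1 + v2                        # dependent: adds no new direction

M = np.stack([v1, v2, v3])          # three rows, but only two are independent
rank = np.linalg.matrix_rank(M)     # the dimensionality of their span
```

Three vectors, but rank 2: the span is a plane, and any basis for it needs exactly two vectors.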

Eigenvalues and Eigenvectors

An eigenvector is a direction a matrix only stretches (or flips, if the eigenvalue is negative), never rotates off its own line. The eigenvalue is the stretch factor. This is the intuition behind PCA, optimization landscapes, and why certain network behaviors emerge.
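A NumPy sketch that verifies the defining equation Av = λv on a small symmetric matrix (illustrative values; symmetric so the eigenvectors come out orthogonal):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

vals, vecs = np.linalg.eig(A)   # eigenvalues and eigenvectors (as columns)

# Applying A to an eigenvector only rescales it by its eigenvalue
v0 = vecs[:, 0]
stretched = A @ v0              # equals vals[0] * v0
```

For this matrix the eigenvalues are 3 and 1: along one diagonal direction the map stretches by 3, along the other by 1, and every other input is some mix of the two.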

Matrix Decomposition

Matrix decomposition breaks a matrix into simpler structured factors. SVD writes any matrix as a rotation, then a scaling, then another rotation — revealing how much information each direction carries. This unlocks PCA, compression, and LoRA fine-tuning.
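A NumPy sketch of the SVD factorization and the low-rank idea behind compression and LoRA (toy matrix, purely illustrative):

```python
import numpy as np

A = np.array([[3.0, 1.0, 1.0],
              [1.0, 3.0, 1.0]])

# A = U @ diag(s) @ Vt: rotation, axis-aligned scaling, rotation
U, s, Vt = np.linalg.svd(A, full_matrices=False)

# The singular values in s measure how much "energy" each direction carries.
# Keeping only the largest gives the best rank-1 approximation of A.
A1 = s[0] * np.outer(U[:, 0], Vt[0, :])
```

LoRA exploits exactly this: instead of updating a full weight matrix, it learns a low-rank factor pair, on the bet that the useful update lives in a few dominant directions.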