Advanced Linear Algebra
Orthogonality
Orthogonal vectors are linearly independent in the strongest sense: each one's projection onto the others is zero, so they share no information. Orthogonal matrices are the "clean" transformations that rotate without distorting lengths or angles. This concept underlies stable training, attention scoring, and why certain initializations work.
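A minimal NumPy sketch of both ideas: two orthogonal vectors have a zero dot product, and a rotation matrix Q satisfies QᵀQ = I and preserves vector norms. The specific vectors and rotation angle here are just illustrative choices.

```python
import numpy as np

# Orthogonal vectors: their dot product is zero.
u = np.array([1.0, 2.0])
v = np.array([-2.0, 1.0])
print(np.dot(u, v))  # 0.0

# A 2D rotation matrix is orthogonal: Q.T @ Q == I.
theta = np.pi / 4
Q = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
print(np.allclose(Q.T @ Q, np.eye(2)))  # True

# Orthogonal transforms do not distort: lengths are preserved.
x = np.array([3.0, 4.0])
print(np.isclose(np.linalg.norm(Q @ x), np.linalg.norm(x)))  # True
```

Norm preservation is exactly the "rotate without distorting" property: ‖Qx‖ = ‖x‖ for any orthogonal Q.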
Rank of a Matrix
Rank measures how many truly independent directions a matrix spans — its real information content. A low-rank matrix is secretly simpler than it looks, and low-rank structure is the foundation of compression, LoRA fine-tuning, and understanding when systems have solutions.
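To see "secretly simpler than it looks" concretely: a matrix built from a single outer product has 16 entries but only one independent direction, so its rank is 1. The vectors below are arbitrary examples.

```python
import numpy as np

# One outer product: a 4x4 matrix with only one independent direction.
a = np.array([1.0, 2.0, 3.0, 4.0])
b = np.array([1.0, 0.0, -1.0, 2.0])
M = np.outer(a, b)
print(np.linalg.matrix_rank(M))  # 1

# Adding a second, independent outer product raises the rank to 2.
c = np.array([0.0, 1.0, 0.0, 0.0])
d = np.array([1.0, 1.0, 1.0, 1.0])
print(np.linalg.matrix_rank(M + np.outer(c, d)))  # 2
```

This sum-of-outer-products view is the idea behind LoRA: a rank-k update to a weight matrix can be stored as two thin factors instead of a full matrix.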
Singular Value Decomposition (SVD)
SVD is the universal matrix factorization — it breaks any matrix into rotate → scale → rotate. It reveals the hidden structure of transformations and is the mathematical engine behind PCA, recommendation systems, image compression, and why low-rank approximations work.
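A sketch of the factorization and of why low-rank approximation works: we build a matrix that is nearly rank 2, take its SVD, and keep only the top two singular values. The random sizes and noise level are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
# A 6x4 matrix that is secretly rank 2, plus a little noise.
A = rng.standard_normal((6, 2)) @ rng.standard_normal((2, 4))
A_noisy = A + 1e-3 * rng.standard_normal((6, 4))

# SVD: A = U @ diag(s) @ Vt, i.e. rotate -> scale -> rotate.
U, s, Vt = np.linalg.svd(A_noisy, full_matrices=False)
print(np.round(s, 4))  # first two singular values dominate the rest

# Truncate to the top-2 singular values: the best rank-2 approximation
# in the least-squares sense (Eckart-Young theorem).
k = 2
A_k = (U[:, :k] * s[:k]) @ Vt[:k, :]
print(np.linalg.norm(A_noisy - A_k))  # tiny reconstruction error
```

Because the trailing singular values are near zero, discarding them loses almost nothing; this is the same mechanism PCA and image compression exploit.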