
Introduction

Eigenvalues and eigenvectors are among the most important concepts in linear algebra. They reveal fundamental properties of matrices and linear transformations, with applications ranging from principal component analysis (PCA) to quantum mechanics.
An eigenvector of a matrix is a non-zero vector that only changes by a scalar factor when the matrix is applied. That scalar factor is the corresponding eigenvalue.

Setup

Import required packages:
import numpy as np
import matplotlib.pyplot as plt
import utils  # Custom plotting utilities

Definition and Intuition

Mathematical Definition

For a square matrix A, a non-zero vector v is an eigenvector if Av = λv, where λ is a scalar called the eigenvalue. Key Insight: The transformation A only scales v, without changing its direction.

Multiple Eigenvectors

If v is an eigenvector with eigenvalue λ, then any scalar multiple kv (where k ≠ 0) is also an eigenvector: A(kv) = k(Av) = kλv = λ(kv)
For each eigenvalue, infinitely many eigenvectors exist (all pointing along the same line). By convention, we typically choose the eigenvector with norm 1 (unit eigenvector).
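As a quick numerical check (using the matrix from the visual example below, whose eigenvector v = (3, 2) has eigenvalue 4):

```python
import numpy as np

A = np.array([[2, 3], [2, 1]])
v = np.array([3.0, 2.0])   # an eigenvector of A with eigenvalue 4

# Any non-zero scalar multiple kv is also an eigenvector
for k in [1.0, 2.5, -7.0]:
    assert np.allclose(A @ (k * v), 4 * (k * v))

# The conventional choice: the unit eigenvector
v_unit = v / np.linalg.norm(v)
print(v_unit)  # same direction, norm 1 (up to rounding)
```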

Visual Example

Consider a transformation defined by: A = [[2, 3], [2, 1]]. Apply this to the standard basis vectors:
A = np.array([[2, 3], [2, 1]])
e1 = np.array([[1], [0]])
e2 = np.array([[0], [1]])

# Visualize transformation
utils.plot_transformation(A, e1, e2, vector_name='e')
Both e₁ and e₂ change direction and length. But what if we could find vectors that only change length?

Computing Eigenvalues and Eigenvectors

Using NumPy

NumPy provides np.linalg.eig() to compute eigenvalues and eigenvectors:
A = np.array([[2, 3], [2, 1]])
A_eig = np.linalg.eig(A)

print(f"Matrix A:\n{A}")
print(f"\nEigenvalues:\n{A_eig[0]}")
print(f"\nEigenvectors:\n{A_eig[1]}")
Output structure:
  • A_eig[0]: Array of eigenvalues
  • A_eig[1]: Matrix where each column is an eigenvector
Access individual eigenvectors using:
  • First eigenvector: A_eig[1][:, 0]
  • Second eigenvector: A_eig[1][:, 1]

Visualizing Eigenvectors

# Extract eigenvectors
v1 = A_eig[1][:, 0]
v2 = A_eig[1][:, 1]

# Visualize transformation on eigenvectors
utils.plot_transformation(A, v1, v2)
You’ll observe:
  • v₁ is stretched by a factor of 4 (eigenvalue λ₁ = 4)
  • v₂ reverses direction, equivalent to scaling by -1 (eigenvalue λ₂ = -1)
Both eigenvectors remain parallel to their original directions after the transformation; this is the defining property of eigenvectors.
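You can confirm the defining property Av = λv directly from NumPy's output:

```python
import numpy as np

A = np.array([[2, 3], [2, 1]])
eigenvalues, eigenvectors = np.linalg.eig(A)

# Each column of `eigenvectors` satisfies A v = lambda v
for i, lam in enumerate(eigenvalues):
    v = eigenvectors[:, i]
    assert np.allclose(A @ v, lam * v)

print(eigenvalues)  # contains 4 and -1
```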

Standard Transformations

Example 1: Reflection about Y-axis

# Define reflection matrix
A_reflection_yaxis = np.array([[-1, 0], [0, 1]])

# Find eigenvalues and eigenvectors
A_reflection_yaxis_eig = np.linalg.eig(A_reflection_yaxis)

print(f"Matrix A:\n{A_reflection_yaxis}")
print(f"\nEigenvalues:\n{A_reflection_yaxis_eig[0]}")
print(f"\nEigenvectors:\n{A_reflection_yaxis_eig[1]}")

# Visualize
utils.plot_transformation(
    A_reflection_yaxis, 
    A_reflection_yaxis_eig[1][:, 0],
    A_reflection_yaxis_eig[1][:, 1]
)
Interpretation:
  • Eigenvalue λ₁ = -1: Eigenvector along x-axis (reversed)
  • Eigenvalue λ₂ = 1: Eigenvector along y-axis (unchanged)

Example 2: Shear Transformation

A shear transformation displaces points proportionally to their distance from a line: A_shear_x = [[1, 0.5], [0, 1]]
# Define shear matrix
A_shear_x = np.array([[1, 0.5], [0, 1]])

# Find eigenvalues and eigenvectors
A_shear_x_eig = np.linalg.eig(A_shear_x)

print(f"Matrix A_shear_x:\n{A_shear_x}")
print(f"\nEigenvalues:\n{A_shear_x_eig[0]}")
print(f"\nEigenvectors:\n{A_shear_x_eig[1]}")

# Visualize
utils.plot_transformation(
    A_shear_x,
    A_shear_x_eig[1][:, 0],
    A_shear_x_eig[1][:, 1]
)
This shear matrix has a repeated real eigenvalue λ = 1, but only one independent eigenvector direction: vectors along the x-axis, which the shear leaves unchanged. Every other vector changes direction, so no second independent eigenvector exists (the matrix is called defective). NumPy still returns two columns, but the second is numerically almost parallel to the first.
Note: Other transformations, like the 90° rotation below, do have complex eigenvalues. In Python, imaginary numbers use j instead of i:
  • Mathematical: 2 + 3i
  • Python: 2 + 3j

Example 3: 90° Rotation

# Define rotation matrix
A_rotation_90 = np.array([[0, -1], [1, 0]])

# Find eigenvalues and eigenvectors
A_rotation_90_eig = np.linalg.eig(A_rotation_90)

print(f"Matrix A_rotation_90:\n{A_rotation_90}")
print(f"\nEigenvalues:\n{A_rotation_90_eig[0]}")
print(f"\nEigenvectors:\n{A_rotation_90_eig[1]}")
Interpretation: The complex eigenvalues (i and -i) indicate that no real vector maintains its direction during a 90° rotation, which makes intuitive sense!
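A quick check confirms that the eigenvalues are the purely imaginary pair ±i:

```python
import numpy as np

A_rotation_90 = np.array([[0, -1], [1, 0]])
eigenvalues, _ = np.linalg.eig(A_rotation_90)

# Both eigenvalues lie on the unit circle with zero real part: +1j and -1j
assert np.allclose(np.abs(eigenvalues), 1.0)
assert np.allclose(eigenvalues.real, 0.0)
print(eigenvalues)
```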

Example 4: Identity Matrix

What happens when the transformation doesn’t change anything?
A_identity = np.array([[1, 0], [0, 1]])
A_identity_eig = np.linalg.eig(A_identity)

print(f"Matrix A_identity:\n{A_identity}")
print(f"\nEigenvalues:\n{A_identity_eig[0]}")
print(f"\nEigenvectors:\n{A_identity_eig[1]}")

# Visualize
utils.plot_transformation(
    A_identity,
    A_identity_eig[1][:, 0],
    A_identity_eig[1][:, 1]
)
Special Case: For the identity matrix, every non-zero vector is an eigenvector with eigenvalue λ = 1. However, NumPy returns only two eigenvectors (the standard basis vectors).

Example 5: Uniform Scaling

Scaling equally in all directions:
A_scaling = np.array([[2, 0], [0, 2]])
A_scaling_eig = np.linalg.eig(A_scaling)

print(f"Matrix A_scaling:\n{A_scaling}")
print(f"\nEigenvalues:\n{A_scaling_eig[0]}")
print(f"\nEigenvectors:\n{A_scaling_eig[1]}")

# Visualize
utils.plot_transformation(
    A_scaling,
    A_scaling_eig[1][:, 0],
    A_scaling_eig[1][:, 1]
)
Interpretation: Both eigenvalues equal 2. Every vector is an eigenvector since all vectors are scaled uniformly.

Example 6: Projection onto X-axis

A_projection = np.array([[1, 0], [0, 0]])
A_projection_eig = np.linalg.eig(A_projection)

print(f"Matrix A_projection:\n{A_projection}")
print(f"\nEigenvalues:\n{A_projection_eig[0]}")
print(f"\nEigenvectors:\n{A_projection_eig[1]}")

# Visualize
utils.plot_transformation(
    A_projection,
    A_projection_eig[1][:, 0],
    A_projection_eig[1][:, 1]
)
One eigenvalue is λ = 0. This is perfectly valid! It means vectors along the y-axis are mapped to the zero vector.
Interpretation:
  • Eigenvalue λ₁ = 1: Eigenvector along x-axis (unchanged)
  • Eigenvalue λ₂ = 0: Eigenvector along y-axis (mapped to zero)
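The zero eigenvalue can be checked by applying the matrix directly:

```python
import numpy as np

A_projection = np.array([[1, 0], [0, 0]])

y_vec = np.array([0.0, 5.0])   # a vector on the y-axis
x_vec = np.array([3.0, 0.0])   # a vector on the x-axis

print(A_projection @ y_vec)    # collapsed to the zero vector
print(A_projection @ x_vec)    # unchanged
```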

Understanding Eigenvalue Cases

Typical Case: Most matrices have two distinct real eigenvalues. Example: the general transformation matrix [[2, 3], [2, 1]]. Two independent eigenvectors exist, forming a basis for ℝ².
Rotation: Some transformations have no real eigenvectors. Example: the 90° rotation [[0, -1], [1, 0]]. Complex eigenvalues indicate rotation or spiral behavior; no real vector maintains its direction.
Shear/Defective: A repeated eigenvalue with too few eigenvectors. Example: the shear [[1, 0.5], [0, 1]]. λ = 1 appears twice, but only one independent eigenvector direction (the x-axis) exists.
Special Cases: Identity and uniform scaling have repeated eigenvalues. Example: the identity matrix [[1, 0], [0, 1]]. The eigenvalue λ = 1 appears twice; every vector is an eigenvector, but NumPy returns only two.
Projection/Collapse: The matrix maps some directions to zero. Example: projection onto the x-axis [[1, 0], [0, 0]]. One eigenvalue is zero, indicating the transformation collapses one dimension.

Properties of Eigenvalues and Eigenvectors

1. Number of Eigenvalues: An n × n matrix has exactly n eigenvalues (counting multiplicities, including complex ones).
2. Sum of Eigenvalues: The sum of the eigenvalues equals the trace (the sum of the diagonal elements) of the matrix.
3. Product of Eigenvalues: The product of the eigenvalues equals the determinant of the matrix.
4. Independence: Eigenvectors corresponding to distinct eigenvalues are linearly independent.

Verification Example

A = np.array([[2, 3], [2, 1]])
eigenvalues, eigenvectors = np.linalg.eig(A)

print(f"Eigenvalues: {eigenvalues}")
print(f"Sum of eigenvalues: {np.sum(eigenvalues)}")
print(f"Trace of A: {np.trace(A)}")
print(f"Product of eigenvalues: {np.prod(eigenvalues)}")
print(f"Determinant of A: {np.linalg.det(A)}")

Applications

Principal Component Analysis

PCA uses eigenvectors of the covariance matrix to find directions of maximum variance in data

Google PageRank

PageRank computes the dominant eigenvector of the web graph’s adjacency matrix
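As a sketch of the idea (a toy model, not Google's production algorithm): repeatedly multiplying a rank vector by a hypothetical column-stochastic link matrix converges to its dominant eigenvector, whose eigenvalue is 1:

```python
import numpy as np

# Hypothetical 3-page web: column j gives the probabilities of
# following a link from page j (each column sums to 1)
P = np.array([[0.0, 0.5, 0.5],
              [0.5, 0.0, 0.5],
              [0.5, 0.5, 0.0]])

# Power iteration: the rank vector converges to the dominant eigenvector
r = np.array([1.0, 0.0, 0.0])
for _ in range(100):
    r = P @ r

print(r)  # the steady-state ranking; uniform here by symmetry
```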

Quantum Mechanics

Observable quantities are eigenvalues of Hermitian operators

Stability Analysis

Eigenvalues determine stability of dynamical systems and differential equations

Image Compression

Singular Value Decomposition (related to eigendecomposition) enables efficient compression
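The connection can be sketched on a small, made-up matrix: keeping only the largest singular values yields the best low-rank approximation, which is the essence of SVD-based compression:

```python
import numpy as np

M = np.array([[4.0, 0.0, 2.0],
              [0.0, 1.0, 0.0],
              [2.0, 0.0, 1.0]])

U, s, Vt = np.linalg.svd(M)

k = 1  # keep only the largest singular value
M_approx = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

print(s)         # singular values, sorted in descending order
print(M_approx)  # best rank-1 approximation (in the least-squares sense)
```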

Vibration Analysis

Natural frequencies of mechanical systems are eigenvalues of the system matrix

Practical Tips

# Always check for complex eigenvalues
eigenvalues, eigenvectors = np.linalg.eig(A)

if np.any(np.iscomplex(eigenvalues)):
    print("Complex eigenvalues detected")
    # Handle complex case
else:
    print("All eigenvalues are real")
    # Proceed with real arithmetic
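When the imaginary parts are pure floating-point noise, np.real_if_close can discard them safely (the values below are illustrative):

```python
import numpy as np

# Tiny imaginary parts like these often appear from rounding error
eigenvalues = np.array([2.0 + 1e-15j, 3.0 - 1e-15j])
cleaned = np.real_if_close(eigenvalues)

print(cleaned.dtype)  # float64 -- the noise was within tolerance
print(cleaned)
```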

Summary

  • Eigenvector: A non-zero vector v such that Av = λv
  • Eigenvalue: The scalar λ by which the eigenvector is scaled
  • Eigenvectors maintain their direction under transformation
  • Use np.linalg.eig(A) to find eigenvalues and eigenvectors
  • Returns tuple: (eigenvalues, eigenvectors)
  • Eigenvectors are columns of the second array
  • NumPy normalizes eigenvectors to unit length
  • Real: Most common, indicate scaling
  • Complex: Indicate rotation or spiral behavior
  • Repeated: Special symmetry in transformation
  • Zero: Dimension collapse or projection
  • Not all matrices have real eigenvectors
  • NumPy may not return all eigenvectors for repeated eigenvalues
  • Numerical precision can affect results
  • Always verify important results analytically
  • PCA for dimensionality reduction
  • PageRank for web search
  • Stability analysis in dynamics
  • Quantum mechanics observables
  • Vibration and resonance analysis

Further Learning

Diagonalization

Learn how matrices with full sets of eigenvectors can be diagonalized

Singular Value Decomposition

Explore SVD, a generalization applicable to non-square matrices

PCA Implementation

Apply eigendecomposition to real datasets for dimensionality reduction

Markov Chains

Use eigenvectors to find steady-state distributions

Conclusion

Eigenvalues and eigenvectors reveal the fundamental structure of linear transformations. They identify special directions that remain unchanged (up to scaling) and quantify how much scaling occurs. This powerful concept underpins countless applications in science, engineering, and data analysis.
