Numerical Analysis

Linear Systems

Direct and iterative methods for solving systems of linear equations Ax = b

Interactive Visualizer

System: 4x₁ - x₂ + x₃ = 7, 4x₁ - 8x₂ + x₃ = -21, -2x₁ + x₂ + 5x₃ = 15

Solution: x = [2, 4, 3]
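The visualizer's solution can be confirmed by direct substitution; a minimal sketch in plain Python (the variable names are ours):

```python
# Verify the solution x = [2, 4, 3] by computing A x and comparing with b.
A = [[4.0, -1.0, 1.0],
     [4.0, -8.0, 1.0],
     [-2.0, 1.0, 5.0]]
b = [7.0, -21.0, 15.0]
x = [2.0, 4.0, 3.0]

# Row-by-row dot products give the matrix-vector product A x.
Ax = [sum(A[i][j] * x[j] for j in range(3)) for i in range(3)]
print(Ax)  # [7.0, -21.0, 15.0] -- matches b exactly
```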

Practice Problem

Solve the system using Gaussian elimination:

2x₁ + x₂ - x₃ = 8
-3x₁ - x₂ + 2x₃ = -11
-2x₁ + x₂ + 2x₃ = -3
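For checking your work, here is a minimal Gaussian-elimination sketch with partial pivoting, applied to the practice system (the function name `gaussian_eliminate` is ours, not from the text):

```python
def gaussian_eliminate(A, b):
    """Solve Ax = b by Gaussian elimination with partial pivoting.

    A is a list of n row-lists, b a list of n values; both are copied
    so the caller's data is untouched.
    """
    n = len(b)
    A = [row[:] for row in A]
    b = b[:]
    # Forward elimination with partial pivoting.
    for k in range(n - 1):
        p = max(range(k, n), key=lambda i: abs(A[i][k]))  # pivot row
        A[k], A[p] = A[p], A[k]
        b[k], b[p] = b[p], b[k]
        for i in range(k + 1, n):
            m = A[i][k] / A[k][k]
            for j in range(k, n):
                A[i][j] -= m * A[k][j]
            b[i] -= m * b[k]
    # Back substitution on the upper-triangular system.
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(A[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (b[i] - s) / A[i][i]
    return x

A = [[2.0, 1.0, -1.0], [-3.0, -1.0, 2.0], [-2.0, 1.0, 2.0]]
b = [8.0, -11.0, -3.0]
print(gaussian_eliminate(A, b))  # approximately [2.0, 3.0, -1.0]
```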

Matrix & Vector Norms

A vector norm is a function that assigns a non-negative length or size to a vector. For a vector \mathbf{x} = (x_1, x_2, \ldots, x_n), the three most common norms are:

1-norm (Manhattan / Taxicab)

\|\mathbf{x}\|_1 = \sum_{i=1}^{n} |x_i|

Sum of absolute values of all components.

2-norm (Euclidean)

\|\mathbf{x}\|_2 = \sqrt{\sum_{i=1}^{n} x_i^2}

The familiar straight-line distance from the origin.

∞-norm (Maximum / Chebyshev)

\|\mathbf{x}\|_\infty = \max_i |x_i|

The largest absolute value among all components.
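The three vector norms above translate directly into code; a minimal sketch (function names are ours):

```python
import math

def norm1(x):
    # 1-norm: sum of absolute values of all components.
    return sum(abs(v) for v in x)

def norm2(x):
    # 2-norm: Euclidean straight-line distance from the origin.
    return math.sqrt(sum(v * v for v in x))

def norm_inf(x):
    # infinity-norm: largest absolute value among all components.
    return max(abs(v) for v in x)

x = [3.0, -4.0, 0.0]
print(norm1(x), norm2(x), norm_inf(x))  # 7.0 5.0 4.0
```

Note the ordering \|\mathbf{x}\|_\infty \leq \|\mathbf{x}\|_2 \leq \|\mathbf{x}\|_1, visible in the example output.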

Norm Properties (for any valid norm \|\cdot\|)

  • Non-negativity: \|\mathbf{x}\| \geq 0, and \|\mathbf{x}\| = 0 \iff \mathbf{x} = \mathbf{0}
  • Homogeneity: \|\alpha \mathbf{x}\| = |\alpha|\, \|\mathbf{x}\| for any scalar \alpha
  • Triangle inequality: \|\mathbf{x} + \mathbf{y}\| \leq \|\mathbf{x}\| + \|\mathbf{y}\|

Matrix norms extend the concept of vector norms to matrices. The most common induced norms are derived from vector norms via \|A\| = \max_{\mathbf{x} \neq \mathbf{0}} \frac{\|A\mathbf{x}\|}{\|\mathbf{x}\|}.
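The induced-norm definition can be explored numerically: for random vectors, the ratio \|A\mathbf{x}\|_1 / \|\mathbf{x}\|_1 never exceeds the maximum absolute column sum, and a unit coordinate vector pointing at the largest column attains it. A sketch under an illustrative 2×2 matrix of our choosing:

```python
import random

A = [[1.0, -7.0],
     [-2.0, 3.0]]   # max absolute column sum = |-7| + |3| = 10

def norm1(v):
    return sum(abs(c) for c in v)

def matvec(A, x):
    return [sum(A[i][j] * x[j] for j in range(len(x))) for i in range(len(A))]

random.seed(0)
ratios = []
for _ in range(1000):
    x = [random.uniform(-1, 1), random.uniform(-1, 1)]
    ratios.append(norm1(matvec(A, x)) / norm1(x))

print(max(ratios) <= 10.0)           # True: no ratio beats the column-sum bound
print(norm1(matvec(A, [0.0, 1.0])))  # 10.0 -- e_2 attains the maximum
```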

1-norm (Maximum Column Sum)

\|A\|_1 = \max_j \sum_i |a_{ij}|

Take the absolute column sums; the largest one is the 1-norm.

∞-norm (Maximum Row Sum)

\|A\|_\infty = \max_i \sum_j |a_{ij}|

Take the absolute row sums; the largest one is the ∞-norm.

Frobenius Norm

\|A\|_F = \sqrt{\sum_i \sum_j a_{ij}^2}

The Frobenius norm is the square root of the sum of all squared entries. It is not an induced norm but satisfies all norm axioms and is submultiplicative.
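All three matrix norms are short one-liners over the entries; a minimal sketch on an illustrative 2×2 matrix (names and values are ours):

```python
import math

def matrix_norm1(A):
    # Induced 1-norm: maximum absolute column sum.
    n = len(A[0])
    return max(sum(abs(row[j]) for row in A) for j in range(n))

def matrix_norm_inf(A):
    # Induced infinity-norm: maximum absolute row sum.
    return max(sum(abs(v) for v in row) for row in A)

def frobenius(A):
    # Square root of the sum of all squared entries.
    return math.sqrt(sum(v * v for row in A for v in row))

A = [[2.0, -8.0],
     [-6.0, 3.0]]
print(matrix_norm1(A))     # 11.0  (column sums: 8, 11)
print(matrix_norm_inf(A))  # 10.0  (row sums: 10, 9)
print(frobenius(A))        # sqrt(4 + 64 + 36 + 9) = sqrt(113), about 10.630
```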

Interactive Vector Norm Calculator

Enter vector components to compute all three norms in real time.

\mathbf{x} = (x_1, x_2, x_3, x_4)


Interactive Matrix Norm Calculator

Enter a 2×2 matrix to compute all three matrix norms.

A = \begin{bmatrix} a & b \\ c & d \end{bmatrix}


Error in Linear Systems

Given the linear system A\mathbf{x} = \mathbf{b}, let \bar{\mathbf{x}} be an approximate solution. We can quantify and correct the error using the residual vector.

Residual

\mathbf{r} = \mathbf{b} - A\bar{\mathbf{x}}

The residual measures how far A\bar{\mathbf{x}} is from \mathbf{b}. A small residual suggests a good approximation, but a small residual does not always imply a small error — the condition number of A matters.

Error

\mathbf{e} = \mathbf{x} - \bar{\mathbf{x}} = A^{-1}\mathbf{r}

The true error satisfies A\mathbf{e} = \mathbf{r} because A(\mathbf{x} - \bar{\mathbf{x}}) = A\mathbf{x} - A\bar{\mathbf{x}} = \mathbf{b} - A\bar{\mathbf{x}} = \mathbf{r}.

Norm Bound

\|\mathbf{e}\| \leq \|A^{-1}\| \cdot \|\mathbf{r}\|

This gives an upper bound on the error in terms of the residual and the norm of A^{-1}. When A is ill-conditioned, \|A^{-1}\| is large, so even a small residual can correspond to a large error.
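The small-residual, large-error phenomenon is easy to demonstrate with a nearly singular matrix; the 2×2 system below is an illustrative example of our own, not from the text:

```python
# For an ill-conditioned A, a tiny residual can hide a large error.
A = [[1.0, 1.0],
     [1.0, 1.0001]]       # rows are nearly parallel: A is close to singular
b = [2.0, 2.0001]
x_true = [1.0, 1.0]        # the exact solution
x_bar = [2.0, 0.0001]      # a poor approximation

# Residual r = b - A x_bar and its infinity norm.
r = [b[i] - (A[i][0] * x_bar[0] + A[i][1] * x_bar[1]) for i in range(2)]
r_inf = max(abs(v) for v in r)

# True error e = x_true - x_bar and its infinity norm.
e_inf = max(abs(x_true[i] - x_bar[i]) for i in range(2))

print(r_inf, e_inf)  # residual is on the order of 1e-4, error on the order of 1
```

Here \|\mathbf{r}\|_\infty \approx 10^{-4} while \|\mathbf{e}\|_\infty \approx 1: the residual is four orders of magnitude smaller than the actual error, exactly the gap the \|A^{-1}\| factor accounts for.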

Iterative Refinement

Once we have the residual, we can solve for the error correction \bar{\mathbf{e}} and improve our solution:

A\bar{\mathbf{e}} = \mathbf{r} \quad \Longrightarrow \quad \bar{\mathbf{x}}_{\text{corrected}} = \bar{\mathbf{x}} + \bar{\mathbf{e}}

Solving A\bar{\mathbf{e}} = \mathbf{r} for the correction \bar{\mathbf{e}} and adding it to \bar{\mathbf{x}} yields a refined approximate solution. This process can be repeated.

Worked Example

System

A = \begin{pmatrix} 6 & -2 & 1 \\ -2 & 7 & 2 \\ 1 & 2 & -5 \end{pmatrix}, \quad \mathbf{b} = \begin{pmatrix} 11 \\ 5 \\ -1 \end{pmatrix}

True solution: \mathbf{x} = (2, 1, 1)^T. Gauss–Seidel approximate solution after several iterations:

\bar{\mathbf{x}} = (2.000119,\; 1.000068,\; 1.000051)^T

Step 1: Compute \mathbf{r} = \mathbf{b} - A\bar{\mathbf{x}}

\mathbf{r} = \mathbf{b} - A\bar{\mathbf{x}} \approx (-0.000629,\; -0.000340,\; 0.000000)^T

Step 2: Solve A\bar{\mathbf{e}} = \mathbf{r}

\bar{\mathbf{e}} = (-0.000119,\; -0.000068,\; -0.000051)^T

Step 3: Correct

\bar{\mathbf{x}}_{\text{corrected}} = \bar{\mathbf{x}} + \bar{\mathbf{e}} = (2.000119 - 0.000119,\; 1.000068 - 0.000068,\; 1.000051 - 0.000051)^T

Result: \bar{\mathbf{x}}_{\text{corrected}} = (2, 1, 1)^T — the true solution. ■
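Step 1 of the worked example can be reproduced directly from the given A, \mathbf{b}, and \bar{\mathbf{x}}; a minimal check in plain Python:

```python
# Recompute the residual r = b - A x_bar from the worked example.
A = [[6.0, -2.0, 1.0],
     [-2.0, 7.0, 2.0],
     [1.0, 2.0, -5.0]]
b = [11.0, 5.0, -1.0]
x_bar = [2.000119, 1.000068, 1.000051]

r = [b[i] - sum(A[i][j] * x_bar[j] for j in range(3)) for i in range(3)]
print(r)  # approximately (-0.000629, -0.000340, 0.000000), as in Step 1
```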

Interactive Residual Calculator

Enter a 3×3 matrix A, vector \mathbf{b}, and approximate solution \bar{\mathbf{x}}. The residual \mathbf{r} = \mathbf{b} - A\bar{\mathbf{x}} is computed live.

Matrix A

Vector b

Approximate \bar{\mathbf{x}}

Results

Component    b_i           (A\bar{x})_i    r_i
r_1          11.000000     11.000629       -6.2900e-4
r_2          5.000000      5.000340        -3.4000e-4
r_3          -1.000000     -1.000000       0.0000e+0

Residual vector: \mathbf{r} = (-6.2900e-4, -3.4000e-4, 0.0000e+0)
Infinity norm: \|\mathbf{r}\|_\infty = 6.2900e-4

Moderate residual. Iterative refinement may improve accuracy.