Error Analysis
Understanding numerical errors, floating-point representation, and convergence analysis.
1. Floating-Point Arithmetic
IEEE 754 Double-Precision Format
A 64-bit floating-point number is represented as
x = (−1)ˢ × 1.f × 2^(e−1023)
with 1 sign bit s, an 11-bit biased exponent e, and a 52-bit fraction (mantissa) f.
Machine Epsilon
Machine epsilon ε = 2⁻⁵² ≈ 2.22 × 10⁻¹⁶ is the gap between 1 and the next larger representable double; it bounds the relative spacing of floating-point numbers.
Interactive IEEE 754 Converter
[Interactive widget: enter a decimal value to see its 64-bit binary representation, split into sign bit, biased exponent, and mantissa.]
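The widget's arithmetic is easy to reproduce offline. Here is a minimal Python sketch (standard-library struct only) that extracts the same three bit fields from a double; the helper name ieee754_fields is ours, not a library API:

```python
import struct

def ieee754_fields(x: float):
    """Decompose a Python float (an IEEE 754 double) into its bit fields."""
    bits = struct.unpack(">Q", struct.pack(">d", x))[0]  # the raw 64 bits
    sign     = bits >> 63                # 1 sign bit
    exponent = (bits >> 52) & 0x7FF      # 11 exponent bits, biased by 1023
    mantissa = bits & ((1 << 52) - 1)    # 52 stored fraction bits
    return sign, exponent, mantissa

s, e, m = ieee754_fields(-6.5)           # -6.5 = (-1)^1 × 1.625 × 2^2
print(f"sign={s}  exponent={e} (unbiased {e - 1023})  mantissa={m:013x}")
```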
Normalization
A floating-point number in base 10 is normalized when the leading digit d₁ ≠ 0, i.e., the mantissa satisfies 0.1 ≤ |m| < 1:
x = ±0.d₁d₂…dₖ × 10ᵉ,  d₁ ≠ 0
Static Examples
123.45 → 0.12345 × 10³ (normalized); 0.012345 × 10⁴ is not normalized, since its leading digit is 0; 0.00098 → 0.98 × 10⁻³.
[Interactive normalizer: enter a number to see its normalized base-10 form.]
Overflow & Underflow
The exponent e is bounded: m ≤ e ≤ M. For IEEE 754 double precision, e ∈ [−1022, 1023]. A result whose exponent would exceed M overflows (returned as ±∞); one whose exponent falls below m underflows (flushed into the subnormal range or to 0).
Static Examples
10³⁰⁹ exceeds the largest double (≈ 1.7977 × 10³⁰⁸) and overflows to ∞; 10⁻³²⁰ lies below the smallest normal double (≈ 2.2251 × 10⁻³⁰⁸) and underflows into the subnormal range; 10⁻³²⁵ is below the smallest subnormal (≈ 4.94 × 10⁻³²⁴) and underflows to 0.
[Interactive classifier: enter a value to classify it as normal, subnormal, overflow, or underflow.]
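Both boundary behaviours, together with machine epsilon, can be observed directly; a short Python sketch using only the standard sys module:

```python
import sys

# Machine epsilon: the gap between 1.0 and the next representable double.
eps = sys.float_info.epsilon          # 2**-52 ≈ 2.22e-16
print(1.0 + eps > 1.0)                # True: eps is the smallest such gap
print(1.0 + eps / 2 > 1.0)            # False: eps/2 is absorbed by rounding

# Overflow: exceeding the largest finite double produces inf.
print(sys.float_info.max)             # ≈ 1.7977e+308
print(sys.float_info.max * 2)         # inf

# Underflow: tiny values pass through the subnormal range, then reach 0.
print(sys.float_info.min)             # smallest normal double ≈ 2.2251e-308
print(5e-324)                         # smallest subnormal double
print(5e-324 / 2)                     # 0.0: complete underflow
```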
2. Error Types
Theory
Absolute Error
If p* approximates p, the absolute error is |p − p*|.
Relative Error
The relative error is |p − p*| / |p|, provided p ≠ 0.
Significant Digits
The approximation p* has t significant digits with respect to p if t is the largest nonnegative integer for which
|p − p*| / |p| ≤ 5 × 10⁻ᵗ
Error Calculator
[Interactive widget: enter p and p* to see the absolute error, relative error, and significant digits, with a bar visualizing the scaled relative error.]
Worked Examples
Example: p = π = 3.14159265…, p* = 22/7 = 3.14285714…. The absolute error is |p − p*| ≈ 1.26 × 10⁻³ and the relative error is ≈ 4.03 × 10⁻⁴. Since 4.03 × 10⁻⁴ ≤ 5 × 10⁻⁴ but 4.03 × 10⁻⁴ > 5 × 10⁻⁵, the approximation has t = 4 significant digits.
Practice Problems
Compute the absolute error, relative error, and significant digits for each pair.
Problem 1: p = 2.71828, p* = 2.718
Problem 2: p = 1.41421, p* = 1.414
Problem 3: p = 9.8696, p* = 9.87
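To check your answers, here is a minimal Python sketch of the three definitions above (the helper error_metrics is hypothetical and assumes p ≠ 0):

```python
import math

def error_metrics(p: float, p_star: float):
    """Absolute error, relative error, and significant digits of p* w.r.t. p."""
    abs_err = abs(p - p_star)
    rel_err = abs_err / abs(p)
    # Largest integer t with rel_err <= 5 * 10**(-t).
    t = math.floor(math.log10(5 / rel_err)) if rel_err > 0 else math.inf
    return abs_err, rel_err, t

for p, p_star in [(2.71828, 2.718), (1.41421, 1.414), (9.8696, 9.87)]:
    a, r, t = error_metrics(p, p_star)
    print(f"p={p}: abs={a:.2e}  rel={r:.4e}  significant digits={t}")
```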
3. Convergence Rates
Theory
A sequence {αₙ} of positive terms converging to zero has order of convergence r (with asymptotic constant λ > 0) if:
lim (n → ∞) αₙ₊₁ / (αₙ)ʳ = λ
r = 1 with 0 < λ < 1 gives linear convergence, r = 2 gives quadratic, and convergence is superlinear for 1 < r < 2 (for example r = φ ≈ 1.618).
Convergence Visualization
[Log-scale plot of three model sequences:]
Linear (r = 1): αₙ = 0.5ⁿ
Quadratic (r = 2): αₙ = 0.5^(2ⁿ)
Superlinear (r ≈ 1.618): αₙ = 0.5^(φⁿ)
Sequence Rate Analyzer
Enter a sequence of error values (comma-separated) to classify its convergence rate.
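The analyzer's core computation is small enough to sketch here. Given three consecutive errors, the order estimate is r ≈ log(αₙ₊₁/αₙ) / log(αₙ/αₙ₋₁); the function below is a hypothetical stand-in for the widget's logic:

```python
import math

def estimate_order(errors):
    """Order estimates r ≈ log(e[i+1]/e[i]) / log(e[i]/e[i-1]) along the sequence."""
    return [math.log(errors[i + 1] / errors[i]) / math.log(errors[i] / errors[i - 1])
            for i in range(1, len(errors) - 1)]

quad = [0.5 ** (2 ** n) for n in range(1, 7)]   # quadratic model sequence
lin  = [0.5 ** n for n in range(1, 7)]          # linear model sequence
print([f"{r:.3f}" for r in estimate_order(quad)])  # all 2.000
print([f"{r:.3f}" for r in estimate_order(lin)])   # all 1.000
```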
4. Condition Number
Theory
Condition number of evaluating f at x:
κ_f(x) = |x · f′(x) / f(x)|
Well-conditioned: κ_f(x) ≲ 1
Small input changes produce small output changes.
Ill-conditioned: κ_f(x) ≫ 1
Small input changes produce large output changes.
Relative error amplification:
rel. error of f(x*) ≈ κ_f(x) × rel. error of x*
The visualization shows f near the evaluation point x. Brackets illustrate how an input perturbation δx maps to an output perturbation δf ≈ f′(x) δx.
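A rough numerical version of κ_f, sketched in Python with a central-difference derivative (the helper and the step size h = 10⁻⁷ are our choices, not a library API):

```python
import math

def condition_number(f, x, h=1e-7):
    """κ_f(x) = |x f'(x) / f(x)| with f' approximated by a central difference."""
    fprime = (f(x + h) - f(x - h)) / (2 * h)
    return abs(x * fprime / f(x))

print(condition_number(math.exp, 1.0))     # ≈ 1:    κ = |x| for e^x
print(condition_number(math.sqrt, 100.0))  # ≈ 0.5:  sqrt is well-conditioned
print(condition_number(math.log, 1.001))   # ≈ 1000: log is ill-conditioned near 1
```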
5. Series Convergence
Theory
An infinite series is a sum of infinitely many terms:
Σₙ₌₁^∞ aₙ = a₁ + a₂ + a₃ + ⋯
The N-th partial sum accumulates the first N terms:
S_N = a₁ + a₂ + ⋯ + a_N
Convergence means the partial sums approach a finite limit:
lim (N → ∞) S_N = S
Ratio Test
With L = lim |aₙ₊₁ / aₙ|: L < 1 → converges (absolutely); L > 1 → diverges; L = 1 is inconclusive.
Comparison Test
If 0 ≤ aₙ ≤ bₙ for all n and Σbₙ converges, so does Σaₙ.
Integral Test
For f positive, continuous, and decreasing with f(n) = aₙ, the series Σaₙ and the integral ∫₁^∞ f(x) dx converge or diverge together.
Preset Series
Geometric series Σₙ₌₁^∞ rⁿ: converges to r / (1 − r) when |r| < 1; diverges otherwise.
Custom Series
[Interactive widget: enter a formula for aₙ using the variable n. Supported: + - * / ^ sin cos sqrt abs log exp pi e n!]
Sample output for the preset aₙ = (1/2)ⁿ:
Appears to converge to ≈ 1.000000
Ratio test estimate: |aₙ₊₁/aₙ| ≈ 0.50000 (< 1, supports convergence)
Visualization
[Plot of the partial sums Sₙ approaching the limit 1.]
Partial Sums Table for aₙ = (1/2)ⁿ (first 20 rows)
| n | aₙ | Sₙ |
|---|---|---|
| 1 | 0.500000 | 0.500000 |
| 2 | 0.250000 | 0.750000 |
| 3 | 0.125000 | 0.875000 |
| 4 | 0.062500 | 0.937500 |
| 5 | 0.031250 | 0.968750 |
| 6 | 0.015625 | 0.984375 |
| 7 | 0.007813 | 0.992188 |
| 8 | 0.003906 | 0.996094 |
| 9 | 0.001953 | 0.998047 |
| 10 | 0.000977 | 0.999023 |
| 11 | 0.000488 | 0.999512 |
| 12 | 0.000244 | 0.999756 |
| 13 | 0.000122 | 0.999878 |
| 14 | 6.104e-5 | 0.999939 |
| 15 | 3.052e-5 | 0.999969 |
| 16 | 1.526e-5 | 0.999985 |
| 17 | 7.629e-6 | 0.999992 |
| 18 | 3.815e-6 | 0.999996 |
| 19 | 1.907e-6 | 0.999998 |
| 20 | 9.537e-7 | 0.999999 |
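The table is easy to regenerate; a minimal Python sketch that accumulates partial sums and reports the ratio-test estimate for the geometric preset aₙ = (1/2)ⁿ:

```python
def partial_sums(a, N=20):
    """Return the first N partial sums of the series Σ a(n)."""
    s, sums = 0.0, []
    for n in range(1, N + 1):
        s += a(n)
        sums.append(s)
    return sums

a_n = lambda n: 0.5 ** n                # geometric preset, limit 1
S = partial_sums(a_n)
print(f"S_20 = {S[-1]:.6f}")            # 0.999999, matching the last table row
print(f"|a_21/a_20| = {a_n(21) / a_n(20):.5f}")  # 0.50000 < 1, supports convergence
```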
6. Floating-Point Arithmetic Proof
Theory
Floating-point representation model
fl(x) = x(1 + δ), |δ| ≤ u
Unit roundoff for IEEE 754 double precision
u = 2⁻⁵³ ≈ 1.11 × 10⁻¹⁶
Arithmetic operations (each rounded once)
fl(x ⊙ y) = (x ⊙ y)(1 + δ), |δ| ≤ u, for ⊙ ∈ {+, −, ×, ÷}
Error accumulation after n operations
The relative error grows at most like n·u (more precisely, it is bounded by n·u / (1 − n·u) while n·u < 1).
Chopping (truncation)
Relative error bound: |δ| ≤ β^(1−k) for k-digit base-β arithmetic.
Rounding (nearest)
Relative error bound: |δ| ≤ ½ β^(1−k)
Step-by-Step Proof Walkthrough
Step 1: Normalized Floating-Point Form
Any nonzero real number is written in normalized form
x = ±(0.d₁d₂…dₖ)_β × βᵉ,  d₁ ≠ 0,
with base β (typically 2 for binary), k significant digits d₁d₂…dₖ, and exponent e. For IEEE 754 double precision: β = 2, k = 53 (1 implicit + 52 stored).
Interactive: Catastrophic Cancellation
Computing (1 + ε) − 1 should yield exactly ε. Watch the relative error grow as ε approaches the machine epsilon.
| ε value | Exact result | Computed result | Relative error |
|---|---|---|---|
| 1.0000e-13 | 1.0000e-13 | 9.9920e-14 | 7.99e-4 |
| 1.0000e-14 | 1.0000e-14 | 9.9920e-15 | 7.99e-4 |
| 1.0000e-15 | 1.0000e-15 | 1.1102e-15 | 1.10e-1 |
| 1.0000e-16 | 1.0000e-16 | 0 | 1.00e+0 |
| 1.0000e-17 | 1.0000e-17 | 0 | 1.00e+0 |
For ε = 10⁻¹⁶ (just below the unit roundoff u ≈ 1.11 × 10⁻¹⁶), JavaScript (IEEE 754) computes (1 + ε) = 1 exactly due to rounding, so (1 + ε) − 1 = 0. The relative error is 100%.
For ε = 10⁻¹⁷ (below machine epsilon), the situation is the same — ε is rounded to 0 when added to 1.
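The table can be reproduced in any IEEE 754 double environment; a Python sketch:

```python
for eps in [1e-13, 1e-14, 1e-15, 1e-16, 1e-17]:
    computed = (1.0 + eps) - 1.0              # the rounding happens in (1 + eps)
    rel_err = abs(computed - eps) / eps
    print(f"eps={eps:.4e}  computed={computed:.4e}  rel err={rel_err:.2e}")
```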
Error Propagation Calculator
Given two values and their relative errors, the propagated relative error bound for their sum is
rel. error of (x + y) ≤ (|x|·ε_x + |y|·ε_y) / |x + y|
Sample run: x = 1.5 and y = 2.3, each with relative error 10⁻¹⁰, gives the result 3.800000000 with propagated bound 1.0000 × 10⁻¹⁰.
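A sketch of the calculator's bound in Python, assuming (as the sample output suggests) that both inputs carried relative error 10⁻¹⁰:

```python
def add_rel_error_bound(x, rx, y, ry):
    """Bound on the relative error of x + y when x and y carry
    relative errors rx and ry: (|x|·rx + |y|·ry) / |x + y|."""
    return (abs(x) * rx + abs(y) * ry) / abs(x + y)

print(f"{add_rel_error_bound(1.5, 1e-10, 2.3, 1e-10):.4e}")  # 1.0000e-10
```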
Loss of Significance
Theory
When two nearly equal numbers are subtracted, the leading significant digits cancel and the relative error of the result can be much larger than the relative errors of the individual operands.
If x* ≈ x and y* ≈ y with small relative errors ε_x and ε_y, the relative error of the computed difference x* − y* satisfies:
|(x − y) − (x* − y*)| / |x − y| ≤ (|x|·ε_x + |y|·ε_y) / |x − y|
When x ≈ y, the denominator |x − y| is tiny while the numerator remains of order |x|, causing catastrophic amplification.
Example 1.11 — Subtraction of Nearly Equal Numbers
Given two nearly equal values and their approximations, each rounded to 6 significant digits:
Step 1: Compute the subtraction
Subtracting the two rounded approximations cancels the leading digits they share, so the result retains only the trailing, least reliable, digits of each operand.
Step 2: Absolute error of the result
The absolute error of the difference is bounded by the sum of the operands' absolute errors, yet the difference itself is tiny. Although each operand had 6 significant digits, the difference has only 3 significant digits: three digits were lost to cancellation.
Step 3: Error amplification factor
The amplification factor is approximately |x| / |x − y|, the reciprocal of the relative gap between the operands. Here the relative error of the result is roughly 53,736 times larger than the relative error of the original approximation, a dramatic loss of significance.
Example 1.13 — Reformulation to Avoid Cancellation
Consider the function:
f(x) = x(√(x + 1) − √x)
For large x, the terms √(x + 1) and √x are nearly equal, causing catastrophic cancellation. Rationalising the numerator gives the equivalent but numerically stable form:
f(x) = x / (√(x + 1) + √x)
| x | Naive (cancellation) | Stable (rationalised) |
|---|---|---|
| 1 | 0.414214 | 0.414214 |
| 100 | 4.987562 | 4.987562 |
| 1,000 | 15.807437 | 15.807437 |
| 10,000 | 49.998750 | 49.998750 |
| 100,000 | 158.113488 | 158.113488 |
For large x the naive form suffers from increasingly severe cancellation (at x = 10⁵ the subtracted square roots agree in their five leading digits). The stable form is algebraically identical but avoids subtracting nearly equal square roots, preserving all significant digits.
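At double precision the cancellation only becomes visible for larger x than the table shows. A Python sketch comparing the two forms (the naive result degrades noticeably around x = 10¹⁵):

```python
import math

def f_naive(x):   # x(sqrt(x+1) - sqrt(x)): subtracts nearly equal values
    return x * (math.sqrt(x + 1) - math.sqrt(x))

def f_stable(x):  # x / (sqrt(x+1) + sqrt(x)): algebraically identical, no cancellation
    return x / (math.sqrt(x + 1) + math.sqrt(x))

for x in [1e5, 1e10, 1e15]:
    print(f"x={x:.0e}  naive={f_naive(x):.10f}  stable={f_stable(x):.10f}")
```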
Propagated Error & Stability
Theory
By the Mean Value Theorem, if x* approximates x, then the error in f propagates as:
f(x) − f(x*) = f′(ξ)(x − x*) for some ξ between x and x*, so |f(x) − f(x*)| ≈ |f′(x)| |x − x*|.
Converting to relative errors:
|f(x) − f(x*)| / |f(x)| ≈ κ_f(x) · |x − x*| / |x|
Well-conditioned: κ_f(x) ≲ 1
Error is not amplified by the problem itself; a stable algorithm preserves the input accuracy.
Ill-conditioned: κ_f(x) ≫ 1
Errors amplify dramatically no matter how the computation is arranged.
Example 1.16 — Stability of f(x) = √(x + 1) − √x
Consider f(x) = √(x + 1) − √x at x = 12,345. The true value is:
f(12345) = √12346 − √12345 ≈ 111.112556 − 111.108056 = 0.00450003
Step 1: 3-digit rounding of each square root
√12346 ≈ 111.113 → 111 and √12345 ≈ 111.108 → 111, so the computed difference is 111 − 111 = 0.
Step 2: Relative error of the naive computation
|0.00450003 − 0| / |0.00450003| = 1. The computed result is completely wrong: a 100% relative error from 3-digit rounding.
Step 3: Condition number pinpoints the culprit
κ_f(x) = |x f′(x) / f(x)| = ½ √(x / (x + 1)) ≈ 0.5 at x = 12,345, so the function itself is well-conditioned: a digit of relative error in x produces roughly half a digit of relative error in f(x). The 100% error comes from the algorithm, specifically the intermediate subtraction of nearly equal numbers, not from the function.
Step 4: Stable reformulation
f(x) = 1 / (√(x + 1) + √x)
With the same 3-digit data: 1 / (111 + 111) = 1/222 ≈ 4.50 × 10⁻³, which matches the true value to three significant digits. This algebraically equivalent form avoids subtracting nearly equal numbers and is numerically stable.
Condition Number vs. x for f(x) = √(x + 1) − √x
| x | f(x) | κ_f(x) |
|---|---|---|
| 1 | 0.41421356 | 0.4 |
| 10 | 0.15434713 | 0.5 |
| 100 | 0.04987562 | 0.5 |
| 1,000 | 0.01580744 | 0.5 |
| 12,345 | 0.00450003 | 0.5 |
| 100,000 | 0.00158113 | 0.5 |
The condition number stays near ½ for all x, so f itself is well-conditioned; the growing inaccuracy of the naive computation for large x is algorithmic instability (cancellation in the subtraction), which the rationalised form removes.
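The whole 3-digit experiment fits in a few lines of Python; round_sig is a hypothetical helper simulating k-digit rounding, and x = 12,345 is the evaluation point used above:

```python
import math

def round_sig(v, k=3):
    """Round v to k significant digits (simulates k-digit arithmetic)."""
    if v == 0:
        return 0.0
    e = math.floor(math.log10(abs(v)))
    return round(v, k - 1 - e)

x = 12345
a, b = round_sig(math.sqrt(x + 1)), round_sig(math.sqrt(x))  # both round to 111.0
naive  = a - b                                # 0.0: total cancellation
stable = round_sig(1.0 / round_sig(a + b))    # 1/222 ≈ 4.50e-3
exact  = math.sqrt(x + 1) - math.sqrt(x)
print(naive, stable, f"{exact:.6e}")          # 0.0 0.0045 4.500032e-03
```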