Chapter 25. Collection of Numerical Analysis Algorithms
Recommended Post: 【Algorithm】 Algorithm Index
1. Thomas Algorithm
2. Gauss Elimination Algorithm
3. Doolittle Method
4. Cholesky Method
5. Jacobi Iterative Method
6. Gauss-Seidel Algorithm
7. Improved Euler Algorithm (Heun's Method)
8. Classical RUNGE-KUTTA Algorithm
9. Improved RUNGE-KUTTA Algorithm
10. RUNGE-KUTTA-FEHLBERG Algorithm
11. Adams-Bashforth Algorithm
12. Adams-Moulton Algorithm
13. Explicit Method
14. Fully (Simple) Implicit Method
15. Crank-Nicolson Method
16. Approximation Formulas for Integration
17. Optimization Algorithms
18. HMM-Related Algorithms
1. Thomas Algorithm
A → A* = [a_jk] = [A r]
This algorithm computes the unique solution x = [x_j] of Ax = r for a tridiagonal matrix A, or indicates that a unique solution does not exist.
(Here the diagonal elements of A are denoted f, the subdiagonal elements e, the superdiagonal elements g, and the right-hand side r.)
INPUT: Augmented matrix A* = [a_jk] of size n × (n+1) with a_{j,n+1} = r_j
OUTPUT: Solution x = [x_j] of Ax = r or a message indicating no unique solution
// DECOMPOSITION
DO k = 2, …, n
e_k = e_k / f_{k-1}
f_k = f_k - e_k × g_{k-1}
END DO
// FORWARD SUBSTITUTION
DO k = 2, …, n
r_k = r_k - e_k × r_{k-1}
END DO
// BACKWARD SUBSTITUTION
x_n = r_n / f_n
DO k = n - 1, …, 1
x_k = (r_k - g_k × x_{k+1}) / f_k
END DO
End THOMAS
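A minimal Python sketch of the three phases above, under the same naming (e = subdiagonal, f = diagonal, g = superdiagonal, r = right-hand side); the function name and array layout are illustrative choices, not part of the original pseudocode.

import numpy as np

def thomas(e, f, g, r):
    # Solve a tridiagonal system in O(n); e[0] and g[n-1] are unused.
    n = len(f)
    f, r = f.astype(float).copy(), r.astype(float).copy()
    for k in range(1, n):              # decomposition + forward substitution
        m = e[k] / f[k-1]
        f[k] -= m * g[k-1]
        r[k] -= m * r[k-1]
    x = np.empty(n)
    x[-1] = r[-1] / f[-1]              # backward substitution
    for k in range(n - 2, -1, -1):
        x[k] = (r[k] - g[k] * x[k+1]) / f[k]
    return x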
2. Gauss Elimination Algorithm
A → A* = [a_jk] = [A b]
This algorithm computes the unique solution x = [x_j] of Ax = b or indicates that a unique solution does not exist.
INPUT: Augmented matrix A* = [a_jk] of size n × (n+1) with a_{j,n+1} = b_j
OUTPUT: Solution x = [x_j] of Ax = b or a message indicating no unique solution
For k = 1, …, n - 1 do:
m = k
For j = k + 1, …, n, do:
If |a_mk| < |a_jk| then m = j End
End
If a_mk = 0 then
OUTPUT “No unique solution”
Stop
Else exchange row k with row m
End
For j = k + 1, …, n, do:
m_jk = a_jk / a_kk
For p = k + 1, …, n + 1, do:
a_jp = a_jp - m_jk a_kp
End
End
End
If a_nn = 0 then
OUTPUT “No unique solution”
Stop
End
x_n = a_{n,n+1} / a_nn [Start Back Substitution]
For i = n - 1, …, 1, do:
x_i = (1/a_ii)(a_{i,n+1} - (a_{i,i+1} x_{i+1} + … + a_in x_n))
End
OUTPUT x = [x_j]
Stop
End GAUSS
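A compact Python sketch of the elimination with partial pivoting described above, operating on a NumPy augmented matrix; the function name is illustrative.

import numpy as np

def gauss_solve(A, b):
    # Gaussian elimination with partial pivoting on the augmented matrix [A b].
    n = len(b)
    M = np.hstack([A.astype(float), b.astype(float).reshape(-1, 1)])
    for k in range(n - 1):
        m = k + np.argmax(np.abs(M[k:, k]))          # pivot row
        if M[m, k] == 0:
            raise ValueError("No unique solution")
        M[[k, m]] = M[[m, k]]                        # exchange row k with row m
        for j in range(k + 1, n):
            M[j, k:] -= (M[j, k] / M[k, k]) * M[k, k:]
    if M[n-1, n-1] == 0:
        raise ValueError("No unique solution")
    x = np.empty(n)
    for i in range(n - 1, -1, -1):                   # back substitution
        x[i] = (M[i, n] - M[i, i+1:n] @ x[i+1:]) / M[i, i]
    return x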
3. Doolittle Method
Ax = LUx = b (factor A = LU, with L unit lower triangular and U upper triangular)
Ux = L^{-1}b = y (forward substitution: solve Ly = b)
x = U^{-1}y (back substitution: solve Ux = y)
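A minimal sketch using SciPy's LU routines in place of a hand-written Doolittle factorization; note that scipy.linalg.lu_factor performs a pivoted LU (A = PLU), so it is a practical stand-in rather than the textbook unpivoted Doolittle scheme.

import numpy as np
from scipy.linalg import lu_factor, lu_solve

A = np.array([[4.0, 3.0], [6.0, 3.0]])
b = np.array([10.0, 12.0])
lu, piv = lu_factor(A)        # factor once
x = lu_solve((lu, piv), b)    # forward + back substitution per right-hand side

Factoring once and reusing the factors for many right-hand sides is the main practical advantage over repeating the elimination.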
4. Cholesky Method
Ax = LL^T x = b (factor A = LL^T; requires A symmetric positive definite)
L^T x = L^{-1}b = y (forward substitution: solve Ly = b)
x = (L^T)^{-1}y (back substitution: solve L^T x = y)
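A sketch of the two triangular solves with NumPy/SciPy; the example matrix is an illustrative symmetric positive definite choice.

import numpy as np
from scipy.linalg import solve_triangular

A = np.array([[4.0, 2.0], [2.0, 3.0]])       # symmetric positive definite
b = np.array([1.0, 2.0])
L = np.linalg.cholesky(A)                    # A = L L^T
y = solve_triangular(L, b, lower=True)       # forward: L y = b
x = solve_triangular(L.T, y, lower=False)    # backward: L^T x = y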
5. Jacobi Iterative Method
ex.
x_1 - 0.25x_2 - 0.25x_3 = 50 ⇔ x_1 = 0.25x_2 + 0.25x_3 + 50
-0.25x_1 + x_2 - 0.25x_4 = 50 ⇔ x_2 = 0.25x_1 + 0.25x_4 + 50
-0.25x_1 + x_3 - 0.25x_4 = 25 ⇔ x_3 = 0.25x_1 + 0.25x_4 + 25
-0.25x_2 - 0.25x_3 + x_4 = 25 ⇔ x_4 = 0.25x_2 + 0.25x_3 + 25
cf.
x^{(m+1)} = b - Lx^{(m)} - Ux^{(m)} = b - (L + U)x^{(m)} (for A = I + L + U, i.e., unit diagonal as in the example above)
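A minimal Jacobi sketch on the 4×4 example system above; the iteration cap and tolerance are illustrative choices.

import numpy as np

A = np.array([[1, -0.25, -0.25, 0],
              [-0.25, 1, 0, -0.25],
              [-0.25, 0, 1, -0.25],
              [0, -0.25, -0.25, 1]])
b = np.array([50.0, 50.0, 25.0, 25.0])

d = np.diag(A)                       # unit diagonal here, kept for generality
x = np.zeros(4)
for _ in range(100):
    x_new = (b - (A - np.diag(d)) @ x) / d
    if np.max(np.abs(x_new - x)) < 1e-8:
        break
    x = x_new
# converges to x = (87.5, 87.5, 62.5, 62.5)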
6. Gauss-Seidel Algorithm
A, b, x^{(0)}, ε, N
This algorithm calculates a solution x of Ax = b given an initial approximation x^{(0)}, for an n × n matrix A = [a_jk] with a_jj ≠ 0 for j = 1, …, n.
INPUT: A, b, initial approximation x^{(0)}, tolerance ε, maximum number of iterations N
OUTPUT: Approximation x^{(m)} = [x_j^{(m)}] or an error message indicating that x^{(N)} does not satisfy the tolerance condition
For m = 0, …, N - 1 do:
For j = 1, …, n do:
x_j^{(m+1)} = (1/a_jj)(b_j - (a_j1 x_1^{(m+1)} + … + a_{j,j-1} x_{j-1}^{(m+1)}) - (a_{j,j+1} x_{j+1}^{(m)} + … + a_jn x_n^{(m)}))
End
If |x_j^{(m+1)} - x_j^{(m)}| ≤ ε for all j then
OUTPUT x^{(m+1)}
Stop
End
End
OUTPUT “x^{(N)} does not satisfy the tolerance condition”
End GAUSS-SEIDEL
ex.
x_1 - 0.25x_2 - 0.25x_3 = 50 ⇔ x_1 = 0.25x_2 + 0.25x_3 + 50
-0.25x_1 + x_2 - 0.25x_4 = 50 ⇔ x_2 = 0.25x_1 + 0.25x_4 + 50
-0.25x_1 + x_3 - 0.25x_4 = 25 ⇔ x_3 = 0.25x_1 + 0.25x_4 + 25
-0.25x_2 - 0.25x_3 + x_4 = 25 ⇔ x_4 = 0.25x_2 + 0.25x_3 + 25
cf.
x^{(m+1)} = b - Lx^{(m+1)} - Ux^{(m)} ⇔ x^{(m+1)} = (I + L)^{-1}b - (I + L)^{-1}Ux^{(m)} (again for A = I + L + U)
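The same system solved with Gauss-Seidel; the only change from the Jacobi sketch is that each updated component is used immediately within the sweep.

import numpy as np

A = np.array([[1, -0.25, -0.25, 0],
              [-0.25, 1, 0, -0.25],
              [-0.25, 0, 1, -0.25],
              [0, -0.25, -0.25, 1]])
b = np.array([50.0, 50.0, 25.0, 25.0])

x = np.zeros(4)
for _ in range(100):
    x_old = x.copy()
    for j in range(4):
        x[j] = (b[j] - A[j, :j] @ x[:j] - A[j, j+1:] @ x[j+1:]) / A[j, j]
    if np.max(np.abs(x - x_old)) < 1e-8:
        break
# reaches x = (87.5, 87.5, 62.5, 62.5) in fewer sweeps than Jacobi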
7. Improved Euler Algorithm (Heun's Method)
f, x_0, y_0, h, N
This algorithm solves the initial value problem y' = f(x, y), y(x_0) = y_0 at the equally spaced points x_1 = x_0 + h, x_2 = x_0 + 2h, …, x_N = x_0 + Nh.
Here f is a function defined on [x_0, x_N] that guarantees a unique solution.
INPUT: Initial values x_0, y_0, step size h, number of steps N
OUTPUT: Approximation y_{n+1} of the solution y(x_{n+1}) at x_{n+1} = x_0 + (n+1)h (n = 0, …, N-1)
For n = 0, 1, …, N-1 do:
x_{n+1} = x_n + h
k_1 = h f(x_n, y_n)
k_2 = h f(x_{n+1}, y_n + k_1)
y_{n+1} = y_n + 0.5(k_1 + k_2)
OUTPUT x_{n+1}, y_{n+1}
End
Stop
End EULER
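A direct transcription of the loop above into Python; the test problem y' = y, y(0) = 1 is an illustrative choice.

def heun(f, x0, y0, h, N):
    # Improved Euler: Euler predictor, trapezoidal corrector.
    x, y = x0, y0
    out = [(x, y)]
    for _ in range(N):
        k1 = h * f(x, y)
        k2 = h * f(x + h, y + k1)     # h × slope at the Euler-predicted point
        x, y = x + h, y + 0.5 * (k1 + k2)
        out.append((x, y))
    return out

# Example: y' = y, y(0) = 1 on [0, 1]; exact value y(1) = e ≈ 2.71828
pts = heun(lambda x, y: y, 0.0, 1.0, 0.1, 10)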
8. Classical RUNGE-KUTTA Algorithm
f, x_0, y_0, h, N
This algorithm solves the initial value problem y' = f(x, y), y(x_0) = y_0 at the equally spaced points x_1 = x_0 + h, x_2 = x_0 + 2h, …, x_N = x_0 + Nh.
Here f is a function defined on [x_0, x_N] that guarantees a unique solution.
INPUT: Initial values x_0, y_0, step size h, number of steps N
OUTPUT: Approximation y_{n+1} of the solution y(x_{n+1}) at x_{n+1} = x_0 + (n+1)h (n = 0, …, N-1)
For n = 0, 1, …, N-1 do:
k_1 = h f(x_n, y_n)
k_2 = h f(x_n + 0.5h, y_n + 0.5k_1)
k_3 = h f(x_n + 0.5h, y_n + 0.5k_2)
k_4 = h f(x_n + h, y_n + k_3)
x_{n+1} = x_n + h
y_{n+1} = y_n + (1/6)(k_1 + 2k_2 + 2k_3 + k_4)
OUTPUT x_{n+1}, y_{n+1}
End
Stop
End RUNGE-KUTTA
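The same stepper pattern with the four classical stages; a direct transcription of the pseudocode above.

def rk4(f, x0, y0, h, N):
    # Classical fourth-order Runge-Kutta.
    x, y = x0, y0
    out = [(x, y)]
    for _ in range(N):
        k1 = h * f(x, y)
        k2 = h * f(x + 0.5 * h, y + 0.5 * k1)
        k3 = h * f(x + 0.5 * h, y + 0.5 * k2)
        k4 = h * f(x + h, y + k3)
        x, y = x + h, y + (k1 + 2 * k2 + 2 * k3 + k4) / 6
        out.append((x, y))
    return out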
9. Improved RUNGE-KUTTA Algorithm
f, x_0, y_0, h, N
This algorithm solves the initial value problem y' = f(x, y), y(x_0) = y_0 at the equally spaced points x_1 = x_0 + h, x_2 = x_0 + 2h, …, x_N = x_0 + Nh.
Here f is a function defined on [x_0, x_N] that guarantees a unique solution. (The stages below are the fourth-order member of the Fehlberg pair; section 10 adds k_6 and the fifth-order combination.)
INPUT: Initial values x_0, y_0, step size h, number of steps N
OUTPUT: Approximation y_{n+1} of the solution y(x_{n+1}) at x_{n+1} = x_0 + (n+1)h (n = 0, …, N-1)
For n = 0, 1, …, N-1 do:
k_1 = h f(x_n, y_n)
k_2 = h f(x_n + (1/4)h, y_n + (1/4)k_1)
k_3 = h f(x_n + (3/8)h, y_n + (3/32)k_1 + (9/32)k_2)
k_4 = h f(x_n + (12/13)h, y_n + (1932/2197)k_1 - (7200/2197)k_2 + (7296/2197)k_3)
k_5 = h f(x_n + h, y_n + (439/216)k_1 - 8k_2 + (3680/513)k_3 - (845/4104)k_4)
x_{n+1} = x_n + h
y_{n+1} = y_n + (25/216)k_1 + (1408/2565)k_3 + (2197/4104)k_4 - (1/5)k_5
OUTPUT x_{n+1}, y_{n+1}
End
Stop
End RUNGE-KUTTA
10. RUNGE-KUTTA-FEHLBERG Algorithm
f, x_0, y_0, h, N
This algorithm solves the initial value problem y' = f(x, y), y(x_0) = y_0 at the equally spaced points x_1 = x_0 + h, x_2 = x_0 + 2h, …, x_N = x_0 + Nh.
Here f is a function defined on [x_0, x_N] that guarantees a unique solution. In practice the difference between this fifth-order value and the fourth-order value of section 9 serves as the local error estimate for step-size control.
INPUT: Initial values x_0, y_0, step size h, number of steps N
OUTPUT: Approximation y_{n+1} of the solution y(x_{n+1}) at x_{n+1} = x_0 + (n+1)h (n = 0, …, N-1)
For n = 0, 1, …, N-1 do:
k_1 = h f(x_n, y_n)
k_2 = h f(x_n + (1/4)h, y_n + (1/4)k_1)
k_3 = h f(x_n + (3/8)h, y_n + (3/32)k_1 + (9/32)k_2)
k_4 = h f(x_n + (12/13)h, y_n + (1932/2197)k_1 - (7200/2197)k_2 + (7296/2197)k_3)
k_5 = h f(x_n + h, y_n + (439/216)k_1 - 8k_2 + (3680/513)k_3 - (845/4104)k_4)
k_6 = h f(x_n + (1/2)h, y_n - (8/27)k_1 + 2k_2 - (3544/2565)k_3 + (1859/4104)k_4 - (11/40)k_5)
x_{n+1} = x_n + h
y_{n+1} = y_n + (16/135)k_1 + (6656/12825)k_3 + (28561/56430)k_4 - (9/50)k_5 + (2/55)k_6
OUTPUT x_{n+1}, y_{n+1}
End
Stop
End RUNGE-KUTTA-FEHLBERG
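A sketch of a single Fehlberg step covering sections 9 and 10 at once: it returns the embedded fourth- and fifth-order values, whose difference is the local error estimate used for step-size control (the adaptation logic itself is omitted).

def rkf45_step(f, x, y, h):
    # One Runge-Kutta-Fehlberg step with both embedded solutions.
    k1 = h * f(x, y)
    k2 = h * f(x + h/4, y + k1/4)
    k3 = h * f(x + 3*h/8, y + 3*k1/32 + 9*k2/32)
    k4 = h * f(x + 12*h/13, y + 1932*k1/2197 - 7200*k2/2197 + 7296*k3/2197)
    k5 = h * f(x + h, y + 439*k1/216 - 8*k2 + 3680*k3/513 - 845*k4/4104)
    k6 = h * f(x + h/2, y - 8*k1/27 + 2*k2 - 3544*k3/2565 + 1859*k4/4104 - 11*k5/40)
    y4 = y + 25*k1/216 + 1408*k3/2565 + 2197*k4/4104 - k5/5
    y5 = y + 16*k1/135 + 6656*k3/12825 + 28561*k4/56430 - 9*k5/50 + 2*k6/55
    return y4, y5, abs(y5 - y4)       # |y5 - y4| estimates the local error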
11. Adams-Bashforth Algorithm
x_n, x_{n-1}, x_{n-2}, x_{n-3}, f_n, f_{n-1}, f_{n-2}, f_{n-3}, h, N
This algorithm considers the initial value problem with a unique solution in some open interval containing x_0:
y' = f(x, y), y(x_0) = y_0, f_n = f(x_n, y_n).
This method replaces the integrand f(x, y(x)) with an interpolating polynomial using Newton’s backward difference formula and integrates it.
INPUT: x_n, x_{n-1}, x_{n-2}, x_{n-3}, f_n, f_{n-1}, f_{n-2}, f_{n-3}, h, N
OUTPUT: Approximation y_{n+m} of the solution y(x_{n+m}) at x_{n+m} = x_0 + (n+m)h (m = 1, …, N)
k_4 = h f_n
k_3 = h f_{n-1}
k_2 = h f_{n-2}
k_1 = h f_{n-3}
For m = 1, …, N do:
x_{n+m} = x_{n+m-1} + h
y_{n+m} = y_{n+m-1} + (1/24)(55k_4 - 59k_3 + 37k_2 - 9k_1)
k_1 = k_2
k_2 = k_3
k_3 = k_4
k_4 = h f(x_{n+m}, y_{n+m})
OUTPUT x_{n+m}, y_{n+m}
End
Stop
End ADAMS-BASHFORTH
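A fourth-order Adams-Bashforth sketch; seeding the first three steps with the rk4 function from section 8's sketch is a standard way to start a multistep method.

def ab4(f, x0, y0, h, N):
    # 4th-order Adams-Bashforth, bootstrapped with RK4 (defined above).
    pts = rk4(f, x0, y0, h, 3)               # (x_0,y_0) … (x_3,y_3)
    k = [h * f(x, y) for x, y in pts]        # k_1 … k_4 at the last four points
    x, y = pts[-1]
    for _ in range(N - 3):
        y = y + (55*k[3] - 59*k[2] + 37*k[1] - 9*k[0]) / 24
        x = x + h
        k = k[1:] + [h * f(x, y)]            # shift the slope window
        pts.append((x, y))
    return pts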
12. Adams-Moulton Algorithm
x_n, x_{n-1}, x_{n-2}, x_{n-3}, f_n, f_{n-1}, f_{n-2}, f_{n-3}, h, N
This algorithm considers the initial value problem with a unique solution in some open interval containing x_0:
y' = f(x, y), y(x_0) = y_0, f_n = f(x_n, y_n).
This method adds a corrector step to the Adams-Bashforth predictor. The local error can be estimated from the predictor-corrector difference:
ε_{n+1} ≈ (19/270)(y*_{n+1} - y_{n+1}).
INPUT: x_n, x_{n-1}, x_{n-2}, x_{n-3}, f_n, f_{n-1}, f_{n-2}, f_{n-3}, h, N
OUTPUT: Approximation y_{n+m} of the solution y(x_{n+m}) at x_{n+m} = x_0 + (n+m)h (m = 1, …, N)
k_4 = h f_n
k_3 = h f_{n-1}
k_2 = h f_{n-2}
k_1 = h f_{n-3}
For m = 1, …, N do:
x_{n+m} = x_{n+m-1} + h
y*_{n+m} = y_{n+m-1} + (1/24)(55k_4 - 59k_3 + 37k_2 - 9k_1)
k* = h f(x_{n+m}, y*_{n+m})
y_{n+m} = y_{n+m-1} + (1/24)(9k* + 19k_4 - 5k_3 + k_2)
k_1 = k_2
k_2 = k_3
k_3 = k_4
k_4 = h f(x_{n+m}, y_{n+m})
OUTPUT x_{n+m}, y_{n+m}
End
Stop
End ADAMS-MOULTON
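The corrector added to the Adams-Bashforth sketch (again seeded with rk4 from section 8); k_star is h times the slope at the predicted point.

def abm4(f, x0, y0, h, N):
    # 4th-order Adams-Bashforth-Moulton predictor-corrector.
    pts = rk4(f, x0, y0, h, 3)
    k = [h * f(x, y) for x, y in pts]
    x, y = pts[-1]
    for _ in range(N - 3):
        x = x + h
        y_pred = y + (55*k[3] - 59*k[2] + 37*k[1] - 9*k[0]) / 24   # predictor
        k_star = h * f(x, y_pred)
        y = y + (9*k_star + 19*k[3] - 5*k[2] + k[1]) / 24          # corrector
        k = k[1:] + [h * f(x, y)]
        pts.append((x, y))
    return pts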
13. Explicit Method
x^{(m+1)} = Ax^{(m)} + b
ex.
3×3: A = [[1-2λ, λ, 0], [λ, 1-2λ, λ], [0, λ, 1-2λ]], b = (λx_0, 0, λx_{n+1})^T (λ is the mesh ratio of the discretized heat equation; x_0 and x_{n+1} are boundary values)
14. Fully (Simple) Implicit Method
x^{(m+1)} = A^{-1}x^{(m)} + A^{-1}b, i.e., Ax^{(m+1)} = x^{(m)} + b
ex.
3×3: A = [[1+2λ, -λ, 0], [-λ, 1+2λ, -λ], [0, -λ, 1+2λ]], b = (λx_0, 0, λx_{n+1})^T
15. Crank-Nicolson Method
x^{(m+1)} = A_im^{-1}(A_ex x^{(m)} + b_ex + b_im), i.e., A_im x^{(m+1)} = A_ex x^{(m)} + b_ex + b_im
ex.
3×3: A_im = [[1+2λ, -λ, 0], [-λ, 1+2λ, -λ], [0, -λ, 1+2λ]], b_im = (λx_0, 0, λx_{n+1})^T
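A sketch contrasting one time step of the three schemes (sections 13-15) on the 3×3 example matrices; lam, the interior values, and the boundary values are illustrative.

import numpy as np

lam = 0.25                                    # mesh ratio (illustrative)
I = np.eye(3)
T = np.diag([1.0, 1.0], 1) + np.diag([1.0, 1.0], -1)   # neighbor pattern

A_ex = (1 - 2*lam) * I + lam * T              # section 13 matrix
A_im = (1 + 2*lam) * I - lam * T              # sections 14-15 matrix
x = np.array([25.0, 50.0, 25.0])              # interior values at level m
b = lam * np.array([100.0, 0.0, 0.0])         # boundaries x_0 = 100, x_{n+1} = 0

x_explicit = A_ex @ x + b                        # x^(m+1) = A x^(m) + b
x_implicit = np.linalg.solve(A_im, x + b)        # A x^(m+1) = x^(m) + b
x_crank = np.linalg.solve(A_im, A_ex @ x + 2*b)  # A_im x^(m+1) = A_ex x^(m) + b_ex + b_im

In practice A_im is tridiagonal, so the implicit solves would use the Thomas algorithm from section 1 rather than a dense solver.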
16. Approximation Formulas for Integration
⑴ Trapezoidal Rule: approximates the integrand by straight-line segments on each subinterval; error O(h^2).
⑵ Simpson's Rule: fits a parabola through each pair of adjacent subintervals; error O(h^4), more accurate than the trapezoidal rule (a comparison sketch follows this list).
⑶ Simpson's 3/8 Rule: the cubic variant; the same order of accuracy O(h^4) as Simpson's rule, convenient when the number of subintervals is a multiple of 3.
⑷ Bode's Rule: the five-point closed Newton-Cotes formula; error O(h^6).
⑸ Moving Average Method
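A quick numerical check of the first two rules using SciPy's built-in scipy.integrate.trapezoid and scipy.integrate.simpson; the integral of sin x over [0, π] (exact value 2) is an illustrative test.

import numpy as np
from scipy.integrate import trapezoid, simpson

x = np.linspace(0.0, np.pi, 9)     # 8 subintervals
y = np.sin(x)                      # exact integral over [0, pi] is 2

print(trapezoid(y, x))             # 1.9742…, O(h^2) error
print(simpson(y, x=x))             # 2.00027…, O(h^4) error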
17. Optimization Algorithms
Example: tuning the coefficients of a linear combination to maximize Spearman rank correlation with scipy.optimize.minimize (Nelder-Mead, a derivative-free method suited to this non-smooth objective):

import numpy as np
from scipy.optimize import minimize
from scipy.stats import spearmanr

def model(X, coeffs):
    # Linear combination of the rows of X (equivalent to X.T @ coeffs)
    return np.sum(X.T * coeffs, axis=1)

def objective_function(coeffs, X, target):
    prediction = model(X, coeffs)
    # Negated: minimize() minimizes, but we want maximal correlation
    return -spearmanr(target, prediction)[0]

def optimize_spearman(X, target, initial_guess=None):
    if initial_guess is None:
        # Default to zero coefficients; note that an all-zero start gives a
        # constant prediction (undefined correlation), so a small random
        # initial_guess may be a safer choice in practice
        initial_guess = np.zeros(X.shape[0])
    result = minimize(objective_function, initial_guess,
                      args=(X, target), method='Nelder-Mead')
    optimal_coeffs = result.x
    maximized_corr = -objective_function(optimal_coeffs, X, target)
    return optimal_coeffs, maximized_corr

# Example usage
# new_bulk = np.array(...)       # Replace with your 11x471 matrix
# target_vector = np.array(...)  # Replace with your 471-D target vector
# optimal_coeffs, maximized_corr = optimize_spearman(new_bulk, target_vector)
# print("Optimal coefficients:", optimal_coeffs)
# print("Maximized Spearman Correlation:", maximized_corr)
⒃ BFGS (quasi-Newton method)
① Definition: Approximates the inverse of the Hessian matrix to reduce computation in Newton's method.
② Formula: H_{k+1} = (I - ρ_k s_k y_k^T) H_k (I - ρ_k y_k s_k^T) + ρ_k s_k s_k^T, with s_k = x_{k+1} - x_k, y_k = ∇f(x_{k+1}) - ∇f(x_k), ρ_k = 1/(y_k^T s_k)
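SciPy's minimize(method='BFGS') maintains exactly such an inverse-Hessian approximation internally; the Rosenbrock test function below is an illustrative choice.

import numpy as np
from scipy.optimize import minimize, rosen, rosen_der

# Only first derivatives are needed; the inverse Hessian is built up from
# successive gradient differences.
res = minimize(rosen, x0=np.zeros(5), jac=rosen_der, method='BFGS')
print(res.x)          # ~[1, 1, 1, 1, 1], the Rosenbrock minimum
print(res.hess_inv)   # final inverse-Hessian approximation H_k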
⒄ SANN (simulated annealing)
① A slow stochastic global optimization method.
② Works well with non-differentiable or non-convex functions.
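The text refers to R's optim(method = "SANN"); as a Python stand-in (an assumption, not the original's code), scipy.optimize.dual_annealing provides a related annealing-type global optimizer.

import numpy as np
from scipy.optimize import dual_annealing

# Rastrigin function: many local minima, global minimum 0 at the origin --
# the kind of non-convex landscape where annealing is useful.
def rastrigin(x):
    return 10 * len(x) + np.sum(x**2 - 10 * np.cos(2 * np.pi * x))

res = dual_annealing(rastrigin, bounds=[(-5.12, 5.12)] * 2, seed=0)
print(res.x, res.fun)   # near (0, 0) with value close to 0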
⒅ Related Programming Tools
① R packages and functions for optimization: RSolnp, optim, GenSA, alabama
18. HMM-Related Algorithms
⑴ Markov Model: a system in which the next state depends only on the current state, not on past states.
⑵ HMM (Hidden Markov Model): a Markov model whose states are hidden; only the outputs (emissions) produced by each state are observed.
⑶ Baum-Welch Algorithm: an EM-type algorithm for training HMM parameters from observed sequences.
⑷ Viterbi Algorithm: dynamic programming for inferring the most probable sequence of hidden states (see the sketch below).
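A compact Viterbi sketch in log space (log probabilities avoid underflow on long sequences); the two-state model and its probabilities are illustrative only.

import numpy as np

def viterbi(pi, A, B, obs):
    # pi: initial probs (S,), A: transitions (S,S), B: emissions (S,V),
    # obs: observation indices (T,). Returns the most probable state path.
    S, T = len(pi), len(obs)
    logd = np.log(pi) + np.log(B[:, obs[0]])
    back = np.zeros((T, S), dtype=int)
    for t in range(1, T):
        scores = logd[:, None] + np.log(A)    # scores[i, j]: reach j via i
        back[t] = np.argmax(scores, axis=0)
        logd = np.max(scores, axis=0) + np.log(B[:, obs[t]])
    path = [int(np.argmax(logd))]
    for t in range(T - 1, 0, -1):             # backtrack
        path.append(back[t, path[-1]])
    return path[::-1]

pi = np.array([0.6, 0.4])
A = np.array([[0.7, 0.3], [0.4, 0.6]])
B = np.array([[0.5, 0.4, 0.1], [0.1, 0.3, 0.6]])
print(viterbi(pi, A, B, [0, 1, 2]))           # -> [0, 0, 1]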
Posted: 2016.12.11 02:10
Revised: 2024.03.28 22:20