Iterative methods
We seek the solution to the linear system of equations <br>
:<math> Ax = b. </math> <br>
Iterative methods, unlike direct methods, generate a sequence of approximate solutions to the system that (hopefully) converges to the exact solution. After ''k'' iterations, we obtain an approximation <math> x^{(k)} </math> to the exact solution that satisfies <br>
:<math> Ax^{(k)} = b - r^{(k)}, </math> <br>
where <math> r^{(k)} </math> is the residual after ''k'' iterations. <br>
Defining <br>
:<math> \varepsilon ^{(k)} = x - x^{(k)} </math> <br>
as the difference between the exact and approximate solutions, we obtain <br>
:<math> A\varepsilon ^{(k)} = r^{(k)}. </math> <br>
The purpose of the iterations is to drive this residual, and with it the error, to zero.
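As a minimal numerical illustration of this relation, the short Python sketch below builds an arbitrary 2x2 example system and verifies that <math> A\varepsilon ^{(k)} = r^{(k)} </math> holds for any approximate solution (the matrix, right-hand side, and starting guess are made-up example data):

 import numpy as np
 
 # Arbitrary small example system (any nonsingular A and vector b would do)
 A = np.array([[4.0, 1.0],
               [1.0, 3.0]])
 b = np.array([1.0, 2.0])
 x = np.linalg.solve(A, b)         # exact solution, for reference
 
 x_k = np.zeros(2)                 # some approximate solution after "k" iterations
 r_k = b - A @ x_k                 # residual:  A x_k = b - r_k
 e_k = x - x_k                     # error:     e_k = x - x_k
 
 print(np.allclose(A @ e_k, r_k))  # True: A e_k = r_k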
===Stationary Iterative Methods===
Iterative methods that can be expressed in the simple form

:<math>
x^{(k+1)} = Bx^{(k)} + c,
</math>

where neither '''B''' nor '''c''' depends upon the iteration count ''k'', are called stationary iterative methods. Some of the stationary iterative methods are listed below, followed by a minimal Jacobi sketch:
#Jacobi method
#Gauss-Seidel method
#Successive Overrelaxation (SOR) method and
#Symmetric Successive Overrelaxation (SSOR) method
+ | |||
+ | The convergence of such iterative methods can be investigated using the [[Fixed point theorem]]. | ||
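As a minimal sketch of the first method in the list, the Jacobi iteration can be written directly in the stationary form with <math> B = I - D^{-1}A </math> and <math> c = D^{-1}b </math>, where ''D'' is the diagonal part of ''A''. The Python function below is an illustrative helper (its name, tolerance, and iteration limit are arbitrary choices); it assumes the diagonal entries of ''A'' are nonzero and converges, for instance, when ''A'' is strictly diagonally dominant:

 import numpy as np
 
 def jacobi(A, b, x0=None, tol=1e-10, max_iter=500):
     """Stationary Jacobi iteration: x_{k+1} = B x_k + c with fixed B and c."""
     D = np.diag(A)                       # diagonal of A (assumed nonzero)
     B = np.eye(len(b)) - A / D[:, None]  # B = I - D^{-1} A, the same for every k
     c = b / D                            # c = D^{-1} b,     the same for every k
     x = np.zeros_like(b) if x0 is None else np.asarray(x0, dtype=float)
     for _ in range(max_iter):
         x_new = B @ x + c                # stationary update
         if np.linalg.norm(x_new - x) < tol:
             return x_new
         x = x_new
     return x
 
 A = np.array([[4.0, 1.0], [1.0, 3.0]])   # strictly diagonally dominant example
 b = np.array([1.0, 2.0])
 print(np.allclose(A @ jacobi(A, b), b))  # True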
===Nonstationary Iterative Methods===
When '''B''' and '''c''' change during the iterations, the method is called a nonstationary iterative method. Typically, '''B''' and '''c''' are computed by taking inner products of residuals or of other vectors arising from the iterative method. Some examples are listed below, followed by a minimal conjugate gradient sketch:
#Conjugate Gradient Method (CG)
#MINRES and SYMMLQ
#Generalized Minimal Residual (GMRES)
#BiConjugate Gradient (BiCG)
#Quasi-Minimal Residual (QMR)
#Conjugate Gradient Squared Method (CGS)
#BiConjugate Gradient Stabilized (Bi-CGSTAB)
#Chebyshev Iteration
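As a minimal sketch of the first method in this list, the unpreconditioned conjugate gradient iteration below computes its step length and its direction-update coefficient from inner products of the residuals, so these quantities change from one iteration to the next. It assumes ''A'' is symmetric positive definite; the function name, tolerance, and test system are illustrative choices:

 import numpy as np
 
 def conjugate_gradient(A, b, tol=1e-10, max_iter=1000):
     """Unpreconditioned CG for a symmetric positive definite matrix A."""
     x = np.zeros_like(b)
     r = b - A @ x                  # initial residual
     p = r.copy()                   # initial search direction
     rs_old = r @ r
     for _ in range(max_iter):
         Ap = A @ p
         alpha = rs_old / (p @ Ap)  # step length from inner products
         x = x + alpha * p
         r = r - alpha * Ap
         rs_new = r @ r
         if np.sqrt(rs_new) < tol:
             break
         beta = rs_new / rs_old     # direction-update coefficient, changes each iteration
         p = r + beta * p
         rs_old = rs_new
     return x
 
 A = np.array([[4.0, 1.0], [1.0, 3.0]])  # symmetric positive definite example
 b = np.array([1.0, 2.0])
 print(np.allclose(A @ conjugate_gradient(A, b), b))  # True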