Conjugate gradient squared method
In numerical linear algebra, the conjugate gradient squared method (CGS) is an iterative algorithm for solving systems of linear equations of the form $A\mathbf{x} = \mathbf{b}$, particularly in cases where computing the transpose $A^T$ is impractical.[1] The CGS method was developed as an improvement to the biconjugate gradient method.[2][3][4]
Background
A system of linear equations $A\mathbf{x} = \mathbf{b}$ consists of a known matrix $A$ and a known vector $\mathbf{b}$. To solve the system is to find the value of the unknown vector $\mathbf{x}$.[3][5] A direct method of solution is to invert the matrix $A$ and compute $\mathbf{x} = A^{-1}\mathbf{b}$; however, computing the inverse is computationally expensive, so iterative methods are commonly used instead. An iterative method begins with a guess $\mathbf{x}^{(0)}$ and improves the guess on each iteration. Once the difference between successive guesses is sufficiently small, the method has converged to a solution.[6][7]
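As an illustration of this guess-and-improve pattern, the following sketch implements Jacobi iteration, one of the simplest iterative methods (not CGS itself, which is given below). The function name, tolerance, and iteration cap are illustrative choices, not part of any fixed interface.

```python
import numpy as np

def jacobi(A, b, x0=None, tol=1e-8, max_iter=1000):
    """Solve A x = b by Jacobi iteration: start from a guess and
    refine it until successive guesses are sufficiently close."""
    x = np.zeros_like(b, dtype=float) if x0 is None else x0.astype(float)
    D = np.diag(A)            # diagonal entries of A (must be nonzero)
    R = A - np.diagflat(D)    # off-diagonal part of A
    for _ in range(max_iter):
        x_new = (b - R @ x) / D   # one improvement step
        # Converged once successive guesses differ by less than tol.
        if np.linalg.norm(x_new - x) <= tol:
            return x_new
        x = x_new
    return x
```

Jacobi iteration converges, for example, when $A$ is strictly diagonally dominant; CGS applies to a much broader class of nonsymmetric systems.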
As with the conjugate gradient method, the biconjugate gradient method, and similar iterative methods for solving systems of linear equations, the CGS method can be used to find solutions to multi-variable optimisation problems arising in applications such as power-flow analysis, hyperparameter optimisation, and facial recognition.[8]
Algorithm
The algorithm is as follows:[9] a sketch of these steps in code appears after the list.
- Choose an initial guess $\mathbf{x}^{(0)}$
- Compute the residual $\mathbf{r}^{(0)} = \mathbf{b} - A\mathbf{x}^{(0)}$
- Choose $\tilde{\mathbf{r}} = \mathbf{r}^{(0)}$
- For $i = 1, 2, 3, \dots$ do:
  - $\rho^{(i-1)} = \tilde{\mathbf{r}}^T \mathbf{r}^{(i-1)}$
  - If $\rho^{(i-1)} = 0$, the method fails.
  - If $i = 1$:
    - $\mathbf{p}^{(1)} = \mathbf{u}^{(1)} = \mathbf{r}^{(0)}$
  - Else:
    - $\beta^{(i-1)} = \rho^{(i-1)} / \rho^{(i-2)}$
    - $\mathbf{u}^{(i)} = \mathbf{r}^{(i-1)} + \beta^{(i-1)} \mathbf{q}^{(i-1)}$
    - $\mathbf{p}^{(i)} = \mathbf{u}^{(i)} + \beta^{(i-1)} \left( \mathbf{q}^{(i-1)} + \beta^{(i-1)} \mathbf{p}^{(i-1)} \right)$
  - Solve $M \hat{\mathbf{p}} = \mathbf{p}^{(i)}$, where $M$ is a pre-conditioner.
  - $\hat{\mathbf{v}} = A \hat{\mathbf{p}}$
  - $\alpha^{(i)} = \rho^{(i-1)} / \left( \tilde{\mathbf{r}}^T \hat{\mathbf{v}} \right)$
  - $\mathbf{q}^{(i)} = \mathbf{u}^{(i)} - \alpha^{(i)} \hat{\mathbf{v}}$
  - Solve $M \hat{\mathbf{u}} = \mathbf{u}^{(i)} + \mathbf{q}^{(i)}$
  - $\mathbf{x}^{(i)} = \mathbf{x}^{(i-1)} + \alpha^{(i)} \hat{\mathbf{u}}$
  - $\hat{\mathbf{q}} = A \hat{\mathbf{u}}$
  - $\mathbf{r}^{(i)} = \mathbf{r}^{(i-1)} - \alpha^{(i)} \hat{\mathbf{q}}$
  - Check for convergence: if the method has converged, end the loop and return $\mathbf{x}^{(i)}$
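The following is a minimal sketch of the steps above in Python with NumPy, assuming a dense matrix and, by default, an identity pre-conditioner. The function signature, the `M_solve` callable, and the convergence test (relative residual norm) are illustrative choices rather than a standard interface.

```python
import numpy as np

def cgs(A, b, x0=None, M_solve=None, tol=1e-8, max_iter=1000):
    """Conjugate gradient squared solver for A x = b (a sketch).

    M_solve, if given, is a callable applying the pre-conditioner,
    i.e. returning the solution z of M z = y for a vector y."""
    n = b.shape[0]
    x = np.zeros(n) if x0 is None else np.asarray(x0, dtype=float).copy()
    precond = M_solve if M_solve is not None else (lambda y: y)  # identity M

    r = b - A @ x          # r^(0) = b - A x^(0)
    r_tilde = r.copy()     # shadow residual, chosen as r^(0)
    p = np.zeros(n)
    q = np.zeros(n)
    rho_prev = 0.0

    for i in range(1, max_iter + 1):
        rho = r_tilde @ r                  # rho^(i-1)
        if rho == 0.0:
            raise RuntimeError("CGS breakdown: rho = 0")
        if i == 1:
            u = r.copy()
            p = u.copy()
        else:
            beta = rho / rho_prev          # beta^(i-1)
            u = r + beta * q
            p = u + beta * (q + beta * p)
        p_hat = precond(p)                 # solve M p_hat = p
        v_hat = A @ p_hat
        alpha = rho / (r_tilde @ v_hat)    # alpha^(i)
        q = u - alpha * v_hat
        u_hat = precond(u + q)             # solve M u_hat = u + q
        x = x + alpha * u_hat
        r = r - alpha * (A @ u_hat)        # r^(i), with q_hat = A u_hat
        if np.linalg.norm(r) <= tol * np.linalg.norm(b):
            return x, i                    # converged
        rho_prev = rho
    return x, max_iter

# Illustrative usage on a small nonsymmetric, diagonally dominant system.
rng = np.random.default_rng(0)
A = 4.0 * np.eye(100) + 0.1 * rng.standard_normal((100, 100))
b = rng.standard_normal(100)
x, iterations = cgs(A, b)
print(iterations, np.linalg.norm(A @ x - b))  # small residual norm
```

Note that, consistent with the motivation given above, the sketch applies $A$ only through matrix-vector products and never forms or applies $A^T$.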
See also
- Biconjugate gradient method
- Biconjugate gradient stabilized method
- Generalized minimal residual method
References
1. Noel Black; Shirley Moore. "Conjugate Gradient Squared Method". Wolfram MathWorld.
2. MathWorks. "cgs". MATLAB documentation.
3. Henk van der Vorst (2003). "Bi-Conjugate Gradients". Iterative Krylov Methods for Large Linear Systems. Cambridge University Press. ISBN 0-521-81828-1.
4. Peter Sonneveld (1989). "CGS, a Fast Lanczos-Type Solver for Nonsymmetric Linear Systems". SIAM Journal on Scientific and Statistical Computing. 10 (1): 36–52. doi:10.1137/0910004. ProQuest 921988114.
5. "Linear equations" (PDF). Matrix Analysis and Applied Linear Algebra. Philadelphia, PA: SIAM. 2000. pp. 1–40. doi:10.1137/1.9780898719512.ch1. ISBN 978-0-89871-454-8. Archived from the original on 10 June 2004. Retrieved 18 December 2023.
6. "Iterative Methods for Linear Systems". MathWorks.
7. Jean Gallier. "Iterative Methods for Solving Linear Systems" (PDF). University of Pennsylvania.
8. Alexandra Roberts; Anye Shi; Yue Sun. "Conjugate gradient methods". Cornell University. Retrieved 26 December 2023.
9. R. Barrett; M. Berry; T. F. Chan; J. Demmel; J. Donato; J. Dongarra; V. Eijkhout; R. Pozo; C. Romine; H. van der Vorst (1994). Templates for the Solution of Linear Systems: Building Blocks for Iterative Methods (2nd ed.). SIAM.