Recently, Huang and Huang [Journal of Computational and Applied Mathematics, 328 (2018) 381–399] proposed a modified generalized shift-splitting preconditioned (MGSSP) method for solving large sparse saddle point problems, together with the corresponding theoretical analysis and numerical experiments. In this paper, based on the MGSSP method, we generalize the MGSSP algorithm and present a new generalized shift-splitting preconditioned (NGSSP) method for nonsymmetric saddle point problems. Moreover, by a similar theoretical analysis, we derive convergence conditions for the matrix splitting iteration methods associated with the NGSSP preconditioned saddle point matrices. Finally, an example is provided to confirm the effectiveness of the method.
Consider the following block saddle point problem
where A is a symmetric positive definite matrix, B has full column rank, B* is the conjugate transpose of B, and f and g are two given vectors. Systems of this form appear in many applications of scientific computing, such as constrained optimization,1 the finite element method for solving the Navier–Stokes equations,2–4 and constrained and generalized least squares problems.5–17
In recent years, there has been a surge of interest in saddle point problems of the form (1), and a large number of stationary iterative methods have been proposed. For example, Santos et al.6 studied preconditioned iterative methods for solving the singular augmented system. Golub et al.18 presented SOR-like algorithms for solving linear systems (1). Darvishi and Hessari19 studied the symmetric SOR (SSOR) method for augmented systems. Bai and Wang1,20–22 presented the GSOR method, the parameterized Uzawa (PU) method and the parameterized inexact Uzawa (PIU) method for solving linear systems (1). Zhang and Lu23 studied the generalized symmetric SOR method for augmented systems. Peng and Li24 studied unsymmetric block overrelaxation-type methods for saddle point problems. Bai and Golub25–31 presented splitting iteration methods such as the Hermitian and skew-Hermitian splitting (HSS) iteration scheme and its preconditioned variants; Krylov subspace methods such as the preconditioned conjugate gradient (PCG), preconditioned MINRES (PMINRES) and restrictively preconditioned conjugate gradient (RPCG) iteration schemes; and preconditioning techniques related to Krylov subspace methods, such as HSS, block-diagonal, block-triangular and constraint preconditioners.
Recently, based on the modified generalized shift-splitting of the saddle point matrix, Huang and Huang32 constructed a modified generalized shift-splitting preconditioned (MGSSP) method for solving large sparse saddle point problems. Theoretical analysis and numerical examples show that the corresponding iteration method converges unconditionally to the unique solution of the saddle point problem.
For large, sparse, or structured matrices, iterative methods are an attractive option. In particular, Krylov subspace methods apply techniques that involve orthogonal projections onto subspaces of the form
K_m(A, r_0) = span{r_0, A r_0, A^2 r_0, …, A^{m−1} r_0}.
The conjugate gradient (CG) method, the minimum residual (MINRES) method and the generalized minimal residual (GMRES) method are common Krylov subspace methods. The CG method is used for symmetric positive definite matrices, MINRES for symmetric and possibly indefinite matrices, and GMRES for unsymmetric matrices.33
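As a small illustration of these applicability ranges, the following Python sketch applies SciPy's built-in CG, MINRES and GMRES solvers to synthetic dense test matrices of the matching symmetry class (these matrices are illustrative stand-ins, not the saddle point systems of this paper):

```python
import numpy as np
from scipy.sparse.linalg import cg, minres, gmres

rng = np.random.default_rng(0)
n = 50
M = rng.standard_normal((n, n))
Q, _ = np.linalg.qr(M)

A_spd = M @ M.T + n * np.eye(n)                            # symmetric positive definite -> CG
A_sym = Q @ np.diag(np.linspace(-10, 10, n) + 0.5) @ Q.T   # symmetric indefinite -> MINRES
A_uns = M + n * np.eye(n)                                  # unsymmetric -> GMRES
b = rng.standard_normal(n)

x_cg, info_cg = cg(A_spd, b)       # info == 0 signals convergence
x_mr, info_mr = minres(A_sym, b)
x_gm, info_gm = gmres(A_uns, b)
```

Each solver returns the approximate solution together with an `info` flag that is zero on successful convergence.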
In this paper, on the basis of the special shift-splitting of Huang and Huang,32 we generalize the MGSSP algorithm and present the new generalized shift-splitting preconditioned (NGSSP) method for nonsymmetric saddle point problems. Moreover, by a similar theoretical analysis, we derive convergence conditions for the matrix splitting iteration methods associated with the NGSSP preconditioned saddle point matrices. Finally, an example is provided to confirm the effectiveness.
New generalized shift-splitting preconditioned (NGSSP) method
Recently, for the coefficient matrix of the augmented system (1), Huang and Huang32 made the following splitting
where α and β are two constants and I_m and I_n are identity matrices. Based on the iteration methods studied in Huang and Huang,32 we establish the new generalized shift-splitting of the saddle point matrix as follows:
where α, β and ω are three constants and I_m and I_n are identity matrices. By this special splitting, the following new generalized shift-splitting preconditioned (NGSSP) method can be defined for solving the saddle point problem (1):
New generalized shift-splitting preconditioned (NGSSP) method: Given initial vectors and three relaxation parameters α, β and ω, for k = 0, 1, 2, … until the iteration sequence converges, compute
It is easy to see that the iteration matrix of the NGSSP iteration is
If we use a Krylov subspace method, such as the GMRES (generalized minimal residual) method or its restarted variant, to approximate the solution of this system of linear equations, then
can serve as a preconditioner. We call it the NGSSP preconditioner for the generalized saddle point matrix.
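To make the preconditioned Krylov step concrete, the following Python sketch uses a shift-splitting-type matrix as a preconditioner inside SciPy's GMRES. The block form chosen for P below (an αI-shifted symmetric positive definite (1,1) block, a βI (2,2) block and an overall factor 1/2) is only an illustrative assumption in the spirit of the MGSSP splitting of Huang and Huang;32 the exact NGSSP blocks are those defined in (5):

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve
from scipy.sparse.linalg import LinearOperator, gmres

rng = np.random.default_rng(1)
m, p = 40, 15
M0 = rng.standard_normal((m, m))
A = M0 @ M0.T + m * np.eye(m)                      # SPD (1,1) block
B = rng.standard_normal((p, m))                    # full row rank
K = np.block([[A, B.T], [-B, np.zeros((p, p))]])   # saddle point matrix
rhs = rng.standard_normal(m + p)

alpha, beta = 0.3, 0.4                             # illustrative shift parameters
P = 0.5 * np.block([[alpha * np.eye(m) + A, B.T],
                    [-B, beta * np.eye(p)]])       # assumed shift-splitting-type form
lu = lu_factor(P)                                  # factor once, reuse every iteration
M_prec = LinearOperator((m + p, m + p), matvec=lambda v: lu_solve(lu, v))

x, info = gmres(K, rhs, M=M_prec)                  # preconditioned GMRES
```

Because P differs from K only by the shifts and the factor 1/2, the preconditioned eigenvalues cluster away from the origin and GMRES converges in few iterations.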
In every iteration of the NGSSP iteration (4) or of the preconditioned Krylov subspace method, the residual equation (7) needs to be solved for a given right-hand side vector. Hence, analogous to Algorithm 2.1 in Cao,34 we can derive the following algorithmic version of the NGSSP iteration method.
Algorithm 2.1. For a given vector , the vector can be computed by (7) from the following steps:
Step 1: Compute
Step 2: Solve
Step 3: Solve
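The three steps above amount to a block-elimination solve with the shifted (1,1) block and a Schur complement. The Python sketch below illustrates this pattern; the particular block form of P used here (shifts a and b, scaling w) is a hypothetical stand-in for the exact NGSSP blocks in (5):

```python
import numpy as np

rng = np.random.default_rng(2)
m, p = 30, 10
M0 = rng.standard_normal((m, m))
A = M0 @ M0.T + m * np.eye(m)                  # SPD (1,1) block
B = rng.standard_normal((p, m))
a, b, w = 0.3, 0.4, 2.4                        # illustrative parameters
F = a * np.eye(m) + w * A                      # shifted (1,1) block of 2P

r = rng.standard_normal(m + p)
r1, r2 = r[:m], r[m:]

# Step 1: form the Schur complement S = b*I + B F^{-1} B^T.
S = b * np.eye(p) + B @ np.linalg.solve(F, B.T)
# Step 2: solve the Schur system for the multiplier part z2.
z2 = np.linalg.solve(S, 2.0 * r2 + 2.0 * B @ np.linalg.solve(F, r1))
# Step 3: back-substitute for the primal part z1.
z1 = np.linalg.solve(F, 2.0 * r1 - B.T @ z2)
z = np.concatenate([z1, z2])

# The assembled P, for verifying the block solve.
P = 0.5 * np.block([[F, B.T], [-B, b * np.eye(p)]])
```

In practice only systems with F and S are solved; P itself is never formed, which is what makes the preconditioner inexpensive to apply.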
Remark 2.1. When the third parameter of the NGSSP method takes its special value, the NGSSP method reduces to the MGSSP method. Hence, the NGSSP method is a generalization of the existing iterative algorithm.
Convergence of NGSSP method
Now we turn to the convergence of the NGSSP iteration for solving the saddle point problem (1). It is well known that the iteration method (4) converges for every initial guess if and only if ρ(T) < 1, where ρ(T) denotes the spectral radius of the iteration matrix T. In Huang and Huang,32 the spectral properties of the iteration matrix and of the preconditioned matrix were established for the MGSSP method. In this section, we analyze the spectral properties of the NGSSP iteration matrix and show that the NGSSP iterative method converges under certain conditions.
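The criterion ρ(T) < 1 is easy to check numerically for any concrete splitting K = P − N. As a generic illustration (using a simple Jacobi-type splitting of a diagonally dominant test matrix, not the NGSSP splitting itself), the following sketch verifies that the stationary iteration converges when the spectral radius of the iteration matrix is below one:

```python
import numpy as np

# Stationary iteration x^{k+1} = P^{-1}(N x^k + b) for the splitting K = P - N;
# it converges for every initial guess iff rho(P^{-1} N) < 1.
rng = np.random.default_rng(3)
n = 20
K = rng.standard_normal((n, n)) + 2 * n * np.eye(n)   # strongly diagonally dominant
P = np.diag(np.diag(K))                               # Jacobi-type splitting
N = P - K
T = np.linalg.solve(P, N)                             # iteration matrix P^{-1} N
rho = max(abs(np.linalg.eigvals(T)))                  # spectral radius

b = rng.standard_normal(n)
x = np.zeros(n)
for _ in range(200):
    x = np.linalg.solve(P, N @ x + b)                 # converges since rho < 1
```

The same check, with P taken as the NGSSP splitting matrix, can be used to validate the convergence conditions derived below on small test problems.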
Lemma 3.1. Let A be symmetric positive definite, let B have full column rank and let the splitting be defined as in (5). Assume λ is an eigenvalue of the NGSSP iteration matrix with corresponding eigenvector; then
(1) .
(2) .
(3) If , then
Proof. The proof is similar to those of Lemmas 2.1 and 2.2 in Cao et al.35 and is therefore omitted.
Lemma 3.2.36 Both roots of the real quadratic equation x² − bx + c = 0 are less than one in modulus if and only if |c| < 1 and |b| < 1 + c.
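This classical root criterion of Young36 is easy to confirm numerically: for randomly sampled coefficients, the modulus condition on the roots of x² − bx + c = 0 matches the inequality pair |c| < 1 and |b| < 1 + c exactly. A small Python check:

```python
import numpy as np

rng = np.random.default_rng(4)
mismatches = 0
for _ in range(2000):
    b, c = rng.uniform(-3.0, 3.0, size=2)
    # Both roots of x^2 - b x + c = 0 strictly inside the unit disk?
    inside = all(abs(r) < 1 for r in np.roots([1.0, -b, c]))
    # Young's criterion: |c| < 1 and |b| < 1 + c.
    criterion = abs(c) < 1 and abs(b) < 1 + c
    mismatches += (inside != criterion)
```

Here `mismatches` stays at zero, covering both the real-root case (product c, sum b) and the complex-conjugate case (modulus √c).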
Theorem 3.3. Let A be symmetric positive definite, let B have full column rank and let the splitting be defined as in (5). Then for any three positive iteration parameters α, β and ω, the spectral radius of the NGSSP iteration matrix satisfies
that is, the NGSSP iteration converges to the unique solution of the saddle point problem (1), where
Proof. Let λ be an eigenvalue of the NGSSP iteration matrix with corresponding eigenvector; then we get
which can be equivalently expanded as follows:
Then we have
From Lemma 3.1, we know that . From the second formula of equation (9), we have
By substituting this relationship into the first formula of equation (9), we can obtain
By Lemma 3.1 again, we have . Left-multiplying both sides of (11) by yields
For convenience, we introduce the following notation:
From Lemma 3.2, it is straightforward to show that if and only if
and
Obviously, when the three iteration parameters α, β and ω are positive, conditions (15) and (16) above are satisfied. This completes the proof.
Theorem 3.4. Let the quantities be defined as in (13) or (14). If the iteration parameters α, β and ω satisfy the required condition, then all eigenvalues of the NGSSP iteration matrix are real and positive.
Proof. From (14), some straightforward algebra shows that the eigenvalues λ satisfy the explicit formula
where λ₊ corresponds to the + sign and λ₋ to the − sign.
So, when the above conditions on the parameters hold, all eigenvalues of the NGSSP iteration matrix are real and positive.
Remark 3.1. On the one hand, the NGSSP method is a generalization of the MGSSP method. On the other hand, when appropriate parameters are selected, the NGSSP method can converge faster than the MGSSP method.
Numerical examples
In this section, we verify the above conclusions through numerical experiments. All experiments were conducted in MATLAB 7.1, and the test matrices were generated from a mixed discretization of the two-dimensional time-harmonic Maxwell equations. In all runs we use zero initial guesses, and the iteration is stopped once the relative residual has been reduced by at least six orders of magnitude.
Example 1. To further evaluate the effectiveness of the new preconditioner combined with Krylov subspace methods, we consider a numerical example based on the two-dimensional time-harmonic Maxwell equations with constant coefficients: find the vector field and the multiplier such that
Here, Ω is a simply connected polyhedral domain with a connected boundary, and n denotes the outward unit normal on the boundary. The datum is a given generic source. Finite element discretization, using Nédélec elements of the first kind for the approximation of the vector field and standard nodal elements for the multiplier, yields a block saddle point problem of the form (1).
For simplicity, we take a generic source and a finite element subdivision based on uniform grids of triangular elements. Three mesh sizes are considered, as shown in Figure 1. The preconditioning system is solved exactly in each iteration. Table 1 reports the sparsity information of the relevant matrices on the different grids, where nz denotes the number of nonzero elements of a matrix.
A uniform mesh with .
Datasheet for different grids.

Grid | m      | n    | nz     | nz     | nz     | order of the coefficient matrix
---- | ------ | ---- | ------ | ------ | ------ | -------------------------------
—    | 176    | 49   | 820    | 462    | 217    | 225
—    | 736    | 225  | 3556   | 2190   | 1065   | 961
—    | 3008   | 961  | 14,788 | 9486   | 4681   | 3969
—    | 12,160 | 3969 | 60,292 | 39,438 | 19,593 | 16,129
Since the new preconditioner has three parameters, we test different values in the numerical experiments. The experiments display the spectra of the NGSSP preconditioned matrix and the MGSSP preconditioned matrix for different parameter choices, and the results coincide with the theoretical analysis.
In Figures 2, 4 and 6 we display the eigenvalues of the NGSSP iteration matrix on the three meshes for different parameters, and in Figures 3, 5 and 7 the eigenvalues of the MGSSP iteration matrix on the same meshes. In Tables 2 and 3 we report iteration counts for the NGSSP and MGSSP preconditioned matrices with different parameter choices, when applied with the BICGSTAB and GMRES Krylov subspace iterative methods on the three meshes; in each case the tables list the iteration number and relative residual of the corresponding preconditioned method.
Remark 4.1. Figures 2 to 7 show that the eigenvalue distributions of the iteration matrices confirm the above theoretical analysis.
Remark 4.2. From Tables 2 and 3, it is easy to see that the NGSSP and MGSSP preconditioners efficiently improve the convergence of BICGSTAB and GMRES when applied to the two-dimensional time-harmonic Maxwell equations, for various parameter choices.
The eigenvalue distribution for the NGSSP iteration matrix when (the first), (the second), (the third) and (the fourth), respectively. Here,
The eigenvalue distribution for the MGSSP iteration matrix when (the first), (the second), (the third) and (the fourth), respectively. Here,
The eigenvalue distribution for the NGSSP iteration matrix when (the first), (the second), (the third) and (the fourth), respectively. Here, .
The eigenvalue distribution for the MGSSP iteration matrix when (the first), (the second), (the third) and (the fourth), respectively. Here, .
The eigenvalue distribution for the NGSSP iteration matrix when (the first), (the second), (the third) and (the fourth), respectively. Here, .
The eigenvalue distribution for the MGSSP iteration matrix when (the first), (the second), (the third) and (the fourth), respectively. Here, .
Iteration counts for the NGSSP and MGSSP preconditioned matrices with different parameter choices, where the unpreconditioned BICGSTAB and GMRES methods are divergent. The grid is as described in the datasheet above.
BICGSTAB:
α   | β   | ω   | IT (NGSSP) | α   | β   | IT (MGSSP)
0.3 | 0.4 | 2.4 | 8.5        | 0.3 | 0.4 | 11
0.4 | 0.9 | 2.1 | 18.5       | 0.4 | 0.9 | 19.5
0.7 | 0.3 | 2.6 | 12.5       | 0.7 | 0.3 | 17.5
0.8 | 0.4 | 2.3 | 17.5       | 0.8 | 0.4 | 21

GMRES:
α   | β   | ω   | IT (NGSSP) | α   | β   | IT (MGSSP)
0.3 | 0.4 | 2.4 | 14(1)      | 0.3 | 0.4 | 16(1)
0.4 | 0.9 | 2.1 | 23(1)      | 0.4 | 0.9 | 24(1)
0.7 | 0.3 | 2.6 | 17(1)      | 0.7 | 0.3 | 22(1)
0.8 | 0.4 | 2.3 | 23(1)      | 0.8 | 0.4 | 25(1)
Iteration counts for the NGSSP and MGSSP preconditioned matrices with different parameter choices, where the unpreconditioned BICGSTAB and GMRES methods are divergent. The grid is as described in the datasheet above.
BICGSTAB:
α   | β   | ω   | IT (NGSSP) | α   | β   | IT (MGSSP)
0.3 | 0.4 | 2.4 | 21.5       | 0.3 | 0.4 | 25.5
0.4 | 0.9 | 2.1 | 53.5       | 0.4 | 0.9 | 55.5
0.7 | 0.3 | 2.6 | 31.5       | 0.7 | 0.3 | 40.5
0.8 | 0.4 | 2.3 | 43.5       | 0.8 | 0.4 | 44

GMRES:
α   | β   | ω   | IT (NGSSP) | α   | β   | IT (MGSSP)
0.3 | 0.4 | 2.4 | 27(1)      | 0.3 | 0.4 | 32(1)
0.4 | 0.9 | 2.1 | 46(1)      | 0.4 | 0.9 | 48(1)
0.7 | 0.3 | 2.6 | 34(1)      | 0.7 | 0.3 | 42(1)
0.8 | 0.4 | 2.3 | 44(1)      | 0.8 | 0.4 | 49(1)
Conclusions
In this paper, based on the MGSSP preconditioner, we added an acceleration factor, obtaining the NGSSP preconditioner, and carried out both theoretical and numerical analysis. The purpose of this paper is to introduce this acceleration factor and to analyze the conditions on the parameters under which the iteration converges. In practical numerical simulations, the convergence speed can be improved by selecting appropriate parameters.
Footnotes
Handling Editor: Chenhui Liang
Declaration of conflicting interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding
The author(s) disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: This research of this author is supported by the National Natural Science Foundation of China (11226337, 11501525), Basic Research Projects of Key Scientific Research Projects Plan in Henan Higher Education Institutions (20zx003), Henan Natural Science Foundation (222300420579), Henan Science and Technology Research Program (212102110206, 222102110404, 202102310942), Henan College Students’ Innovation Training Program (s202110485045) and College Students’ Innovation Training Program (2021-70), Key Projects of Colleges and Universities in Henan Province (22A880022), Science and Technology Foundation of Henan Province of China (222102210250).
ORCID iD
Li-Tao Zhang
References
1. Zheng B, Bai Z-Z, Yang X. On semi-convergence of parameterized Uzawa methods for singular saddle point problems. Linear Algebra Appl 2009; 431: 808–817.
2. Elman H, Silvester D. Fast nonsymmetric iterations and preconditioning for Navier–Stokes equations. SIAM J Sci Comput 1996; 17: 33–46.
3. Elman HC, Golub GH. Inexact and preconditioned Uzawa algorithms for saddle point problems. SIAM J Numer Anal 1994; 31: 1645–1661.
4. Fischer B, Ramage A, Silvester DJ, et al. Minimum residual methods for augmented systems. BIT Numer Math 1998; 38: 527–543.
5. Arioli M, Duff IS, de Rijk PPM. On the augmented system approach to sparse least-squares problems. Numer Math 1989; 55: 667–684.
6. Santos CH, Silva BP, Yuan J-Y. Block SOR methods for rank-deficient least-squares problems. J Comput Appl Math 1998; 100: 1–9.
7. Yuan J-Y. Numerical methods for generalized least squares problems. J Comput Appl Math 1996; 66: 571–584.
8. Yuan J-Y, Iusem AN. Preconditioned conjugate gradient method for generalized least squares problems. J Comput Appl Math 1996; 71: 287–297.
9. Bai Z-Z, Benzi M, Chen F. Modified HSS iteration methods for a class of complex symmetric linear systems. Computing 2010; 87: 93–111.
10. Cao Y, Yao L-Q, Jiang M-Q. A modified dimensional split preconditioner for generalized saddle point problems. J Comput Appl Math 2013; 250: 70–82.
11. Cao Y, Yao L-Q, Jiang M-Q, et al. A relaxed HSS preconditioner for saddle point problems from meshfree discretization. J Comput Math 2013; 31: 398–421.
12. Chen C-R, Ma C-F. A generalized shift-splitting preconditioner for saddle point problems. Appl Math Lett 2015; 43: 49–55.
13. Wright S. Stability of augmented system factorizations in interior-point methods. SIAM J Matrix Anal Appl 1997; 18: 191–222.
Zhang L-T. A new preconditioner for generalized saddle point matrices with highly singular (1,1) blocks. Int J Comput Math 2014; 91: 2091–2101.
16. Zhang L-T, Huang T-Z, Cheng S-H, et al. Convergence of a generalized MSSOR method for augmented systems. J Comput Appl Math 2012; 236: 1841–1850.
17. Zhou S-W, Yang A-L, Dou Y, et al. The modified shift-splitting preconditioners for nonsymmetric saddle-point problems. Appl Math Lett 2016; 59: 109–114.
18. Golub GH, Wu X, Yuan J-Y. SOR-like methods for augmented systems. BIT Numer Math 2001; 41: 71–85.
19. Darvishi MT, Hessari P. Symmetric SOR method for augmented systems. Appl Math Comput 2006; 183: 409–415.
20. Bai Z-Z, Parlett BN, Wang ZQ. On generalized successive overrelaxation methods for augmented linear systems. Numer Math 2005; 102: 1–38.
21. Bai Z-Z, Wang Z-Q. On parameterized inexact Uzawa methods for generalized saddle point problems. Linear Algebra Appl 2008; 428: 2900–2932.
22. Chen F, Jiang Y-L. A generalization of the inexact parameterized Uzawa methods for saddle point problems. Appl Math Comput 2008; 206: 765–771.
23. Zhang G-F, Lu QH. On generalized symmetric SOR method for augmented systems. J Comput Appl Math 2008; 219: 51–58.
24. Peng X-F, Li W. On unsymmetric block overrelaxation-type methods for saddle point problems. Appl Math Comput 2008; 203: 660–671.
25. Bai Z-Z, Yang X. On HSS-based iteration methods for weakly nonlinear systems. Appl Numer Math 2009; 59: 2923–2936.
26. Bai Z-Z, Golub GH, Ng MK. On inexact Hermitian and skew-Hermitian splitting methods for non-Hermitian positive definite linear systems. Linear Algebra Appl 2008; 428: 413–440.
27. Bai Z-Z. Several splittings for non-Hermitian linear systems. Sci China Ser A Math 2008; 51: 1339–1348.
28. Bai Z-Z, Golub GH, Lu L-Z, et al. Block triangular and skew-Hermitian splitting methods for positive-definite linear systems. SIAM J Sci Comput 2005; 26: 844–863.
29. Bai Z-Z, Golub GH, Ng MK. Hermitian and skew-Hermitian splitting methods for non-Hermitian positive definite linear systems. SIAM J Matrix Anal Appl 2003; 24: 603–626.
30. Jiang M-Q, Cao Y. On local Hermitian and skew-Hermitian splitting iteration methods for generalized saddle point problems. J Comput Appl Math 2009; 231: 973–982.
31. Wang L, Bai Z-Z. Convergence conditions for splitting iteration methods for non-Hermitian linear systems. Linear Algebra Appl 2008; 428: 453–468.
32. Huang ZH, Huang TZ. A modified generalized shift-splitting method for nonsymmetric saddle point problems. J Comput Appl Math 2018; 328: 381–399.
33. Van der Vorst HA. Iterative Krylov methods for large linear systems (Cambridge monographs on applied and computational mathematics). Cambridge: Cambridge University Press, 2003.
34. Cao Y. A general class of shift-splitting preconditioners for non-Hermitian saddle point problems with applications to time-harmonic eddy current models. Comput Math Appl 2019; 77: 1124–1143.
35. Cao Y, Du J, Niu Q. Shift-splitting preconditioners for saddle point problems. J Comput Appl Math 2014; 272: 239–250.
36. Young DM. Iterative solution of large linear systems. New York: Academic Press, 1971.