Recently, Zhang [Numerical Linear Algebra with Applications, 2018: e2166] constructed an efficient variant of the Hermitian and skew-Hermitian splitting (HSS) preconditioner for generalized saddle point problems and gave the corresponding theoretical analysis and numerical experiments. In this paper, based on the efficient variant of the HSS (EVHSS) preconditioner, we relax the strong condition on the iteration parameter, generalize the algorithm, and present a modified variant of the HSS (MVHSS) preconditioner for solving the generalized saddle point problem. Moreover, by a similar theoretical analysis, we show that the MVHSS iteration method is convergent under weaker conditions. Finally, an example is given to verify the effectiveness of the proposed method.
Consider the following block saddle point problem
where is symmetric positive definite, is of full rank, and is symmetric positive semidefinite. Such problems arise in many applications of scientific computing, such as constrained optimization,48 finite element methods for the Navier–Stokes equations,26–28 and constrained and generalized least squares problems;1,33,42,43 see9–23,31–47 and the references therein.
In recent years, there has been great interest in the saddle point problem of form (1), and a large number of stationary iterative methods have been proposed. For example, Santos et al.33 studied preconditioned iterative methods for solving the singular augmented system with . Golub et al.29 presented SOR-like algorithms for solving linear systems (1). Darvishi and Hessari25 studied the SSOR method for augmented systems. Bai et al.,2 Bai and Wang,3 Chen and Jiang,24 and Zheng et al.48 proposed the GSOR method and the parameterized Uzawa (PU) and inexact parameterized Uzawa (PIU) methods for solving linear systems (1). Zhang and Lu44 presented the generalized symmetric SOR method for augmented systems. Peng and Li32 studied unsymmetric block overrelaxation-type methods for saddle point problems. Bai and Yang,4 Bai et al.,5 Bai,6 Bai et al.,7,8 Jiang and Cao,30 and Wang and Bai38 proposed splitting iteration methods, such as Hermitian and skew-Hermitian splitting (HSS) iteration schemes and their preconditioned variants. Krylov subspace methods, such as the preconditioned conjugate gradient (PCG), preconditioned MINRES (PMINRES), and constraint-preconditioned conjugate gradient (RPCG) schemes, have also been combined with HSS, block diagonal, block triangular, and constraint preconditioning techniques. Mititelu and Treanţă31 proposed and proved necessary and sufficient optimality conditions for multi-objective control problems involving multiple integrals. Treanţă34–36 studied constrained variational problems involving second-order Lagrangians, PDE-constrained optimal control problems with first-order PDE constraints and the associated saddle-point optimality criteria, as well as a new class of vector variational control problems.
More recently, by switching the positions of the two splitting matrices in the HSS preconditioner, Zhang45 presented an efficient variant of the HSS preconditioner for generalized saddle point problems. Theoretical analysis and numerical examples show that, under certain conditions, the corresponding iteration method converges to the unique solution of the generalized saddle point problem.
For large, sparse, or structured matrices, iterative methods are an attractive choice. In particular, Krylov subspace methods apply orthogonal projection techniques to subspaces of the form
The conjugate gradient (CG) method, the minimum residual (MINRES) method, and the generalized minimum residual (GMRES) method are commonly used Krylov subspace methods. The CG method is designed for symmetric positive definite matrices, while the MINRES method handles symmetric, possibly indefinite, matrices.37
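As a small, self-contained illustration (not part of the original experiments), the following Python sketch applies CG to a symmetric positive definite system and MINRES to a symmetric indefinite saddle point system; the matrices are randomly generated stand-ins, and the SciPy solver defaults are used.

```python
import numpy as np
from scipy.sparse.linalg import cg, minres

rng = np.random.default_rng(0)

# CG on a symmetric positive definite system (SPD by construction).
F = rng.standard_normal((8, 8))
A = F @ F.T + 8 * np.eye(8)
b = np.ones(8)
x, info = cg(A, b)
assert info == 0                      # 0 means the solver converged
print(np.linalg.norm(A @ x - b))      # small residual

# MINRES on a symmetric indefinite saddle point matrix [[A, B^T], [B, 0]].
B = rng.standard_normal((3, 8))
K = np.block([[A, B.T], [B, np.zeros((3, 3))]])
r = np.ones(11)
y, info = minres(K, r)
assert info == 0
print(np.linalg.norm(K @ y - r))
```

CG would not be appropriate for the indefinite matrix K, which is why MINRES (or GMRES for nonsymmetric preconditioned systems) is used for saddle point problems.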
In this paper, based on the efficient variant of the HSS (EVHSS) preconditioner by Zhang,45 we add an acceleration factor and carry out theoretical analysis and numerical verification. Moreover, we show that the MVHSS iteration method is convergent under weaker conditions.
Modified variant of HSS (MVHSS) preconditioner
Recently, for the coefficient matrix of the augmented system (1), Zhang45 introduced the following splitting
where is a constant and is the identity matrix (of appropriate dimension). Based on the iteration methods studied by Bai et al.8 and Zhang,45 we establish the following modified variant of the Hermitian and skew-Hermitian splitting (MVHSS) of the saddle point matrix :
where and are two constants. Based on this special splitting, the following modified variant of the HSS (MVHSS) method can be defined for the saddle point problem (1):
Modified variant of the Hermitian and skew-Hermitian splitting (MVHSS) method
Given initial vectors and two relaxation parameters and . For until the iteration sequence converges, compute
where and are two constants. The iteration matrix of the MVHSS iteration is
If the solution of the linear system is approximated by the generalized minimum residual (GMRES) method or its restarted variant, then
can serve as a preconditioner. We call the MVHSS preconditioner for the generalized saddle point matrix .
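The mechanics of iterating with a matrix splitting can be sketched in a few lines. Since the MVHSS splitting matrices involve the problem blocks and the parameters, which are not reproduced here, this hedged Python example uses a simple Jacobi-type splitting K = M − N of a small diagonally dominant matrix as a stand-in; the spectral-radius criterion checked below is the same one used in the convergence analysis.

```python
import numpy as np

# Stand-in system (not the MVHSS matrices): split K = M - N with a
# Jacobi-type splitting of a diagonally dominant 2x2 matrix.
K = np.array([[4.0, 1.0],
              [1.0, 3.0]])
b = np.array([1.0, 2.0])
M = np.diag(np.diag(K))           # easily invertible splitting matrix
N = M - K                         # so that K = M - N

T = np.linalg.solve(M, N)         # iteration matrix M^{-1} N
rho = max(abs(np.linalg.eigvals(T)))
assert rho < 1                    # spectral radius < 1  =>  convergence

x = np.zeros_like(b)
for _ in range(200):              # x_{k+1} = M^{-1} (N x_k + b)
    x = np.linalg.solve(M, N @ x + b)

print(np.linalg.norm(K @ x - b))  # residual after 200 sweeps
```

The same scheme, with the MVHSS splitting matrix in the role of M, both defines the stationary iteration (4) and supplies M as a preconditioner for a Krylov method.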
In each step of the MVHSS iteration (4) or of a preconditioned Krylov subspace method, a residual equation of the form
needs to be solved for a given vector. Since the matrix admits the following factorization
Let and , where and . Then, by (8), we obtain
Hence, analogously to Algorithm 2 in Zhang,45 we can derive the following algorithm for the MVHSS iteration method.
Algorithm 2.1. For given vector , and can be computed by (9) from the following steps:
Step 1: Compute
Step 2: Solve
Step 3:
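The three steps of Algorithm 2.1 amount to a blockwise forward solve followed by a blockwise backward solve with the factors of the preconditioner. The following hedged Python sketch illustrates this pattern on generic block triangular factors L and U; the matrices are illustrative, not the MVHSS blocks.

```python
import numpy as np

# Illustrative block sizes and blocks (stand-ins for the MVHSS factors).
m, n = 3, 2
rng = np.random.default_rng(1)
A11 = rng.standard_normal((m, m)) + 4 * np.eye(m)   # invertible (1,1) block
A22 = rng.standard_normal((n, n)) + 4 * np.eye(n)   # invertible (2,2) block
B = rng.standard_normal((n, m))

L = np.block([[A11, np.zeros((m, n))],
              [B, A22]])                            # block lower triangular
U = np.block([[np.eye(m), B.T],
              [np.zeros((n, m)), np.eye(n)]])       # block upper triangular
r = np.ones(m + n)

# Step 1: forward solve L y = r blockwise.
y1 = np.linalg.solve(A11, r[:m])
y2 = np.linalg.solve(A22, r[m:] - B @ y1)
# Steps 2-3: backward solve U z = y blockwise.
z2 = y2
z1 = y1 - B.T @ z2
z = np.concatenate([z1, z2])
assert np.allclose((L @ U) @ z, r)                  # z solves (L U) z = r
```

The point of the factorization is that each step requires only solves with the diagonal blocks, never with the full preconditioner.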
Remark 2.1. When , the MVHSS method reduces to the EVHSS method. Hence, the MVHSS method is an extension of the existing iterative algorithm.
Convergence of the MVHSS method
Now, we turn to the convergence of the MVHSS iteration for the generalized saddle point problem. It is well known that the iteration method (4) converges for every initial guess if and only if , where denotes the spectral radius of . Based on the EVHSS method, Zhang45 established the spectral properties of the iteration matrix and of the preconditioned matrix . In this section, we first show that the MVHSS iteration method converges under weaker conditions; then, in the special case , we relax the condition on the iteration parameter in Theorems 3.1 and 3.2.
Theorem 3.1.45 Suppose that the matrix is symmetric positive definite, is of full rank, is symmetric positive semidefinite, and is a positive constant. Let be a complex m-dimensional nonzero vector and be an eigenpair of the matrix
where denotes the conjugate transpose of the matrix . If and satisfy then it holds that
that is, the EVHSS iteration method converges to the unique solution of the generalized saddle point problem (1).
Theorem 3.2.45 Suppose that the conditions in Theorem 3.1 are satisfied, and is defined as in (13) in Zhang.45 Then, the eigenvalues of the preconditioned matrix satisfy the following:
1. has an eigenvalue with multiplicity .
2. If and satisfy
the real parts and the imaginary parts of the remaining nonunit eigenvalues satisfy
and
Theorem 3.3. Suppose that the matrix is symmetric positive definite, is of full rank, is symmetric positive semidefinite, and is a positive constant. Let be a complex m-dimensional nonzero vector and be an eigenpair of the matrix
If , and satisfy
or
then it holds that
that is, the MVHSS iteration method converges to the unique solution of the generalized saddle point problem (1).
Proof. The preconditioner has the following block triangular factorization:
By straightforward computation, we can obtain
Hence, has an eigenvalue with multiplicity . Let be an eigenpair of the matrix . Then, it holds that
Multiplying this equation by from the left, we can rewrite equation (12) as
that is, the MVHSS iteration converges to the unique solution of the generalized saddle point problem (1) for any initial guess.
Now, we will establish the spectral properties of the preconditioned matrix and analyze the bounds on the eigenvalues of the preconditioned matrix with the following theorem.
Theorem 3.4. Suppose that the conditions in Theorem 3.2 are satisfied, and is defined as in (6). Then, the eigenvalues of the preconditioned matrix satisfy the following:
1. has an eigenvalue with multiplicity .
2. If and satisfy
or
the real parts and the imaginary parts of the remaining nonunit eigenvalues satisfy
and
Proof. Since, according to the expression of in (11), the preconditioned matrix has an eigenvalue with multiplicity , the remaining eigenvalues satisfy
we find that as or . Moreover, the real parts of satisfy
If we have
So, we can obtain
Then, we have
If we have
So, we can obtain
Then, we have
The imaginary parts of satisfy
If we have
So, we can obtain
Then, we have
If we have
So, we can obtain
Then, we have
As a special case of Theorems 3.3 and 3.4, we have the following results when . Moreover, these results relax the condition on the iteration parameter in Theorems 3.1 and 3.2.
Theorem 3.5. Suppose that the matrix is symmetric positive definite, is of full rank, is symmetric positive semidefinite, and is a positive constant. Let be a complex m-dimensional nonzero vector and be an eigenpair of the matrix
If , and satisfy
or
then it holds that
In other words, the EVHSS iterative method converges to the unique solution of the generalized saddle point problem (1).
Theorem 3.6. Suppose that the conditions in Theorem 3.2 are satisfied, and is defined as in (13) in Zhang.45 Then, the eigenvalues of the preconditioned matrix satisfy the following:
1. has an eigenvalue with multiplicity .
2. If , and satisfy
or
the real parts and the imaginary parts of the remaining nonunit eigenvalues satisfy
and
Remark 3.1. On the one hand, the MVHSS method is a generalization of the EVHSS method; moreover, the above theorems show that the conditions of Theorems 3.5 and 3.6 are weaker than those of Theorems 3.1 and 3.2 when . On the other hand, depending on the practical problem, choosing suitable parameters can lead to faster convergence.
Numerical examples
In this section, we verify the above conclusions through numerical experiments. The experiments were carried out in MATLAB 7.1, and the test matrices were generated from the mixed two-dimensional time-harmonic Maxwell equations. In all runs, we use zero initial guesses and stop the iteration once the relative residual has been reduced by at least six orders of magnitude (i.e., when ).
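The stopping rule can be stated directly in code. The following hedged Python sketch (the matrix and the Richardson-type iteration are illustrative stand-ins, not the paper's solver) iterates from a zero initial guess until the relative residual has dropped by six orders of magnitude:

```python
import numpy as np

# Illustrative SPD system; the tolerance mirrors the stopping rule
# ||r_k|| / ||r_0|| <= 1e-6 used in the experiments.
rng = np.random.default_rng(3)
F = rng.standard_normal((10, 10))
A = F @ F.T + 10 * np.eye(10)            # SPD stand-in matrix
b = np.ones(10)

x = np.zeros(10)                         # zero initial guess
r0 = np.linalg.norm(b - A @ x)           # initial residual norm
omega = 1.0 / np.linalg.norm(A, 2)       # safe Richardson step size
k = 0
while np.linalg.norm(b - A @ x) / r0 > 1e-6:
    x = x + omega * (b - A @ x)          # Richardson update
    k += 1
print(k, np.linalg.norm(b - A @ x) / r0)
```

Any of the iterations in this paper (MVHSS, preconditioned GMRES, BICGSTAB) would be terminated by the same relative-residual test.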
Example 1. To further evaluate the effectiveness of the new block triangular preconditioner combined with Krylov subspace methods, we consider a numerical example based on the two-dimensional time-harmonic Maxwell equations with constant coefficients: find the vector field and the multiplier such that
Here, is a simply connected polyhedral domain with a connected boundary, and denotes the outward unit normal on . The datum is a given generic source. A finite element discretization using Nédélec elements of the first kind for the vector field and standard nodal elements for the multiplier yields the above block saddle point problem.
For simplicity, we take the generic source and a finite element subdivision based on uniform grids of triangular elements; a uniform mesh is shown in Figure 1. Three mesh sizes are considered: . The solution of the preconditioned system is computed exactly in each iteration. Table 1 reports the sparsity information of the relevant matrices on different grids, where nz denotes the number of nonzero elements of the matrix .
A uniform mesh with .
Datasheet for different grids.

Grid    m        n       nz       nz       order of
—       176      49      820      462      225
—       736      225     3556     2190     961
—       3008     961     14,788   9486     3969
—       12,160   3969    60,292   39,438   16,129
Since the new preconditioner has two parameters, we test different values in the numerical experiments. The experiments show the spectra of the MVHSS and EVHSS preconditioned matrices for different parameter choices, which coincide with the theoretical analysis.
In Figures 2, 4, and 6, we display the eigenvalues of the preconditioned matrix in the cases , , and for different parameters. In Figures 3, 5, and 7, we show the eigenvalues of the preconditioned matrix in the same cases for different parameters. Figures 2–7 show that the eigenvalue distributions of the preconditioned matrices confirm the theoretical analysis above. In Tables 2–4, we report iteration counts for the preconditioned matrices and with different parameters, applied within the BICGSTAB and GMRES Krylov subspace methods on three meshes. Here, and denote the iteration counts and relative residuals of the preconditioned matrices within BICGSTAB, and and those within GMRES; and are the iteration counts and relative residuals of the unpreconditioned matrices within BICGSTAB, and , , , are defined similarly.
Remark 4.1. The above figures and tables show that the MVHSS preconditioner has the same spectral clustering as the EVHSS preconditioner when the optimal parameters are chosen.
Remark 4.2. From Tables 2–4, we can see that the preconditioners and efficiently improve the convergence of GMRES and BICGSTAB when applied, with suitably chosen parameters, to solve the two-dimensional time-harmonic Maxwell equations.
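The effect summarized in Remark 4.2 can be illustrated in a hedged way: on a randomly generated saddle point stand-in (not the Maxwell matrices), GMRES with an ideal preconditioner, whose preconditioned spectrum is a single point, needs far fewer iterations than unpreconditioned GMRES.

```python
import numpy as np
from scipy.sparse.linalg import gmres, LinearOperator

# Random saddle point stand-in K = [[A, B^T], [B, 0]] with SPD A.
rng = np.random.default_rng(4)
m, n = 20, 8
F = rng.standard_normal((m, m))
A = F @ F.T + m * np.eye(m)
B = rng.standard_normal((n, m))
K = np.block([[A, B.T], [B, np.zeros((n, n))]])
b = np.ones(m + n)

def run(M_op):
    """Run GMRES with preconditioner M_op and count iterations."""
    count = [0]
    def cb(res_norm):                 # called once per inner iteration
        count[0] += 1
    x, info = gmres(K, b, M=M_op, restart=m + n,
                    callback=cb, callback_type="pr_norm")
    assert info == 0                  # converged
    return count[0]

# "Ideal" preconditioner: exact solves with K, mapping the spectrum to {1}.
ideal = LinearOperator(K.shape, matvec=lambda v: np.linalg.solve(K, v),
                       dtype=float)
iters = {"none": run(None), "ideal": run(ideal)}
print(iters)                          # ideal preconditioning needs few steps
```

In practice, an exact solve with K is of course unavailable; preconditioners such as MVHSS aim at the same clustering effect at a much lower cost per application.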
Eigenvalue distribution of MVHSS preconditioned matrix when (the first), (the second), (the third), and (the fourth), respectively. Here, .
Eigenvalue distribution of EVHSS preconditioned matrix when (the first), (the second), (the third), and (the fourth), respectively. Here, .
Eigenvalue distribution of MVHSS preconditioned matrix when (the first), (the second), (the third), and (the fourth), respectively. Here, .
Eigenvalue distribution of EVHSS preconditioned matrix when (the first), (the second), (the third), and (the fourth), respectively. Here, .
Eigenvalue distribution of MVHSS preconditioned matrix when (the first), (the second), (the third), and (the fourth), respectively. Here, .
Eigenvalue distribution of EVHSS preconditioned matrix when (the first), (the second), (the third), and (the fourth), respectively. Here, .
Iteration counts and relative residuals of the preconditioned matrices and for different parameter choices, where the iteration counts and relative residuals of unpreconditioned BICGSTAB and GMRES are and , and , respectively. Here, denotes the size of the corresponding grid.

Method      Parameters (MVHSS)   IT       Parameter (EVHSS)   IT
BICGSTAB    0.2   0.4            6        0.2                 6
BICGSTAB    0.6   0.8            5        0.4                 5.5
BICGSTAB    0.9   1.4            4.5      0.8                 5
BICGSTAB    1.2   1.6            4.5      1.2                 5
GMRES       0.2   0.4            8 (1)    0.2                 8 (1)
GMRES       0.6   0.8            8 (1)    0.4                 8 (1)
GMRES       0.9   1.4            8 (1)    0.8                 8 (1)
GMRES       1.2   1.6            8 (1)    1.2                 8 (1)
Iteration counts and relative residuals of the preconditioned matrices and for different parameter choices, where the iteration counts and relative residuals of unpreconditioned BICGSTAB and GMRES are and , and , respectively. Here, denotes the size of the corresponding grid.

Method      Parameters (MVHSS)   IT       Parameter (EVHSS)   IT
BICGSTAB    0.2   0.4            7.5      0.2                 8
BICGSTAB    0.6   0.8            6        0.4                 6.5
BICGSTAB    0.9   1.4            5.5      0.8                 6
BICGSTAB    1.2   1.6            5.5      1.2                 5.5
GMRES       0.2   0.4            10 (1)   0.2                 12 (1)
GMRES       0.6   0.8            10 (1)   0.4                 11 (1)
GMRES       0.9   1.4            10 (1)   0.8                 11 (1)
GMRES       1.2   1.6            10 (1)   1.2                 10 (1)
Iteration counts and relative residuals of the preconditioned matrices and for different parameter choices, where the iteration counts and relative residuals of unpreconditioned BICGSTAB and GMRES are and , and , respectively. Here, denotes the size of the corresponding grid.

Method      Parameters (MVHSS)   IT       Parameter (EVHSS)   IT
BICGSTAB    0.2   0.4            8.5      0.2                 9
BICGSTAB    0.6   0.8            7        0.4                 8
BICGSTAB    0.9   1.4            6        0.8                 7
BICGSTAB    1.2   1.6            6        1.2                 6.5
GMRES       0.2   0.4            14 (1)   0.2                 14 (1)
GMRES       0.6   0.8            13 (1)   0.4                 13 (1)
GMRES       0.9   1.4            11 (1)   0.8                 13 (1)
GMRES       1.2   1.6            11 (1)   1.2                 12 (1)
Conclusions
In this paper, based on the EVHSS preconditioner, we added an acceleration factor and carried out theoretical and numerical analysis. The purpose of the acceleration factor is to relax the convergence conditions on the parameters. In actual numerical simulations, the convergence speed can be improved by selecting appropriate parameters.
Footnotes
Handling Editor: Chenhui Liang
Declaration of conflicting interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding
The author(s) disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: This research of this author is supported by the National Natural Science Foundation of China (11226337, 11501525), Basic Research Projects of Key Scientific Research Projects Plan in Henan Higher Education Institutions (20zx003), Henan Natural Science Foundation (222300420579), Henan Science and Technology Research Program (212102110206, 222102110404, 202102310942), Henan College Students’ Innovation Training Program (s202110485045) and College Students’ Innovation Training Program (2021-70), Key Projects of Colleges and Universities in Henan Province (22A880022), Sichuan Science and Technology Program (2019YJ0357).
ORCID iD
Li-Tao Zhang
References
1. Arioli M, Duff IS, de Rijk PPM. On the augmented system approach to sparse least-squares problems. Numer Math 1989; 55: 667–684.
2. Bai Z-Z, Parlett BN, Wang Z-Q. On generalized successive overrelaxation methods for augmented linear systems. Numer Math 2005; 102: 1–38.
3. Bai Z-Z, Wang Z-Q. On parameterized inexact Uzawa methods for generalized saddle point problems. Linear Algebra Appl 2008; 428: 2900–2932.
4. Bai Z-Z, Yang X. On HSS-based iteration methods for weakly nonlinear systems. Appl Numer Math 2009; 59: 2923–2936.
5. Bai Z-Z, Golub GH, Ng MK. On inexact Hermitian and skew-Hermitian splitting methods for non-Hermitian positive definite linear systems. Linear Algebra Appl 2008; 428: 413–440.
6. Bai Z-Z. Several splittings for non-Hermitian linear systems. Sci China Ser A: Math 2008; 51: 1339–1348.
7. Bai Z-Z, Golub GH, Lu L-Z, et al. Block triangular and skew-Hermitian splitting methods for positive-definite linear systems. SIAM J Sci Comput 2005; 26: 844–863.
8. Bai Z-Z, Golub GH, Ng MK. Hermitian and skew-Hermitian splitting methods for non-Hermitian positive definite linear systems. SIAM J Matrix Anal Appl 2003; 24: 603–626.
9. Bai Z-Z, Golub GH, Ng MK. On successive-overrelaxation acceleration of the Hermitian and skew-Hermitian splitting iterations. Numer Linear Algebra Appl 2007; 14: 319–335.
10. Bai Z-Z, Golub GH, Li C-K. Optimal parameter in Hermitian and skew-Hermitian splitting method for certain two-by-two block matrices. SIAM J Sci Comput 2006; 28: 583–603.
11. Bai Z-Z, Golub GH, Pan J-Y. Preconditioned Hermitian and skew-Hermitian splitting methods for non-Hermitian positive semidefinite linear systems. Numer Math 2004; 98: 1–32.
12. Bai Z-Z, Ng MK. On inexact preconditioners for nonsymmetric matrices. SIAM J Sci Comput 2005; 26: 1710–1724.
13. Bai Z-Z. Optimal parameters in the HSS-like methods for saddle-point problems. Numer Linear Algebra Appl 2009; 16: 447–479.
15. Bai Z-Z, Yin J-F, Su Y-F. A shift-splitting preconditioner for non-Hermitian positive definite matrices. J Comput Math 2006; 24: 539–552.
16. Bai Z-Z, Golub GH, Pan J-Y. Preconditioned Hermitian and skew-Hermitian splitting methods for non-Hermitian positive semidefinite linear systems. Technical Report SCCM-02-12, Scientific Computing and Computational Mathematics Program, Department of Computer Science, Stanford University, Stanford, CA, 2002.
17. Bai Z-Z, Benzi M, Chen F. Modified HSS iteration methods for a class of complex symmetric linear systems. Computing 2010; 87: 93–111.
18. Bai Z-Z, Wang L, Wu W-T. On convergence rate of the randomized Gauss–Seidel method. Linear Algebra Appl 2021; 611: 237–252.
19. Bai Z-Z, Lu K-Y. Optimal rotated block-diagonal preconditioning for discretized optimal control problems constrained with fractional time-dependent diffusive equations. Appl Math Comput 2021; 163: 126–146.
20. Cao Y, Du J, Niu Q. Shift-splitting preconditioners for saddle point problems. J Comput Appl Math 2014; 272: 239–250.
21. Cao Y, Yao L-Q, Jiang M-Q. A modified dimensional split preconditioner for generalized saddle point problems. J Comput Appl Math 2013; 250: 70–82.
22. Cao Y, Yao L-Q, Jiang M-Q, et al. A relaxed HSS preconditioner for saddle point problems from meshfree discretization. J Comput Math 2013; 31: 398–421.
23. Chen C, Ma C. A generalized shift-splitting preconditioner for saddle point problems. Appl Math Lett 2015; 43: 49–55.
24. Chen F, Jiang Y-L. A generalization of the inexact parameterized Uzawa methods for saddle point problems. Appl Math Comput 2008; 206: 765–771.
25. Darvishi MT, Hessari P. Symmetric SOR method for augmented systems. Appl Math Comput 2006; 183: 409–415.
26. Elman H, Silvester D. Fast nonsymmetric iterations and preconditioning for Navier–Stokes equations. SIAM J Sci Comput 1996; 17: 33–46.
27. Elman HC, Golub GH. Inexact and preconditioned Uzawa algorithms for saddle point problems. SIAM J Numer Anal 1994; 31: 1645–1661.
28. Fischer B, Ramage A, Silvester DJ, et al. Minimum residual methods for augmented systems. BIT 1998; 38: 527–543.
29. Golub GH, Wu X, Yuan J-Y. SOR-like methods for augmented systems. BIT 2001; 55: 71–85.
30. Jiang M-Q, Cao Y. On local Hermitian and skew-Hermitian splitting iteration methods for generalized saddle point problems. J Comput Appl Math 2009; 231: 973–982.
31. Mititelu Ş, Treanţă S. Efficiency conditions in vector control problems governed by multiple integrals. J Appl Math Comput 2018; 57: 647–665.
32. Peng X-F, Li W. On unsymmetric block overrelaxation-type methods for saddle point problems. Appl Math Comput 2008; 203: 660–671.
33. Santos CH, Silva BP, Yuan J-Y. Block SOR methods for rank-deficient least-squares problems. J Comput Appl Math 1998; 100: 1–9.
35. Treanţă S. On a modified optimal control problem with first-order PDE constraints and the associated saddle-point optimality criterion. Symmetry 2020; 51: 1–9.
36. Treanţă S. On a new class of vector variational control problems. Numer Funct Anal Optim 2018; 39: 1594–1603.
37. Van der Vorst HA. Iterative Krylov methods for large linear systems. Cambridge monographs on applied and computational mathematics. Cambridge: Cambridge University Press, 2003.
38. Wang L, Bai Z-Z. Convergence conditions for splitting iteration methods for non-Hermitian linear systems. Linear Algebra Appl 2008; 428: 453–468.
39. Wright S. Stability of augmented system factorizations in interior-point methods. SIAM J Matrix Anal Appl 1997; 18: 191–222.
40. Wu S-L, Huang T-Z, Zhao X-L. A modified SSOR iterative method for augmented systems. J Comput Appl Math 2009; 228: 424–433.
41. Young DM. Iterative Solution of Large Linear Systems. New York, NY: Academic Press, 1971.
42. Yuan J-Y. Numerical methods for generalized least squares problems. J Comput Appl Math 1996; 66: 571–584.
43. Yuan J-Y, Iusem AN. Preconditioned conjugate gradient method for generalized least squares problems. J Comput Appl Math 1996; 71: 287–297.
44. Zhang G-F, Lu QH. On generalized symmetric SOR method for augmented systems. J Comput Appl Math 2008; 219: 51–58.
45. Zhang JL. An efficient variant of HSS preconditioner for generalized saddle point problems. Numer Linear Algebra Appl 2018; 25: e2166.
46. Zhang L-T. A new preconditioner for generalized saddle point matrices with highly singular (1,1) blocks. Int J Comput Math 2014; 91: 2091–2101.
47. Zhang L-T, Huang T-Z, Cheng S-H, et al. Convergence of a generalized MSSOR method for augmented systems. J Comput Appl Math 2012; 236: 1841–1850.
48. Zheng B, Bai Z-Z, Yang X. On semi-convergence of parameterized Uzawa methods for singular saddle point problems. Linear Algebra Appl 2009; 431: 808–817.