In this article, to better implement the modified positive-definite and skew-Hermitian splitting preconditioners studied recently (Numer. Algor., 72 (2016) 243–258) for generalized saddle point problems, a class of inexact modified positive-definite and skew-Hermitian splitting preconditioners with improved computing efficiency is proposed. Some spectral properties, including the eigenvalue distribution, the eigenvector distribution, and an upper bound on the degree of the minimal polynomial of the inexact modified positive-definite and skew-Hermitian splitting preconditioned matrices, are studied. In addition, a theoretical optimal inexact modified positive-definite and skew-Hermitian splitting preconditioner is obtained. Numerical experiments arising from a model steady incompressible Navier–Stokes problem are used to validate the theoretical results and illustrate the effectiveness of the newly proposed class of preconditioners.
In the solution of large, sparse generalized saddle point problems, finding efficient preconditioners is crucial for obtaining an iterative solution. Consider the generalized saddle point linear system (1),
where the (1,1) block A is a non-Hermitian positive-definite matrix (i.e. its Hermitian part is positive definite), B is a rectangular matrix with full row rank, and the (2,2) block C is Hermitian positive semidefinite. Here, the superscript * denotes the conjugate transpose of the corresponding matrix or vector. From the algebraic properties studied in Benzi et al.,1 we know that the non-Hermitian positive-semidefinite generalized saddle point matrix is nonsingular. Thus, the linear system (1) has a unique solution.
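Explicitly, a standard form consistent with this description is the following (the partitioning of the unknowns and the right-hand side into x, y and f, g, and the sign conventions, are generic choices rather than necessarily those of the original display):

$$
\mathcal{A}\begin{bmatrix} x \\ y \end{bmatrix}
\equiv
\begin{bmatrix} A & B^{*} \\ -B & C \end{bmatrix}
\begin{bmatrix} x \\ y \end{bmatrix}
=
\begin{bmatrix} f \\ -g \end{bmatrix}.
$$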
Linear systems of the form (1) arise in a number of engineering applications, including piezoelectric structure computation, mixed finite element solutions of the Navier–Stokes equations, and constrained optimization problems (see Benzi et al.1 for a general discussion). Since the coefficient matrix is large, sparse, nonsymmetric, and often ill-conditioned, many traditional iterative methods converge to a solution very slowly. Thus, preconditioning is in most cases indispensable for obtaining an iterative solution of equation (1).2 In the past few decades, many preconditioning techniques have been proposed and studied for such generalized saddle point problems (1). For an overview of these preconditioning techniques, see Benzi et al.1 and Pestana and Wathen3 and the references therein.
Let α be a real positive constant and I be the identity matrix of suitable dimension. Based on the Hermitian and skew-Hermitian splitting (HSS) preconditioner studied in Bai et al.,4 Benzi and Golub,5 and Bai and Golub,6 Pan et al.7 first split the generalized saddle point matrix into
where P is a block diagonal positive-semidefinite matrix and S is a skew-Hermitian matrix, and then proposed the following matrix splitting preconditioner.
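For reference, the splitting and the induced preconditioner take the following standard PSS form (a reconstruction consistent with Pan et al.7 and the later PSS literature; here $\mathcal{P}$ and $\mathcal{S}$ are the matrices P and S above, and the scaling factor follows the usual convention):

$$
\mathcal{A} = \mathcal{P} + \mathcal{S}
= \begin{bmatrix} A & 0 \\ 0 & C \end{bmatrix}
+ \begin{bmatrix} 0 & B^{*} \\ -B & 0 \end{bmatrix},
\qquad
\mathcal{P}_{\mathrm{DPSS}} = \frac{1}{2\alpha}\,(\alpha I + \mathcal{P})(\alpha I + \mathcal{S}).
$$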
The above matrix is called the deteriorated positive-definite and skew-Hermitian splitting (DPSS) preconditioner. Evidently, if the (1,1) leading block matrix A is Hermitian, then the DPSS preconditioner is the same as the HSS preconditioner.5 The clustering property of the eigenvalues of the DPSS preconditioned matrix has been studied.7 The unconditional convergence property of the corresponding DPSS iteration method can be found in Shen,8 Bai,9 and Cao et al.10 Numerical results show that the DPSS preconditioner performs better than the well-known HSS preconditioner.
Much effort has gone into improving the DPSS preconditioner. For example, based on the relaxation techniques studied in previous works,11–13 Cao et al.10,14 proposed a relaxed deteriorated positive-definite and skew-Hermitian splitting (RDPSS) preconditioner for equation (1), as well as a simplified Hermitian and skew-Hermitian splitting (SHSS) preconditioner for the case in which the submatrices A and C are Hermitian; the resulting preconditioners are closer to the generalized saddle point matrix than the DPSS preconditioner. Zhou et al.15 proposed a relaxed block triangular splitting preconditioner for the generalized saddle point matrix by viewing the coefficient matrix as a three-by-three block matrix. As another example, in Cao et al.,16 a relaxed PSS-based preconditioner was proposed for the case in which the submatrix A is non-Hermitian positive definite. More recently, by introducing another positive parameter β, Xie and Ma17 proposed a modified positive-definite and skew-Hermitian splitting (MPSS) preconditioner as follows
which can be further decomposed into
From equation (4), we can see that the MPSS preconditioner approaches the coefficient matrix as α → 0+ and β → 0+. Although Xie and Ma17 proved that the MPSS preconditioned matrix has a clustered eigenvalue distribution, the factorization (4) indicates that the MPSS preconditioner is very difficult to implement, since a sub-system of linear equations with a costly coefficient matrix must be solved at each iteration; see the next section for detailed implementation considerations. Just forming this coefficient matrix can be prohibitive. A remedy in such situations is to use inexact variants. It should be noted that inexact preconditioners are very important in practical computation, since each of their iterations is much cheaper than with exact preconditioners, while the number of iteration steps does not increase significantly. For the generalized saddle point problem (1) with a Hermitian positive-definite submatrix A, some inexact block triangular preconditioners are proposed in Simoncini,18 Cao,19 and Jiang et al.,20 and improved eigenvalue bounds of the preconditioned matrices are studied in Cao et al.21 For a general block two-by-two non-Hermitian matrix, some inexact block triangular preconditioners are studied in Bai and Ng,22 Peng and Li,23 and Li et al.24 For the standard saddle point problem, where the (1,1) block matrix A is Hermitian positive definite and the (2,2) block matrix C is a zero matrix, an inexact constraint preconditioner is proposed in Bergamaschi et al.,25 and a spectral analysis of the inexact constraint preconditioned matrix is given in Sesana and Simoncini.26
In this article, we propose a class of inexact modified positive-definite and skew-Hermitian splitting (IMPSS) preconditioners for the generalized saddle point problem (1). From a computational viewpoint, the proposed IMPSS preconditioners are much cheaper per iteration than the exact MPSS preconditioners studied in Xie and Ma.17 On the theoretical side, the IMPSS preconditioned matrix also exhibits very desirable spectral properties, which allow us to obtain a theoretical optimal inexact modified positive-definite and skew-Hermitian splitting (OIMPSS) preconditioner.
The rest of this article is organized as follows. The IMPSS preconditioner and implementation aspects will be presented in section “The inexact MPSS preconditioner.” In section “Spectral analysis of the IMPSS preconditioned matrix,” we study the spectral properties of the IMPSS preconditioned matrix and introduce the OIMPSS preconditioner. Afterward, numerical experiments arising from a model steady incompressible Navier–Stokes problem are provided in section “Numerical experiments” to illustrate the theoretical results and examine the numerical effectiveness of the new proposed preconditioners.
The inexact MPSS preconditioner
Before proposing the new, more efficient preconditioner, we first consider the implementation aspects of the MPSS preconditioner. At each iteration step of the MPSS preconditioned iteration method (for instance, the preconditioned generalized minimal residual [GMRES] iteration method), given the current generalized residual vector r, we need to solve the following generalized residual equation
to obtain the vector z. From the matrix factorization expression (5) of the MPSS preconditioner, z can be computed according to the following algorithm.
Algorithm 1
Computation of z:
Solve to obtain ;
Solve to obtain ;
Compute and solve to obtain ;
Compute .
From Algorithm 1, we can see that the implementation of the MPSS preconditioner requires the solution of three sub-linear systems. The primary computational cost arises in the second step, which in effect requires the exact inverse of its coefficient matrix. To avoid this, an algorithmic variant is proposed in Xie and Ma17 [Algorithm 3.1]; see also Algorithm 2 below.
Algorithm 2
Computation of z:
Solve to obtain . Solve to obtain (). Set , where denotes the ith column of matrix ;
Solve to obtain ;
Compute and solve to obtain ;
Compute .
Comparing Algorithm 2 with Algorithm 1, we can see that an additional sub-linear system with multiple right-hand sides needs to be solved. Moreover, the coefficient matrix arising in the second step may be dense, which makes Algorithm 2 ill-suited to large sparse problems, as the sketch below illustrates.
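To make this cost concrete, the following self-contained Python sketch (with hypothetical sizes and random stand-in matrices, not the actual test data) mimics the extra multiple right-hand-side solve: even when the matrices are sparse, the computed solution block is typically dense, so the coefficient matrix assembled from it in the second step is dense as well.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n, m = 2000, 400
A = (4.0 * sp.eye(n) + sp.random(n, n, density=1e-3, random_state=0)).tocsc()
B = sp.random(m, n, density=5e-3, random_state=1).tocsc()

# One sparse LU factorization, m right-hand sides (the columns of B^* here).
W = spla.splu(A).solve(B.T.toarray())          # n-by-m solution block

print(W.shape, np.count_nonzero(W) / W.size)   # fill ratio: typically large, W is effectively dense
```

Storing such a dense n-by-m block, and factorizing the matrix built from it, is precisely what the inexact approach below avoids.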
Based on the inexact preconditioners studied in Simoncini,18 Bai and Ng,22 and Bergamaschi et al.,25 by introducing a nonsingular preconditioning matrix Q, we propose a class of IMPSS preconditioners as follows
where we assume that the action of Q−1 is easy to compute. Evidently, if Q is chosen as the corresponding exact coefficient matrix in the factorization, then the IMPSS preconditioner becomes the MPSS preconditioner (5).
The implementation of the IMPSS preconditioner requires the solution of the following generalized residual equation
where z and r are the unknown vector and the current generalized residual vector, respectively. According to the matrix factorization (7), the above generalized residual equation with respect to the IMPSS preconditioner can be solved according to Algorithm 3.
Algorithm 3
Computation of z:
Solve to obtain ;
Solve to obtain ;
Compute and solve to obtain ;
Compute .
Comparing Algorithm 3 with Algorithm 1, we find that only the second step is different. Algorithm 3 may also be much easier to implement than Algorithm 2. So, from a computational viewpoint, the proposed IMPSS preconditioner may have much better computing efficiency than the MPSS preconditioner; in section “Numerical experiments,” we confirm this observation through numerical results. A generic implementation view is sketched below.
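From an implementation standpoint, Algorithms 1 and 3 share the same skeleton and differ only in how the second sub-system is solved. The following Python sketch (the callback names are generic placeholders introduced here, since the factors of equation (7) are not spelled out in this section) wraps that skeleton as a SciPy LinearOperator, so the same wrapper serves the exact MPSS preconditioner (step 2 solved exactly) and the IMPSS preconditioner (step 2 replaced by an application of Q−1):

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator

def make_split_preconditioner(n, m, solve1, solve2, solve3, combine):
    """Three-solve factored preconditioner applied to r = (r1; r2).

    solve1/solve2/solve3 realize the three sub-system solves of
    Algorithm 1 (exact) or Algorithm 3 (solve2 applies Q^{-1});
    combine assembles the output z from the partial results.
    """
    def matvec(r):
        r1, r2 = r[:n], r[n:]
        t1 = solve1(r1)               # first sub-system
        t2 = solve2(r2, t1)           # second sub-system: exact vs. inexact
        t3 = solve3(r1, t1, t2)       # third sub-system
        return combine(t1, t2, t3)    # final assembly of z
    return LinearOperator((n + m, n + m), matvec=matvec)
```

Passing the resulting operator as the preconditioner to a Krylov solver then realizes the preconditioned iteration used in the experiments below.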
Spectral analysis of the IMPSS preconditioned matrix
In this section, some spectral properties of the IMPSS preconditioned matrix will be studied. Also, a theoretical optimal IMPSS preconditioner is obtained.
Theorem 1
Let A be a non-Hermitian positive-definite matrix, let B have full row rank, and let C be a Hermitian positive-semidefinite matrix. Let Q be a nonsingular matrix and let α be a positive constant. Let the IMPSS preconditioner be defined as in equation (7). Then the IMPSS preconditioned matrix has the following properties:
An eigenvalue 1 with multiplicity at least n;
The remaining nonunit eigenvalues are the nonunit eigenvalues of the trailing m-by-m block of the block upper triangular form (8).
In particular, if Q is taken to be the Schur complement matrix, then all eigenvalues of the preconditioned matrix are 1.
Proof
A direct computation based on the factorization (7) yields the block two-by-two form (8) of the preconditioned matrix, which clearly shows that the preconditioned matrix is block upper triangular. Thus, the eigenvalue distribution is easily obtained.
From equation (8), we can see that if Q is the Schur complement matrix, then the IMPSS preconditioned matrix becomes the block upper triangular matrix (9), whose diagonal blocks are identity matrices, which immediately shows that all eigenvalues of the preconditioned matrix are 1.
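The mechanism can be stated generically (T, X, and R are names introduced here for illustration, not the notation of equations (8) and (9)): if the preconditioned matrix is similar to

$$
T = \begin{bmatrix} I_{n} & X \\ 0 & R \end{bmatrix},
$$

then $\sigma(T) = \{1\} \cup \sigma(R)$, so the eigenvalue 1 has algebraic multiplicity at least n and the remaining eigenvalues are those of the m-by-m block R; a choice of Q that makes R the identity forces all eigenvalues to 1.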
From the eigenvalue distribution, we can see that the optimal choice of the preconditioning matrix Q is the Schur complement matrix. For this choice, we call the resulting matrix the theoretical OIMPSS preconditioner, defined in equation (10).
Therefore, to obtain a fast convergence rate of the IMPSS preconditioned Krylov subspace method, we should choose the preconditioning matrix Q to approximate the Schur complement matrix, rather than the exact coefficient matrix of the MPSS factorization (5). Indeed, from the above theoretical results on the eigenvalue distribution, we can see that the MPSS preconditioner is not the optimal one. Now, we turn to the study of the eigenvector distributions of the IMPSS preconditioned matrix and the OIMPSS preconditioned matrix.
Theorem 2
Let A be a non-Hermitian positive-definite matrix, let B have full row rank, and let C be a Hermitian positive-semidefinite matrix. Let Q be a nonsingular matrix and let α be a positive constant. Let the IMPSS preconditioner and the optimal IMPSS preconditioner be defined as in equations (7) and (10), respectively. Then the IMPSS preconditioned matrix has n + i linearly independent eigenvectors:
n eigenvectors that correspond to the eigenvalue 1, in which the free blocks are arbitrary linearly independent vectors;
i (0 ≤ i ≤ m) eigenvectors that correspond to the nonunit eigenvalues, whose blocks satisfy the conditions given by equations (13) and (14) in the proof below.
In addition, the OIMPSS preconditioned matrix has only n linearly independent eigenvectors, in which the free blocks are arbitrary linearly independent vectors.
Proof
We first study the eigenvector distribution of the IMPSS preconditioned matrix. Let λ be an eigenvalue of this matrix and let the corresponding eigenvector be partitioned conformally with the blocks of equation (8). Then from equation (8), we have the following eigenvalue problem
From the eigenvalue distribution observed in Theorem 1, we now consider two cases. If λ = 1, then equation (12) simplifies, and the simplified relation forces the corresponding block of the eigenvector to vanish, since the coefficient matrix involved has full column rank. Letting the remaining blocks be arbitrary linearly independent vectors, there are n eigenvectors of the form
If λ ≠ 1, the eigenvector has a block v that satisfies equation (13). Although equation (13) is trivially satisfied by v = 0, this cannot happen here: otherwise, from equation (14), the whole eigenvector would vanish, a contradiction. Therefore, if there exist linearly independent vectors v satisfying equation (13), then there are i (0 ≤ i ≤ m) eigenvectors
that correspond to the nonunit eigenvalues, with blocks determined by equations (13) and (14).
Now, we consider the eigenvector distribution of the OIMPSS preconditioned matrix. From equation (9), we have
which is equivalent to the following two equalities
The second equality is satisfied automatically. From the first equality, we know that the corresponding block of the eigenvector vanishes, since the matrix involved has full column rank. Therefore, the OIMPSS preconditioned matrix has only n linearly independent eigenvectors, in which the free blocks are arbitrary linearly independent vectors.
The linear independence of these eigenvectors can be established by the method of Cao et al.10 [Theorem 3.2] with only technical modifications; hence, we omit the rest of the proof.
Based on the special structures of the IMPSS and the OIMPSS preconditioned matrices, the properties of the minimal polynomials of these preconditioned matrices are given in Theorem 3.
Theorem 3
Let A be a non-Hermitian positive-definite matrix, let B have full row rank, and let C be a Hermitian positive-semidefinite matrix. Let Q be a nonsingular matrix and let α be a positive constant. Let the IMPSS preconditioner and the optimal IMPSS preconditioner be defined as in equations (7) and (10), respectively. Then both the degree of the minimal polynomial of the IMPSS preconditioned matrix and the dimension of the corresponding Krylov subspace are at most m + 1. In addition, the minimal polynomial of the OIMPSS preconditioned matrix is the polynomial given in equation (15).
Proof
As in equation (8), the preconditioned matrix can be rewritten as a block upper triangular matrix. Let μ1, μ2, ..., μm be the eigenvalues of its trailing m-by-m diagonal block; they are then also eigenvalues of the preconditioned matrix. According to the eigenvalue distribution studied in Theorem 1, the characteristic polynomial of the preconditioned matrix can be written as
where det(·) denotes the determinant of the corresponding matrix. Consider the following polynomial of degree m + 1
Then we have
By the Cayley–Hamilton theorem, we conclude that this polynomial annihilates the preconditioned matrix. Thus, the degree of the minimal polynomial of the preconditioned matrix is at most m + 1. From Saad2 [Proposition 6.1], we know that the dimension of the corresponding Krylov subspace is also at most m + 1.
From equation (9), we can see that the OIMPSS preconditioned matrix is a block upper triangular matrix whose diagonal blocks are identity matrices. Therefore, the corresponding minimal polynomial (15) of the preconditioned matrix can be obtained.
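The structural claim behind equation (15) can be checked numerically. The following self-contained Python sketch uses random data and a stand-in matrix T with the block structure just described (block upper triangular with identity diagonal blocks); it illustrates the mechanism, not the actual IMPSS matrices. It verifies that (T − I)² = 0, so the minimal polynomial is (λ − 1)² and the exact solution of T x = b already lies in the two-dimensional Krylov subspace span{b, Tb}:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 60, 20

# Stand-in with the structure of the OIMPSS preconditioned matrix:
# block upper triangular with identity diagonal blocks.
T = np.eye(n + m)
T[:n, n:] = rng.standard_normal((n, m))

# (T - I)^2 = 0: the minimal polynomial of T is (lambda - 1)^2.
N = T - np.eye(n + m)
print(np.linalg.norm(N @ N))          # exactly 0

# Hence T^{-1} = 2I - T, and x = 2b - Tb solves T x = b, so a
# minimal-residual Krylov method terminates in two steps.
b = rng.standard_normal(n + m)
x = 2.0 * b - T @ b
print(np.linalg.norm(T @ x - b))      # ~ machine precision
```

This is exactly why the OIMPSS preconditioned GMRES method is expected to converge in very few steps, independently of the meshsize.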
Numerical experiments
In this section, we present some numerical examples to illustrate the effectiveness of the newly proposed IMPSS preconditioners. To this end, we use the IMPSS preconditioned GMRES iteration method to solve the generalized saddle point problems (1) arising from the mixed finite element discretization of the following two-dimensional steady incompressible linearized Navier–Stokes (Oseen) equation27
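In standard form (a reconstruction consistent with Elman et al.,27 where these Oseen equations are stated), equation (16) reads

$$
-\nu\,\Delta \mathbf{u} + (\mathbf{w}\cdot\nabla)\,\mathbf{u} + \nabla p = \mathbf{f},
\qquad
\nabla\cdot\mathbf{u} = 0,
\qquad \text{in } \Omega,
$$

supplemented with the lid-driven cavity boundary conditions described below.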
where Ω is the unit bounded square domain, ν is the viscosity, w is the velocity field obtained from the previous Picard step, and u and p stand for the velocity and the pressure, respectively. The test model is the lid-driven cavity problem. The IFISS software package written by Elman et al.27 is used to generate the test generalized saddle point problems. Here, we choose uniform grids of four increasing sizes to discretize the problem domain and use the local stabilization method for the mixed element (the stabilization parameter is taken as 0.25).
To emphasize the distinct advantages of these new IMPSS preconditioners, the GMRES iteration methods with the original DPSS preconditioner (3) and its improved MPSS variant (4) are also used to solve the test problems. In particular, the left preconditioning technique is used. We compare these preconditioners by the number of iteration steps (denoted by “IT”) and the elapsed CPU times in seconds (denoted by “CPU”). In actual computations, the initial guess is the zero vector, and the stopping criterion is a reduction of at least six orders of magnitude in the initial residual norm (denoted by “RES”). At each iteration of the IMPSS, DPSS, and MPSS preconditioned GMRES methods, some sub-systems of linear equations need to be solved. If the coefficient matrix is non-Hermitian positive definite, we use the sparse LU factorization method; if it is Hermitian positive definite, we choose the sparse Cholesky factorization method. All trial runs are performed in MATLAB 2010 on an Intel Core machine with 4 GB RAM running Windows 7.
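As an indication of how such a run is organized, the following self-contained Python sketch reproduces the protocol with small random stand-in matrices (hypothetical data; the actual matrices come from the IFISS discretization, and the preconditioner below is only a block triangular application in the spirit of Algorithm 3, since the exact IMPSS factors depend on equation (7)):

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n, m = 400, 100
A = (10.0 * sp.eye(n) + sp.random(n, n, density=0.01, random_state=1)).tocsc()
B = sp.random(m, n, density=0.05, random_state=2).tocsc()
C = (0.25 * sp.eye(m)).tocsc()                    # stand-in (2,2) block
K = sp.bmat([[A, B.T], [-B, C]], format="csc")    # generalized saddle point matrix
b = K @ np.ones(n + m)                            # manufactured right-hand side

# Factor once, reuse at every iteration: sparse LU for the non-Hermitian
# block (a sparse Cholesky factorization would be used for Hermitian
# positive-definite blocks, as described above).
A_lu = spla.splu(A)
Q_lu = spla.splu((C + sp.eye(m)).tocsc())         # stand-in for the inexact block Q

def apply_prec(r):
    z1 = A_lu.solve(r[:n])
    z2 = Q_lu.solve(r[n:] + B @ z1)
    return np.concatenate([z1, z2])

M = spla.LinearOperator((n + m, n + m), matvec=apply_prec)

# Zero initial guess (SciPy's default) and a 1e-6 residual reduction
# (rtol keyword as in recent SciPy versions).
x, info = spla.gmres(K, b, M=M, rtol=1e-6, restart=n + m, maxiter=5)
print(info, np.linalg.norm(K @ x - b) / np.linalg.norm(b))
```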
As discussed in section “Spectral analysis of the IMPSS preconditioned matrix,” the preconditioning matrix Q plays an important role in the IMPSS preconditioners. We need to choose Q with the cheapest possible computational cost and the best convergence performance. It is shown in Wathen and Silvester28 and Silvester and Wathen29 that for the Stokes equations, there exists a pressure mass matrix (i.e. the Grammian matrix of the basis functions of the discrete pressure subspace) such that
where the constants are independent of the meshsize h. For the stabilized finite element method, a scaled diagonal matrix (with d the spatial dimension) is spectrally equivalent to the pressure mass matrix; that is, there exist constants, independent of the meshsize h, such that
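In generic form (the precise constants and the diagonal scaling are those of Wathen and Silvester;28,29 the symbols below are illustrative), such a spectral equivalence between the pressure mass matrix $M_p$ and a cheap matrix $Q$ reads

$$
c_1 \le \frac{q^{*} M_p\, q}{q^{*} Q\, q} \le c_2
\qquad \text{for all } q \ne 0,
$$

with $c_1, c_2 > 0$ independent of the meshsize h, so that applying $Q^{-1}$ (a diagonal solve) is as effective, up to h-independent constants, as applying $M_p^{-1}$.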
Hence, we can choose Q to be this diagonal matrix for the IMPSS preconditioner, so that the action of Q−1 is very easy to obtain. The theoretical optimal OIMPSS preconditioner, for which Q is the exact Schur complement matrix, is also studied. In addition, the parameters used in the discussed preconditioners should be chosen appropriately. In general, finding optimal parameters is a very difficult task. Here, we use the algebraic estimation methods studied in Cao et al.14 and Ren and Cao,30 which also lead to good numerical results. Specifically, the parameter estimation method proposed in Ren and Cao30 is used to derive the following estimate for the parameter α of the DPSS preconditioner
which is the minimizer of the following function with respect to the parameter α
The basic idea behind this quadratic function is to obtain a practical parameter such that the DPSS preconditioner approximates the generalized saddle point matrix as closely as possible in a certain norm. For details of the algebraic estimation method, see Ren and Cao30 [Section 4]. Here, the matrices P and S are defined as in equation (2). For the MPSS preconditioner, we choose α so as to balance the matrices B and C14 and take β = 0.25, the same value as the local stabilization parameter. Here, ‖·‖F and ‖·‖2 denote the Frobenius norm and the Euclidean norm of the corresponding matrix, respectively. According to the eigenvalue distribution of the IMPSS preconditioned matrix (see Theorem 1) and the minimal polynomial of the theoretical optimal IMPSS preconditioned matrix (see Theorem 3), we can always take a fixed value of α for the IMPSS and the OIMPSS preconditioners, since this parameter may not greatly impact the preconditioning effects.17
In Table 1, we list the estimated parameters as well as the numerical results of the preconditioned GMRES iteration methods for the first tested viscosity; in Table 2, we list the same items for the second. In these tables, the numerical results of the unpreconditioned GMRES method (denoted by I) are also listed to show the benefit of preconditioning. In addition, to show the clustering property of the eigenvalues of the preconditioned matrices, using the respective estimated parameters listed in Tables 1 and 2, we plot the eigenvalue distributions of the generalized saddle point matrices, the DPSS preconditioned matrices, the MPSS preconditioned matrices, and the IMPSS preconditioned matrices for two of the mesh grids in Figures 1 and 2, respectively. In these figures, the horizontal and vertical coordinates indicate the real part and the imaginary part of the eigenvalues, respectively.
Table 1. Numerical results of the preconditioned GMRES method for the Oseen problem (first tested viscosity).

Prec.    Item      Grid 1          Grid 2          Grid 3          Grid 4
I        IT        77              171             368             404
         CPU       0.0635          0.9208          8.3449          32.5204
         RES       7.9934e-7       9.8576e-7       9.4717e-7       9.9803e-7
DPSS     α         0.0721          0.0388          0.0201          0.0102
         IT        55              70              86              115
         CPU       0.0242          0.0679          0.3196          3.1089
         RES       7.8890e-7       8.7321e-7       8.6436e-7       9.5772e-7
MPSS     (α, β)    (3.7038, 0.25)  (3.9237, 0.25)  (3.9685, 0.25)  (3.9670, 0.25)
         IT        17              22              25              29
         CPU       0.0073          0.2040          3.5245          183.6651
         RES       2.7112e-7       7.0980e-7       4.4526e-7       9.3085e-7
OIMPSS   IT        3               3               3               3
         CPU       0.0034          0.0603          2.3664          172.1122
         RES       1.1968e-15      9.1926e-15      6.5605e-14      2.5373e-13
IMPSS    IT        24              28              27              22
         CPU       0.0081          0.0329          0.2325          0.6001
         RES       7.6281e-7       4.5571e-7       4.5911e-7       7.0760e-7

Grid 1–Grid 4 denote the four uniform test grids in increasing size; α (and β) are the parameter values used. DPSS: deteriorated positive-definite and skew-Hermitian splitting; GMRES: generalized minimal residual; MPSS: modified positive-definite and skew-Hermitian splitting; OIMPSS: optimal inexact modified positive-definite and skew-Hermitian splitting; IMPSS: inexact modified positive-definite and skew-Hermitian splitting.
Table 2. Numerical results of the preconditioned GMRES method for the Oseen problem (second tested viscosity).

Prec.    Item      Grid 1          Grid 2          Grid 3          Grid 4
I        IT        84              157             275             539
         CPU       0.0794          0.3316          3.0207          28.9593
         RES       8.9902e-7       9.0207e-7       9.5096e-7       9.7511e-7
DPSS     α         0.0268          0.0099          0.0037          0.0015
         IT        36              56              81              114
         CPU       0.0155          0.1449          0.4603          3.0747
         RES       9.1167e-7       6.9278e-7       8.2637e-7       9.4520e-7
MPSS     (α, β)    (3.7038, 0.25)  (3.9237, 0.25)  (3.9685, 0.25)  (3.9670, 0.25)
         IT        10              15              18              20
         CPU       0.0054          0.0732          3.4272          182.4161
         RES       3.4673e-7       2.6273e-7       5.4827e-7       3.6449e-7
OIMPSS   IT        3               3               3               3
         CPU       0.0045          0.0634          2.2084          172.3443
         RES       2.5557e-15      1.3322e-14      6.4613e-14      2.1516e-13
IMPSS    IT        18              19              19              19
         CPU       0.0064          0.0198          0.1841          0.6887
         RES       4.9360e-7       4.2812e-7       8.9734e-7       9.3484e-7

Grid 1–Grid 4 denote the four uniform test grids in increasing size; α (and β) are the parameter values used. DPSS: deteriorated positive-definite and skew-Hermitian splitting; MPSS: modified positive-definite and skew-Hermitian splitting; OIMPSS: optimal inexact modified positive-definite and skew-Hermitian splitting; IMPSS: inexact modified positive-definite and skew-Hermitian splitting.
Figure 1. Eigenvalue distribution of the preconditioned matrices (first plotted mesh grid).
Figure 2. Eigenvalue distribution of the preconditioned matrices (second plotted mesh grid).
From the numerical results in Tables 1 and 2, we can see that without preconditioning, the GMRES method converges very slowly. Judged by iteration counts alone, all the discussed preconditioners greatly accelerate the convergence of the GMRES method. However, the elapsed CPU times show that not all of them are efficient, especially for large problems. Although the iteration counts of the MPSS preconditioned methods are lower than those of the DPSS preconditioned methods, the elapsed CPU times show that the MPSS preconditioner is much more expensive per iteration than the original DPSS preconditioner, so in overall cost the MPSS preconditioner does not improve on the DPSS preconditioner. Using inexact preconditioning techniques, our new IMPSS preconditioner not only improves on the DPSS preconditioner but also dramatically improves the computing efficiency of the MPSS preconditioner. In addition, these numerical results show that for the test problems given in equation (16), the iteration counts of the IMPSS preconditioned method do not change dramatically with the meshsize for the first tested viscosity, and the IMPSS preconditioner even shows an h-independent convergence property for the second. The OIMPSS preconditioned GMRES iteration methods converge within three iteration steps for all test problems. Therefore, judged by the iteration counts of these preconditioned GMRES methods, the OIMPSS preconditioner is superior and shows an h-independent convergence property, which confirms our theoretical results in Theorems 2 and 3; however, the OIMPSS preconditioner is also very expensive per iteration. Figures 1 and 2 show that all preconditioned matrices have clustered eigenvalue distributions, in agreement with the theoretical results in Theorem 1. Therefore, we can conclude that the newly proposed IMPSS preconditioner is very efficient and greatly improves on both the DPSS and the MPSS preconditioners.
Footnotes
Handling Editor: Oronzio Manca
Declaration of conflicting interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding
The author(s) disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: This work is supported by the National Natural Science Foundation of China (no. 11771225), the Natural Science Foundation of the Jiangsu Higher Education Institutions of China (no. 18KJB580012), the Natural Science Foundation of Jiangsu Province (no. BK20151272) and the “333” Program Talents of Jiangsu Province (no. BRA2015356).
ORCID iD
Quan Shi
References
1.
Benzi M, Golub GH and Liesen J. Numerical solution of saddle point problems. Acta Numer 2005; 14: 1–137.
2.
Saad Y. Iterative methods for sparse linear systems. 2nd ed. Philadelphia, PA: SIAM, 2003.
3.
Pestana J and Wathen AJ. Natural preconditioning and iterative methods for saddle point systems. SIAM Rev 2015; 57: 71–91.
4.
Bai Z-Z, Golub GH and Ng MK. Hermitian and skew-Hermitian splitting methods for non-Hermitian positive definite linear systems. SIAM J Matrix Anal Appl 2003; 24: 603–626.
5.
Benzi M and Golub GH. A preconditioner for generalized saddle point problems. SIAM J Matrix Anal Appl 2004; 26: 20–41.
6.
Bai Z-Z and Golub GH. Accelerated Hermitian and skew-Hermitian splitting iteration methods for saddle-point problems. IMA J Numer Anal 2007; 27: 1–23.
7.
Pan J-Y, Ng MK and Bai Z-Z. New preconditioners for saddle point problems. Appl Math Comput 2006; 172: 762–771.
8.
Shen S-Q. A note on PSS preconditioners for generalized saddle point problems. Appl Math Comput 2014; 237: 723–729.
9.
Bai Z-Z. Block alternating splitting implicit iteration methods for saddle-point problems from time-harmonic eddy current models. Numer Linear Algebra Appl 2012; 19: 914–936.
10.
Cao Y, Dong J-L and Wang Y-M. A relaxed deteriorated PSS preconditioner for nonsymmetric saddle point problems from the steady Navier-Stokes equation. J Comput Appl Math 2015; 273: 41–60.
11.
Benzi M, Ng MK, Niu Q, et al. A relaxed dimensional factorization preconditioner for the incompressible Navier-Stokes equations. J Comput Phys 2011; 230: 6185–6202.
12.
Cao Y, Yao L-Q, Jiang M-Q, et al. A relaxed HSS preconditioner for saddle point problems from meshfree discretization. J Comput Math 2013; 31: 398–421.
13.
Cao Y, Miao S-X and Cui Y-S. A relaxed splitting preconditioner for generalized saddle point problems. Comput Appl Math 2015; 34: 865–879.
14.
Cao Y, Ren Z-R and Shi Q. A simplified HSS preconditioner for generalized saddle point problems. BIT Numer Math 2016; 56: 423–439.
15.
Zhou S-W, Yang A-L and Wu Y-J. A relaxed block-triangular splitting preconditioner for generalized saddle-point problems. Int J Comput Math 2017; 94: 1609–1623.
16.
Cao Y, Wang A and Chen Y-J. A modified relaxed positive-semidefinite and skew-Hermitian splitting preconditioner for generalized saddle point problems. East Asian J Appl Math 2017; 7: 192–210.
17.
Xie Y-J and Ma C-F. A modified positive-definite and skew-Hermitian splitting preconditioner for generalized saddle point problems from the Navier-Stokes equation. Numer Algor 2016; 72: 243–258.
18.
Simoncini V. Block triangular preconditioners for symmetric saddle-point problems. Appl Numer Math 2004; 49: 63–80.
19.
Cao Z-H. Positive stable block triangular preconditioners for symmetric saddle point problems. Appl Numer Math 2007; 57: 899–910.
20.
Jiang M-Q, Cao Y and Yao L-Q. On parameterized block triangular preconditioners for generalized saddle point problems. Appl Math Comput 2010; 216: 1777–1789.
21.
Cao Y, Jiang M-Q and Zheng Y-L. A note on the positive stable block triangular preconditioner for generalized saddle point problems. Appl Math Comput 2012; 218: 11075–11082.
22.
Bai Z-Z and Ng MK. On inexact preconditioners for nonsymmetric matrices. SIAM J Sci Comput 2005; 26: 1710–1724.
23.
Peng X-F and Li W. A new Bramble-Pasciak-like preconditioner for saddle point problems. Numer Algebra Control Optim 2012; 2: 823–838.
24.
Li X, Wu Y-J, Yang A-L, et al. Modified accelerated parameterized inexact Uzawa method for singular and nonsingular saddle point problems. Appl Math Comput 2014; 244: 552–560.
25.
Bergamaschi L, Gondzio J, Venturin M, et al. Inexact constraint preconditioners for linear systems arising in interior point methods. Comput Optim Appl 2007; 36: 137–147.
26.
Sesana D and Simoncini V. Spectral analysis of inexact constraint preconditioning for symmetric saddle point matrices. Linear Algebra Appl 2013; 438: 2683–2700.
27.
Elman HC, Silvester DJ and Wathen AJ. Finite elements and fast iterative solvers: with applications in incompressible fluid dynamics. 2nd ed. Oxford: Oxford University Press, 2014.
28.
Wathen A and Silvester D. Fast iterative solution of stabilised Stokes systems. Part I: using simple diagonal preconditioners. SIAM J Numer Anal 1993; 30: 630–649.
29.
Silvester D and Wathen A. Fast iterative solution of stabilised Stokes systems. Part II: using general block preconditioners. SIAM J Numer Anal 1994; 31: 1352–1367.
30.
Ren Z-R and Cao Y. An alternating positive-semidefinite splitting preconditioner for saddle point problems from time-harmonic eddy current models. IMA J Numer Anal 2016; 36: 922–946.