Total variation smoothing methods have proven very efficient at discriminating between structures (edges and textures) and noise in images. Recently, it was shown that such methods do not create new discontinuities and that they preserve the modulus of continuity of functions. In this paper, we propose a Galerkin–Ritz method for solving the Rudin–Osher–Fatemi (ROF) image denoising model in which smooth bivariate spline functions on triangulations serve as approximating spaces. Using the extension property of functions of bounded variation on Lipschitz domains, we construct a minimizing sequence of continuous bivariate spline functions of arbitrary degree d for the TV-L2 energy functional and prove the convergence of the finite element solutions to the solution of the ROF model. Moreover, an iterative algorithm for computing spline minimizers is developed and its convergence is proved.
A few decades ago, Rudin et al.1 proposed a constrained total variation minimization method for image enhancement. Suppose that an image f is contaminated by artifacts and that the goal is to reduce these artifacts while preserving the details of the image as much as possible. Assuming that the image f is a function defined on a regular domain Ω ⊂ ℝ², Rudin, Osher and Fatemi's (ROF) approach is to solve the following penalized total variation minimization problem
where the first term is the total variation of u, and λ is a positive parameter controlling the fidelity of the recovered image to the initial image f. From now on, we will denote the total variation of a function u by J(u).
We shall refer to the minimization problem (equation (1)) as the ROF model, and we shall call its objective functional the ROF functional.
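For concreteness, here is the ROF energy in the standard form we assume throughout this presentation; the symbol E_λ and the placement of the weight λ on the fidelity term are notational assumptions of this rewrite, not reproduced from the original display.

```latex
% ROF energy in the standard form assumed here (the notation E_lambda is ours).
\[
  E_\lambda(u) \;=\; \int_\Omega |Du| \;+\; \frac{\lambda}{2}\int_\Omega \bigl(u(x)-f(x)\bigr)^2\,dx,
  \qquad
  \bar u \;=\; \operatorname*{arg\,min}_{u \in BV(\Omega)\cap L^2(\Omega)} E_\lambda(u).
\]
```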
Notice that a minimal condition for the ROF model to be defined is that the image f be a square integrable function over Ω, in which case the domain of the ROF functional is BV(Ω) ∩ L²(Ω), where BV(Ω) stands for the space of functions of bounded variation over Ω.
The last decades have witnessed excellent progress and exciting developments in the analysis of the ROF model. The interested reader is referred to Chambolle et al.2 for a comprehensive introduction to this topic. Since the introduction of the model, much understanding has been gained of the analytic properties of its solution (cf. Caselles et al.,3 Allard,4,5 and Caselles et al.6). For instance, it is known that the solution of the model does not introduce new discontinuities compared to the input data. Specifically, it was proved in Caselles et al.6 that the jump set of the solution is always contained in the jump set of the input image, and in Caselles et al.3 that if f has modulus of continuity ω, then so does the minimizer, provided that Ω is convex. A concise outline of recent progress on the properties of the continuous model can be found in Chambolle et al.7
On the computational side of the model, early algorithms were based on smooth approximations of the primal form of the variational problem (equation (1)) (cf. Acar and Vogel8 and Chambolle and Lions9). Chambolle10 first proposed a gradient descent algorithm operating on the dual form of the original minimization problem, and he proved that this algorithm converges to the exact solution of the discretized model (cf. Chambolle10,11). Much progress has since been made in the 2000s to develop highly accelerated first-order algorithms for the dual problem (cf. Aujol and Dossal,12 Beck and Teboulle,13 Beck,14 Combettes and Wajs,15 Esser et al.,16 Goldstein and Osher,17 Zhu and Chan,18 and Zhu et al.19). See Chambolle and Pock20 for a recent survey of such numerical methods. Independent of the aforementioned approaches, there is another class of combinatorial optimization methods that are very efficient and have been extensively studied by the computer vision community (cf. Boykov et al.,21 Darbon and Sigelle,22,23 and Goldfarb and Yin24). We note that in all of the aforementioned works, the discretization of the ROF model is carried out on a fixed rectangular grid, where finite difference schemes are convenient for computing the total variation. Error bounds for such discretizations were derived in Lai et al.25 and Wang and Lucier.26
To the best of our knowledge, Dobson and Vogel27 (Theorem 2.2, p. 1782) were among the first to investigate solving the total variation minimization problem with Galerkin schemes. They gave a sufficient condition for the convergence of a Galerkin scheme for the ROF model, observed that this condition is easily satisfied when the solution of the ROF model is sufficiently smooth, and suggested that more research be done under less stringent regularity assumptions. Along this line of study, many well-developed methods from the theory of finite elements have been applied to this problem (cf. Lai and Messi,28 Matamba Messi,29 Bartels,30,31 Bartels and Milicevic,32 Chen and Tai,33 Feng and Prohl,34 Feng et al.,35 Litvinov et al.,36 Stamm and Wihler,37 Tian and Yuan,38 and Xu et al.). It is worth noting that all of these works employ either piecewise-constant or piecewise-linear finite elements. With the exception of Bartels' work, they also presume certain regularity conditions on the input data.
This paper addresses Dobson and Vogel's question by constructing a Galerkin scheme that converges regardless of the regularity of the solution of the ROF model. The novelty of our method is that we extend the conventional piecewise-linear finite-element approach to bivariate spline elements of arbitrary degree and prove the convergence of the numerical scheme. We also propose a relaxation algorithm for solving the variational problem using bivariate splines. We note that in numerical experiments, unlike other algorithms that use finite difference schemes, the spline algorithm we propose does not exhibit the notorious "staircase" effect. We also gain flexibility through the ability to select any polygonal region in an image for local treatment; this was not easy with algorithms that required rectangular patches for local denoising. Our method also anticipates the day when images can be captured with non-rectangular CCD devices.
The paper is structured as follows. The next section is devoted to the necessary mathematical preliminaries on functions of bounded variation and bivariate spline functions. In the Galerkin approximation with smooth splines section, we review relevant properties of the ROF model and prove the main result (Theorem 11) of this paper. We introduce an algorithm for approximating the terms of the spline minimizing sequence and prove its convergence in the Fixed point relaxation algorithm section. In the last section, we present numerical evidence of its effectiveness at smoothing images.
Preliminary results
In this section and throughout the paper, the planar domain Ω is assumed polygonal, unless otherwise noted. We also remind the reader that by a domain of ℝ², we mean a connected open subset.
Functions of bounded variation
A function u is said to be of bounded variation if u ∈ L¹(Ω) and its total variation
|Du|(Ω) = sup{ ∫_Ω u(x) div φ(x) dx : φ ∈ C¹_c(Ω; ℝ²), |φ(x)| ≤ 1 for all x ∈ Ω }
is finite. For example, any Sobolev function u ∈ W^{1,1}(Ω) is of bounded variation, with total variation ∫_Ω |∇u(x)| dx.
The set of functions of bounded variation over Ω, denoted BV(Ω), is a Banach space for the norm ‖u‖_{BV(Ω)} = ‖u‖_{L¹(Ω)} + |Du|(Ω).
Furthermore, if u ∈ BV(Ω), then its distributional derivative,40 Du, is a finite vector-valued Radon measure on Ω, and the total variation of Du induces a Borel measure on Ω known as the total variation measure of u over Ω.
The following result asserts that a function u defined on a domain Ω of ℝ² with zero total variation must be constant. In particular, the total variation is a norm, equivalent to the norm defined in equation (7), on the subspace of functions of BV(Ω) with zero average over Ω.
Theorem 1 (Poincaré inequality41,42). Suppose that Ω is a bounded Lipschitz domain of ℝ². Then there exists a constant C, depending only on Ω, such that the L² distance from u to its average value ū_Ω over Ω is bounded by C times the total variation of u. If Ω = ℝ², then there exists C > 0 such that, for any compactly supported function u ∈ BV(ℝ²), the L² norm of u is bounded by C times its total variation.
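In symbols, these are the two inequalities we refer to below as equations (8) and (9); the form given here is the standard one, which we assume matches the original displays.

```latex
% Poincare inequalities for BV functions (standard form assumed).
\[
  \|u - \bar u_\Omega\|_{L^2(\Omega)} \;\le\; C\,|Du|(\Omega),
  \qquad \bar u_\Omega = \frac{1}{|\Omega|}\int_\Omega u\,dx,
\]
\[
  \|u\|_{L^2(\mathbb{R}^2)} \;\le\; C\,|Du|(\mathbb{R}^2)
  \quad\text{for every compactly supported } u \in BV(\mathbb{R}^2).
\]
```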
Another property of functions of bounded variation that is central to our contribution in this work is the existence of an extension operator from BV(Ω) into BV(ℝ²) that does not turn the boundary of Ω into a singular set for the total variation measure.
Theorem 2 (Ambrosio et al.,41 Proposition 3.21, p. 131). Suppose that Ω is a bounded Lipschitz domain. Then for any bounded open set A such that Ω is relatively compact in A, there exists a bounded linear extension operator TA from BV(Ω) into BV(ℝ²) such that the following hold:
For any u ∈ BV(Ω), the total variation measure of TAu does not charge the boundary of Ω, and the support of TAu is contained in A.
The restriction of TA to L²(Ω) is a bounded linear operator into L²(ℝ²).
Proof. The proof is constructive and parallels the construction of an extension operator39 from W^{1,1}(Ω) to W^{1,1}(ℝ²) using a partition of unity argument. The complete proof can be found in Ambrosio et al.41□
We now give the properties of the total variation functional that play a central role in proving the existence and uniqueness of the solution of the ROF model.
Proposition 3. The total variation functional J satisfies the following properties:
(a) J is positively 1-homogeneous, i.e., J(tu) = tJ(u) for all t ≥ 0 and all u ∈ BV(Ω);
(b) J is convex, i.e., J(tu + (1 − t)v) ≤ tJ(u) + (1 − t)J(v) for all u, v ∈ BV(Ω) and all t ∈ [0, 1];
(c) J is lower semicontinuous, i.e., if (un) is a sequence which converges to u, then J(u) ≤ lim inf_{n→∞} J(un).
Proof. The proof of the proposition is straightforward, with (a) and (b) following from the definition of the total variation, while (c) is a consequence of the Lebesgue dominated convergence theorem.□
To establish the main result of this paper, we need to construct a sequence of smooth functions converging to a given function of bounded variation for which equality holds in equation (10). By exploiting the extension property of functions of bounded variation (see Theorem 2 above), we will use the following lemma to achieve this goal.
Lemma 4 (Giusti,42 Proposition 1.15). Suppose u ∈ BV(Ω). If A is a relatively compact open subset of Ω such that |Du|(∂A) = 0, then
lim_{ϵ→0} ∫_A |∇u_ϵ(x)| dx = |Du|(A),
where u_ϵ = η_ϵ ∗ u and η is a radially symmetric mollifier.
Bivariate spline functions
Let Δ be a triangulation of Ω. A spline function on the triangulation Δ is a function s defined on Ω such that, for any triangle T ∈ Δ, the restriction of s to T is a polynomial. The degree of a spline function is the maximum degree of its restrictions to the elements of the triangulation Δ. The space of spline functions of degree d on Δ consists of all functions whose restriction to each triangle of Δ belongs to P_d, the vector space of bivariate polynomials of degree less than or equal to d. The space of smooth spline functions of degree d and smoothness order r consists of those splines of degree d that belong to C^r(Ω).
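In the standard Lai–Schumaker notation,43 which we assume agrees with the definitions above, the two spaces are as follows.

```latex
% Spline spaces in standard notation (assumed to match the definitions in the text).
\[
  S_d(\triangle) \;=\; \bigl\{ s : \Omega \to \mathbb{R} \;:\; s|_T \in P_d \ \text{for every } T \in \triangle \bigr\},
  \qquad
  S^r_d(\triangle) \;=\; S_d(\triangle) \cap C^r(\Omega).
\]
```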
Given a basis of the polynomial space P_d, it is easy to see that the space of all (not necessarily continuous) splines of degree d is isomorphic to a coordinate space of dimension mN, where m = dim P_d = (d + 1)(d + 2)/2 and N is the number of triangles in Δ, while the space of smooth splines is a linear subspace of the form43
where A(r) is a matrix encoding the smoothness conditions across the interior edges of the triangulation Δ, and E is the number of interior edges of Δ. Notice that we can use a different basis of P_d for each triangle, and in such an instance we shall write
For our purposes in this paper, we shall use the Bernstein–Bézier basis of P_d on each triangle T ∈ Δ.
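As an illustration of the Bernstein–Bézier form on a single triangle, the sketch below evaluates a degree-d polynomial from its B-coefficients via barycentric coordinates; the function names and the coefficient indexing (i, j, k) with i + j + k = d are our own choices for this sketch, not the paper's.

```python
import numpy as np
from math import factorial

def barycentric(p, v1, v2, v3):
    """Barycentric coordinates of point p with respect to triangle (v1, v2, v3)."""
    T = np.array([[v1[0] - v3[0], v2[0] - v3[0]],
                  [v1[1] - v3[1], v2[1] - v3[1]]])
    b12 = np.linalg.solve(T, np.asarray(p, dtype=float) - np.asarray(v3, dtype=float))
    return np.array([b12[0], b12[1], 1.0 - b12.sum()])

def eval_bb(coeffs, d, p, v1, v2, v3):
    """Evaluate a degree-d Bernstein-Bezier polynomial at p.

    `coeffs` maps a multi-index (i, j, k) with i + j + k = d to the
    B-coefficient c_ijk (an ordering chosen for this sketch)."""
    b1, b2, b3 = barycentric(p, v1, v2, v3)
    value = 0.0
    for i in range(d + 1):
        for j in range(d + 1 - i):
            k = d - i - j
            bern = factorial(d) / (factorial(i) * factorial(j) * factorial(k)) \
                   * b1**i * b2**j * b3**k
            value += coeffs[(i, j, k)] * bern
    return value

# Example: a quadratic whose B-coefficients are all 1 is identically 1
# (partition of unity of the Bernstein basis).
tri = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
c = {(i, j, 2 - i - j): 1.0 for i in range(3) for j in range(3 - i)}
print(eval_bb(c, 2, (0.3, 0.4), *tri))  # -> 1.0
```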
Spline functions have been used with much success in the numerical solution of partial differential equations by variational methods,44–48 and more recently for the numerical simulation of the Darcy–Stokes equation.49 In general, spline functions may be used as approximation spaces to study certain classes of variational equations via the Galerkin method. Their appeal to us in this work is twofold. Firstly, bivariate spline functions possess good approximation power in Sobolev spaces, as illustrated by the following theorem.
Theorem 5 (Lai and Schumaker,43 Theorem 10.2, p. 277). Suppose that Δ is a regular triangulation of Ω of mesh size h > 0, and let the degree d and a Sobolev smoothness index be given. Then for every function with that Sobolev regularity, there exists a spline function of degree d on Δ such that
where K depends only on d and the smallest angle of Δ, and
Secondly, on spline spaces the differential operators act as bounded linear operators, with operator norms controlled by the geometry of the triangulation. This property is known in the literature as the Markov inequality.
Theorem 6 (Markov inequality; Lai and Schumaker,43 Theorem 2.32). Let Δ be a triangulation of Ω, and let the degree d and the integrability exponent be fixed. There exists a constant K depending only on d such that, for all nonnegative integers α and β with α + β ≤ d, we have
where ρt denotes the inradius of the triangle t.
Remark 7. Notice that one can define a spline function space on any simplicial tiling of the domain Ω, and all polynomial-based finite element spaces are subspaces of such spline spaces for some choice of d.
Natural images contain structural information in the form of edges and textures. Resolving these entities with continuous spline functions will require fine triangulations which in turn demand high resolution images. An alternative is to use high order spline functions on moderate size triangulations as these are more capable of capturing the variations corresponding to edges than continuous spline functions on such triangulations. The following result gives the relationship between the degree and the order of the spline function to guarantee suitable approximation power in Sobolev spaces.
Theorem 8 (Lai and Schumaker,43 Theorem 10.10). Let d ≥ 3r + 2 and suppose Δ is a regular triangulation of Ω of mesh size h. Then for every sufficiently smooth function, there exists a spline of degree d and smoothness r on Δ such that
If Ω is convex, then the constant K depends only on the degree d and the smallest angle in Δ; otherwise K also depends on the Lipschitz constant of the boundary of Ω.
Galerkin approximation with smooth splines
In this section, we describe how we arrive at a family of continuous bivariate spline functions that approximate the minimizer of the ROF functional.
Before undertaking the analysis of our approximation method, let us briefly explain why the ROF model is well posed. In fact, Proposition 3 implies that, for f ∈ L²(Ω) and λ > 0 fixed, the ROF functional is strictly convex and lower semicontinuous on BV(Ω) ∩ L²(Ω) for the norm of L²(Ω). Therefore, the ROF model (1) has a unique solution, and the problem is well posed, as illustrated by the following result.
Theorem 9.Letbe the minimizer of the ROF functional. Then for any, there holds
and
Moreover, ifis the minimizer of, then
Proof. The proof is a simple exercise in convex analysis and uses the characterization of the minimizer of a convex functional via subdifferentials. The complete proof can be found in Matamba Messi.29□
The approximation of the minimizer of the ROF model by continuous spline functions is possible because spline spaces possess very good approximation power in high-order Sobolev spaces, as illustrated by Theorem 5, and a function of bounded variation can be approximated by smooth functions. In using Theorem 5, we will need to control the norms of high-order derivatives of the mollification of a BV function. This is done in the lemma below.
Lemma 10. Letbe fixed. Then for any integer, any pair of nonnegative integersuch that, and any, we have
where C is a constant depending only on m and Ω.
Proof. Let be given. Let α and β be two nonnegative integers such that ; we may assume without loss of generality that . Then, with we have
Thus
Now by Hölder’s inequality we have
a simple computation shows that
where is the Lebesgue measure of Ω. Consequently
where
Taking the supremum in equation (22) over all such that , we obtain by duality and a denseness argument that
□
Suppose that Ω is endowed with a regular triangulation Δh of mesh size h, and let the degree d ≥ 1 be given. As a finite-dimensional space, the space of continuous splines of degree d on Δh is a closed and convex subset of L²(Ω). Thus, the ROF functional has a unique minimizer over this spline space; this spline minimizer is the object we study for the rest of this section.
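A reconstruction of its defining minimization problem, in the notation assumed in this rewrite (u_h for the spline minimizer and S^0_d(Δh) for the continuous splines of degree d on Δh):

```latex
% Spline ROF minimizer (reconstruction; the symbols u_h and S^0_d are assumed).
\[
  u_h \;=\; \operatorname*{arg\,min}_{s \in S^0_d(\triangle_h)}
  \left\{ \int_\Omega |\nabla s(x)|\,dx \;+\; \frac{\lambda}{2}\int_\Omega \bigl(s(x)-f(x)\bigr)^2\,dx \right\}.
\]
```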
We are ready to prove that our construction of spline minimizers above yields a minimizing sequence for the ROF functional. Let hn be a monotonically decreasing sequence of real numbers such that hn → 0 as n → ∞. Let Δn be a quasi-regular triangulation of Ω with mesh size hn and smallest angle θn. We have the following result:
Theorem 11. Suppose that the sequence of regular triangulations is such that
Given f ∈ L²(Ω), the sequence of spline minimizers over the triangulations Δn is a minimizing sequence for the ROF functional.
Proof. Let T be the extension operator associated with a fixed bounded neighborhood of Ω, the existence of which is guaranteed by Theorem 2. We recall that T is also a bounded linear operator from L²(Ω) into L²(ℝ²), and that for any u, Tu is supported in that neighborhood of Ω.
Let and be fixed. Let be the minimizer of and put . Let be as in Theorem 5. Then by Lemma 10, we have
where C depends solely on d and θ. Moreover, since is bounded, and Tu is compactly supported for every u, it follows from the Poincaré inequality (equation (9)) that
with C a universal constant depending only on the neighborhood of Ω, and where ‖T‖ denotes the operator norm of T.
We now show that, by choosing a suitable regularization scale ϵ, the energies of the spline minimizers converge to the minimum value of the ROF functional as n → ∞. In fact, for any n, we have
So to finish the proof, it suffices to show that
and
as for a suitable choice of ϵ. First, we observe that the convergence of to follows from the fact that
and by Lemma 4 applied to
Second, by the triangle inequality and direct calculation we have
Now, using the estimate (equation (27)) and letting , we infer from the latter inequality that
where
Thus, as and the proof is complete.□
Remark 12. It is easy to construct a sequence of triangulations with vanishing mesh sizes for which condition (25) is satisfied. Starting from a triangulation of Ω with smallest angle θ0 and mesh size h0, a sequence of triangulations Δn is generated via successive refinements as follows: given Δn, we obtain Δn+1 by subdividing each triangle t of Δn into four triangles by connecting the midpoints of the edges of t. The resulting triangulation has mesh size hn/2 and smallest angle θ0.
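A minimal sketch of this uniform midpoint refinement; the data layout and function names are ours, not the paper's.

```python
import numpy as np

def refine(vertices, triangles):
    """One step of uniform refinement: split every triangle into four
    congruent children by connecting edge midpoints.  The mesh size is
    halved and the smallest angle is preserved."""
    vertices = [tuple(v) for v in vertices]
    midpoint_index = {}

    def midpoint(a, b):
        key = (min(a, b), max(a, b))
        if key not in midpoint_index:
            m = tuple((np.array(vertices[a]) + np.array(vertices[b])) / 2.0)
            midpoint_index[key] = len(vertices)
            vertices.append(m)
        return midpoint_index[key]

    new_triangles = []
    for (a, b, c) in triangles:
        ab, bc, ca = midpoint(a, b), midpoint(b, c), midpoint(c, a)
        new_triangles += [(a, ab, ca), (ab, b, bc), (ca, bc, c), (ab, bc, ca)]
    return np.array(vertices, dtype=float), new_triangles

# Example: refining a single triangle yields 4 triangles and 6 vertices.
verts, tris = refine([(0, 0), (1, 0), (0, 1)], [(0, 1, 2)])
print(len(tris), len(verts))  # -> 4 6
```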
Corollary 13. Under the assumptions of Theorem 11, the sequence of spline minimizers satisfies the following two properties:
and
Proof. Since Ω is a bounded domain, it suffices to establish (29) for p = 2; the result for p < 2 then follows from the fact that L²(Ω) is canonically embedded into L^p(Ω). The case p = 2 follows easily from Theorem 9 and Theorem 11. Indeed, owing to equation (18), we have for all n
thus by Theorem 11 above, we have
as n → ∞. Finally, since
and the sequence is bounded, thanks to Theorem 11, taking the limit of the latter identity as n → ∞ yields (30), and the proof is complete.□
Remark 14. Bartels52 establishes Corollary 13 for the case d = 1. Our result generalizes this and applies to higher-order finite elements under h-refinement for which property (equation (14)) holds for infinitely differentiable functions.
Remark 15. The results of Theorem 11 and Corollary 13 hold if we replace the continuous spline space with the smooth spline space in the definition of the spline minimizer, provided that the hypotheses of Theorem 8 hold. In particular, we must have d ≥ 3r + 2 and a family of regular triangulations.
Fixed point relaxation algorithm
The challenge in computing with the ROF model stems from the fact that the objective functional is not Gâteaux differentiable, so the solution cannot be described in terms of the first variation of the functional. The reason we cannot use the standard machinery of the calculus of variations to solve equation (24) is that the associated Lagrangian is not differentiable with respect to the gradient variable at the origin. One way to mitigate this difficulty is to find a differentiable relaxation of the Lagrangian L such that the corresponding energy functional is a perturbation of the original one; this approach has been used successfully on at least three occasions in the literature.8,9,27
Following Chambolle and Lions,9 we let be the real-valued function defined on by
and consider the optimization problem
It is easy to check that the relaxed functional is strictly convex and lower semicontinuous on the spline space with respect to the L2-norm. Consequently, the minimization problem (equation (32)) has a unique solution. We now show that the family of relaxed functionals converges uniformly to the spline ROF functional and that the relaxed minimizers converge to the spline minimizer as ϵ goes to zero.
Proposition 16. The family of relaxed functionals converges uniformly to the spline ROF functional, and the relaxed minimizers converge to the spline minimizer, as ϵ → 0. Furthermore, under the assumptions of Theorem 11, convergence also holds as h goes to 0.
Proof. Let be the continuous function defined on by . It is easy to show that
Therefore, for any we have the estimate
and it follows that converges uniformly in to .
Next, we note that Theorem 9 remains true on . Therefore, rewriting equation (18) in for , we obtain
Thus, and it follows that converges to in as ϵ goes to 0. Finally, by the triangle inequality we have
Taking the limit of the latter inequality as h goes to 0 and using Corollary 13, it follows that the relaxed spline minimizers converge in L²(Ω) as h goes to 0.□
Unlike the spline ROF model (24) for which we cannot use the first variation to characterize the solution, the functional associated with the relaxation problem (equation (32)) is Gâteaux differentiable. Therefore, the spline function is characterized by:
Proposition 17. A function is the minimizer of the functional in if and only if u satisfies the variational equation
for all, where.
Proof. First, we observe that is Gâteaux differentiable with directional derivatives at any point given by
Therefore, u is a minimizer of in if and only if , i.e. (equation (33)) holds.□
We now set up the variational equation (33) as a fixed point equation for a nonlinear operator on . Fix and define the bilinear form on by
By the Markov inequality (Theorem 6), this is a continuous, coercive bilinear form on the spline space viewed as a subspace of L²(Ω). Thus, by the Lax–Milgram theorem (Brenner and Scott,50 Theorem 2.7.7), for any right-hand side in the spline space the equation
has a unique solution in the spline space. Moreover, since the bilinear form is symmetric, the solution is characterized by
Hence for fixed, we define the nonlinear operator
It is easily shown, using the characterizing equation (35) and the dominated convergence theorem, that the operator F is continuous. Furthermore, Proposition 17 above characterizes the relaxed spline minimizer as a fixed point of F. Thus we may compute it using the fixed point iterative scheme presented and analyzed below.
Algorithm 18. Start from any bounded nonnegative function and for , let
A standard argument using the Lax–Milgram theorem (see Brezis,39 Corollary 5.8, p. 140) shows that un is characterized by the variational equation
for all . The existence and uniqueness of follows by observing that the bilinear form
is continuous—thanks to Theorem 6—and coercive on with respect to the L2-norm. Consequently, the equation (37) has a unique solution.
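To make the iteration concrete, the sketch below implements a lagged-diffusivity fixed-point loop of this type for the lowest-degree case d = 1 (continuous piecewise-linear splines), with the Acar–Vogel relaxation √(t² + ϵ²) used as a stand-in for the relaxation above; the assembly routines, variable names, and the lumped mass matrix are simplifications of ours, not the paper's algorithm verbatim.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def p1_relaxed_rof(verts, tris, f_nodal, lam, eps=1e-2, iters=50):
    """Lagged-diffusivity fixed-point iteration for a relaxed ROF energy on
    continuous piecewise-linear (d = 1) splines.  Each step freezes the
    weight 1/sqrt(|grad u_n|^2 + eps^2) and solves a linear variational
    problem, mirroring the structure of the fixed point scheme above."""
    verts = np.asarray(verts, dtype=float)
    f_nodal = np.asarray(f_nodal, dtype=float)
    n = len(verts)

    # Per-triangle hat-function gradients (2 x 3 matrices) and triangle areas.
    grads, areas = [], []
    for (a, b, c) in tris:
        B = np.column_stack((verts[b] - verts[a], verts[c] - verts[a]))
        areas.append(abs(np.linalg.det(B)) / 2.0)
        grads.append(np.linalg.inv(B).T @ np.array([[-1.0, 1.0, 0.0],
                                                    [-1.0, 0.0, 1.0]]))

    # Lumped mass matrix: one third of the adjacent triangle areas per vertex.
    mass = np.zeros(n)
    for (a, b, c), area in zip(tris, areas):
        mass[[a, b, c]] += area / 3.0
    M = sp.diags(mass)
    rhs = lam * (M @ f_nodal)

    u = f_nodal.copy()
    for _ in range(iters):
        rows, cols, vals = [], [], []
        for tri, area, G in zip(tris, areas, grads):
            idx = list(tri)
            grad_u = G @ u[idx]
            w = 1.0 / np.sqrt(grad_u @ grad_u + eps**2)   # frozen weight v_n
            Ke = w * area * (G.T @ G)                     # 3 x 3 element matrix
            for i in range(3):
                for j in range(3):
                    rows.append(idx[i]); cols.append(idx[j]); vals.append(Ke[i, j])
        A = sp.coo_matrix((vals, (rows, cols)), shape=(n, n)).tocsr()
        u = spla.spsolve(A + lam * M, rhs)                # next iterate u_{n+1}
    return u
```

On the mesh produced by the refinement sketch above, for instance, p1_relaxed_rof(verts, tris, f, lam=10.0) would return nodal values of a relaxed d = 1 minimizer; the higher-degree smooth splines of the paper would additionally require the Bernstein–Bézier machinery and the smoothness constraints on the spline coefficients.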
We fix and for the sake of notation conciseness, consider the functional E defined by
It is easy to check that
Lemma 19.The sequenceis bounded inand satisfies for all, and any
In particular, we have for all
Proof. We observe that in view of Theorem 6, proving the boundedness of in is equivalent to proving its boundedness in . Let be given. Then by definition of un, we have
Consequently, we get , and deduce by the triangle inequality that
We now show that the sequence of energies E(un) is nonincreasing. For any n, we have
In particular for any ,
Thus, the sequence is monotone nonincreasing and .□
Theorem 20. The sequence constructed in Algorithm 18 converges to the minimizer of the relaxed functional.
Proof. In view of Proposition 17, it suffices to show that any cluster point u of the sequence with respect to the L2-norm satisfies the Euler-Lagrange equation (33). To begin, we note that the sequence has at least one cluster point as a bounded sequence in a finite dimensional normed vector space.
Let u be any cluster point of in and a subsequence such that . Since
it follows that as well. By Markov inequality—Theorem 6—we also have
Therefore, by Lebesgue dominated convergence theorem, we get
Next, we establish that u satisfies the variational equation
for all . Indeed by definition of , there holds
for all . Since converges strongly to in and converges strongly to , it follows that
as . Similarly, as converges strongly to u in , we infer that
On passing to the limit as in (43) and taking into account equations (44) and (45), we obtain equation (42) and the proof is complete.□
Remark 21. Many choices of the function used to construct a relaxation of the ROF functional are possible; presumably any family of convex, continuously differentiable functions that yields a uniform approximation of the Euclidean norm should do the trick. A common choice in the literature, introduced by Acar and Vogel,8 is the function t ↦ √(t² + ϵ²).
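As a quick illustration of the uniform-approximation requirement, the script below checks numerically that this choice stays within ϵ of the absolute value (the elementary bound |√(t² + ϵ²) − |t|| ≤ ϵ); it is only an illustration, not part of the algorithm.

```python
import numpy as np

def phi(t, eps):
    """Acar-Vogel smoothed absolute value: convex, C^1, and within eps of |t|."""
    return np.sqrt(t**2 + eps**2)

t = np.linspace(-5.0, 5.0, 10001)
for eps in (1.0, 0.1, 0.01):
    gap = np.max(phi(t, eps) - np.abs(t))   # worst-case deviation, attained at t = 0
    print(f"eps = {eps:5.2f}   max deviation from |t| = {gap:.4f}  (bounded by eps)")
```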
Remark 22. For d = 1 the relaxation algorithm is not necessary, as a direct algorithm for computing the minimizers is readily available. Indeed, in this case the ROF model over the continuous piecewise affine functions can be rewritten as a saddle-point problem, which one can then solve using the first-order primal-dual algorithm studied by Chambolle and Pock.51 Indeed, Bartels52 has studied this approach in detail and provided ample evidence of convergence.
Applications to digital image processing
In this section, we report the results of some numerical experiments done using the algorithm described above on digital images. It is well known (some of these observations have been confirmed by theory) that:
the ROF model is excellent on piecewise constant images up to a reduction in contrast;
finite difference algorithms for the ROF model are vulnerable to the staircase effect, whereby smooth regions are recovered as mosaics of piecewise constant subregions;
total variation based image enhancement methods are ineffective in discriminating textures from noise at well mixed scales.
Two examples illustrating the issues raised above are provided, using both the finite difference and spline methods. However, we will see that the staircase effect is tamed by the spline method.
A digital total variation spline model
A digital image is a quantization of a light intensity field. For our purposes here, we model digital images as samples or quantizations of functions defined on a domain Ω. For example, an image with resolution M × N can be thought of as the evaluation of a function f on an M × N grid of its rectangular domain; alternatively, we can think of it as a sample of local averages of the function f on squares centered at the grid points (i, j).
The algorithm described in the previous section assumes that f is a function on the continuum domain Ω; however, digital images are merely samples of such functions. Therefore, to process digital images with the ROF model on spline spaces, we should first estimate the function f from its samples. One way to do this is to use one of the penalized spline fitting methods introduced by Awanou et al.53 The problem with this approach is that the preliminary estimation step significantly modifies the input data: when the estimated function is fed to the ROF model, we cannot easily isolate the contribution of the total variation smoothing procedure to the final output.
In order to clearly illustrate the effect of the total variation smoothing procedure in digital image processing, we solve the following variant of the spline minimization problem (equation (24))
where P is the total number of pixels and s(xi, yj) is the value of the spline function s at the pixel location (xi, yj). We have simply replaced the continuum L2 fidelity term with a discrete counterpart based on the available data. In general, the optimization problem (50) may not have a solution unless the pixel locations are well distributed across the triangles of Δh.
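Our reconstruction of this discrete-fidelity problem in the notation assumed above; whether the sum is normalized by the pixel count P, as written here, is an assumption of this sketch.

```latex
% Discrete-fidelity spline model (reconstruction; the 1/P normalization is assumed).
\[
  \min_{s \in S^r_d(\triangle_h)} \;
  \int_\Omega |\nabla s(x)|\,dx \;+\; \frac{\lambda}{2P}\sum_{(i,j)\in D}
  \bigl( s(x_i, y_j) - f_{ij} \bigr)^2,
\]
% where D is the set of pixel locations and f_{ij} the observed pixel values.
```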
Theorem 23. Suppose that the pixel locations
are such that, the mapping
is a norm on the spline space. Then there exists a unique spline function such that
where
Proof. Notice that Ed is strictly convex and continuous on the spline space; therefore it has at most one minimizer. Let (sn) be a minimizing sequence of Ed, i.e., Ed(sn) converges to the infimum of Ed. The sequence (sn) is bounded, and since we assume that ND is a norm and the spline space is finite-dimensional, it follows that any subsequence of (sn) has a convergent subsequence with respect to the norm ND. Now, if s is the limit of a subsequence of (sn), then by continuity of Ed, s is a minimizer of Ed. Thus, the set of minimizers of Ed is nonempty. Finally, since Ed is strictly convex and the limit points of (sn) are minimizers of Ed, we infer that the minimizing sequence converges to the unique minimizer sh of Ed.□
Remark 24. The condition (49) is equivalent to saying that the set D of pixel locations is a determining set for the space of piecewise polynomials of degree d on Δh, that is, every element of that space is uniquely determined by its values at the pixel locations. Consequently, each triangle T should contain at least (d + 1)(d + 2)/2 pixel locations.
Remark 25. In practice, the pixel locations are fixed by the image processing task. Therefore, given a choice of the degree d, condition (49) restricts our options for the triangulation as well as the shape of the individual triangles. For example, when denoising an M × N image, we may not use a triangulation containing more than 2MN/((d + 1)(d + 2)) triangles.
Following the Fixed point relaxation algorithm section above, the actual computation is done by iteratively solving the sequence of quadratic programs
where
These expressions of vn correspond to the relaxations derived from the functions
We shall refer to the relaxation associated with the minimal-surface-type function as the minimal surface approximation (MSA) spline method.
Image processing experiments
In this section, we present the results of numerical experiments in digital image processing. We consider denoising, inpainting, and image resizing. Each of these tasks can be performed using total variation smoothing and can be formulated in the form given in equation (50).
Experiment 1 (Denoising a cartoon image).
In this test, we use the ROF model to remove Gaussian noise added to a binary image made of five geometric shapes. For comparison purposes, we ran the spline Algorithm 18 and the finite difference algorithm studied by the authors in Lai and Messi.28 The spline algorithm is less capable of accurately resolving the edges than the finite difference algorithm, as seen by visually comparing panel (d) and panel (f) in Figure 1. The performance of the spline algorithm may be improved by choosing a triangulation that is adapted to the edges in the image. However, generating such triangulations increases the computational cost of the algorithm, as we would first have to identify the edges, and the performance of the edge detection algorithm may be hampered by the noise in the data.
(a) The original cartoon image overlayed with a triangulation made of 562 triangles and 316 vertices. (b) The noised image obtained by adding a white noise with σ = 25 to the cartoon image. (c) The image recovered with the projected gradient algorithm. (d) The difference between the noised and recovered images. (e) The image recovered by fitting a continuous cubic spline over the triangulation in image (a). (f) The difference between the spline and noised images.
Experiment 2 (Denoising a natural image).
We now show the performance of the spline algorithm on a natural image with minor textures. Both the projected gradient algorithm and the spline method effectively reduce the noise. The finite difference method produces sharper edges than the spline method; see the shirt collar in panels (d) and (f) of Figure 2. However, the finite difference method results in an image with more blocky regions than the one recovered by the spline method; see panels (b) and (c) in Figure 3.
(a) A toddler portrait. (b) The noised image obtained by adding a white noise with σ = 25 to the image in (a). (c) The image recovered with the projected gradient algorithm. (d) The difference between the noised and recovered images. (e) The image recovered by fitting a continuous cubic spline over a mesh made of 1271 triangles. (f) The residual image of the spline algorithm.
The spline method is less vulnerable to the staircase effect. (a) Portion of the noised image in Figure 1(b). (b) The image recovered using the projected gradient algorithm. (c) The image recovered with the spline method.
Experiment 3 (Denoising a natural image with an edge-adapted triangulation). We show that by using a triangulation that aligns with the edges in the image, our approach becomes very competitive with the finite difference method as measured by the peak signal-to-noise ratio (PSNR); see Table 1. In this experiment, the MSA spline method is run separately on the regions identified by a segmentation of the image; see Figure 4, panel (b).
(a) The image to be cleaned was obtained by adding a sample of white noise with σ = 20 to the ground truth image in panel (b). (b) Edge-adapted triangulation of the image domain. (c) Recovered image using the MSA spline approximation with ϵ = 1, d = 6 and r = 1. (d) Recovered image using Perona–Malik finite difference denoising.
Table 1. The PSNRs of various regions in Figure 4, panel (b).

Region                   Perona–Malik   MSA spline
Region 1 (red, left)     29.9161        29.9109
Region 2 (green)         28.8445        28.9952
Region 3 (yellow)        31.6307        31.6989
Region 4 (purple)        33.0744        33.1167
Region 5 (blue)          33.3464        33.5832
Region 6 (red, right)    32.4490        32.5882
Experiment 4 (Total variation image resizing). Image resizing consists of increasing or decreasing the resolution of a given digital image. We achieve this easily by fitting a spline function to the available image using (50) and evaluating the resulting spline at the pixel locations of the new resolution. We illustrate this approach with two examples in Figure 5; each image in the first column is resized by a factor of 10, that is, the resulting images have 10 times more pixels along each direction than the original images. The first original image has dimensions 18 × 28 pixels, while the second one has a resolution of 15 × 24 pixels. One can notice that both the bicubic and bilinear interpolation methods lead to Gibbs-type artifacts at the edges of the two images, while our approach barely shows any such effect.
Two images are scaled by a factor of 10 using the MSA spline method with d = 7 and r = 1. The results are compared to the bicubic and bilinear interpolation methods implemented in MATLAB's imresize function.54
Declaration of Conflicting Interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding
Jingyue Wang acknowledges support from the National Natural Science Foundation of China under Grant 11771084, and from the Natural Science Foundation of Fujian under Grant 2016J01016.
Acknowledgements
Leopold Matamba Messi acknowledges support from the U.S. National Science Foundation under Grant DMS-0931642 to the Mathematical Biosciences Institute.
References
1. Rudin L, Osher S, Fatemi E. Nonlinear total variation based noise removal algorithms. Physica D 1992; 60: 259–268.
2. Chambolle A, Caselles V, Cremers D, et al. An introduction to total variation for image analysis. In: Theoretical foundations and numerical methods for sparse recovery, volume 9 of Radon Ser Comput Appl Math. Berlin: Walter de Gruyter, 2010, pp. 263–340.
3. Caselles V, Chambolle A, Novaga M. Regularity for solutions of the total variation denoising problem. Rev Mat Iberoam 2011; 27: 233–252.
4. Allard WK. Total variation regularization for image denoising, I. Geometric theory. SIAM J Math Anal 2007; 39: 1150–1190.
5. Allard WK. Total variation regularization for image denoising, II. Examples. SIAM J Imaging Sci 2008; 1: 400–417.
6. Caselles V, Chambolle A, Novaga M. The discontinuity set of solutions of the TV denoising problem and some extensions. Multiscale Model Simul 2007; 6: 879–894.
7. Chambolle A, Duval V, Peyré G, et al. Geometric properties of solutions to the total variation denoising problem. Inverse Probl 2017; 33: 015002.
8. Acar R, Vogel CR. Analysis of bounded variation penalty methods for ill-posed problems. Inverse Probl 1994; 10: 1217–1229.
9. Chambolle A, Lions P-L. Image recovery via total variation minimization and related problems. Numer Math 1997; 76: 167–188.
10. Chambolle A. An algorithm for total variation minimization and applications. J Math Imaging Vis 2004; 20: 89–97 (special issue on mathematics and image analysis).
11. Chambolle A. Total variation minimization and a class of binary MRF models. In: Rangarajan A, Vemuri B, Yuille A (eds) Energy minimization methods in computer vision and pattern recognition, volume 3757 of Lecture Notes in Computer Science. Berlin/Heidelberg: Springer, 2005, pp. 136–152.
12. Aujol J-F, Dossal C. Stability of over-relaxations for the forward–backward algorithm, application to FISTA. SIAM J Optim 2015; 25: 2408–2433.
13. Beck A, Teboulle M. A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM J Imaging Sci 2009; 2: 183–202.
14. Beck A. On the convergence of alternating minimization for convex programming with applications to iteratively reweighted least squares and decomposition schemes. SIAM J Optim 2015; 25: 185–209.
15. Combettes PL, Wajs VR. Signal recovery by proximal forward-backward splitting. Multiscale Model Simul 2005; 4: 1168–1200.
16. Esser E, Zhang X, Chan TF. A general framework for a class of first order primal-dual algorithms for convex optimization in imaging science. SIAM J Imaging Sci 2010; 3: 1015–1046.
18. Zhu M, Chan TF. An efficient primal dual hybrid gradient algorithm for total variation image restoration. Technical Report 08-34, UCLA, Center for Applied Math, 2008.
20. Chambolle A, Pock T. An introduction to continuous optimization for imaging. Acta Numerica 2016; 25: 161–319.
21. Boykov Y, Veksler O, Zabih R. Fast approximate energy minimization via graph cuts. IEEE Trans Pattern Anal Machine Intell 2001; 23: 1222–1239.
22. Darbon J, Sigelle M. Image restoration with discrete constrained total variation—part I: fast and exact optimization. J Math Imaging Vis 2006; 26: 261–276.
23. Darbon J, Sigelle M. Image restoration with discrete constrained total variation—part II: levelable functions, convex priors and non-convex cases. J Math Imaging Vis 2006; 26: 277–291.
24. Goldfarb D, Yin W. Parametric maximum flow algorithms for fast total variation minimization. SIAM J Sci Comput 2009; 31: 3712–3743.
25. Lai M, Lucier B, Wang J. The convergence of a central-difference discretization of Rudin–Osher–Fatemi model for image denoising. Scale Space Variat Methods Comput Vision 2009; 5567: 514–526.
26. Wang J, Lucier BJ. Error bounds for finite-difference methods for Rudin–Osher–Fatemi image smoothing. SIAM J Numer Anal 2011; 49: 845–868.
27. Dobson DC, Vogel CR. Convergence of an iterative method for total variation denoising. SIAM J Numer Anal 1997; 34: 1779–1791.
28. Lai M-J, Messi LM. Piecewise linear approximation of the continuous Rudin–Osher–Fatemi model for image denoising. SIAM J Numer Anal 2012; 50: 2446–2466.
29. Matamba Messi L. Theoretical and numerical approximation of the Rudin–Osher–Fatemi model for image denoising in the continuous setting. PhD thesis, The University of Georgia, Athens, GA, 2012.
30. Bartels S. Broken Sobolev space iteration for total variation regularized minimization problems. IMA J Numer Anal 2016; 36: 493–502.
31. Bartels S. Error control and adaptivity for a variational model problem defined on functions of bounded variation. Math Comp 2015; 84: 1217–1240.
32. Bartels S, Milicevic M. Iterative finite element solution of a constrained total variation regularized model problem. Discrete Contin Dyn Syst Ser S 2017; 10: 1207–1232.
33. Chen K, Tai X-C. A nonlinear multigrid method for total variation minimization from image restoration. J Sci Comput 2007; 33: 115–138.
34. Feng X, Prohl A. Analysis of total variation flow and its finite element approximations. ESAIM: M2AN 2003; 37: 533–556.
35. Feng X, von Oehsen M, Prohl A. Rate of convergence of regularization procedures and finite element approximations for the total variation flow. Numer Math 2005; 100: 441–456.
36. Litvinov W, Rahman T, Tai X. A modified TV-Stokes model for image processing. SIAM J Sci Comput 2011; 33: 1574–1597.
37. Stamm B, Wihler T. A total variation discontinuous Galerkin approach for image restoration. Int J Numer Anal Model 2015; 12: 81–93.
38. Tian W, Yuan X. Convergence analysis of primal–dual based methods for total variation minimization with finite element approximation. J Sci Comput 2018; 76: 243–274.
39. Brezis H. Functional analysis, Sobolev spaces and partial differential equations. Universitext. New York: Springer, 2011.
41. Ambrosio L, Fusco N, Pallara D. Functions of bounded variation and free discontinuity problems. Oxford Mathematical Monographs. New York: The Clarendon Press, Oxford University Press, 2000.
42. Giusti E. Minimal surfaces and functions of bounded variation, volume 80 of Monographs in Mathematics. Birkhäuser, 1984.
43. Lai M-J, Schumaker LL. Spline functions on triangulations, volume 110 of Encyclopedia of Mathematics and its Applications. Cambridge: Cambridge University Press, 2007.
44. Lai M-J, Liu C, Wenston P. Bivariate spline method for numerical solution of time evolution Navier–Stokes equations over polygons in stream function formulation. Numer Methods Partial Differential Eq 2003; 19: 776–827.
45. Lai M-J, Liu C, Wenston P. Numerical simulations on two nonlinear biharmonic evolution equations. Appl Anal 2004; 83: 563–577.
46. Lai M-J, Wenston P. Bivariate spline method for numerical solution of steady state Navier–Stokes equations over polygons in stream function formulation. Numer Methods Partial Differential Eq 2000; 16: 147–183.
47. Lai M-J, Wenston P. Trivariate C1 cubic splines for numerical solution of biharmonic equations. In: Trends in approximation theory (Nashville, TN, 2000), Innov Appl Math. Nashville, TN: Vanderbilt University Press, 2001, pp. 225–234.
48. Lai M-J, Wenston P, Ying L. Bivariate C1 cubic splines for exterior biharmonic equations. In: Approximation theory, X (St. Louis, MO, 2001), Innov Appl Math. Nashville, TN: Vanderbilt University Press, 2002, pp. 385–404.
49. Awanou G. Robustness of a spline element method with constraints. J Sci Comput 2008; 36: 421–432.
50. Brenner SC, Scott LR. The mathematical theory of finite element methods, volume 15 of Texts in Applied Mathematics. 3rd ed. New York: Springer, 2008.
51. Chambolle A, Pock T. A first-order primal-dual algorithm for convex problems with applications to imaging. J Math Imaging Vis 2011; 40: 120–145.
52. Bartels S. Total variation minimization with finite elements: convergence and iterative solution. SIAM J Numer Anal 2012; 50: 1162–1180.
53. Awanou G, Lai M-J, Wenston P. The multivariate spline method for scattered data fitting and numerical solution of partial differential equations. In: Wavelets and splines: Athens 2005, Mod Methods Math. Brentwood, TN: Nashboro Press, 2006, pp. 24–74.
54. The MathWorks, Inc. MATLAB (R2011a). Natick, MA, April 2011.