We develop a multi-dimensional numerical differentiation method in this paper. To obtain stable numerical derivatives, the method of Tikhonov regularization in Hilbert scales is proposed to deal with the ill-posedness of the problem. The penalty term in the Tikhonov functional is general enough that the regularization parameter can be chosen without knowing the smoothness parameter or the a priori bound of the exact solution. Numerical examples are presented to check the effectiveness of the method.
Numerical differentiation is the problem of determining the derivatives of a function from its perturbed values. It is well known to be ill posed in the sense that a small error in the measurement may cause huge errors in the computed derivatives. It arises in many areas of scientific research and applications, e.g. the identification of discontinuous points in image processing,1,2 Volterra integral equations,3,4 parameter identification,5 etc. A number of techniques have been well developed for numerical differentiation in the one-dimensional case,6–16 but so far only very few results on the high-dimensional case have been reported.17–20 In the literature,8,12,13 the finite difference method has been proposed, where the stepsize serves as the regularization parameter for noisy data. Hence the sample points cannot be too dense, which hinders the application of these methods. In the literature,7,10,11 the Fourier method has been used to solve the numerical differentiation problem, with the regularization parameter chosen a priori. It is well known that an ill-posed problem is usually sensitive to the regularization parameter, and the a priori bound is usually difficult to obtain precisely in practice.
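This instability is easy to observe numerically. The following is a minimal illustration, not taken from the paper: the test function sin(x), the noise level, and the stepsizes are my own choices. Central differences applied to noisy samples deteriorate as the stepsize h decreases, because the total error behaves like O(h^2) + O(delta/h).

```python
import numpy as np

rng = np.random.default_rng(0)

def fd_error(h, delta=1e-3):
    """Max error of the central difference of noisy samples of sin(x)."""
    x = np.arange(0.0, 2 * np.pi, h)
    g_noisy = np.sin(x) + delta * rng.standard_normal(x.size)
    dg = (g_noisy[2:] - g_noisy[:-2]) / (2 * h)   # central difference
    return np.max(np.abs(dg - np.cos(x[1:-1])))   # compare with exact g'

for h in (1e-1, 1e-2, 1e-3, 1e-4):
    print(f"h = {h:g}: max error = {fd_error(h):.2e}")
```

Shrinking h below an optimal value only amplifies the noise, which is why the stepsize itself acts as the regularization parameter in the finite difference approaches.8,12,13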
In the literature,19,20 the Tikhonov regularization method has been considered for the numerical differentiation problem. In this paper, we use the method of Tikhonov regularization in Hilbert scales to solve the problem. We show that the regularization parameter can be chosen by a discrepancy principle in Hilbert scales proposed by Neubauer.21 The smoothness parameter and the a priori bound of the exact solution are not needed for the new method.
This paper is organized as follows. In the next section, the method to construct an approximating function from the noisy data is proposed. The convergence results with the corresponding regularization parameter choice rules are given in "The choices of regularization parameter β and convergence results" section. Some numerical results are given in the "Numerical examples" section to show the effectiveness of the new method.
Definition of the approximate function
Let $g \in L^2(\mathbb{R}^d)$ and let $\hat{g}$ denote the Fourier transform of $g$ defined by

$$\hat{g}(\xi) = (2\pi)^{-d/2} \int_{\mathbb{R}^d} g(x)\, e^{-i x\cdot\xi}\, dx, \qquad \xi \in \mathbb{R}^d,$$

where $x\cdot\xi = \sum_{j=1}^{d} x_j \xi_j$, and $\|\cdot\|_p$ denotes the norm in the Sobolev space $H^p(\mathbb{R}^d)$ defined by

$$\|g\|_p = \left( \int_{\mathbb{R}^d} (1+|\xi|^2)^p\, |\hat{g}(\xi)|^2\, d\xi \right)^{1/2}.$$

When $p = 0$, $\|\cdot\|_0$ denotes the $L^2(\mathbb{R}^d)$ norm. Next, let $\mathbb{N}_0$ be the set of all non-negative integers. For any $d$-tuple $\alpha = (\alpha_1, \dots, \alpha_d) \in \mathbb{N}_0^d$, we write $|\alpha| = \alpha_1 + \cdots + \alpha_d$. The notation $\partial_j$ stands for $\partial/\partial x_j$, and the symbol $D^\alpha$ stands for $\partial_1^{\alpha_1} \cdots \partial_d^{\alpha_d}$.
Our problem is to determine derivatives of a function $g$ from its perturbed data $g^\delta$. We assume that the exact and measured data satisfy

$$\|g - g^\delta\|_0 \le \delta, \qquad (3)$$

where $\delta > 0$ is the noise level.
From the theory of the Fourier transform, we have

$$\widehat{D^\alpha g}(\xi) = (i\xi)^\alpha\, \hat{g}(\xi), \qquad (i\xi)^\alpha = (i\xi_1)^{\alpha_1} \cdots (i\xi_d)^{\alpha_d}.$$

It is apparent that, for $D^\alpha g$ to exist in $L^2(\mathbb{R}^d)$, the Fourier transform of the exact data must decay faster than the rate $|\xi|^{-|\alpha|}$. However, the measured function $g^\delta$ does not possess such a decay property in general. So

$$(2\pi)^{-d/2} \int_{\mathbb{R}^d} (i\xi)^\alpha\, \hat{g}^\delta(\xi)\, e^{i x\cdot\xi}\, d\xi$$

cannot give a reliable approximation for $D^\alpha g$. In the following, we apply the Tikhonov regularization method to reconstruct a new function $f_\beta^\delta$ from the perturbed data $g^\delta$; the derivative $D^\alpha f_\beta^\delta$ will give a reliable approximation of $D^\alpha g$. Before doing that, we impose an a priori bound on the exact data $g$:

$$\|g\|_p \le E,$$

where $E > 0$ is a constant.
Let $f_\beta^\delta$ be the minimizer of the Tikhonov functional

$$J_\beta(f) = \|f - g^\delta\|_0^2 + \beta \|f\|_q^2, \qquad (7)$$

where $\beta > 0$ is a regularization parameter and $q$ is a positive real number. The function $f_\beta^\delta$ is going to occur throughout this paper. It can be verified that $f_\beta^\delta$ is the solution of the following equation

$$f(x) + \beta\, (2\pi)^{-d/2} \int_{\mathbb{R}^d} (1+|\xi|^2)^q\, \hat{f}(\xi)\, e^{i x\cdot\xi}\, d\xi = g^\delta(x).$$

So we can get

$$\hat{f}_\beta^\delta(\xi) = \frac{\hat{g}^\delta(\xi)}{1 + \beta (1+|\xi|^2)^q}.$$

That is to say

$$f_\beta^\delta(x) = (2\pi)^{-d/2} \int_{\mathbb{R}^d} \frac{\hat{g}^\delta(\xi)}{1 + \beta (1+|\xi|^2)^q}\, e^{i x\cdot\xi}\, d\xi.$$

By $f_\beta^\delta$, we can give an approximation of $D^\alpha g$ as follows

$$D^\alpha f_\beta^\delta(x) = (2\pi)^{-d/2} \int_{\mathbb{R}^d} \frac{(i\xi)^\alpha\, \hat{g}^\delta(\xi)}{1 + \beta (1+|\xi|^2)^q}\, e^{i x\cdot\xi}\, d\xi. \qquad (11)$$
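This construction can be sketched discretely with the FFT. The following is my own minimal re-implementation in Python (the paper's experiments use Matlab): the periodic FFT stands in for the continuous Fourier transform, and the grid, test function, and parameter values are illustrative assumptions. The Fourier data are filtered by 1/(1 + beta*(1+|xi|^2)^q) and then multiplied by (i*xi)^order.

```python
import numpy as np

def tikhonov_derivative(g_delta, L, beta, q, order=1):
    """Approximate the order-th derivative of g from noisy samples g_delta
    on a uniform periodic grid over [0, L): filter the FFT coefficients by
    1/(1 + beta*(1 + xi^2)^q), then multiply by (i*xi)^order and invert."""
    n = g_delta.size
    xi = 2 * np.pi * np.fft.fftfreq(n, d=L / n)        # Fourier frequencies
    f_hat = np.fft.fft(g_delta) / (1.0 + beta * (1.0 + xi**2) ** q)
    return np.real(np.fft.ifft((1j * xi) ** order * f_hat))

# Illustrative usage on noisy samples of sin(x)
rng = np.random.default_rng(1)
L, n, delta = 2 * np.pi, 1024, 1e-3
x = np.linspace(0.0, L, n, endpoint=False)
g_noisy = np.sin(x) + delta * rng.standard_normal(n)
d1 = tikhonov_derivative(g_noisy, L, beta=1e-8, q=4, order=1)
print("max error of 1st derivative:", np.max(np.abs(d1 - np.cos(x))))
```

The filter leaves low frequencies (where the signal lives) almost untouched and strongly damps high frequencies (where the noise dominates), which is exactly what the naive differentiation formula fails to do.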
Lemma 1. For , we have
Lemma 2.
where $f_\beta$ is the unique minimizer of (7) with $g$ instead of $g^\delta$.
Proof. Due to the Parseval formula
The proposition follows by applying (12) with b replaced by .
Lemma 3. If
where cβ is defined by
Proof. With the representation
we have
The choices of regularization parameter β and convergence results
For any $\beta > 0$, we define

$$\rho(\beta) = \|f_\beta^\delta - g^\delta\|_0.$$

It is apparent that the function $\rho(\beta)$ is continuous and strictly increasing on $(0, \infty)$ and

$$\lim_{\beta \to 0^+} \rho(\beta) = 0, \qquad \lim_{\beta \to \infty} \rho(\beta) = \|g^\delta\|_0.$$
So we can get the following lemma
Lemma 4. Let $g$, $g^\delta$ satisfy (3) and

$$\|g^\delta\|_0 > \tau\delta \qquad (21)$$

for some τ > 1. Then there is a unique β > 0 such that

$$\rho(\beta) = \|f_\beta^\delta - g^\delta\|_0 = \tau\delta. \qquad (22)$$
In the following, we denote the unique β determined in (22) by $\beta^*$. In the next lemma, we consider the behavior of $\beta^*$.
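In a discrete implementation, the monotonicity of ρ makes the discrepancy equation easy to solve by bisection. Below is a hedged sketch under the same assumptions as before (periodic FFT discretization of the minimizer; the norm scaling, test data, and bracketing interval are my own choices, not the paper's):

```python
import numpy as np

def discrepancy_beta(g_delta, L, q, tau, delta, lo=1e-30, hi=1e2, iters=200):
    """Solve rho(beta) = ||f_beta - g_delta|| = tau*delta by bisection in
    log(beta), using the fact that rho is continuous and increasing."""
    n = g_delta.size
    xi = 2 * np.pi * np.fft.fftfreq(n, d=L / n)
    g_hat = np.fft.fft(g_delta)
    w = (1.0 + xi**2) ** q

    def rho(beta):
        # Residual in Fourier space is (beta*w / (1 + beta*w)) * g_hat;
        # the factor sqrt(L)/n converts the FFT norm to the L2 norm.
        return np.sqrt(L) / n * np.linalg.norm(g_hat * beta * w / (1.0 + beta * w))

    for _ in range(iters):
        mid = np.sqrt(lo * hi)              # midpoint in log scale
        lo, hi = (mid, hi) if rho(mid) < tau * delta else (lo, mid)
    return np.sqrt(lo * hi)

# Illustrative usage: noise with a known discrete L2 size
rng = np.random.default_rng(2)
L, n = 2 * np.pi, 512
x = np.linspace(0.0, L, n, endpoint=False)
noise = rng.standard_normal(n)
delta = 1e-3 * np.sqrt(L / n) * np.linalg.norm(noise)   # ||g - g_delta||
g_delta = np.sin(x) + 1e-3 * noise
beta_star = discrepancy_beta(g_delta, L, q=4, tau=1.01, delta=delta)
print("beta* =", beta_star)
```

Because ρ increases from 0 to the norm of the data, the bracket [lo, hi] always contains the unique root whenever condition (21) holds.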
Lemma 5. Let $g$, $g^\delta$ satisfy (3) and (21) for some τ > 1, then
a)
b)
where
Proof.
Let
then
The rest follows from a).
Now we can prove the main result of this paper.
Theorem 6. Let $g$, $g^\delta$ satisfy (3) and (21) for some τ > 1, and let the approximate derivative be defined by (11) and (22). Then
Proof. With Lemma 2, Lemma 3, and Lemma 5, we obtain
Remark 7. It is obvious that if the a priori bound $\|g\|_p \le E$ holds for some p > 2q, then we have
Numerical examples
In this section, we give some numerical tests to verify the effectiveness of the new method. All examples are computed using Matlab with τ = 1.01.
One-dimensional case
We sample the function at uniformly spaced points $x_i$. The perturbed data are given by

$$g^\delta(x_i) = g(x_i) + \delta_1\, \varepsilon_i,$$

where the random numbers $\varepsilon_i$ are generated by the function randn in Matlab.
Example 1 First we consider the function
It is obvious that $\|g\|_p < \infty$ for any $p > 0$. We will give the numerical results for p = 2, 4, 8, 16, which are displayed in Table 1. We can see that when δ1 decreases from 0.1 to 0.001, the relative errors become smaller, and when p increases, the rates of convergence become larger.
Numerical results of Example 1.

Relative errors of the 1st-order derivative:
  δ1     p = 2      p = 4      p = 8      p = 16
  1e-1   3.38e-02   2.46e-02   2.29e-02   2.17e-02
  1e-2   5.74e-03   3.04e-03   2.36e-03   1.99e-03
  1e-3   1.12e-03   3.95e-04   2.52e-04   1.88e-04

Relative errors of the 2nd-order derivative:
  δ1     p = 2      p = 4      p = 8      p = 16
  1e-1   6.83e-02   3.13e-02   3.04e-02   3.03e-02
  1e-2   2.28e-02   4.65e-03   3.23e-03   2.96e-03
  1e-3   8.40e-03   7.94e-04   3.64e-04   2.99e-04
In Figure 1, we observe that the reconstructed k-th derivatives are very close to the exact derivatives of the corresponding functions.
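The qualitative trend of Table 1 (relative errors shrinking as δ1 decreases) can be reproduced with a short script. This is a hypothetical re-creation in Python, not the paper's Matlab code: the test function sin(x), the grid, and the simple a priori choice β = δ1² are my own assumptions (the paper selects β by the discrepancy principle).

```python
import numpy as np

rng = np.random.default_rng(3)
L, n, q = 2 * np.pi, 1024, 4
x = np.linspace(0.0, L, n, endpoint=False)
xi = 2 * np.pi * np.fft.fftfreq(n, d=L / n)

def relative_error(delta1):
    """Relative L2 error of the regularized 1st derivative of noisy sin(x),
    with the crude a priori parameter choice beta = delta1**2."""
    g_noisy = np.sin(x) + delta1 * rng.standard_normal(n)
    f_hat = np.fft.fft(g_noisy) / (1.0 + delta1**2 * (1.0 + xi**2) ** q)
    d1 = np.real(np.fft.ifft(1j * xi * f_hat))
    return np.linalg.norm(d1 - np.cos(x)) / np.linalg.norm(np.cos(x))

errors = [relative_error(d) for d in (1e-1, 1e-2, 1e-3)]
for d, e in zip((1e-1, 1e-2, 1e-3), errors):
    print(f"delta1 = {d:g}: relative error = {e:.2e}")
```

Even with this crude parameter choice the errors decrease monotonically with the noise level, matching the behavior reported in Table 1.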
Results of Example 1.
Results of Example 2.
Example 2 Consider a function
In this example, the exact function possesses only limited Sobolev smoothness, while we still compute with p = 2, 4, 8, 16. The results are given in Table 2, and we can see that the results are close for different p, which indicates that the method can work well even if p is greater than the smoothness scale of g.
Numerical results of Example 2.

Relative errors of the 1st-order derivative:
  δ1     p = 2      p = 4      p = 8      p = 16
  1e-1   3.87e-02   3.98e-02   4.01e-02   4.00e-02
  1e-2   1.11e-02   1.14e-02   1.18e-02   1.22e-02
  1e-3   3.93e-03   4.30e-03   4.47e-03   4.49e-03

Relative errors of the 2nd-order derivative:
  δ1     p = 2      p = 4      p = 8      p = 16
  1e-1   1.74e-01   1.89e-01   1.92e-01   1.92e-01
  1e-2   1.16e-01   1.21e-01   1.24e-01   1.26e-01
  1e-3   8.27e-02   8.81e-02   9.01e-02   9.04e-02
Two-dimensional case
We sample the function at uniformly spaced grid points $(x_i, y_j)$. The perturbed data are given by

$$g^\delta(x_i, y_j) = g(x_i, y_j) + \delta_1\, \varepsilon_{ij},$$

where the random numbers $\varepsilon_{ij}$ are generated by the function randn in Matlab.
Example 3 We consider the function
for which $\|g\|_p < \infty$ for any $p > 0$. We will give the numerical results for p = 8. We compare $D^k g$ and its reconstruction for k = 1, 2, 3 in Figures 3–5. It can be observed that the approximate derivatives match well with the exact ones.
Results of Example 3: comparison of $D^1 g$ and its numerical approximation.
Results of Example 3: comparison of $D^2 g$ and its numerical approximation.
Results of Example 3: comparison of $D^3 g$ and its numerical approximation.
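The two-dimensional reconstruction works the same way with the 2-D FFT. Below is a hypothetical sketch, not a reproduction of Example 3: the test surface sin(x)cos(y), the grid, and the parameter values are my own choices. The 2-D Fourier data are filtered by the same Hilbert-scale factor and multiplied by i·ξ1 to approximate the partial derivative in x.

```python
import numpy as np

rng = np.random.default_rng(4)
L, n, delta = 2 * np.pi, 256, 1e-3
x = np.linspace(0.0, L, n, endpoint=False)
X, Y = np.meshgrid(x, x, indexing="ij")
g_noisy = np.sin(X) * np.cos(Y) + delta * rng.standard_normal((n, n))

# 2-D frequencies and the Hilbert-scale Tikhonov filter
k = 2 * np.pi * np.fft.fftfreq(n, d=L / n)
XI1, XI2 = np.meshgrid(k, k, indexing="ij")
beta, q = 1e-8, 4
f_hat = np.fft.fft2(g_noisy) / (1.0 + beta * (1.0 + XI1**2 + XI2**2) ** q)

# Partial derivative in x: multiply by i*xi_1 and invert
dx = np.real(np.fft.ifft2(1j * XI1 * f_hat))
err = np.max(np.abs(dx - np.cos(X) * np.cos(Y)))
print("max error of d/dx:", err)
```

Nothing in the filter is specific to d = 2, which reflects the remark in the Conclusion that the method applies in any finite dimension.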
Conclusion
In this paper, we apply Tikhonov regularization with a more general regularization functional to reconstruct numerical derivatives. Convergence rates for approximating the derivatives are obtained when the regularization parameter is chosen by a discrepancy principle. The a priori information of the exact solution is not needed, and the convergence rates are self-adaptive. The method can calculate the derivatives of a function in any finite number of dimensions. All the examples show that the proposed scheme is effective and stable.
Declaration of conflicting interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding
The author(s) disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: The project is supported by the National Natural Science Foundation of China (No. 11201085).
References
1. Deans S. The Radon transform and some of its applications. New York: Wiley-Interscience, 1983.
2. Wan X, Wang Y and Yamamoto M. Detection of irregular points by regularization in numerical differentiation and application to edge detection. Inverse Probl 2006; 22: 1089–1103.
3. Cheng J, Hon Y and Wang Y. A numerical method for the discontinuous solutions of Abel integral equations. In: Inverse problems and spectral theory: proceedings of the workshop on spectral theory of differential operators and inverse problems, Kyoto, Japan, 28 October–1 November 2002.
4. Gorenflo R and Vessella S. Abel integral equations (Lecture Notes in Mathematics, vol 1461). Berlin: Springer, 1991.
5. Hanke M and Scherzer O. Error analysis of an equation error method for the identification of the diffusion coefficient in a quasi-linear parabolic differential equation. SIAM J Appl Math 1999; 59: 1012–1027.
6. Cullum J. Numerical differentiation and regularization. SIAM J Numer Anal 1971; 8: 254–265.
7. Fu C, Feng X and Qian Z. Wavelets and high order numerical differentiation. Appl Math Model 2010; 34: 3008–3021.
8. Groetsch CW. Differentiation of approximately specified functions. Amer Math Monthly 1991; 98: 850.
Wei T, Hon Y and Wang Y. Reconstruction of numerical derivatives for scattered noisy data. Inverse Probl 2005; 21: 657–672.
21. Neubauer A. An a posteriori parameter choice for Tikhonov regularization in Hilbert scales leading to optimal convergence rates. SIAM J Numer Anal 1988; 25: 1313–1326.
22. Natterer F. Error bounds for Tikhonov regularization in Hilbert scales. Applicable Anal 1984; 18: 29–37.