This article studies the parameter estimation of the system response from discrete measurement data. By constructing dynamical rolling cost functions and applying nonlinear optimization, a gradient identification method is presented for estimating the parameters of a sine response signal with double frequency. In order to overcome the difficulty of determining the step size and to reduce the influence of noise, a stochastic gradient identification method is derived to estimate the signal parameters. For the purpose of improving the accuracy, a multi-innovation stochastic gradient parameter estimation algorithm is presented using the moving window data. Finally, simulation examples are provided to test the performance of the algorithms.
System identification and parameter estimation are widely used in process control, signal modeling, communication, and electronic technology. The objective of parameter estimation is to obtain the parameter estimates of system models or signal models.1–3 In general, a parameter estimation algorithm can be derived by defining and minimizing a cost function based on the measurement data.4–6 The optimization method is one of the critical factors for a parameter estimation algorithm, and many optimization methods are used in system identification and parameter estimation. Li et al.7 considered parameter estimation algorithms for Hammerstein output error systems using the Levenberg–Marquardt optimization method. Xu et al.8 presented a parameter estimation method based on Newton optimization. Ding et al.9 studied a least squares iterative identification algorithm for multivariate pseudo-linear autoregressive moving average (ARMA) systems. Wang and Xun10 considered a recursive least squares identification method for a class of nonlinear multiple-input single-output systems. Wang et al.11 suggested a maximum likelihood estimation method for dual-rate Hammerstein systems. In short, different identification methods can be derived by means of different optimization methods. For linear problems, the least squares method is effective; for nonlinear problems, however, nonlinear optimization must be used.12,13
System identification aims to obtain system models, and the system model is the basis of system control.14–16 For a control system, the system responses contain abundant information about the system parameters. The system response at discrete times can be collected by means of measurement instruments. In process control, the system response is obtained by applying some typical signals. In system identification, the impulse signal, the step signal, and the sine signal are widely used to generate the measurement data for estimating system parameters. Many identification methods have been presented based on the impulse response experiment, the step response experiment, and the frequency response experiment. Ahmed et al.17 studied an identification method from step responses with transient initial conditions. Fedele18 considered a method to estimate a first-order plus time-delay model from the step response. Xu and Ding19 presented a damping iterative parameter identification method for dynamical systems based on the sine signal. Hidayat and Medvedev20 studied the Laguerre domain identification of continuous linear time-delay systems from impulse response data. This article studies parameter estimation based on the response signals.
The accuracy, the computational complexity, and the robustness21–23 are the main aspects of identification algorithm performance.24,25 Many identification algorithms focus on enhancing the accuracy and robustness and on reducing the complexity.26–28 Wang and Ding29–31 considered filtering algorithms to reduce the algorithm's complexity. Mao and Ding32 applied the filtering technique to Hammerstein controlled autoregressive systems to reduce the complexity. Na et al.33 studied robust adaptive finite-time parameter estimation for robotic systems. Increasing the amount of measurement data used in the estimation computation is an effective way to improve the accuracy.34,35 Moreover, some methods based on model decomposition or parameter decomposition can reduce the complexity.36,37 Wang and colleagues38,39 suggested hierarchical estimation algorithms to reduce the complexity. Wang and colleagues40,41 considered a multi-innovation parameter estimation method for Hammerstein nonlinear systems to enhance the accuracy. The objective of this article is to propose identification algorithms with high accuracy and low complexity. In system identification, recursive computation and iterative computation are used to obtain the parameter estimates.42–45 Wang and Ding46,47 studied recursive identification algorithms for nonlinear systems. Xu48 proposed the iterative Newton method to estimate the system parameters from the step response. Wang and Ding49 studied a recursive parameter and state estimation algorithm for an input nonlinear state space system. In general, recursive computation can use online data; therefore, recursive algorithms can be used for online identification.
The rest of this article is organized as follows. Section “Problem description” describes the identification and estimation problem from the system response. Section “The recursion gradient identification algorithm” derives the gradient identification method. Section “The SG identification method based on the dynamical data” derives the stochastic gradient (SG) parameter estimation method. Section “The multi-innovation stochastic gradient identification based on the moving window data” derives the multi-innovation gradient parameter estimation method. Section “Examples” gives some examples to illustrate the performance of the proposed methods. Finally, section “Conclusion” gives some concluding remarks.
Problem description
In process control, a linear time-invariant system can be described by a transfer function. System modeling is to obtain the system parameters; this problem is called parameter estimation. In fact, it is difficult to obtain the system parameters directly. The system response is the system output produced by applying some excitation signals. The impulse signal, the step signal, and the sine signal are widely used as excitation signals in practical engineering. The excitation signals are the input signals; once they are applied to the system to be modeled, the system generates the responses. Therefore, the system responses contain the system characteristic information. In order to obtain the system parameters, we can use the system response information to derive a parameter estimation algorithm, and the parameter estimates can then be obtained using that algorithm.
According to the definition of the transfer function, the Laplace transform of the output equals the transfer function multiplied by the Laplace transform of the input, and the system response is the inverse Laplace transform of the output. For a process control system, different input signals lead to different responses, such as the impulse response, the step response, and the frequency response. In general, the system response is a function containing exponential terms or sine terms; therefore, the response is a highly nonlinear function with respect to the system parameters. This nonlinear form causes difficulties for estimating the system parameters from the response measurement data. The identification algorithm can be realized by constructing and minimizing a cost function of the system parameters, and the problem of minimizing the cost function is then converted into a nonlinear optimization.
The measurement data contain the parameter information, and how the measurement data are used is an important factor for an identification method. Suppose that the signal is sampled with a fixed sampling period; the measurement data are then the values of the response function at the sampling instants. Because the measurement data contain measurement noise, the measured output is not equal to the response model output. If the model output is very close to the measured output, the model is effective for describing the system characteristics. The model structure and the parameters are the components of the system model: after the model structure is determined, the model parameters are obtained by the parameter estimation method. According to the different measurement data used in the algorithm, we can construct different cost functions. For the single datum at the current time, the cost function can be defined as
For a batch of measurement data, with a fixed batch length, define the cost function based on the batch data
From these definitions, we can see that the cost functions are rolling optimization functions that change with the sampling time, so the real-time measurement data can be used to estimate the system parameters. The purpose of this article is to propose parameter estimation methods based on nonlinear dynamical rolling optimization and the discrete response measurement data.
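These two rolling cost functions can be sketched in Python. The double-frequency model form f(t) = a·sin(ωt) + b·sin(2ωt) and the function names are illustrative assumptions, not the article's exact notation.

```python
import numpy as np

def model(t, theta):
    # assumed double-frequency sine form: a*sin(w*t) + b*sin(2*w*t)
    a, b, w = theta
    return a * np.sin(w * t) + b * np.sin(2 * w * t)

def cost_single(theta, t_k, y_k):
    """Rolling cost built from the single measurement at time t_k."""
    return 0.5 * (y_k - model(t_k, theta)) ** 2

def cost_batch(theta, t_win, y_win):
    """Cost built from a batch (window) of measurements."""
    return 0.5 * np.sum((y_win - model(t_win, theta)) ** 2)
```

As the sampling time advances, t_k (or the window t_win) slides forward, so both costs are re-evaluated on fresh data at every recursion; this is the rolling character noted in the text.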
The recursion gradient identification algorithm
Consider a response of a combinatorial sine signal
where and are amplitudes, is the angular frequency, and is a constant.
Suppose that the observation time is . The measurement data at time are . In practice, the measurement data at time contain measurement errors. Therefore, the measurement data are not equal to the model output . Define the difference between the model output and the measured output as
If the error is very small, the signal model fits the measurements well. This problem can be converted into minimizing a cost function. Define the parameter vector . Define the cost function
Because the measurement data vary with the sampling time , the observation data in the cost function vary with the sampling time. Therefore, the dynamical measurement data are used in the cost function . The parameter estimates at time can be obtained by minimizing .
Taking the first-order derivative of with respect to , we can obtain the gradient vector
Let be the estimate of at time , that is, the parameter estimate at the recursion . Based on the negative gradient search for minimizing the cost function , the gradient algorithm (also called the least mean square (LMS) algorithm) for estimating the double-frequency sine signal is given by
The steps of computing the parameter estimates using the recursion gradient (RG) identification algorithm (10)–(15) are as follows:
Compute the parameter estimates using equation (12);
Increase by 1 and go to Step (2).
Remark 1
The step size at each recursion can be determined by a one-dimensional search. However, it is very difficult to obtain the step size by solving the corresponding equation directly. We can use a trial-and-error (cut-and-try) method to determine the step size, or use an optimization method such as the Newton iterative method or the gradient method.
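One recursion of the RG update with a cut-and-try step size can be sketched as below. The model form f(t) = a·sin(ωt) + b·sin(2ωt), the parameter ordering (a, b, ω), and the starting step size with halving are illustrative assumptions rather than the article's exact formulation.

```python
import numpy as np

def model(t, th):
    # assumed double-frequency sine form: a*sin(w*t) + b*sin(2*w*t)
    a, b, w = th
    return a * np.sin(w * t) + b * np.sin(2 * w * t)

def grad_model(t, th):
    # gradient of the model output with respect to theta = (a, b, w)
    a, b, w = th
    return np.array([np.sin(w * t),
                     np.sin(2 * w * t),
                     a * t * np.cos(w * t) + 2 * b * t * np.cos(2 * w * t)])

def rg_step(th, t_k, y_k, mu0=1.0, shrink=0.5, tries=30):
    """One recursion of the gradient (LMS-type) update: try a step size mu0
    and halve it until the single-point cost decreases (cut-and-try search)."""
    e = y_k - model(t_k, th)          # innovation at time t_k
    g = grad_model(t_k, th) * e       # negative gradient of 0.5*e**2
    cost = 0.5 * e ** 2
    mu = mu0
    for _ in range(tries):
        th_new = th + mu * g
        if 0.5 * (y_k - model(t_k, th_new)) ** 2 < cost:
            return th_new
        mu *= shrink
    return th                         # no improving step found; keep estimate
```

Sweeping rg_step over the successive samples gives the RG recursion; the cost at the processed sample never increases, which mimics the one-dimensional search of Remark 1 at low cost.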
The SG identification method based on the dynamical data
The gradient parameter estimation algorithm needs to optimize the step size at each recursion . For the sine response signal, it is complicated to obtain the step size by a one-dimensional search, and as the recursion proceeds, the step size does not tend to zero, so the algorithm is sensitive to noise. In order to avoid this sensitivity and the complicated computation of determining the step size, we present the SG parameter estimation method, which determines the step size automatically at each recursion.
Define the cost function based on the dynamical data
Define the information vector
Then, the gradient of the cost function can be represented as
Let be the recursive parameter estimate of the parameter vector at the recursion . Then, we have
In order to reduce the sensitivity to the noise, the step size is set to
Based on the negative gradient search and minimizing the cost function , we can obtain the SG algorithm for estimating the sine response
The steps of computing the parameter estimates using the SG identification algorithm (equations (21)–(24)) are as follows:
To initiate: let ; let be a real vector; pre-set the recursion length ;
Compute the parameter estimates using equation (21);
If , terminate the recursive process and obtain the characteristic parameters from equation (24); otherwise, increase by 1 and go to Step (2).
Remark 2
The gradient method and the SG method differ in how the step size is determined. For the SG method, the step size is set using the information vector and updates automatically as the recursion increases.
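The SG recursion can be sketched as follows: the step size 1/r(t) is accumulated from the squared norm of the information vector, so no line search is needed. The model form f(t) = a·sin(ωt) + b·sin(2ωt) and all names are illustrative assumptions, not the article's notation.

```python
import numpy as np

def model(t, th):
    # assumed double-frequency sine form: a*sin(w*t) + b*sin(2*w*t)
    a, b, w = th
    return a * np.sin(w * t) + b * np.sin(2 * w * t)

def info_vector(t, th):
    # information vector phi(t): partial derivatives of the model output
    a, b, w = th
    return np.array([np.sin(w * t),
                     np.sin(2 * w * t),
                     a * t * np.cos(w * t) + 2 * b * t * np.cos(2 * w * t)])

def sg_estimate(ts, ys, th0, r0=1.0):
    """SG recursion: theta(t) = theta(t-1) + phi(t)/r(t) * e(t),
    with r(t) = r(t-1) + ||phi(t)||^2 setting the step size automatically."""
    th, r = np.asarray(th0, dtype=float).copy(), r0
    for t_k, y_k in zip(ts, ys):
        phi = info_vector(t_k, th)
        r += phi @ phi                # step size 1/r(t) shrinks over time
        e = y_k - model(t_k, th)      # scalar innovation
        th = th + phi / r * e
    return th
```

Because r(t) is non-decreasing, the gain 1/r(t) decays automatically, which is what makes the SG update less sensitive to noise than a fixed-step gradient recursion.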
The multi-innovation stochastic gradient identification based on the moving window data
The gradient identification algorithm and the SG algorithm use only the current datum at the sampling time , so the accuracy is low. In order to enhance the estimation accuracy, we use a batch of the latest data to construct the cost function. For a system identification algorithm, the more measurement data are used, the higher the estimation accuracy; however, using too many measurement data leads to heavy computational complexity. Therefore, we propose to use the moving window measurement data to derive the parameter estimation algorithm.
The moving window measurement data are a batch of data that updates dynamically. The moving window data are represented as , where is the length of the moving window. Based on the moving window data and the multi-innovation theory,50 we derive the parameter identification algorithm for the sine signal with double frequency.
Define the cost function using the moving window data
Taking the first-order derivative of with respect to , the gradient vector is given by
Define the information vector
Define the information matrix based on the window data
Let be the estimate of the parameter vector at time . Define the scalar innovation: . Expanding the scalar innovation into the innovation vector gives
Using the negative gradient search and minimizing the cost function , the multi-innovation stochastic gradient (MISG) algorithm is given by
The steps of computing the parameter estimates using the MISG algorithm (equations (33)–(38)) are as follows:
To initiate: let ; pre-set the innovation length ; let be a real vector; let . Pre-set the recursive length ;
Collect the measurement data . Compute and construct the information vector using equation (37). Form the stack matrix using equation (36);
Update the parameter estimates using equation (33). Obtain the estimates of the characteristic parameters , , and from using equation (38);
If , terminate the recursive process; otherwise, increase by 1 and go to Step (2).
Remark 3
The MISG method is derived from the SG method by expanding the scalar innovation into an innovation vector using the moving window measurement data. The moving window data update dynamically with the recursion. The moving window scheme allows more data to participate in the recursive estimation computation, which improves the estimation accuracy.
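A sketch of the MISG recursion over a moving window of the latest p measurements: the window stacks p information vectors and p innovations. The model form f(t) = a·sin(ωt) + b·sin(2ωt), the symbol p, and the update details are illustrative assumptions.

```python
import numpy as np

def model(t, th):
    # assumed double-frequency sine form: a*sin(w*t) + b*sin(2*w*t)
    a, b, w = th
    return a * np.sin(w * t) + b * np.sin(2 * w * t)

def info_vector(t, th):
    # information vector phi(t): partial derivatives of the model output
    a, b, w = th
    return np.array([np.sin(w * t),
                     np.sin(2 * w * t),
                     a * t * np.cos(w * t) + 2 * b * t * np.cos(2 * w * t)])

def misg_estimate(ts, ys, th0, p=5, r0=1.0):
    """MISG recursion: expand the scalar innovation into a p-dimensional
    innovation vector over the moving window of the latest p measurements."""
    th, r = np.asarray(th0, dtype=float).copy(), r0
    for k in range(len(ts)):
        lo = max(0, k - p + 1)                       # moving data window
        t_win, y_win = ts[lo:k + 1], ys[lo:k + 1]
        Phi = np.array([info_vector(tj, th) for tj in t_win])  # stacked information matrix
        E = y_win - model(t_win, th)                 # innovation vector over the window
        phi_k = info_vector(ts[k], th)
        r += phi_k @ phi_k                           # r(t) = r(t-1) + ||phi(t)||^2
        th = th + Phi.T @ E / r                      # multi-innovation update
    return th
```

With p = 1 the window holds a single datum and the update collapses to the SG recursion; a larger p lets more data participate in each recursion, trading computation for accuracy as Remark 3 describes.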
Examples
In this section, we provide some examples to show the performance of the proposed method.
Example 1
Consider a power signal
where the true values are , , and .
In the simulation, the white noise is applied to the response signal. The noise variance is , , and . Using the proposed RG method to estimate the response parameters, the parameter estimates and their estimation errors are shown in Table 1. The estimation errors versus are shown in Figure 1.
The RG estimates and estimation errors.

Noise level 1:
t    Estimate 1  Estimate 2  Estimate 3  Error (%)
5    5.99699     11.99699    2.90912     20.78537
10   5.98910     11.98911    2.83020     20.43685
20   5.97690     11.97694    2.76322     20.08543
50   5.55476     11.56861    2.36928     14.50865
100  4.89302     10.97619    2.28766     8.77658
200  4.62401     10.74657    2.27957     7.60596
400  5.15293     10.81028    2.27966     7.51556
600  5.03696     10.82366    2.27966     7.51475

Noise level 2:
t    Estimate 1  Estimate 2  Estimate 3  Error (%)
5    5.99688     11.99688    2.92257     20.82673
10   5.98973     11.98974    2.85813     20.52884
20   5.98466     11.98468    2.82768     20.37081
50   5.56719     11.57979    2.38064     14.65151
100  4.97040     11.04183    2.30159     9.29974
200  4.58357     10.74612    2.29354     7.79463
400  5.16766     10.82690    2.29367     7.70886
600  5.02444     10.84434    2.29367     7.71654

Noise level 3:
t    Estimate 1  Estimate 2  Estimate 3  Error (%)
5    5.99670     11.99671    2.94562     20.89880
10   5.99080     11.99081    2.90598     20.69168
20   5.99797     11.99795    2.93829     20.88882
50   5.58861     11.59908    2.39947     14.89849
100  5.10329     11.15457    2.32473     10.30981
200  4.51425     10.74540    2.31673     8.14811
400  5.19292     10.85543    2.31693     8.04617
600  5.00297     10.87984    2.31693     8.06935

True values:  5.00000  10.00000  2.00000

RG: recursion gradient.
The RG parameter estimation errors versus .
Example 2
Consider a power signal with double frequency
where the true values are , , and .
In the simulation, the white noise is applied to the response signal. In order to test the sensitivity to the noise, the noise variance is taken as and , respectively. The ratio of the variance is 8. Using the proposed SG method to estimate the parameters of the power signal in Example 2, the parameter estimates and their estimation errors are shown in Table 2. The estimation errors versus are shown in Figure 2.
The SG estimates and estimation errors.

Noise level 1:
t    Estimate 1  Estimate 2  Estimate 3  Error (%)
1    2.17973     3.24120     4.65368     7.44126
5    2.17142     3.22462     4.58170     8.18873
10   2.17042     3.22450     4.62856     7.56393
20   2.17067     3.22367     4.67715     6.94695
30   2.17059     3.22361     4.68529     6.84691
50   2.17056     3.22360     4.69062     6.78246
100  2.17055     3.22359     4.68915     6.79994

Noise level 2:
t    Estimate 1  Estimate 2  Estimate 3  Error (%)
1    2.18881     3.26753     5.03265     5.33818
5    2.17103     3.23313     4.91726     4.87875
10   2.16979     3.23267     4.97627     4.68840
20   2.16924     3.23136     4.97683     4.66529
30   2.16911     3.23126     4.98486     4.65399
50   2.16906     3.23121     4.99371     4.64754
100  2.16903     3.23120     4.98966     4.64897

True values:  2.00000  3.00000  5.00000

SG: stochastic gradient.
The SG parameter estimation errors versus .
Example 3
Consider a power signal with double frequency
where the true values are , , and .
In the simulation, the dynamical data with different innovation lengths , and are used to estimate the response parameters. The recursion length is . The variance of the white noise is . Using the proposed MISG method to estimate the signal parameters, the parameter estimates and their estimation errors for different are shown in Table 3. The estimation errors versus for different are shown in Figure 3.
The MISG estimates and estimation errors.

Innovation length 1:
t    Estimate 1  Estimate 2  Estimate 3  Error (%)
5    7.44140     2.78157     0.05408     8.73253
10   7.44425     2.77984     0.06366     8.64720
50   7.44828     2.77983     0.38174     7.07687
100  7.44832     2.77984     0.43586     6.98053
150  7.44833     2.77983     0.46594     6.95156
200  7.44833     2.77983     0.48631     6.94204
250  7.44833     2.77982     0.50131     6.94025
300  7.44832     2.77982     0.51286     6.94190

Innovation length 5:
t    Estimate 1  Estimate 2  Estimate 3  Error (%)
5    7.52281     2.82715     −0.38684    11.93881
10   7.52850     2.82367     −0.36768    11.72066
50   7.53656     2.82367     0.26848     6.39399
100  7.53665     2.82367     0.37671     5.96902
150  7.53666     2.82366     0.43689     5.83930
200  7.53666     2.82365     0.47762     5.79851
250  7.53665     2.82365     0.50762     5.79337
300  7.53665     2.82364     0.53072     5.80387

Innovation length 10:
t    Estimate 1  Estimate 2  Estimate 3  Error (%)
5    7.60421     2.87272     −0.82775    16.25640
10   7.61275     2.86751     −0.79901    15.91338
50   7.62485     2.86750     0.15522     6.15136
100  7.62497     2.86751     0.31757     5.11276
150  7.62499     2.86750     0.40783     4.77032
200  7.62498     2.86748     0.46893     4.66142
250  7.62498     2.86747     0.51392     4.65022
300  7.62497     2.86746     0.54858     4.68202

Innovation length 20:
t    Estimate 1  Estimate 2  Estimate 3  Error (%)
5    7.76701     2.96387     −1.70959    25.96369
10   7.78126     2.95518     −1.66169    25.39185
50   7.80141     2.95517     −0.07131    7.08639
100  7.80162     2.95519     0.19928     4.24171
150  7.80164     2.95516     0.34972     2.95448
200  7.80164     2.95514     0.45156     2.44271
250  7.80163     2.95512     0.52654     2.39653
300  7.80162     2.95510     0.58431     2.57260

True values:  8.00000  3.00000  0.50000

MISG: multi-innovation stochastic gradient.
The MISG parameter estimation errors versus .
From the simulation results, we can draw the following conclusive remarks:
The common feature of the proposed RG, SG, and MISG methods is that the estimation errors decrease as the recursion increases. This implies that the proposed methods are effective for estimating the response parameters.
The simulation results in Example 1 show that the RG method is sensitive to noise: the parameter estimation curves fluctuate severely. The estimation curves given by the SG method in Example 2 show that the estimation errors obtained by the SG method exhibit no serious fluctuation even when the noise variance is large.
The estimation accuracy of the MISG method changes with the innovation length (i.e. the moving window data length), which denotes the number of measurement data participating in each recursion. A larger innovation length yields higher estimation accuracy.
Comparing the three proposed methods, the MISG method achieves more satisfactory results than the RG method and the SG method. Although the RG method and the SG method have lower accuracy, their computational complexity is low.
Conclusion
This article considers the parameter estimation problem of the system response from the discrete measurement data. Using the dynamical measurement data and the moving window measurement data, the RG method, the SG method, and the MISG method are presented for estimating the response parameters. The simulation results show that the proposed algorithms are effective. The proposed methods can be extended to parameter estimation in speech signal processing,51,52 to the self-organizing map,53 and to other fields.54–65
Footnotes
Handling Editor: Nima Mahmoodi
Declaration of conflicting interests
The author declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding
The author disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: This work was supported by the Natural Science Research of Colleges and Universities in Jiangsu Province (No. 16KJB120007) and sponsored by the Qing Lan Project of Jiangsu Province, and the Natural Science Foundation of Jiangsu Province (No. BK20160162).
References
1. Raja MAZ, Chaudhary NI. Two-stage fractional least mean square identification algorithm for parameter estimation of CARMA systems. Signal Process 2015; 107: 327–339.
2. Xu L, Ding F. Recursive least squares and multi-innovation stochastic gradient parameter estimation methods for signal modeling. Circ Syst Signal Pr 2017; 36: 1735–1753.
3. Xu L. The damping iterative parameter identification method for dynamical systems based on the sine signal measurement. Signal Process 2016; 120: 660–667.
4. Ding F, Xu L, Zhu QM. Performance analysis of the generalised projection identification for time-varying systems. IET Control Theory A 2016; 10: 2506–2514.
5. Ding F, Wang FF, Xu L, et al. Parameter estimation for pseudo-linear systems using the auxiliary model and the decomposition technique. IET Control Theory A 2017; 11: 390–400.
6. Wang DQ, Mao L, Ding F. Recasted models based hierarchical extended stochastic gradient method for MIMO nonlinear systems. IET Control Theory A 2017; 11: 476–485.
7. Li JH, Zheng WX, Gu JP, et al. Parameter estimation algorithms for Hammerstein output error systems using Levenberg–Marquardt optimization method with varying interval measurements. J Frankl Inst 2017; 354: 316–331.
8. Xu L, Chen L, Xiong WL. Parameter estimation and controller design for dynamic systems from the step responses based on the Newton iteration. Nonlinear Dynam 2015; 79: 2155–2163.
9. Ding F, Wang FF, Xu L, et al. Decomposition based least squares iterative identification algorithm for multivariate pseudo-linear ARMA systems using the data filtering. J Frankl Inst 2017; 354: 1321–1339.
10. Wang C, Xun J. Novel recursive least squares identification for a class of nonlinear multiple-input single-output systems using the filtering technique. Adv Mech Eng 2016; 8: 1–8.
11. Wang DQ, Zhang Z, Yuan JY. Maximum likelihood estimation method for dual-rate Hammerstein systems. Int J Control Autom 2017; 15: 698–705.
12. Goncalves MLN, Melo JG. A Newton conditional gradient method for constrained nonlinear systems. J Comput Appl Math 2017; 311: 473–483.
13. Witkowski WR, Allen J. Approximation of parameter uncertainty in nonlinear optimization-based parameter estimation schemes. AIAA J 2015; 31: 947–950.
14. Cao X, Zhu DQ, Yang SX. Multi-AUV target search based on bioinspired neurodynamics model in 3-D underwater environments. IEEE T Neur Net Lear 2016; 27: 2364–2374.
15. Chu ZZ, Zhu DQ, Yang SX. Observer-based adaptive neural network trajectory tracking control for remotely operated vehicle. IEEE T Neur Net Lear 2017; 28: 1633–1645.
16. Xu L. A proportional differential control method for a time-delay system using the Taylor expansion approximation. Appl Math Comput 2014; 236: 391–399.
17. Ahmed S, Huang B, Shah SL. Identification from step responses with transient initial conditions. J Process Contr 2008; 18: 121–130.
18. Fedele G. A new method to estimate a first-order plus time delay model from step response. J Frankl Inst 2009; 346: 1–9.
19. Xu L, Ding F. Parameter estimation algorithms for dynamical response signals based on the multi-innovation theory and the hierarchical principle. IET Signal Process 2017; 11: 228–237.
20. Hidayat E, Medvedev A. Laguerre domain identification of continuous linear time-delay systems from impulse response data. Automatica 2012; 48: 2902–2907.
Na J, Yang J, Ren XM, et al. Robust adaptive estimation of nonlinear system with time-varying parameters. Int J Adapt Control 2015; 29: 1055–1072.
23. Na J, Herrmann G, Zhang KQ. Improving transient performance of adaptive control via a modified reference model and novel adaptation. Int J Robust Nonlin 2017; 27: 1351–1372.
24. Wang XH, Ding F. Convergence of the recursive identification algorithms for multivariate pseudo-linear regressive systems. Int J Adapt Control 2016; 30: 824–842.
25. Wang XH, Ding F. Joint estimation of states and parameters for an input nonlinear state-space system with colored noise using the filtering technique. Circ Syst Signal Pr 2016; 35: 481–500.
26. Mao YW, Ding F. A novel parameter separation based identification algorithm for Hammerstein systems. Appl Math Lett 2016; 60: 21–27.
27. Wang DQ. Hierarchical parameter estimation for a class of MIMO Hammerstein systems based on the reframed models. Appl Math Lett 2016; 57: 13–19.
28. Wang DQ, Zhang W. Improved least squares identification algorithm for multivariable Hammerstein systems. J Frankl Inst 2015; 352: 5292–5370.
29. Wang YJ, Ding F. Novel data filtering based parameter identification for multiple-input multiple-output systems using the auxiliary model. Automatica 2016; 71: 308–313.
30. Wang YJ, Ding F. The filtering based iterative identification for multivariable systems. IET Control Theory A 2016; 10: 894–902.
31. Wang YJ, Ding F. The auxiliary model based hierarchical gradient algorithms and convergence analysis using the filtering technique. Signal Process 2016; 128: 212–221.
32. Mao YW, Ding F. Multi-innovation stochastic gradient identification for Hammerstein controlled autoregressive autoregressive systems based on the filtering technique. Nonlinear Dynam 2015; 79: 1745–1755.
33. Na J, Mahyuddin MN, Herrmann G, et al. Robust adaptive finite-time parameter estimation and control for robotic systems. Int J Robust Nonlin 2015; 25: 3045–3071.
34. Xu L, Ding F, Gu Y, et al. A multi-innovation state and parameter estimation algorithm for a state space system with d-step state-delay. Signal Process 2017; 140: 97–103.
35. Pan J, Jiang X, Wan XK, et al. A filtering based multi-innovation extended stochastic gradient algorithm for multivariable control systems. Int J Control Autom 2017; 15: 1189–1197.
36. Ding F, Wang XH. Hierarchical stochastic gradient algorithm and its performance analysis for a class of bilinear-in-parameter systems. Circ Syst Signal Pr 2017; 36: 1393–1405.
37. Ma JX, Xiong WL, Chen J, et al. Hierarchical identification for multivariate Hammerstein systems by using the modified Kalman filter. IET Control Theory A 2017; 11: 857–869.
38. Wang XH, Ding F, Alsaadi FE, et al. Convergence analysis of the hierarchical least squares algorithm for bilinear-in-parameter systems. Circ Syst Signal Pr 2016; 35: 4307–4330.
39. Wang XH, Ding F. Convergence of the auxiliary model based multi-innovation generalized extended stochastic gradient algorithm for Box-Jenkins systems. Nonlinear Dynam 2015; 82: 269–280.
40. Wang XH, Ding F. Modelling and multi-innovation parameter identification for Hammerstein nonlinear state space systems using the filtering technique. Math Comp Model Dyn 2016; 22: 113–140.
41. Wang XH, Ding F, Hayat T, et al. Combined state and multi-innovation parameter estimation for an input nonlinear state space system using the key term separation. IET Control Theory A 2016; 10: 1503–1512.
42. Li MH, Liu XM, Ding F. Least-squares-based iterative and gradient-based iterative estimation algorithms for bilinear systems. Nonlinear Dynam 2017; 89: 197–211.
43. Li MH, Liu XM, Ding F. The maximum likelihood least squares based iterative estimation algorithm for bilinear systems with autoregressive noise. J Frankl Inst 2017; 354: 4861–4881.
44. Li MH, Liu XM, Ding F. The gradient-based iterative estimation algorithms for bilinear systems with autoregressive noise. Circ Syst Signal Pr 2017; 36: 1–28.
45. Wang DQ, Gao YP. Recursive maximum likelihood identification method for a multivariable controlled autoregressive moving average system. IMA J Math Control I 2016; 33: 1015–1031.
46. Wang YJ, Ding F. Recursive least squares algorithm and gradient algorithm for Hammerstein-Wiener systems using the data filtering. Nonlinear Dynam 2016; 84: 1045–1053.
47. Wang YJ, Ding F. Recursive parameter estimation algorithms and convergence for a class of nonlinear systems with colored noise. Circ Syst Signal Pr 2016; 35: 3461–3481.
48. Xu L. Application of the Newton iteration algorithm to the parameter estimation for dynamical systems. J Comput Appl Math 2015; 288: 33–43.
49. Wang XH, Ding F. Recursive parameter and state estimation for an input nonlinear state space system using the hierarchical identification principle. Signal Process 2015; 117: 208–218.
50. Ding F, Wang XH, Mao L, et al. Joint state and multi-innovation parameter estimation for time-delay linear systems and its convergence based on the Kalman filtering. Digit Signal Process 2017; 62: 211–223.
51. Zhao N, Wu MH, Chen JJ. Android-based mobile educational platform for speech signal processing. Int J Elec Eng Educ 2017; 54: 3–16.
52. Wan XK, Li Y, Xia C, et al. A T-wave alternans assessment method based on least squares curve fitting technique. Measurement 2016; 86: 93–100.
53. Cao X, Zhu DQ. Multi-AUV task assignment and path planning with ocean current based on biological inspired self-organizing map and velocity synthesis algorithm. Intell Autom Soft Co 2017; 23: 31–39.
54. Chu ZZ, Zhu DQ, Yang SX. Adaptive sliding mode control for depth trajectory tracking of remotely operated vehicle with thruster nonlinearity. J Navigation 2017; 70: 149–164.
55. Ji Y, Ding F. Multiperiodicity and exponential attractivity of neural networks with mixed delays. Circ Syst Signal Pr 2017; 36: 2558–2573.
56. Pan J, Yang XH, Cai HF, et al. Image noise smoothing using a modified Kalman filter. Neurocomputing 2016; 173: 1625–1629.
57. Feng L, Wu MH, Li QX, et al. Array factor forming for image reconstruction of one-dimensional nonuniform aperture synthesis radiometers. IEEE Geosci Remote S 2016; 13: 237–241.
58. Ding F. Hierarchical multi-innovation stochastic gradient algorithm for Hammerstein nonlinear system modeling. Appl Math Model 2013; 37: 1694–1704.
59. Ding F. Several multi-innovation identification methods. Digit Signal Process 2010; 20: 1027–1039.
60. Wang DQ, Ding F. Performance analysis of the auxiliary models based multi-innovation stochastic gradient estimation algorithm for output error systems. Digit Signal Process 2010; 20: 750–762.
61. Liu YJ, Yu L, Ding F. Multi-innovation extended stochastic gradient algorithm and its performance analysis. Circ Syst Signal Pr 2010; 29: 649–667.
62. Ding F, Liu G, Liu XP. Parameter estimation with scarce measurements. Automatica 2011; 47: 1646–1655.
63. Fan CL, Li HJ, Ren X. The order recurrence quantification analysis of the characteristics of two-phase flow pattern based on multi-scale decomposition. T I Meas Control 2015; 37: 793–804.
64. Wang TZ, Qi J, Xu H, et al. Fault diagnosis method based on FFT-RPCA-SVM for cascaded-multilevel inverter. ISA T 2016; 60: 156–163.
65. Wang TZ, Wu H, Ni MQ, et al. An adaptive confidence limit for periodic non-steady conditions fault detection. Mech Syst Signal Pr 2016; 72–73: 328–345.