Zichun Yang, Lei Zhang and Yueyun Cao
Inverse problems often arise in engineering practice. They originate from various fields such as acoustics, optics, model updating, computerized tomography, statistics, load identification and signal processing. When we encounter the term inverse problem, we immediately ask "inverse to what?". Roughly speaking, an inverse problem is a general framework used to convert observed measurements into information about a physical object or system in which we are interested. Thus, one might say that inverse problems are concerned with determining causes for a desired or an observed effect [Heinz, Martin and Andreas (1996)].
As we will see, most inverse problems are ill-posed. Therefore, the conditions required for the stability of solutions of a well-posed problem are often violated. That is to say, the measured data may not admit a solution; the solution may not be unique; and, furthermore, it may not be stable under disturbances in the data [Heinz, Martin and Andreas (1996)]. A central example of a linear inverse problem is the Fredholm integral equation of the first kind, introduced in [Aster and Borchers (2004); Liu and Atluri (2009a)]; in one dimension it reads
The RTLS problem has been investigated in its algebraic setting for decades. There are two kinds of RTLS methods, analogous to the truncated SVD and the Tikhonov regularization based on LS. The former is called truncated total least squares (TTLS) and has already been studied by Fierro, Golub, Hansen and O'Leary (1997); the motivation for truncation is that the singular vectors corresponding to smaller and smaller singular values have rising complexity, meaning that they exhibit more and more sign changes and oscillations [Sima and Huffel (2007)]. The other typical RTLS method for solving ill-posed problems is the Tikhonov regularization approach, where the main emphasis has been on quadratically constrained TLS problems [Sima, Huffel and Golub (2004)]. The regularization approach with a quadratic constraint is highly suitable when some knowledge about the characteristics of the exact solution is known a priori. However, it is difficult to obtain prior knowledge about the true solution and the magnitude of the noise. Recently, Schaffrin and Wieser (2008) derived the RTLS solution using a non-linear Lagrange function approach that can be implemented by a suitable and efficient iteration algorithm. Unfortunately, its convergence rate is rather slow, and the convergence properties of such methods for Tikhonov RTLS problems are not guaranteed.
In this paper, our purpose is to tackle linear discrete ill-posed problems by two novel regularization methods in the TLS setting described in Section 2. The first extends the Lanczos TTLS algorithm to an iterative TTLS method that solves a convergent sequence of projected linear systems (Section 2.1). The second, in Section 2.2, is an iterative RTLS method based on the conjugate gradient algorithm, which includes three novel schemes: establishing a modified unconstrained optimization problem with a convex objective; giving an adaptive strategy for selecting the regularization parameter; and using a state-of-the-art CG method to solve the modified unconstrained optimization problem. In Section 3, several numerical examples related to Fredholm integral equations of the first kind are presented, and the efficiency and robustness of the two novel algorithms, compared with other typical regularization methods, are demonstrated. Concluding remarks can be found in Section 4.
The TLS method is a generalized version of the original least squares method. Let the errors-in-variables model be defined by the functional relationship
Algorithm 1
1) execute the SVD of the augmented matrix (A, b)
3) block-partition V ∈ R^((n+1)×(n+1)) as
4) compute the TTLS solution x_TTLS,k as
The TTLS method, which simultaneously accounts for errors and noise on both sides of the equation, can be applied to ill-posed problems, especially when there is an obvious gap in the singular value spectrum, and its performance is usually better than that of the conventional Tikhonov method. However, when the dimensions of A become large, the efficiency and robustness of this approach deteriorate because the SVD is computationally expensive. We shall therefore describe a Lanczos technique that projects large-scale TLS problems onto smaller subspaces and may thus improve the efficiency of the TTLS algorithm.
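As a concrete illustration of Algorithm 1, the following sketch (in Python/NumPy rather than the MATLAB used later in the paper) computes the TTLS solution from the SVD of the augmented matrix; the closed form x_TTLS,k = -V12 V22^+ assumed for step 4 is the standard one from [Fierro, Golub, Hansen and O'Leary (1997)]:

```python
import numpy as np

def ttls(A, b, k):
    """Truncated TLS: SVD of the augmented matrix (A, b), keeping the
    first k right singular directions (Algorithm 1, steps 1-4)."""
    n = A.shape[1]
    _, _, Vt = np.linalg.svd(np.column_stack([A, b]))
    V = Vt.T                 # right singular vectors, (n+1) x (n+1)
    V12 = V[:n, k:]          # top-right block of the partition
    V22 = V[n:, k:]          # bottom-right block, 1 x (n+1-k)
    # x_TTLS,k = -V12 V22^+  (pseudoinverse of the single-row block V22)
    return -V12 @ V22.T / (V22 @ V22.T)
```

For a noise-free consistent system, taking k = n reproduces the plain TLS solution exactly.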
The typical algorithm is the Lanczos bidiagonalization method, which generates a sequence of bidiagonal matrices. The extremal singular values of these bidiagonal matrices are progressively better estimates of the extremal singular values of the given matrix [Golub, Hansen and O'Leary (1999)]. The main advantages of the Lanczos method are that the original matrix is not overwritten and that little storage is required, since only matrix-vector products are computed. Therefore, the cost of computing the SVD may become much more attractive, which makes the Lanczos method interesting for large matrices, especially when they are sparse and fast routines for computing matrix-vector products exist.
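A minimal sketch of the Golub-Kahan (Lanczos) bidiagonalization, in Python/NumPy and without the reorthogonalization that a robust implementation would add, illustrates the matrix-vector-products-only structure:

```python
import numpy as np

def golub_kahan(A, b, k):
    """k steps of Golub-Kahan bidiagonalization started from b.
    Returns U (m x (k+1)), V (n x k) and lower-bidiagonal B ((k+1) x k)
    such that A @ V = U @ B in exact arithmetic."""
    m, n = A.shape
    U = np.zeros((m, k + 1))
    V = np.zeros((n, k))
    B = np.zeros((k + 1, k))
    U[:, 0] = b / np.linalg.norm(b)
    for j in range(k):
        # alpha_{j+1} v_{j+1} = A^T u_{j+1} - beta_{j+1} v_j
        r = A.T @ U[:, j] - (B[j, j - 1] * V[:, j - 1] if j > 0 else 0.0)
        alpha = np.linalg.norm(r)
        V[:, j] = r / alpha
        B[j, j] = alpha
        # beta_{j+2} u_{j+2} = A v_{j+1} - alpha_{j+1} u_{j+1}
        p = A @ V[:, j] - alpha * U[:, j]
        beta = np.linalg.norm(p)
        U[:, j + 1] = p / beta
        B[j + 1, j] = beta
    return U, V, B
```

Only products with A and A^T appear, which is why sparsity or fast kernels pay off directly.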
Thus, after k Lanczos iterations, the projected TLS problem mentioned in Eq. (4) is given by
To overcome these deficiencies, we note that it is easy to extend the above algorithm to an iterative TTLS method without any prior knowledge. This method solves a convergent sequence of projected linear systems generated by the Lanczos bidiagonalization, which is a potentially inexpensive task. The structure of this algorithm is as follows.
Algorithm 2 (iterative Lanczos TTLS, called I-LTTLS)
2) for k = 1, 2, ... until convergence do
3) obtain the projected TLS problem of (4) based on Lanczos bidiagonalization
4) compute the TTLS solution y_k,l via Algorithm 1, where l denotes the truncation parameter
6) end for
We now discuss in detail how to execute Algorithm 2 efficiently.
· We apply a modified generalized cross-validation (GCV) criterion, combined with the TTLS method, to obtain the truncation parameter l in step 4; this criterion was proposed in [Sima and Huffel (2007)], see Eq. (7)
The Lagrangian of the problem (8) is given by
Substituting (11) into (10), we have
Combining (12) with (13), we conclude
It can be observed that Eq. (14) is an unconstrained optimization problem which is not known to be convex or concave in general. In Beck and Ben-Tal (2006), computing a value and a derivative of the objective in (14) consists of solving a sequence of trust-region subproblems. The suggested TRTLSG algorithm converges to the global minimum when the function f(x) is unimodal. If, for some reason, f(x) is not unimodal, the TRTLSG algorithm does not necessarily converge to the global minimum, and a more sophisticated one-dimensional global solver should be employed.
The classical Newton iteration has been used to tackle the unconstrained optimization problem (14) in [Maziar and Hossein (2009)]; it is an extremely powerful technique whose convergence is, in general, quadratic. The Newton method requires that the gradient and Hessian of the objective function be computed directly. However, analytical expressions for these derivatives may not be easily obtainable and may be expensive to evaluate. When the method fails to converge, it is usually because an assumption made in the convergence proof, such as positive definiteness of the second derivative, is not met. Lampe and Voss (2013) proposed an iterative projection method that is efficient for solving large-scale TLS problems. This algorithm requires a suitable starting basis, namely an orthogonal basis of a Krylov space, which greatly influences the computational efficiency and is hard to determine. The main computational cost again lies in building up the search space which, in general, is not a Krylov subspace; in particular, the new basis vectors cannot be computed with a short recurrence relation.
The aim of this section is to propose a CG method to solve the Tikhonov RTLS problem (14). The main difficulty associated with problem (14) is its nonconvexity. This deficiency may result in a non-convergent sequence, i.e., the global optimal solution cannot be reached, and may make the CG algorithm ineffective or difficult to implement. Nevertheless, we propose in this section several novel schemes to solve the unconstrained optimization problem (14) efficiently and stably.
It is obvious that the iteration in Eq. (15) has adaptive characteristics that fully reflect the continuity of the recovery process. More importantly, the original Eq. (14) with a nonconvex objective is transformed into a problem with a convex objective, which greatly simplifies the optimization and significantly improves the computational efficiency.
Oraintara, Karl, Castanon and Nguyen (2000) proposed an algebraic condition for choosing the optimal regularization parameter of regularized LS. The main idea is to identify the corner of the L-curve as the point of tangency between a straight line of arbitrary slope and the L-curve. The main restrictions are that the objective should be a differentiable, non-negative and convex scalar function of its vector arguments. Owing to these restrictions, the algebraic method cannot be used directly to acquire the regularization parameter for the RTLS optimization problem (14). Fortunately, for the modified optimization problem (15), which has a convex objective, the algebraic method can be extended to obtain the regularization parameter of Tikhonov RTLS. This is also an important ingredient in making the conjugate gradient method converge to a global minimum.
For convenience in what follows, the function of the parameter λ is
As a consequence, we demonstrate that extreme points of ξ(λ) are fixed points of a related function, and a fixed-point iterative algorithm for computing the optimal parameter λ is as follows
In particular, if λ_k converges, it is guaranteed to converge to the L-corner. This formula chooses the regularization parameter adaptively and achieves higher efficiency thanks to the convexity of problem (15).
Therefore, we propose three novel schemes in this section for solving the Tikhonov RTLS problem. First, the modified optimization problem (15), characterized by a convex objective, is established. Second, an adaptive strategy for selecting the regularization parameter is given, which yields better results owing to the convexity established in the first step. Finally, a state-of-the-art CG method is used to solve the unconstrained optimization problem (15). More precisely, this iterative RTLS method based on the conjugate gradient method (called the CGRTLS method) can be described as follows.
Algorithm 3 (the CGRTLS algorithm)
1) set the outer iteration termination tolerance ε, 0 < ε ≪ 1, and the largest admissible number of outer iterations k_max
2) set the inner iteration termination tolerance ξ, 0 < ξ ≪ 1, and the largest admissible number of inner iterations l_max
4) begin outer iteration
(c) else go to step (d)
(e) determine a step size α_l = ρ^j (j = 0, 1, 2, ...) satisfying the Armijo-type condition
with scalars ρ, μ ∈ (0, 1)
4.5) k := k + 1
4.6) until the convergence condition η < ε or k > k_max holds, execute step 5
5) end outer iteration
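The inner loop of Algorithm 3 couples CG search directions with the Armijo-type step α_l = ρ^j of step (e). The following is an illustrative sketch only, not the authors' exact scheme: a nonlinear CG with PR+ directions (in the spirit of [Zhang, Zhou and Li (2006)]) and Armijo backtracking, applied here to a hypothetical quadratic objective:

```python
import numpy as np

def cg_armijo(f, grad, x0, rho=0.5, mu=1e-4, tol=1e-8, maxit=500):
    """Nonlinear CG with PR+ directions and an Armijo-type backtracking
    line search (step alpha = rho**j, j = 0, 1, 2, ...)."""
    x = x0.astype(float).copy()
    g = grad(x)
    d = -g
    for _ in range(maxit):
        if np.linalg.norm(g) < tol:
            break
        if g @ d >= 0:          # safeguard: restart with steepest descent
            d = -g
        # accept the first alpha = rho**j giving sufficient decrease
        alpha, fx, gd = 1.0, f(x), g @ d
        while f(x + alpha * d) > fx + mu * alpha * gd and alpha > 1e-16:
            alpha *= rho
        x_new = x + alpha * d
        g_new = grad(x_new)
        beta = max(0.0, g_new @ (g_new - g) / (g @ g))   # PR+ update
        d = -g_new + beta * d
        x, g = x_new, g_new
    return x
```

The PR+ truncation beta = max(0, ...) acts as an automatic restart, which is one standard way to keep the directions gradient-related under an inexact line search.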
To evaluate the effectiveness of Algorithms 2 and 3, we consider one- and two-dimensional Fredholm integral equations of the first kind, which are known to be severely ill-posed problems. We compare the solutions computed by the two novel algorithms with the solutions obtained from several typical methods, i.e., Tikhonov regularized LS (RLS) [Hansen (2007)], Lanczos TTLS (L-TTLS) established in [Sima and Huffel (2007)] and RTLSQEP introduced in [Lampe and Voss (2012)]. All algorithms are implemented in MATLAB. First, we discuss how to execute these algorithms efficiently for solving the ill-posed inverse problems.
· Tikhonov regularized LS (RLS): we determine the regularization parameter using the L-curve method with λ in the range (10^-10, 10^2), and then choose the regularization matrix L equal to the approximate first-derivative operator, i.e.,
· Lanczos TTLS (L-TTLS): the maximal truncation index is k_max = 15; the truncation index is acquired by the L-curve method.
· I-LTTLS: the truncation parameter is determined by the modified generalized cross-validation (GCV) in Eq. (7); the termination tolerance is ε = 10^-6.
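The RLS baseline can be sketched briefly; a forward-difference form of the first-derivative operator L and a direct normal-equations solve are assumed here (the paper's L-curve selection of λ is not reproduced):

```python
import numpy as np

def tikhonov_rls(A, b, lam):
    """Tikhonov-regularized LS with the approximate first-derivative
    operator L as regularization matrix:
        min_x ||A x - b||^2 + lam^2 ||L x||^2,
    solved via the regularized normal equations (a sketch only)."""
    n = A.shape[1]
    L = np.diff(np.eye(n), axis=0)      # (n-1) x n forward differences
    return np.linalg.solve(A.T @ A + lam**2 * (L.T @ L), A.T @ b)
```

A large λ drives the solution toward the null space of L, i.e., toward a nearly constant vector, which is the smoothing effect the first-derivative operator encodes.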
We want to compare the optimal solutions that can be attained by each of the above methods. To do so, for each algorithm we define the relative error γ between the optimal regularized solution x_TLS and the exact solution. For example, for CGRTLS
as the noise level. In the tests we select the noise levels ρ1 = 1×10^-3 and ρ2 = 1×10^-2.
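The paper's exact noise model is not displayed; one common convention, perturbing both the operator and the right-hand side with Gaussian noise scaled to a relative level ρ, can be sketched as:

```python
import numpy as np

def add_noise(A, b, rho, rng):
    """Perturb both the matrix and the right-hand side with relative
    noise of level rho (an assumed convention, not the paper's exact
    noise model)."""
    E = rng.standard_normal(A.shape)
    e = rng.standard_normal(b.shape)
    A_noisy = A + rho * np.linalg.norm(A) * E / np.linalg.norm(E)
    b_noisy = b + rho * np.linalg.norm(b) * e / np.linalg.norm(e)
    return A_noisy, b_noisy
```

By construction, ||A_noisy - A|| / ||A|| equals ρ exactly, so the noise level is directly comparable across test problems.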
First, we consider the one-dimensional Fredholm integral equation of the first kind, a classical ill-posed problem. The Fredholm integral equation with a square-integrable kernel is of the form
in which the kernel K represents a known model for the physical phenomenon, the right-hand side T is a given data function, and f is the function to be determined.
To solve (19) numerically, it is necessary to discretize the variables and replace the integral equation by a finite set of linear equations. First, let us discretize the intervals [a, b] and [c, d] into m1 and m2 equal subintervals, respectively. The integral equation can then be replaced by a set of numerical equations
where i = 1, 2, ..., m2, and w_j are the weighting coefficients of the quadrature formula. Using a trapezoidal rule, Eq. (20) can be rewritten as
The above equations may be abbreviated as
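The discretization just described can be sketched directly; a trapezoidal rule on uniform grids is assumed, producing the matrix of the abbreviated linear system in which A f ≈ g on the grid:

```python
import numpy as np

def discretize_fredholm(kernel, a, b, c, d, m1, m2):
    """Trapezoidal-rule collocation of  int_a^b K(s, t) f(t) dt = g(s):
    A[i, j] = w_j * K(s_i, t_j), so that A @ f approximates g."""
    t = np.linspace(a, b, m1)               # quadrature nodes for f
    s = np.linspace(c, d, m2)               # collocation nodes for g
    w = np.full(m1, (b - a) / (m1 - 1))     # trapezoidal weights
    w[0] *= 0.5
    w[-1] *= 0.5
    return kernel(s[:, None], t[None, :]) * w[None, :], s, t
```

For a constant kernel K = 1 on [0, 1] and f = 1, every row of A must sum to the exact integral 1, which is a quick sanity check on the weights.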
We examine our TLS approaches by considering the numerical solution of the following one-dimensional Fredholm integral equations of the first kind.
Example 1: the first problem is the discretization of the inverse Laplace transform by means of Gauss-Laguerre quadrature. The kernel K is given by
and both integration intervals [a, b] and [c, d] are [0, ∞).
Example 2: the other is the well-known one-dimensional Fredholm integral equation of the first kind devised by Phillips [Hansen (2007)], which is described as follows:
Both integration intervals are [−6, 6], where the function φ is
Table 1:The condition numbers of different matrix dimensions in two examples.
Figure 1:The declining ratio of neighboring eigenvalues
Fig. 2 and Fig. 3 show histograms of the relative errors γ for all five regularization methods at different matrix dimensions; the results are obtained over 1000 independent simulations of the same example. It can readily be observed that the RLS method produces worse solutions than the RTLS-type algorithms, probably because RLS cannot efficiently account for the errors in the system matrix. It is also evident that the I-LTTLS, CGRTLS and RTLSQEP methods generate more accurate solutions than the classical L-TTLS regularization method. Here, the effect of random noise on L-TTLS may reduce the accuracy of the solutions and greatly increase their dispersion, making the solutions unstable. Furthermore, the I-LTTLS, CGRTLS and RTLSQEP methods possess lower noise sensitivity; in particular, the robustness of the I-LTTLS algorithm is the best among these methods. The accuracy of the state-of-the-art RTLSQEP algorithm and the CGRTLS algorithm lies in between, with the latter yielding more accurate approximations. The RTLSQEP and L-TTLS algorithms are suitable when some knowledge about the characteristics of the exact solution or the noise is known a priori; however, such knowledge is difficult to obtain in some cases.
Figure 2: Histograms of the optimal relative errors of 1000 test problems solved by five different regularization methods for Example 1, with matrix dimensions m1 = m2 = 20 and noise level ρ1 = 1×10^-3
Figure 3: Histograms of the optimal relative errors of 1000 test problems solved by five different regularization methods for Example 1, with matrix dimensions m1 = m2 = 100 and noise level ρ1 = 1×10^-3
The I-LTTLS algorithm outperforms CGRTLS, L-TTLS, RTLSQEP and RLS in Fig. 2. This is no surprise, as there is a large gap in the singular value spectrum when the matrix dimensions satisfy m1 = m2 = 20. This feature means that it is easy to cut off a certain number of terms in the SVD of the coefficient matrix, and those truncated terms can be regarded as noise lying far away from the singular subspaces carrying the true system energy. In this case, the Tikhonov regularization method may find it difficult to treat the reliable and noisy parts separately and efficiently. In Fig. 3, the CGRTLS algorithm is clearly superior to the other three methods, since the singular values of the matrix decay gradually to zero when the matrix dimensions satisfy m1 = m2 = 100. In that situation it is difficult to determine an appropriate truncation level for truncated TLS, and the smaller singular values that are truncated may carry useful information. Therefore, the distribution of the singular values has a great impact on the performance of regularization algorithms for ill-posed problems.
Figure 4: Histograms of the optimal relative errors of 1000 test problems solved by four different regularization methods for Example 2, with matrix dimensions m1 = 60, m2 = 50 and noise level ρ1 = 1×10^-3
Figure 5: Histograms of the optimal relative errors of 1000 test problems solved by four different regularization methods for Example 2, with matrix dimensions m1 = 60, m2 = 50 and noise level ρ2 = 1×10^-2
Test 2. Our second test problem is generated from Example 2. We consider a rectangular matrix with dimensions m1 = 60, m2 = 50, whose singular values decay gradually to zero and whose condition number is 6.529×10^16; it is therefore a typical ill-conditioned matrix. The results are presented as histograms of the relative error over 1000 independent simulations of the same example. The numerical relative errors γ of all four TLS-based algorithms are shown in the histograms of Fig. 4 and Fig. 5, where the noise levels are ρ1 = 1×10^-3 and ρ2 = 1×10^-2, respectively. Evidently, for the smaller noise level ρ1 = 1×10^-3, the solutions of the four algorithms do not differ much. However, in Fig. 5, the relative errors of all algorithms grow as the noise level increases, and the errors of the I-LTTLS, CGRTLS and RTLSQEP algorithms increase more slowly than those of the L-TTLS algorithm when the noise level rises to ρ2 = 1×10^-2. The CGRTLS algorithm with adaptive selection of the regularization parameter turns out to be slightly superior to the other TLS algorithms. We can also conclude from the relative errors that I-LTTLS is not very sensitive to random noise.
Next, several starting regularization parameters are used to initialize the CGRTLS algorithm, with results averaged over 100 random simulations. The average regularization parameter λ̄ and average relative error γ̄ for various starting regularization parameters λ0 are reported in Table 2. As can be seen, the CGRTLS algorithm has low sensitivity to the initial regularization parameter, since γ̄ and λ̄ are almost the same for different initial values. As a result, rather than using the parameter selection principles described in previous works, an adaptive principle for selecting the regularization parameter can be applied to determine the optimal regularization parameter, with stronger robustness, higher accuracy and a faster convergence rate.
Table 2: The average relative error γ̄ and average regularization parameter λ̄ for various λ0
Example 3: we now apply the two novel TLS regularization methods to an inverse heat conduction problem. The one-dimensional heat conduction problem is described as
where u(x, t) denotes the temperature, x is the spatial variable and t is the time variable; f(x) is the initial condition and D denotes the heat transfer coefficient.
The temperature distribution u(x, t) of the heat conduction problem for a given initial condition is obtained explicitly using separation of variables
Then we can transform Eq. (23) into a Fredholm integral equation of the first kind
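Under the assumption of a unit interval with zero Dirichlet boundary values (the paper's exact domain and boundary conditions are not shown), the separation-of-variables kernel of this Fredholm equation can be sketched as:

```python
import numpy as np

def heat_kernel(x, xi, t1, D=1.0, nterms=50):
    """Separation-of-variables kernel for the 1-D heat equation on [0, 1]
    with zero Dirichlet boundary values (an assumed setting):
        u(x, t1) = int_0^1 K(x, xi) f(xi) dxi,
        K(x, xi) = 2 * sum_n exp(-D n^2 pi^2 t1) sin(n pi x) sin(n pi xi)."""
    n = np.arange(1, nterms + 1)
    decay = np.exp(-D * (n * np.pi) ** 2 * t1)   # modal damping factors
    Sx = np.sin(np.pi * np.outer(x, n))
    Sxi = np.sin(np.pi * np.outer(xi, n))
    return 2.0 * (Sx * decay) @ Sxi.T
```

Sampling this kernel on a grid and applying quadrature weights yields the severely ill-conditioned system whose backward-in-time solution the TLS methods regularize; the exponential decay of the modal factors is exactly what makes the inversion ill-posed.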
The initial temperature distribution computed by the LS method, compared with the exact initial temperature distribution, is given in Fig. 6. It is obvious that a large error occurs when the LS method is used to estimate the initial temperature. The reconstructed solutions of the I-LTTLS and CGRTLS algorithms, seen in Fig. 7 and Fig. 8, are in good agreement with the exact solution. Therefore, the two novel TLS regularization methods solve the backward heat conduction problem efficiently and accurately, even when both the measurements u(x, t1) and the integral operator K are contaminated by random noise. As we can see, the solution of the CGRTLS algorithm is slightly superior to that of the I-LTTLS algorithm, probably because the singular values of the system matrix decay gradually to zero. Next, the initial temperatures reconstructed by the CGRTLS algorithm for several different starting regularization parameters are shown in Fig. 9, and we can again conclude that the CGRTLS algorithm has low sensitivity to the starting regularization parameter.
Consider the following two-dimensional Fredholm integral equations of the first kind
Figure 6: Comparison between the exact and the LS algorithm results
Figure 7: Comparison between the exact and the TLS algorithm results at ρ1 = 1×10^-3
Figure 8: Comparison between the exact and the TLS algorithm results at ρ2 = 1×10^-2
Figure 9: The constructed initial temperature for various λ0
whose exact solution is f(s, t) = s + t, with the kernel
where s, t ∈ Ω1 ⊂ R^2 and u, v ∈ Ω2 ⊂ R^2. We set Ω1 = Ω2 = Ω = [−5, 5] × [−5, 5] and discretize the domains Ω1 and Ω2 into m1 × n1 and m2 × n2 grids, respectively. The two-dimensional Fredholm integral Eq. (25) can then be replaced by a set of numerical equations
where p = 1, 2, ..., m2 and q = 1, 2, ..., n2; the above equations may be written out concretely as
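The tensor-grid collocation just described can be sketched generically; the kernel is passed in as a user-supplied function (the paper's kernel expression is not reproduced here) and trapezoidal weights are assumed:

```python
import numpy as np

def trap_weights(x):
    """Trapezoidal quadrature weights on a 1-D grid."""
    w = np.zeros_like(x, dtype=float)
    w[:-1] += 0.5 * np.diff(x)
    w[1:] += 0.5 * np.diff(x)
    return w

def discretize_fredholm_2d(kernel, grid1, grid2):
    """Collocate a 2-D first-kind Fredholm equation on tensor grids:
    grid1 = (s, t) carries the unknown f(s, t), grid2 = (u, v) the data.
    `kernel(s, t, u, v)` is user-supplied (the paper's kernel is assumed).
    Rows index flattened (u, v) nodes, columns flattened (s, t) nodes."""
    s, t = grid1
    u, v = grid2
    W = np.outer(trap_weights(s), trap_weights(t)).ravel()
    S, T = np.meshgrid(s, t, indexing="ij")
    U, V = np.meshgrid(u, v, indexing="ij")
    K = kernel(S.ravel()[None, :], T.ravel()[None, :],
               U.ravel()[:, None], V.ravel()[:, None])
    return K * W[None, :]
```

Flattening the 2-D grids turns the problem back into an ordinary (m2·n2) × (m1·n1) linear system, so all of the 1-D regularization machinery applies unchanged.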
Figure 10: The four histograms illustrate the statistical distribution of the relative error for the two-dimensional Fredholm integral equation with m1 = n1 = m2 = n2 = 20 and a noise level ρ1 = 1×10^-3
Finally, a sample solution of the two-dimensional Fredholm integral equation computed by the CGRTLS, I-LTTLS and RLS schemes is compared with the exact solution in Fig. 11. The reconstructed solutions of the I-LTTLS and CGRTLS schemes are better than that of the RLS scheme. This is due to the fact that errors in both the system matrix and the right-hand side may produce large errors in the computed results; consequently, the solutions of the regularized TLS schemes, which account for both kinds of error, are more accurate than those of LS-based methods. It can also be seen that the solution of the CGRTLS algorithm is slightly superior to that of the I-LTTLS algorithm, i.e., the former matches the exact solution better. This demonstrates once again that the CGRTLS algorithm generates more accurate solutions than the I-LTTLS algorithm when the singular values decay gradually to zero.
Figure 11: Approximated solutions for different regularization solvers, i.e., RLS, I-LTTLS and CGRTLS
We have proposed two novel iterative algorithms that incorporate regularization and stabilization into the TLS setting. The two algorithms, named I-LTTLS and CGRTLS, are analogous to the truncated SVD and the Tikhonov regularization approaches based on LS, respectively. The I-LTTLS algorithm overcomes the deficiencies of the Lanczos TTLS algorithm, for which obtaining the truncation index k is difficult and the maximal truncation index k_max is a critical precondition. The CGRTLS algorithm chooses the regularization parameter adaptively, achieves higher efficiency than other well-known methods and, moreover, converges to a global minimum. Neither algorithm requires any a priori knowledge of the noise level or the exact solution.
Acknowledgement: The work described in this paper was supported in part by the New Century National Excellent Talents Program of the Ministry of Human Resources and Social Security of China, and in part by the New Century Excellent Talents in University Program funded by the Ministry of Education of China. The first author would also like to acknowledge the support of the Center for Aerospace Research & Education, University of California, Irvine.
Aster, R.; Borchers, B. (2004): Parameter Estimation and Inverse Problems. Elsevier Academic Press.
Beck, A.; Ben-Tal, A. (2006): On the solution of the Tikhonov regularization of the total least squares problem. SIAM Journal on Optimization, vol. 17, pp. 98-118.
Fierro, R. D.; Golub, G. H.; Hansen, P. C.; O'Leary, D. P. (1997): Regularization by truncated total least squares. SIAM Journal on Scientific Computing, vol. 18, no. 4, pp. 1223-1241.
Golub, G. H.; Hansen, P. C.; O'Leary, D. P. (1999): Tikhonov regularization and total least squares. SIAM J. Matrix Anal. Appl., vol. 21, pp. 185-194.
Hansen, P. C.; O'Leary, D. P. (1996): Regularization algorithms based on total least squares. Technical Report CS-TR-3684, pp. 127-137.
Hansen, P. C. (2007): Regularization Tools: a MATLAB package for analysis and solution of discrete ill-posed problems. Numerical Algorithms, vol. 46, pp. 189-194.
Heinz, W. E.; Martin, H.; Andreas, N. (1996): Regularization of Inverse Problems. Kluwer Academic Publishers, Amsterdam.
Ioannou, Y.; Fyrillas, M. M.; Doumanidis, C. (2012): Approximate solution to Fredholm integral equations using linear regression and applications to heat and mass transfer. Engineering Analysis with Boundary Elements, vol. 36, pp. 1278-1283.
Lampe, J. (2010): Solving Regularized Total Least Squares Problems Based on Eigenproblems. Hamburg University of Technology, Institute of Numerical Simulation, Hamburg.
Lampe, J.; Voss, H. (2012): Efficient determination of the hyperparameter in regularized total least squares problems. Applied Numerical Mathematics, vol. 62, pp. 1229-1241.
Lampe, J.; Voss, H. (2013): Large-scale Tikhonov regularization of total least squares. Journal of Computational and Applied Mathematics, vol. 238, pp. 95-108.
Liu, C. S.; Atluri, S. N. (2009a): A fictitious time integration method for the numerical solution of the Fredholm integral equation and for numerical differentiation of noisy data, and its relation to the filter theory. CMES: Computer Modeling in Engineering & Sciences, vol. 41, pp. 243-261.
Liu, C. S. (2009): A new method for Fredholm integral equations of 1D backward heat conduction problems. CMES: Computer Modeling in Engineering & Sciences, vol. 47, pp. 1-21.
Liu, C. S. (2008): Improving the ill-conditioning of the method of fundamental solutions for 2D Laplace equation. CMES: Computer Modeling in Engineering & Sciences, vol. 28, pp. 77-93.
Liu, C. S.; Atluri, S. N. (2009b): A highly accurate technique for interpolations using very high-order polynomials, and its applications to some ill-posed linear problems. CMES: Computer Modeling in Engineering & Sciences, vol. 43, pp. 253-276.
Liu, C. S.; Yeih, W.; Atluri, S. N. (2009): On solving the ill-conditioned system Ax=b: general-purpose conditioners obtained from the boundary-collocation solution of the Laplace equation, using Trefftz expansions with multiple length scales. CMES: Computer Modeling in Engineering & Sciences, vol. 44, pp. 281-311.
Liu, C. S.; Hong, H. K.; Atluri, S. N. (2010): Novel algorithms based on the conjugate gradient method for inverting ill-conditioned matrices, and a new regularization method to solve ill-posed linear systems. CMES: Computer Modeling in Engineering & Sciences, vol. 60, pp. 279-308.
Liu, C. S.; Kuo, C. L. (2011): A dynamic Tikhonov method for solving nonlinear ill-posed problems. CMES: Computer Modeling in Engineering & Sciences, vol. 76, pp. 109-132.
Markovsky, I.; Huffel, S. Van (2007): Overview of total least-squares methods. Signal Processing, vol. 87, pp. 2283-2302.
Maziar, S.; Hossein, Z. (2009): Computational experiments on the Tikhonov regularization of the total least squares problem. Computer Science Journal of Moldova, vol. 49, pp. 14-25.
Micheli, E. De; Viano, G. A. (2011): Fredholm integral equations of the first kind and topological information theory. Integral Equations and Operator Theory, vol. 4, pp. 553-571.
Oraintara, S.; Karl, W. C.; Castanon, D. A.; Nguyen, T. Q. (2000): A method for choosing the regularization parameter in generalized Tikhonov regularized linear inverse problems. Proceedings of the International Conference on Image Processing, vol. 1, pp. 93-96.
Renaut, R.; Guo, H. (2005): Efficient algorithms for solution of regularized total least squares. SIAM J. Matrix Anal. Appl., vol. 26, pp. 457-476.
Sima, D. M.; Huffel, S. Van (2007): Level choice in truncated total least squares. Computational Statistics & Data Analysis, vol. 52, pp. 1103-1118.
Sima, D. M.; Huffel, S. Van; Golub, G. H. (2004): Regularized total least squares based on quadratic eigenvalue problem solvers. BIT Numerical Mathematics, vol. 44, pp. 793-812.
Schaffrin, B.; Wieser, A. (2008): On weighted total least-squares adjustment for linear regression. Journal of Geodesy, vol. 82, pp. 373-383.
Wazwaz, A. M. (2011): The regularization method for Fredholm integral equations of the first kind. Computers and Mathematics with Applications, vol. 61, pp. 2981-2986.
Zhang, L.; Zhou, W. J.; Li, D. H. (2006): A descent modified Polak-Ribière-Polyak conjugate gradient method and its global convergence. IMA Journal of Numerical Analysis, vol. 26, pp. 629-640.