Helin LI, Bin LIN *, Tianyi SUI b, Ling XU, Chen ZHANG, Shui YAN, Yng WANG, Xinqun HAO, Deyu LOU
a School of Mechanical Engineering, Ministry of Education, Tianjin University, Tianjin 300350, China
b International Institute for Innovative Design and Intelligent Manufacturing of Tianjin University, Shaoxing 312000, China
c Science and Technology of Advanced Functional Composite Laboratory, Aerospace Research Institute of Materials and Processing Technology, Beijing 100076, China
d Key Laboratory of Advanced Ceramics and Machining Technology of Ministry of Education, Tianjin University, Tianjin 300072, China
KEYWORDS Hand-eye calibration; Laser displacement sensor; Machine learning; Measurement errors; On-machine measurement
Abstract An on-machine measuring (OMM) system with a laser displacement sensor (LDS) is designed for measuring the free-form surfaces of hypersonic aircraft radomes. To improve the measurement accuracy of the OMM system, a novel Iteratively Automatic machine learning Boosted hand-eye Calibration (IABC) method is proposed. Both the hand-eye relationship and the LDS measurement errors can be calibrated in one calibration process without any hardware changes via IABC. Firstly, a new objective function is derived, containing the analytical parameters of the hand-eye relationship and the LDS errors. Then, a hybrid calibration model composed of two kernels is proposed to solve the objective function. One kernel is the analytical kernel designed for solving the analytical parameters. The other kernel is the automatic machine learning (AutoML) kernel designed to model the LDS errors. The two kernels are connected with stepwise iterations to find the best calibration results. Compared with traditional methods, hand-eye experiments show that IABC reduces the calibration RMSE by about 50%. Verification experiments show that IABC reduces the measurement deviations by about 25%-50% and the RMSEs by up to 40%. Even when the training data are obviously less than the test data, IABC performs well. The experiments demonstrate that IABC is more accurate than traditional hand-eye methods.
The radome is a key component of hypersonic aircraft that protects the entire microwave system and antenna,1 and the geometric accuracy of the radome is one of the vital factors affecting the guidance performance of the aircraft.2,3 On-machine measurement of the radome, followed by analyzing and removing its manufacturing deviations during the manufacturing process, is an effective way to ensure the geometric accuracy of the radome.4 With the growing requirements of aerodynamics and guidance performance, the shapes of radomes have gradually developed from rotary to irregular shapes, free-form surfaces are increasingly adopted in new generations of radomes, and the resulting complexity and diversity have posed considerable challenges to the on-machine measurement of radomes. Therefore, new measurement methods and equipment are urgently needed.
Traditionally, coordinate measuring machines (CMMs) or on-machine measuring (OMM) systems equipped with touch-trigger probes have been widely used to measure free-form parts.5,6 Although high accuracy and reliability can be achieved, long operation times are required, resulting in relatively low measurement efficiency and making it difficult to meet efficient manufacturing requirements. With the development of technology, non-contact measurement methods and equipment7,8 are gradually being considered. Binocular or multi-ocular stereo vision-based 3D scanners9 are potential solutions. However, these devices are usually large and can hardly penetrate the radome's cavity, so it is not easy to cover larger measuring areas on the inner surfaces when the radome is relatively narrow and long. With the advantages of small volume, fast measuring speed, high response frequency, and remote, non-destructive evaluation, laser displacement sensors (LDS) have become a new trend in dimensional metrology.10 Moreover, the OMM system combining an LDS and a robot has become one of the most flexible solutions for on-machine measurement requirements.11
To ensure the performance of the LDS-based OMM system, robot calibration and the hand-eye calibration between the robot and the LDS must be performed before application.10 In addition, the LDS itself has measurement errors, and these errors may be more significant especially when measuring free-form surfaces.12,13 Therefore, the LDS errors should also be calibrated. The robot calibration can be accomplished via external instruments, such as laser trackers,14 or other methods.15-17 Thus, hand-eye calibration and LDS errors calibration become the key problems governing the performance of the OMM system; they are in the same kinematic chain and are coupled problems. However, there are still no effective methods to accomplish online hand-eye calibration and LDS errors calibration simultaneously. The high accuracy of the OMM system is the prerequisite for guaranteeing the geometric accuracy of manufacturing the radome, and solving the above problems is of great significance for improving the final guidance performance of hypersonic aircraft.
Most previous hand-eye calibration methods of the LDS follow three steps: 1) select a kind of reference artifact, fix the artifact in space, and measure the artifact at multiple positions and orientations with the OMM system; 2) construct the objective function based on the geometric constraints of the artifact; 3) solve the objective function via optimization algorithms with the measured data. In detail, there are many kinds of available artifacts for hand-eye calibration, such as planar objects,18-20 straight-edge objects,21 L-type plates,22 X-type objects,23 multi-step objects,24 pin-shaped objects,25 and standard balls.10,11,26 Each artifact has its own geometric feature and produces corresponding geometric constraint equations. Taking planar artifacts as an example, the points measured from them should meet the plane constraint equations.18-20 For standard ball artifacts, the points measured from them should meet the spherical constraint equations.10,11,26-29 Then, calibration objective functions based on geometric constraints can be established. The hand-eye relationship in the objective functions is usually represented as analytical parameters such as a homogeneous coordinate transformation matrix30 or three relative movement parameters and three rotation parameters.30 Finally, optimization algorithms like the Gauss-Newton method (GN),10 the Levenberg-Marquardt method (LM),11 and particle swarm optimization-based methods22 can be used to identify the analytical hand-eye parameters. However, the LDS itself has measurement errors in practice.12,13,31 The points measured from artifacts by the LDS actually contain the measurement errors of the LDS, and thus the analytical hand-eye parameters identified by traditional calibration methods are affected by LDS errors. Therefore, the results are not truly accurate, and the measurement accuracy cannot be further improved based on the existing methods.
Many studies12,13,31 have pointed out that there are systematic components in LDS errors, so LDS errors can also be calibrated. Previous studies investigated LDS errors individually but rarely considered the hand-eye calibration problem simultaneously. Isheil et al.12 and Van Gestel et al.13 have shown that the measurement errors are related to three global geometrical features: the global scan depth, the global in-plane angle, and the global out-of-plane angle. Error calibration and compensation based on these global parameters could improve the measurement accuracy to a certain extent, but the root mean square errors (RMSE) of the feature fitting after compensation also increased, indicating that the method reduced the precision of the measurement system.12,32 Although a local response was included to reduce the RMSE for planar workpieces,12 those strategies cannot be directly used when measuring free-form surfaces, because the relative scanning depth and incident angle of each measuring point on a free-form surface are different. If different points are modeled with the same global parameters, the model itself is unreasonable, and the calibrated residuals cannot be further reduced. Thus, more detailed features are required to model LDS errors.
In general, hand-eye calibration results actually contain LDS errors: hand-eye calibration and LDS errors calibration are indeed coupled problems. Previous studies of hand-eye calibration hardly deal with LDS errors further, while studies of LDS errors calibration hardly consider the influence of the hand-eye relationship. As a result, the measurement accuracy of the LDS-based OMM system cannot be further improved.
The coupling between hand-eye parameters and LDS errors mentioned above can be described by a mathematical model, i.e., a forward kinematic model. If an integrated mathematical model containing both hand-eye parameters and LDS errors is established and then solved in one calibration process, the two calibrations can be achieved simultaneously, thus helping to improve the measurement accuracy. This paper proposes a novel IABC method to solve the problems mentioned above. The core idea of IABC is a hybrid calibration model composed of two calibration kernels. One of them is the analytical kernel designed to solve the analytical parameters of the hand-eye relationship, similar to traditional hand-eye calibration models. The other is the AutoML kernel, implemented with a data-driven AutoML pipeline and designed for modeling LDS errors. The two kernels are connected in a stepwise iteration way, and the best hand-eye calibration results can be obtained through iterations. Before the IABC, a new objective function is proposed, containing analytical parameters and LDS errors rather than only the analytical parameters of traditional hand-eye calibration methods. In the AutoML kernel, an AutoML pipeline embedded with several advanced candidate machine learning models is designed. A competition mechanism is introduced to automatically train the candidate models and select the best estimator of LDS errors. Compared with traditional hand-eye calibration methods, IABC can not only accomplish both hand-eye calibration and LDS errors calibration in one calibration process without any hardware change but also improve the final measurement accuracy in applications. In fact, IABC is a kind of extended or generalized hand-eye calibration method, so it is still called a hand-eye calibration method in this paper.
The structure of this paper is organized as follows: Section 2 introduces an LDS-based OMM system for illustrating IABC. Section 3 presents the IABC method, including Section 3.1 Objective function, Section 3.2 Solving the objective function with IABC, Section 3.3 Training data acquisition of IABC, and Section 3.4 Using IABC. Several hand-eye calibration and verification experiments are accomplished, analyzed, and discussed in Section 4, including Section 4.1 Experimental setup, Section 4.2 Calibration with IABC, and Section 4.3 Comparison. Section 5 concludes this work.
To improve readability, the main contributions of this work are presented as follows:
1) The measurement principles with and without LDS errors are analyzed, and thus the ideal and actual forward kinematic models are derived. Based on them and the standard ball artifact, a new hand-eye calibration objective function is proposed, increasing the possibility of further reducing the calibration residuals.
2) To solve the objective function, the novel IABC method is proposed, containing a hybrid calibration model composed of the analytical kernel and the AutoML kernel. IABC makes it possible for the analytical parameters and LDS errors to be calibrated online in one calibration process.
3) In the AutoML kernel, a new data-driven AutoML pipeline is proposed for automatically modeling LDS errors, containing several advanced candidate machine learning models. The pipeline can automatically select the best estimator for LDS errors to reduce the dependence on human experts and improve the applicability of IABC.
4) Four new kinds of local features and two new kinds of local targets of LDS errors are constructed as input for the AutoML pipeline. They are constructed based on the local characteristics of each point on the LDS rather than global parameters, allowing the LDS to measure curved surfaces more accurately.
5) The core source code of IABC is open-sourced at https://github.com/lihelin666/IABC to promote the progress of academia and industry.
An LDS-based OMM system is introduced as the research object to better illustrate the proposed method. The OMM system is composed of an LDS and a movable robot, as shown in Fig. 1. The robot is composed of three orthogonal linear axes, Z, X, and Y, and a measuring arm. The measuring arm is designed to measure the internal and external free-form surfaces of thin-walled workpieces. The LDS is mounted at the robot's tool flange, and its orientation can be adjusted and then fixed before measurements. The robot is moved and fixed in front of a machine tool before each deployment. Then, the robot calibration is implemented via other instruments, such as laser trackers or interferometers,16,33,34 which ensures that the robot itself has sufficient absolute positioning accuracy. Next, the hand-eye calibration of the LDS described in this paper can be executed, and this OMM system is then applied to measure workpieces. This paper only focuses on the online hand-eye calibration problem. The reference frames and coordinate systems of the OMM system are defined as follows:
1) The base frame (FB) is fixed to the base of the robot, and its origin coincides with the origin of the robot. Its axes XB, YB, and ZB are parallel to the X, Y, and Z axes of the robot.
2) The tool frame (FT), also known as the "hand" reference coordinate system, is fixed to the tool flange of the robot. Its origin coincides with FB at the initial position. Its axes XT, YT, and ZT are parallel to those of FB.
3) The sensor frame (FS), also known as the "eye" reference coordinate system, is fixed to the LDS. Its origin coincides with the origin of the LDS. Its XZ plane coincides with the laser plane, and the directions of axes XS and ZS are defined as shown in Fig. 1. Since the 2D LDS only returns 2D profiles, the coordinate values in YS are always zero.
Fig. 1 An LDS-based OMM system for IABC.
4) The workpiece frame (FW) is fixed to the workpiece, its origin coincides with the origin of the machine tool's rotation axis, and its axes XW, YW, and ZW are parallel to those of FB at the initial position.
In industrial metrology, geometric dimensioning and tolerancing (GD&T) are evaluated in the workpiece frame FW.35 Hence, all the measured values of the kinematic chain should be converted into FW. Assuming that there is a measured point PS in the sensor frame FS, and the same point in FW is PW, their transformation relation is established based on the multi-body system theory36,37 and is called the ideal forward kinematic model in this paper, as defined in Eq. (1). PS is defined as a homogeneous matrix containing the indicated coordinates xS and zS in FS, while PW is also a homogeneous matrix containing the corresponding xW, yW, zW coordinates in FW. HWS is defined as a homogeneous coordinate transformation matrix (HTM) from FS to FW, as shown in Eq. (2). The detailed derivation process is available in Section 1 of Appendix A.
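In compact form, the ideal forward kinematic model of Eq. (1) is

$$P_W = H_{WS}\,P_S,\qquad P_S=\begin{bmatrix}x_S & 0 & z_S & 1\end{bmatrix}^{\mathsf{T}},\qquad P_W=\begin{bmatrix}x_W & y_W & z_W & 1\end{bmatrix}^{\mathsf{T}}$$

and, as a sketch of Eq. (2) assuming the usual multi-body composition along the kinematic chain with T(·) a translation HTM and R(·) a rotation HTM built from the Euler angles (the exact factor order is given in Appendix A),

$$H_{WS}=T^{-1}(x_{BW},y_{BW},z_{BW})\;T(x_{BT},y_{BT},z_{BT})\;T(x_{TS},y_{TS},z_{TS})\;R(\alpha_{TS},\beta_{TS},\gamma_{TS})$$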
where xBT, yBT, zBT are the relative displacements from FT to FB; xBW, yBW, zBW are the relative positions of FW in FB, and they are set to zero for simplification. (xTS, yTS, zTS) and (αTS, βTS, γTS) are the relative positions and Euler angles of FS in FT, respectively. They are also the hand-eye parameters in traditional hand-eye calibration methods.
In practice, there are measurement errors in the XS and ZS directions of the LDS in FS. Due to the collimation characteristics of the laser, the measurement error component in the YS direction is neglected. If the LDS error of every point can be calibrated and the corresponding compensating values are defined as ΔxS and ΔzS, then by adding them to xS and zS, the compensated PcS can be derived. Further, the compensated PW, denoted as PcW, can also be derived through Eq. (3). This equation is also called the actual forward kinematic model, and its derivation is available in Section 2 of Appendix A.
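With the compensating values added in FS, Eq. (3) takes the compact form (a sketch consistent with the definitions above):

$$P_W^c = H_{WS}\,P_S^c,\qquad P_S^c=\begin{bmatrix}x_S+\Delta x_S & 0 & z_S+\Delta z_S & 1\end{bmatrix}^{\mathsf{T}}$$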
In traditional methods, hand-eye calibration is an optimization problem, so the hand-eye calibration method in this study still requires an objective function corresponding to the artifact. This paper adopts the standard ball as the reference artifact, as shown in Fig. 2, and the corresponding objective function is established in this section. Unlike existing methods, which include only the analytical hand-eye parameters, the new objective function in this paper considers both the traditional analytical hand-eye parameters and the LDS errors, increasing the possibility of further reducing the calibration residuals.
The standard ball is chosen as the calibration artifact in this paper because it meets ISO 10360-4,38 and the sphericity of today's standard balls reaches micron or even sub-micron levels at a moderate cost. Most importantly, when measuring different free-form surfaces, each point on the LDS may have different relative positions and orientations, and these relative positions and orientations are important factors affecting LDS errors.12,13,31 Besides, a data-driven AutoML pipeline is proposed for modeling LDS errors in the following sections. The pipeline requires a large amount of training data containing various relative positions and orientations to guarantee its generalization ability. Compared with other artifacts, the orientations of the normal vectors at different points on the standard ball are all different. By constructing specific scanning strategies, every laser point of the LDS can obtain massive and different relative distances and incident angles. Thus, the calibration measurements on the standard ball can simulate application measurements of different surfaces as much as possible, so the model trained with these data can effectively predict the LDS errors of every measured point in applications.
After transforming all the measured points to FW, all the points should theoretically satisfy the spherical constraint equations, as they are located on the sphere. However, the measured data cannot completely satisfy the equations due to LDS errors. Therefore, the least-squares method10,11 is adopted to solve the best-fit solutions of the equations, as shown in Eq. (4).
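One consistent way to write this objective, using the total least-squares residual with the nominal radius r (the exact form of Eq. (4) follows Table A2 in the appendix; the notation here is a sketch), is

$$\min_{x_{TS},y_{TS},z_{TS},\alpha_{TS},\beta_{TS},\gamma_{TS},\;\Delta x_S,\Delta z_S,\;a,b,c}\;\sum_{i=1}^{n}\Big(\sqrt{(x_{W,i}^c-a)^2+(y_{W,i}^c-b)^2+(z_{W,i}^c-c)^2}-r\Big)^2$$

where (a, b, c) is the fitted sphere center and each compensated point PcW,i depends on the analytical parameters and the LDS errors through Eq. (3).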
3.2.1. Architecture of IABC
In the above objective function, the analytical parameters (xTS, yTS, zTS, αTS, βTS, γTS) are constants since they are derived from homogeneous coordinate transformation matrices. The type of the LDS errors (ΔxS, ΔzS) is unknown because the real LDS errors are very complex and have nonlinear characteristics,12,31,41,42 so it seems more appropriate to express them as nonlinear models rather than simple values or formulas. In recent years, nonlinear modeling methods have been well developed in machine learning, providing references for modeling LDS errors.
Fig. 2 Hand-eye calibration principle of LDS using standard ball.
To deal with the above two different kinds of unknowns in one objective function, a corresponding hybrid calibration model is proposed, as shown in Fig. 3. The model contains two kernels. Kernel 1 is the analytical kernel designed for solving the analytical parameters in the objective function. Kernel 2 is the AutoML kernel designed for modeling the LDS errors in the objective function. The two kernels are connected in a stepwise iteration way. The whole proposed calibration method is called the Iteratively AutoML Boosted hand-eye Calibration (IABC) method.
Fig. 3 Architecture of Iteratively AutoML Boosted hand-eye Calibration (IABC) method.
In the first iteration of IABC, the training data measured from the standard ball are input into the analytical kernel, and the LDS errors are temporarily treated as zeros. The current analytical parameters in the objective function of Eq. (4) can be solved via a gradient-based solving method such as the LM algorithm11 or the Trust Region Reflective algorithm (TRF).43 The LM is an efficient algorithm widely used in previous studies but cannot deal with boundary constraints.10,11,28 Therefore, the newer TRF is introduced in this paper because it is a robust algorithm that can handle boundary constraints. Next, the analytical parameters are temporarily treated as ideal parameters, so the current LDS errors can be calculated and form the training data. These data are input into the AutoML kernel, and the AutoML kernel returns the best estimator of the current data. With the best estimator, the compensating values of every point can be predicted. Thus, the compensated point cloud can be obtained by adding the compensating values to the original point cloud, and the compensated point cloud is used to fit a sphere and produce the corresponding RMSE. Because the radius in the objective function is set to the nominal radius of the standard ball in advance, a smaller RMSE actually represents a reduction in the overall calibration residual, so a lower RMSE indicates that the calibration method improves the comprehensive accuracy.
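As a concrete illustration of Kernel 1, the sketch below identifies the analytical parameters with SciPy's TRF implementation. It assumes the tool-to-workpiece HTMs of each point are known from the robot encoders; the helper names, parameter ordering, and bounds handling are illustrative, not the repository's API.

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def hand_eye_htm(x, y, z, alpha, beta, gamma):
    """Build the hand-eye HTM from three translations and three Euler angles."""
    H = np.eye(4)
    H[:3, :3] = Rotation.from_euler("xyz", [alpha, beta, gamma]).as_matrix()
    H[:3, 3] = (x, y, z)
    return H

def tls_residuals(params, pts_s, robot_htms, r):
    """TLS residuals ||P_W - C|| - r, vectorized over all measured points.

    pts_s:      (n, 4) homogeneous points (x_S, 0, z_S, 1) in F_S
    robot_htms: (n, 4, 4) tool-to-workpiece HTMs from the robot encoders
    """
    *he, cx, cy, cz = params                  # 6 hand-eye parameters + sphere center
    p_w = np.einsum("nij,jk,nk->ni", robot_htms, hand_eye_htm(*he), pts_s)[:, :3]
    return np.linalg.norm(p_w - np.array([cx, cy, cz]), axis=1) - r

def solve_kernel1(pts_s, robot_htms, r, x0, bounds=(-np.inf, np.inf)):
    """One run of the analytical kernel: TRF with optional boundary constraints."""
    sol = least_squares(tls_residuals, x0, args=(pts_s, robot_htms, r),
                        method="trf", loss="linear", bounds=bounds)
    return sol.x, np.sqrt(np.mean(sol.fun ** 2))   # parameters and sphere-fit RMSE
```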
From the second iteration onward, the analytical parameters are input into the analytical kernel as initial values, and the same subsequent procedures as in the first iteration are repeated. If the RMSE of the sphere fitting in this iteration is less than that of the previous iteration, the current iteration is considered better than the previous one, and the iterations continue in an attempt to find more accurate results. Otherwise, the iterations terminate. Finally, the final calibration results containing the best-fit analytical parameters and the LDS errors model are obtained, and the IABC is accomplished.
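The stepwise connection of the two kernels can be summarized by the loop below, a minimal sketch in which the three callables stand for Kernel 1, the AutoML kernel, and the sphere-fitting RMSE evaluation; their signatures are hypothetical.

```python
import numpy as np

def iabc_loop(solve_kernel1, fit_automl, sphere_rmse, max_iter=10):
    """Alternate the two kernels until the sphere-fitting RMSE stops decreasing.

    solve_kernel1(dx, dz)       -> analytical parameters given current LDS-error estimates
    fit_automl(params)          -> (best estimator, predicted (dx, dz)) for the residuals
    sphere_rmse(params, dx, dz) -> RMSE of the sphere fitted to the compensated cloud
    """
    dx = dz = 0.0                                  # iteration 1: LDS errors treated as zero
    best, best_rmse = None, np.inf
    for _ in range(max_iter):
        params = solve_kernel1(dx, dz)             # Kernel 1 (LM/TRF)
        estimator, (dx, dz) = fit_automl(params)   # Kernel 2 (AutoML leaderboard winner)
        rmse = sphere_rmse(params, dx, dz)
        if rmse >= best_rmse:                      # no improvement: terminate
            break
        best, best_rmse = (params, estimator), rmse
    return best, best_rmse
```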
Compared with traditional hand-eye calibration methods, IABC calibrates both the analytical parameters and the LDS errors model in one process rather than only considering analytical parameters, helping to further reduce the calibration residuals and thus improve the measurement accuracy in applications.
In general, for the two kinds of unknowns in the objective function of an iteration, the analytical parameters can be solved by the analytical kernel with the LM or TRF algorithms, and the LDS errors can be automatically modeled by the AutoML kernel. The next Section 3.2.2 illustrates the detailed automatic modeling method of LDS errors, including: 1) the AutoML pipeline for LDS errors modeling, and 2) the features and targets construction for the AutoML pipeline.
3.2.2. Automatically modeling LDS errors of objective function
(1) AutoML pipeline for LDS errors modeling
The AutoML kernel is the core component of IABC and is implemented by a new specific AutoML pipeline in this paper. In fact, the LDS errors in the real world are relatively complex,12,31,41,42 and many kinds of machine learning models may be suitable for modeling LDS errors, but which model is the best is unknown in advance. Based on the intuition that "two heads are better than one",44 the AutoML pipeline is proposed. The pipeline contains multiple kinds of candidate machine learning models with advanced nonlinear modeling capabilities, and it then automatically selects the best estimator of LDS errors through specific metrics. The AutoML pipeline is an automatic data-driven modeling method designed to deal with the complexity of the real world.
The pipeline is composed of three main sub-procedures (see Fig. 4): 1) feature engineering; 2) model construction; 3) automatic hyperparameter optimization. In the feature engineering procedure, the LDS errors data are input into the polynomial feature transformer. The transformer is designed to extend the original features and to discover potential nonlinear characteristics of the LDS errors.
Fig. 4 AutoML pipeline for LDS errors modeling.
In the second procedure, several candidate machine learning models with advanced nonlinear fitting ability are employed, including Generalized Linear Models (GLM)45 such as the Linear model or ElasticNet model, the Support Vector Machine (SVM),46 and the Multilayer Perceptron (MLP).47 Besides the single models, ensemble learning models48 are also considered, such as the gradient boosting decision tree models (GBDT).49 Recent studies have shown that GBDT performs better than other advanced models for modeling tabular data.50,51 In fact, the LDS errors data constructed in the next section belong exactly to tabular data, so GBDT is also employed in the pipeline. XGBoost,49 LightGBM,52 and histogram-based gradient boosting models (HGBM)53 are available GBDT implementations, and the HGBM is adopted in this paper. It should be noted that the pipeline is not set in stone but an extensible framework that can incorporate other models with better nonlinear modeling capabilities in the future.
In the third procedure of the pipeline, each machine learning model has its hyperparameters. The hyperparameters are not directly learned within the pipeline, so a hyperparameter grid containing each candidate model's type and corresponding hyperparameters must be prepared before training. Then, k-fold cross-validation54 training strategies are adopted to train and evaluate all the candidate models. In each training round, the AutoML pipeline fetches one machine learning model and its hyperparameters from the hyperparameter grid and trains the model on the training folds. The trained model is subsequently applied to predict the LDS errors and compensate the point cloud on the validation folds, and the compensated point cloud is then used to fit a sphere and produce the corresponding RMSE. The lower the RMSE, the better the model; the RMSE is the metric for evaluating the model's performance. All the candidate models in the hyperparameter grid are trained and evaluated, and their performance is updated to the global performance leaderboard. Finally, the best model, whose RMSE is the smallest, is automatically selected from the leaderboard.
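A minimal scikit-learn sketch of this competition mechanism is shown below. The candidate settings are illustrative rather than the paper's exact hyperparameter grid, and for brevity the models are scored with the plain target RMSE instead of the sphere-fitting RMSE used in the paper.

```python
import numpy as np
from sklearn.ensemble import HistGradientBoostingRegressor
from sklearn.linear_model import LinearRegression, ElasticNet
from sklearn.model_selection import KFold, cross_val_score
from sklearn.multioutput import MultiOutputRegressor
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures
from sklearn.svm import SVR

# Hypothetical candidate grid: one entry per (model type, hyperparameter) combination.
CANDIDATES = {
    "hgbm":   HistGradientBoostingRegressor(max_iter=300),
    "mlp":    MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500),
    "linear": LinearRegression(),
    "enet":   ElasticNet(alpha=0.1),
    "svr":    SVR(C=10.0),
}

def automl_kernel(X, y, n_splits=5):
    """Train every candidate with k-fold CV and return the leaderboard winner.

    X: (n, 4) local features (x_S, d_SZ, alpha, beta)
    y: (n, 2) local targets (dx_S, dz_S)
    """
    leaderboard = []
    for name, base in CANDIDATES.items():
        # Feature engineering + model: polynomial expansion feeds each candidate.
        model = make_pipeline(PolynomialFeatures(degree=2),
                              MultiOutputRegressor(base))
        scores = cross_val_score(model, X, y, scoring="neg_root_mean_squared_error",
                                 cv=KFold(n_splits=n_splits, shuffle=True, random_state=0))
        leaderboard.append((scores.mean(), name, model))
    leaderboard.sort(reverse=True)                 # least negative RMSE first
    _, best_name, best_model = leaderboard[0]
    return best_model.fit(X, y)                    # refit the winner on all data
```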
In the whole AutoML pipeline, human experts only need to prepare the hyperparameter grid and the LDS errors data in advance, and the pipeline itself will then automatically select the best estimator from many candidates, giving IABC a relatively strong ability to model complex LDS errors.
(2) Features and targets construction for AutoML pipeline
In the AutoML pipeline, the LDS errors data are not obtained directly from calibration measurements but need to be constructed. How to construct practical features and targets in the data is vital for the AutoML pipeline. In machine learning, without selecting appropriate features and targets, it is difficult to achieve satisfying performance even with the most advanced models.55-57 In this section, four new local features and two new local targets associated with each measured point are proposed, which helps the LDS measure different free-form surfaces more accurately.
Fig. 5 Comparison of feature selection between previous studies and this paper.
Previous studies12,13 have shown that LDS errors are related to three global geometrical features: the global scan depth, the global in-plane angle, and the global out-of-plane angle, as shown in Fig. 5(a)-(c). However, when measuring a free-form surface, the relative scanning depth and incident angles of each measuring point on the 2D profile are different, so it is not sufficient to use the same global features. Thus, four new local features of LDS errors are proposed to represent the relative positions and orientations of every measured point rather than global parameters. All features are investigated in the sensor frame FS rather than in the workpiece frame FW. This perspective is crucial because it allows different measurement scenarios to be described in a uniform way, thus helping to guarantee the generalization ability of the AutoML pipeline.
It is assumed that there is a point P on the free-form surface measured by the LDS, as shown in Fig. 5(d)-(f). The point is on the 2D measured profile, its coordinate in FS is PS(xS, 0, zS), and the normal vector at PS is NS. Further, the projection of NS on the XZ plane is NSXZ (see Fig. 6), and the projection of NS on the YZ plane is NSYZ. The light band emitted by the LDS is trapezoidal, generated by the refraction of the cylindrical objective lens. The reverse extensions of the laser lines in this trapezoidal light band converge at one point, called the virtual emission point LS(xLS, 0, zLS) of the laser in this paper. Moreover, the light vector from LS to PS is defined as the incident light vector, while its reverse vector is the reverse incident vector IS. The four new local features are defined as follows:
1) Local xS, the local x, is the X coordinate value of PS, as shown in Fig. 5(d).
2) Local dSZ, the local scan depth, is defined as dSZ = zLS − zS, representing the Z-direction distance between PS and the virtual emission point LS, as shown in Fig. 5(d).
Fig. 6 Error analysis of LDS in one of the iterations.
3) Local α, the local in-plane angle, is the signed angle from vector IS to NSXZ, following the right-hand rule, as shown in Fig. 5(e).
4) Local β, the local out-of-plane angle, is the signed angle from vector IS to NSYZ, following the right-hand rule, as shown in Fig. 5(f).
In the above definitions, α and β are designed to consider the virtual incidence angle of each measuring point; they are projected onto the light plane and its vertical plane for investigation, corresponding to the global in-plane angle and the global out-of-plane angle. Further, dSZ is introduced to account for the scan depth of each point, corresponding to the global scan depth. Finally, xS is introduced since, together with dSZ, it identifies the unique position of each point in the field of view, reflecting the local position characteristics.
Besides the above features, the targets can also be constructed. As mentioned in the IABC, there is a group of analytical parameters treated as the ideal results in one of the iterations. Hence, the measured point cloud in FW can be calculated through the ideal forward kinematic model. Ideally, when the standard ball is scanned, each point should be on the theoretical sphere with radius r. However, due to LDS errors, those points are generally not located on the ideal sphere. The deviations between the measured values and the theoretical sphere are considered to be the LDS errors in the current iteration. As shown in Fig. 6, this definition is consistent with Fig. 5(d)-(f) and is also investigated in FS. The measured point cloud is fitted to a sphere, and the sphere's center is translated into FS and becomes CS. The ideal sphere with radius r is centered at CS. Moreover, for a measured point PS, the vector from LS to PS is the outgoing laser light. Due to LDS errors, PS is not located on the ideal sphere; following the collimation characteristics of the laser, its ideal point QS is the intersection of the outgoing light and the ideal sphere. The coordinates of QS can be calculated using the intersection algorithm between a line and a sphere. The LDS error vector is the vector from QS to PS, while the compensation vector Δs(ΔxS, 0, ΔzS) is its reverse, i.e., the vector from PS to QS. Thus, the two targets in this paper are constructed as follows (the intersection computation is sketched after the definitions):
1) ΔxS, defined as the projection of Δs on the XS axis.
2) ΔzS, defined as the projection of Δs on the ZS axis.
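The intersection point QS admits a closed form. With u the unit direction of the outgoing light from LS and CS the sphere center, a standard line-sphere intersection (a sketch; the sign picks the nearest intersection along the ray) gives

$$b=\mathbf{u}\cdot(L_S-C_S),\qquad c=\lVert L_S-C_S\rVert^2-r^2,\qquad t=-b-\sqrt{b^2-c},\qquad Q_S=L_S+t\,\mathbf{u}$$

and the compensation vector follows as Δs = QS − PS, whose X and Z components are ΔxS and ΔzS.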
Fig. 7 Calibration procedure of IABC.
Because the AutoML pipeline in IABC is a data-driven modeling method, relatively large training data (on the order of 1 × 10^5 points or more) help improve the confidence of the calibration results. Nevertheless, big data also bring new challenges to computational efficiency. To meet high-speed calculation demands, a fully vectorized construction algorithm for the above features and targets is proposed. Its key pseudo-code is shown in Algorithm 1. One can also see our GitHub repository https://github.com/lihelin666/IABC.
Algorithm 1 (Vectorized construction algorithm of four local feature vectors (xS, dSZ, α, β) and two local target vectors (ΔxS, ΔzS) by measuring a standard sphere for AutoML kernel.)
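A condensed version of that construction is sketched below. The array shapes and helper name are illustrative (the full implementation is in the repository), the signed-angle formulas are a simplified stand-in for the right-hand-rule definitions above, and every outgoing ray is assumed to intersect the ideal sphere.

```python
import numpy as np

def build_features_targets(pts_s, normals_s, L_s, C_s, r):
    """Vectorized sketch of Algorithm 1.

    pts_s:     (n, 3) measured points (x_S, 0, z_S) in F_S
    normals_s: (n, 3) unit normal vectors N_S in F_S
    L_s:       (3,)   virtual emission point of the laser
    C_s:       (3,)   fitted sphere center translated into F_S
    r:         nominal radius of the standard ball
    """
    x_s = pts_s[:, 0]                                   # local x
    d_sz = L_s[2] - pts_s[:, 2]                         # local scan depth d_SZ
    i_s = L_s - pts_s                                   # reverse incident vectors I_S
    i_s /= np.linalg.norm(i_s, axis=1, keepdims=True)
    # Signed angles from I_S to the projections of N_S (XZ plane: alpha, YZ plane: beta).
    alpha = (np.arctan2(normals_s[:, 0], normals_s[:, 2])
             - np.arctan2(i_s[:, 0], i_s[:, 2]))
    beta = (np.arctan2(normals_s[:, 1], normals_s[:, 2])
            - np.arctan2(i_s[:, 1], i_s[:, 2]))
    # Targets: intersect each outgoing ray L_S -> P_S with the ideal sphere.
    u = -i_s                                            # unit outgoing light directions
    lc = L_s - C_s
    b = np.einsum("ni,i->n", u, lc)
    t = -b - np.sqrt(b * b - (lc @ lc - r * r))         # nearest intersection parameter
    delta = (L_s + t[:, None] * u) - pts_s              # compensation vectors Q_S - P_S
    features = np.column_stack([x_s, d_sz, alpha, beta])
    targets = delta[:, [0, 2]]                          # (dx_S, dz_S)
    return features, targets
```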
When training IABC, the quality of the input data itself also affects the calibration performance. If the samples of each feature in the dataset are insufficient, the prediction deviation may be large and may even deteriorate the calibration results. Therefore, the features (xS, dSZ, α, β) should cover as many different values as possible, and a specific calibration procedure for IABC is proposed.
For one point in the LDS range, when a constant distance from the sphere is retained and the spherical crown is scanned, there will be many measured points with different values of α and β, since the normal vector direction of the measured point changes as the LDS moves relative to the sphere. Furthermore, if all points in the LDS range traverse the same spherical crown, each point with different xS and dSZ can obtain sufficient and different α and β samples. These characteristics are the key reasons why the standard ball is chosen as the calibration artifact in this paper over other kinds of artifacts.
Based on the ideas above, a new calibration path generation strategy is proposed, as shown in Fig. 7. The method includes three steps: volume construction, discretization, and traversal. The LDS is consolidated with the tool frame FT, and each axis of FT is parallel to those of FW. Firstly, a calibration volume is determined based on the LDS range (xSmin ...).

When the IABC is accomplished, the best analytical parameters and the best LDS errors estimator are produced. These calibration results can be used for real measurement applications, as shown in Algorithm 2. When the OMM system measures workpieces, the uncompensated point cloud PW in FW can be calculated via the ideal forward kinematic model provided in Eq. (1). Further, to compensate the LDS errors, the feature vectors (xS, dSZ, α, β) of the machine learning model should be constructed in advance. Among them, xS is known data from the LDS. dSZ is also known because dSZ = zLS − zS, where zLS is a constant and zS is measured data from the LDS. α and β are unknown but related to the normal vectors NS of each point in FS according to the definitions provided earlier (see Fig. 5). NS cannot be directly calculated in FS, but NW can be estimated from the measured point cloud in FW and then transformed back into FS.

Algorithm 2 (Vectorized algorithm of IABC for measurement applications.)

An OMM system was applied to carry out the experiments. Its specifications are listed in Table 1 and shown in Fig. 8. The robot's motion was controlled by a fully closed-loop system with three linear encoders, and the encoders were also used to measure the displacement in the X, Y, and Z directions. An LDS (KEYENCE LJ-V7060) was mounted at the robot's tool flange, and the main specifications of the LDS are listed in Table 2.59 The LDS has high repeatability, up to 0.4 μm in its Z direction and 5 μm in its X direction. However, its linearity is only ±0.1% of F.S., i.e., ±16 μm. As noted by the manufacturer, the linearity was obtained with smoothing and averaging preprocessing applied several times under default conditions; hence, the absolute error under other conditions may be no smaller than the stated linearity. Moreover, since the absolute positioning error of the robot after calibration is smaller than the linearity, the hand-eye calibration was deemed feasible.

The standard ball used in the calibration experiment was a ZrO2 ceramic ball with a Lambertian surface. Its radius was 9.9969 mm with a sphericity of 1.8 μm (see the small standard ball in Fig. 8(b)). According to the calibration procedure proposed in this paper, the lengths of the calibration volume in the X, Y, and Z directions were 30 mm, 20 mm, and 30 mm, respectively. The Y and Z dimensions of the volume were divided into 10 layers each. Each path was an arc with a radius of 12.5 mm connecting the corresponding start and end points. The measurement path connected all the individual paths, and the generated NC code was then executed by the robot.

Table 1 Specifications of robot and LDS.

Fig. 8 Experimental setup for hand-eye calibration.

Table 2 Specifications of LDS.
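As described above for Algorithm 2, per-point normals of the scattered cloud are estimated in FW and then transformed back into FS before the features are built. A common choice, shown here as an illustrative sketch rather than the repository's exact estimator, is local-PCA normal estimation over the k nearest neighbors:

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def estimate_normals(points, k=20):
    """Estimate unit normals of a scattered point cloud via local PCA.

    points: (n, 3) point cloud in F_W. For each point, the normal is taken as
    the eigenvector of the smallest eigenvalue of its neighborhood covariance.
    """
    _, idx = NearestNeighbors(n_neighbors=k).fit(points).kneighbors(points)
    normals = np.empty_like(points)
    for i, neigh in enumerate(points[idx]):
        cov = np.cov(neigh - neigh.mean(axis=0), rowvar=False)
        _, vecs = np.linalg.eigh(cov)          # eigenvalues in ascending order
        normals[i] = vecs[:, 0]
    # Note: the sign of each normal is ambiguous; in practice it is oriented
    # toward the sensor before computing the angles alpha and beta.
    return normals
```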
In the calibration experiments, the standard ball was scanned, and each dataset contained about 25 million original points. The data used in the experiments were subsampled from the original data. The detailed parameters, training processes, and results of Kernel 1 and Kernel 2 in one iteration are listed in Sections 3 and 4 of Appendix A. Only the important results trained with the 250 k sub-sample are listed here. For the results of Kernel 1, as shown in Section 3 of Appendix A, the "TLS-LINEAR-TRF" method (denoting "residual function - loss function - solving algorithm") performs as the better choice for solving the analytical parameters in Kernel 1. For the results of Kernel 2, as shown in Section 4 of Appendix A and Table 3 here, 108 candidate models in the AutoML pipeline competed; the top 10 models are all HGBMs, and the best HGBM reduces the RMSE of fitting the standard ball's sphere by about 50% compared with traditional methods. The performance of the different machine learning models in the global leaderboard can be ranked as HGBM > MLP > Linear > SVM > ElasticNet. However, not all models perform well: the ElasticNet and some of the SVMs yielded increased RMSE values, indicating that these models deteriorate the calibration results. It should be noted that the global leaderboard is not a ranking of model advancement; e.g., it is not appropriate to conclude that HGBMs are totally more advanced than MLPs, but only that HGBMs perform better than MLPs within the current hyperparameter grid. If human experts change the hyperparameter grid, the leaderboard may change, too. Nevertheless, no matter how the results change, the AutoML pipeline will always select the relatively best one.

In each iteration of IABC, the best estimator is applied to predict the LDS errors, compensate the points, fit a sphere, and evaluate the RMSE. The performance of every iteration with the 250 k sub-sample is shown in Fig. 9 and Table 4. The RMSE of the 0th iteration is about 0.019 mm; it comes from executing only Kernel 1 before the 1st iteration for comparison and actually represents the performance of the traditional methods. The RMSE of the 1st iteration is about 0.010 mm. The RMSE of the 3rd iteration is greater than that of the 2nd iteration, and the iteration is thus terminated. Therefore, the analytical parameters and the best estimator of the 2nd iteration are treated as the final calibration results. The calibration experiments show that IABC reduces the RMSE of fitting the standard ball's sphere by about 50% compared with traditional methods. As shown in Table 4, the differences in analytical parameters and RMSE among the 1st, 2nd, and 3rd iterations are tiny, mainly because the HGBM itself has strong modeling capacity. Thus, only a few iterations are necessary to produce good performance, benefiting industrial applications. The training time of the 108 models is about 1700 s, relatively long for industrial applications; the main reason is that the MLP and SVM models consume much time. Parallel computing or removing some poor candidate models can be employed to further reduce the training time in applications.

Table 3 Global leaderboard of AutoML pipeline with 108 candidate machine learning models (trained with 250 k sub-sample).

Fig. 9 RMSE and calculation time of every iteration (trained with 250 k sub-sample).

IABC was also trained with the 2.5 M sub-sample, and only HGBMs were adopted in the AutoML pipeline. As shown in
Table 5, each HGBM consumes about 25 s, and the training of each iteration takes about 300 s. Compared to the calibration results with the 250 k sub-sample, the RMSE is not significantly improved, indicating that a larger amount of training data is not necessary for improving the performance. In the observed case, the 250 k data subset is enough to characterize the properties of the LDS errors.

Machine learning models are often treated with caution because of their "black-box" nature, although they often perform well in engineering. To improve the transparency and interpretability of the machine learning models in this study, partial dependence plots (PDP)60 and individual conditional expectation (ICE)60 plots, commonly used in the field of explainable artificial intelligence,61 are introduced to explain the best HGBM model above. A PDP shows the expected target response as a function of the input features of interest on average, marginalizing over the values of the other input features. The PDP of a single instance is an ICE plot, and the average of all ICE lines is the PDP. PDP and ICE plots provide overall and detailed views, respectively, of the dependencies between a set of input features of interest and the target. 10000 sample points were sampled from the training data and then fed into the HGBM model to construct the single-feature PDPs, as shown in Fig. 10, where fifty ICE plots are also shown.

The single-feature dependencies of the four features (xS, dSZ, α, β) and the target ΔxS are shown in Fig. 10(a)-(d). There is a nearly linear correlation between xS and ΔxS in Fig. 10(a). The PDPs of dSZ, α, and β are relatively flat but resemble linear relationships, as shown in Fig. 10(b)-(d). The ICE plots of dSZ, α, and β are relatively scattered, suggesting that other features may interact with them and that their performance cannot simply be examined independently. The single-feature dependencies of the four features (xS, dSZ, α, β) and the target ΔzS are shown in Fig. 10(e)-(h). Their PDP and ICE plots are more complex curves, indicating relatively significant nonlinear relationships between the features and ΔzS. In the PDP and ICE plots of α and β, i.e., Fig. 10(c), (d), (g), and (h), the target responses ΔxS and ΔzS show relatively obvious fluctuations in the minimum and maximum regions of α and β, which may be caused by the larger measurement errors at larger α and β according to the hardware properties of the LDS. In engineering, α and β are suggested to be restricted to smaller ranges when using the model to predict LDS errors, e.g., α, β ∈ [−0.7 rad, 0.7 rad], to avoid significant prediction fluctuations even if the model is trained with larger ranges of α and β data.

Table 4 Calibration results of every iteration (trained with 250 k sub-sample).

Table 5 Calibration results of every iteration (trained with 2.5 M sub-sample only using HGBM models).

Fig. 10 Single-feature PDPs and ICE plots of features (xS, dSZ, α, β) and targets (ΔxS, ΔzS).
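The plots in Fig. 10 can be reproduced with scikit-learn's inspection API. The sketch below assumes best_hgbm is an HGBM refitted on the ΔxS target alone and X_sample holds the 10000 sampled feature rows; both names are hypothetical.

```python
from sklearn.inspection import PartialDependenceDisplay

# PDP plus 50 ICE curves per feature, matching the layout of Fig. 10(a)-(d).
PartialDependenceDisplay.from_estimator(
    best_hgbm, X_sample,
    features=[0, 1, 2, 3],     # column indices of (x_S, d_SZ, alpha, beta)
    kind="both",               # draw the average PDP and the individual ICE lines
    subsample=50,              # number of ICE lines to display
)
```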
To further investigate the interactions between features, two-feature PDPs are introduced, as shown in Fig. 11 and Fig. 12. The irregular contours of the target responses in the figures indicate that the two features interact with each other rather than being independent. In Fig. 11(a), (c), (e), and (f), the trends of ΔxS along the horizontal axes are more evident than those along the vertical axes, indicating that the features on the horizontal axes have relatively more significant effects on the trends of ΔxS. The overall changing direction of ΔxS in Fig. 11(b) appears to be diagonal, toward the upper right, suggesting that the two variables have a relatively similar ability to influence ΔxS. The trend of Fig. 11(d) varies mainly along the vertical axis, indicating that α, corresponding to the vertical axis, has more significant effects on the trend of ΔxS than dSZ, corresponding to the horizontal axis. The two-feature PDPs in Fig. 12 show more complex contours, which indicate more complex nonlinear relationships between the features and the target ΔzS. The above PDPs and ICE plots improve the transparency and interpretability of the best HGBM model. It is worth noting that the above explanations still belong to interpretations of the LDS error model in the statistical sense.

In order to verify whether the OMM system after hand-eye calibration is effective in measurement applications and to compare IABC with traditional methods, a new integrated reference artifact was designed and adopted, as shown in Fig. 13. Since the surfaces of the real workpieces to be measured may be diverse, reference artifacts with spherical, cylindrical, or planar features were selected for the integrated reference artifact to represent potential measurement scenarios: if a parametric equation like z = f(x, y) is used to represent a surface in R3, the normal direction of the sphere varies with respect to the two parameters x and y, the normal direction of the cylinder varies along only one parameter, while that of the plane remains constant with respect to both parameters. The normal vector variations of the above three geometries can represent the characteristics of most workpieces' surfaces. Besides, standard artifacts corresponding to the above geometries, i.e., standard balls, pin gauges, and gauge blocks, have been produced with high accuracy and are widely used in metrology. The use of standard artifacts helps to ensure the traceability of the experiments.

Fig. 11 Two-feature PDPs of features (xS, dSZ, α, β) and target ΔxS.

Fig. 12 Two-feature PDPs of features (xS, dSZ, α, β) and target ΔzS.
The integrated reference artifact consists of the small standard ball mentioned in the above calibration experiments, a big standard ball, a pin gauge, and stepped gauge blocks composed of two gauge blocks, as shown in Fig. 13(b). They are all made of ZrO2 ceramics with Lambertian surfaces. The radius r of the big ball is the reference, and its nominal value is 12.7075 mm with a sphericity of 1.6 μm. The diameter d of the pin gauge is the reference, and its nominal value is 9.000 mm with a cylindricity of 1 μm. The height h of gauge block 2 in the stepped gauge blocks is the reference, and its nominal value is 10.0000 mm with a deviation within 0.1 μm. The integrated reference artifact was fixed on a 3-axis turntable, as shown in Fig. 13(a). By adjusting the C1, A, or C2 axis, different spatial positions (Position 1, 2, or 3) can be set. Each artifact was scanned multiple times to obtain at least 1 million (1 M) points at each position. Finally, different sizes (10 k, 100 k, 1 M) were subsampled from the original data and applied to test the calibration methods. Besides IABC, three similar traditional methods were adopted for comparison in this study, including Yu's method,11 Bi's method,10 and Yin's method.28

Fig. 13 Verification experiments at three positions with integrated reference artifact.

A comparison case of Yu's method11 and IABC on the big standard ball at Position 3 is shown in Fig. 14. The two methods were trained with the 250 k calibration data in the previous section and tested with 10 k verification data sub-sampled from the big sphere; the nominal radius of the sphere is 12.7075 mm. The points and the fitted sphere via Yu's method11 are shown in Fig. 14(a): the fitted radius is 12.7547 mm with an RMSE of 0.0231 mm, and its radius deviation is 0.0472 mm. The left figure of Fig. 14(b) shows an important intermediate result of IABC, namely the estimated normals of the scattered point cloud from Algorithm 2 mentioned above. The points and the fitted sphere via IABC are shown in the right figure of Fig. 14(c); the radius deviation is 0.0263 mm with an RMSE of 0.0163 mm. Both the radius deviation and the RMSE of IABC are smaller than those of Yu's method.11 This case shows that IABC is more accurate than traditional methods.
The detailed comparisons of the two standard balls between IABC and traditional methods are shown in Fig. 15 and Table 6. The radius deviation and the RMSE of sphere fitting are the metrics used to evaluate the performance of the different methods. The performance of each method is almost the same with different training data and test data, indicating that the calibration methods are not sensitive to the density of the data. Hence, IABC is of interest in practical applications. In terms of the small standard ball, IABC reduces the radius deviations at Position 1 and Position 2 by nearly one order of magnitude, and the radius deviation at Position 3 by about 75% compared with traditional methods. Meanwhile, the RMSEs of IABC are reduced by 10%-40%. As for the big standard ball, IABC reduces the radius deviations by about 50% and the RMSEs by 20%-30% compared with traditional methods. Similar performance is shown across different amounts of training and test data rather than in a single case. Furthermore, even when training IABC with 250 k calibration data and testing it with 1 M verification data, where the number of test data is four times the number of training data, IABC still performs well, as shown in Fig. 15(b) and (d), demonstrating that IABC has good generalization ability. On the other hand, overly large training data, such as 2.5 M points, fail to significantly reduce the deviations and RMSEs compared with the 250 k data. The 250 k training data are enough for the industrial applications in this paper as they already provide acceptable accuracy and efficiency.

Fig. 14 A comparison case of Yu's method11 and IABC on the big standard ball at Position 3.

Fig. 15 Comparisons of standard balls between IABC and traditional methods with different amounts of data.
Comparisons of the pin gauge and the stepped gauge blocks between IABC and traditional methods are shown in Fig. 16, Table 7, and Table 8. The IABC was trained with 250 k calibration data, and the measured data sub-sampled with 10 k, 100 k, and 1 M points at Position 2 and Position 3 were tested. Since many surrounding points around the target reference artifacts were scanned during the actual scanning process, the final results obtained by all methods were cropped using the same bounding boxes to exclude points in the non-target areas when they came from the same positions of the same artifacts, as shown in the top-left thumbnails of Fig. 16(a) and (b). The cropped points shown in Fig. 16(a) are used to fit a cylinder and produce the corresponding RMSE as well as the deviation from the nominal diameter of the pin gauge. The cropped points in Fig. 16(b) are used to fit two planes, i.e., Plane 1 and Plane 2, and produce the corresponding RMSE 1 and RMSE 2. The height h of the stepped gauge blocks is replaced by the distance between the mass centers of Plane 1 and Plane 2; the deviation of h comes from the comparison with the nominal height of gauge block 2, and the RMSE is replaced by mRMSE, the average of RMSE 1 and RMSE 2. For the pin gauge's results, the proposed IABC reduces the diameter deviations by about 50% and the RMSEs by up to 15% compared with traditional methods. For the stepped gauge blocks' results, IABC reduces the height deviations by 25%-50% and the mRMSEs by up to 5% compared with traditional methods. The RMSE reductions of the above two reference artifacts are smaller than those of the standard balls, because the normal vectors of cylinders and planes do not change as significantly as those of spheres, so the compensation effects are smaller. Nevertheless, the diameter and height deviations are effectively reduced, so the IABC is valid.

In addition, an interesting phenomenon is that the results of the traditional methods seem to be equal to each other in the above experiments; they actually have tiny differences in the fifth or even sixth decimal place, which are no longer significant. This is because the underlying principles of the traditional methods are almost identical, and the differences are formal. In detail, the residual functions RTLS and RLLS are both about the radius, and the residual functions with known and unknown radius are all based on the same constraints. For the same data from the physical world, the different forms of the methods converge to minimally different local optimal solutions because their truth values are deterministic, and thus their fitting results appear to be the same.
In summary, the verification experiments demonstrate that IABC is an effective hand-eye calibration method. Compared with traditional hand-eye calibration methods, the measurement deviations are reduced by about 25%-50%, and the fitting RMSEs are reduced by up to 40%. Even when IABC's training data are obviously less than the test data, IABC still performs well. Hence, IABC is a more accurate hand-eye calibration method than the traditional methods. Experiments also show that the additional computation introduced by IABC processes about 200,000 points/s on one CPU thread (test platform: Windows 10, Intel Core i7-9700 3.00 GHz octa-core, 16 GB DDR4 2666 MHz) compared with traditional methods, which is acceptable for most online inspection scenarios with medium scan densities in this study. The inference speed can be even faster if hardware acceleration techniques62 are used to speed up IABC, because the vectorized algorithms provided are inherently conducive to parallel computation on massive data. All the above experiments in this paper were repeated several times, and similar results were obtained.

Table 7 Pin gauge's radius deviations and RMSEs of IABC and traditional methods with different amounts of data.

Table 8 Stepped gauge blocks' height deviations and mRMSEs of IABC and traditional methods with different amounts of data.

From the calibration results and verification results in the above sections, IABC is demonstrated to be more accurate than traditional hand-eye methods. This is mainly due to the following reasons. Firstly, the new objective function of hand-eye calibration contains both analytical parameters and LDS errors, whereas traditional methods contain only analytical parameters, which makes it possible for IABC to further reduce the calibration residuals. Then, the hybrid calibration model composed of two calibration kernels is proposed to solve the objective function: Kernel 1 is an analytical kernel especially designed for solving the analytical parameters, and Kernel 2 is an AutoML kernel especially designed for modeling the LDS errors. The two kernels reflect the characteristics of the two kinds of unknowns, allowing IABC to take advantage of the benefits of machine learning methods without losing the advantages of traditional hand-eye calibration methods. In Kernel 1, the "TLS-LINEAR-TRF" method, which is more robust than other gradient-based algorithms, is proposed to identify the analytical parameters. In Kernel 2, a specific AutoML pipeline containing several advanced machine learning models is designed, increasing the success probability of calibration to a certain extent because the pipeline can experiment with many possible candidate models. In the AutoML pipeline, the competition mechanism is introduced, and the best LDS errors model is selected from various candidate models. The finally selected model best fits the distribution of the LDS errors, thus ensuring the calibrated performance. The AutoML pipeline also reduces the dependence on human experts: even if the distribution of the LDS errors differs, the pipeline can always select a relatively best model rather than being subject to a single model, since the performance of a single model may be affected by different error distributions. To further improve the calibration accuracy, the iteration is introduced to connect Kernel 1 and Kernel 2, allowing IABC to find better hand-eye calibration results.
In addition, to construct the input data of the AutoML kernel, four new kinds of local features and two new kinds of local targets are proposed. Their construction is based on the local characteristics of each point on the LDS rather than on global parameters, so the pipeline can effectively deal with curved surfaces in applications. Finally, to obtain calibration data containing diverse features and targets, the special calibration procedure for IABC is proposed, allowing the AutoML kernel to learn from sufficient and diverse samples, thus ensuring the generalization ability of IABC to a certain extent.

In general, each part of the methodology contributes to the feasibility and progressiveness of IABC, so the whole IABC becomes more accurate than traditional hand-eye methods. However, IABC has limited applicability: it is proposed only for hand-eye calibration and cannot be applied to the body calibration of the robot. Therefore, before using the IABC in this paper, the absolute positioning errors of the measurement machine should be calibrated in advance with other instruments such as interferometers or laser trackers.

In order to improve the accuracy of an LDS-based OMM system, a novel IABC method is proposed to calibrate both the hand-eye relationship and the LDS errors in one process. The standard ball is adopted as the calibration artifact, and thus a new objective function containing hand-eye parameters and LDS errors with spherical constraints is proposed. Then, a hybrid calibration model composed of two calibration kernels is proposed: one is the analytical kernel, and the other is the AutoML kernel. They are connected with stepwise iterations to find the relatively best calibration results.

Hand-eye calibration experiments, verification, and comparison experiments were executed on an OMM system. In the hand-eye calibration experiments, each calibration experiment obtained about 25 million original calibration points, and 108 candidate machine learning models competed in the AutoML pipeline. The top 10 models are all HGBMs, and the performance of the different models in the global leaderboard is ranked as HGBM > MLP > Linear > SVM > ElasticNet. Compared with traditional hand-eye calibration methods, the hand-eye experiments show that IABC reduces the calibration RMSE by about 50%. Verification experiments show that IABC reduces the measurement deviations by about 25%-50% and the RMSEs by up to 40%. Even when the training data are obviously less than the test data, IABC performs well. Therefore, IABC is demonstrated to be a more accurate hand-eye calibration method than traditional hand-eye methods. To promote academic exchanges and industrial progress, the key source code of IABC is also open-sourced. The detailed conclusions of this paper are drawn as follows:

1) A new hand-eye calibration objective function containing both analytical parameters and LDS errors is proposed. The objective function increases the possibility of further reducing the calibration residuals and provides a new idea for designing objective functions in calibration fields.

2) To solve the objective function, the novel IABC method is proposed, containing a hybrid calibration model composed of the analytical kernel and the AutoML kernel. The two kernels' designs reflect the corresponding characteristics of the unknowns. IABC makes it possible for the analytical parameters and LDS errors to be calibrated online in one calibration process.
3) In the AutoML kernel, a specific data-driven AutoML pipeline is proposed to model LDS errors. The pipeline contains several candidate machine learning models with advanced nonlinear modeling capabilities, giving IABC a relatively strong ability to model complex LDS errors. The pipeline can also automatically select the best model for the LDS errors, reducing the dependence on human experts and improving the applicability of IABC.

4) Four new kinds of local features (xS, dSZ, α, β) and two new kinds of local targets (ΔxS, ΔzS) are constructed as input for the AutoML pipeline. They are constructed based on the local characteristics of each point on the LDS rather than on global parameters, and they allow the LDS to measure curved surfaces more accurately.

Nowadays, the LDS is widely used in measurement systems worldwide because of its relatively high measurement efficiency. The IABC proposed in this paper can further improve the measurement accuracy of LDS-based systems without any hardware change. This work provides new ideas for related fields, especially the field of hand-eye calibration. In the future, we will try to upgrade IABC to calibrate all parameters and errors in the kinematic chain of a measurement system.

Declaration of Competing Interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Acknowledgements

This work was supported by the National Natural Science Foundation of China (Nos. 51875406 and 51805365).

Appendix A.

A.1. Forward kinematics

In industrial metrology, the evaluation of Geometric Dimensioning and Tolerancing (GD&T) is carried out in the workpiece frame FW. Hence, all the measured values of the kinematic chain must be converted to FW. Assuming that a point PS on one of the measured profiles obtained via the laser displacement sensor is expressed in FS, the corresponding point in the workpiece frame FW is PW. Their relationship is shown in Eq. (A1) and Table A1; Eq. (A1) is also called the ideal forward kinematic model in this paper.

Table A1 Definition of formulas.

A.2. Residual functions

The compensated PW is denoted as PcW, and R is the residual function. There are two kinds of R, RLLS and RTLS, namely the linear least-squares and the total least-squares residual functions; their formulas are given in Table A2.
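Since the explicit forms of Eq. (A1) and the residual formulas are given only in the referenced equation and Table A2, the following LaTeX sketch shows merely the conventional shapes of such a forward kinematic chain and of the two sphere residuals; the transforms T, fitted center C, and radius r follow standard usage and are not the paper's exact notation.

% Hedged reconstruction: conventional forms only, not Eq. (A1)/Table A2 verbatim.
% Forward kinematic chain: sensor point P_S mapped to the workpiece frame
% through the machine pose and the hand-eye transform.
P_W \;=\; {}^{W}\mathbf{T}_{E}\,{}^{E}\mathbf{T}_{S}\,P_S
% Residual functions for a compensated point P_W^{c} against a sphere
% with fitted center C and radius r:
R_{\mathrm{TLS}}\!\left(P_W^{c}\right) = \bigl\lVert P_W^{c}-C \bigr\rVert_2 - r,
\qquad
R_{\mathrm{LLS}}\!\left(P_W^{c}\right) = \bigl\lVert P_W^{c}-C \bigr\rVert_2^{2} - r^{2}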
A.3. Results of Kernel 1 (Analytical kernel)

In one of the calibration experiments, a total of 25 million original calibration data points were obtained. In the next step, the hybrid calibration model was applied: in each iteration, gradient-based solving algorithms were used to identify the analytical parameters, as shown in Table A3 and Table A4. Different methods were compared before the first iteration, including different residual functions (TLS and LLS), different loss functions (LINEAR loss and HUBER loss), and different solving algorithms (LM and TRF). Additionally, since the radius in the objective function is set as unknown in some traditional calibration methods,2,3 the two cases in which the radius of the residual function was known or unknown were also compared. Table A3 gives the identified results with an unknown radius, while Table A4 gives the identified results with a known radius. In both Table A3 and Table A4, the "method" column follows the form "residual function - loss function - solving algorithm", e.g., TLS-LINEAR-TRF. Further, to verify the performance of the solving methods, each method was applied to different sub-sampled datasets: proportions of 0.1%, 1%, 10%, and 100% of the original dataset were selected, corresponding to 25,000 (25 k), 250,000 (250 k), 2,500,000 (2.5 M), and 25,000,000 (25 M) points. All the algorithms were implemented based on SciPy.

Each method was run 10 times on every dataset, and the initialization parameters of each run were randomly generated from the random seed (seed = 1-10). If the RMSE of a solution was too large, it was considered unsuccessful; in this paper, a TLS solution was considered successful if its RMSE was less than 0.05, and an LLS solution if its RMSE was less than 0.5. The "NOK" column indicates the number of successful solutions among the 10 runs. Most methods could not find a successful solution every time, showing that the initial values affect the solution. Therefore, before the first iteration, the optimal analytical parameters were usually obtained from several tries as the solution with the minimal RMSE. The averaged-time column gives the average time of the 10 runs, as shown in Fig. A1; as the amount of sub-sampled data increased, the solving time increased. When the dataset size was 250 k, most algorithms completed the calculation within 20 s, whereas solving the 25 M dataset took several minutes. Therefore, if a project requires efficiency, the size of the sub-sampled dataset should be a compromise. Table A3 and Table A4 also show the optimal analytical parameters xTS, yTS, zTS, αTS, βTS, γTS with the minimal RMSE over the 10 runs. The differences among the successful solutions were very small, as were their standard deviations (SD) of RMSE, whose magnitudes were smaller than 1 × 10⁻¹⁰.

Table A2 Formulas of residual functions.

Table A3 Identified initial analytical parameters obtained using different methods with unknown radius in residual functions.
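As an illustration of the "residual function - loss function - solving algorithm" naming, the following Python sketch runs a TLS-LINEAR-TRF fit with SciPy's least_squares on synthetic sphere data. Only a sphere fit with an unknown center and a known radius is shown; the paper's full objective additionally contains the hand-eye parameters, so this is a simplified stand-in rather than the actual calibration code.

import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(1)              # seeded, as in the experiments

r_nominal = 9.9969                          # nominal ball radius, mm
true_center = np.array([5.0, -3.0, 10.0])   # synthetic ground truth
dirs = rng.normal(size=(25_000, 3))
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
points = true_center + r_nominal * dirs + rng.normal(0.0, 0.005, (25_000, 3))

def tls_residuals(center, pts, r):
    # Geometric (total least-squares) distance residuals to the sphere.
    return np.linalg.norm(pts - center, axis=1) - r

x0 = rng.uniform(-20.0, 20.0, size=3)       # random initial values
result = least_squares(
    tls_residuals, x0, args=(points, r_nominal),
    method="trf",                            # TRF supports box constraints
    loss="linear",                           # plain least squares
    bounds=(-50.0, 50.0),                    # LM ("lm") cannot take bounds
)
rmse = float(np.sqrt(np.mean(result.fun ** 2)))
print(result.x, rmse)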
Regarding the solutions with unknown radius, shown in Table A3, no difference in RMSE among the different TLS-based methods was noted. For different sub-sample sizes, slight differences among the LLS-based methods were noted, but they were within 0.001. Compared with the nominal radius of 9.9969 mm, the solving accuracy of the LLS was slightly higher than that of the TLS; however, the difference between them was under 0.001 mm, and the difference between all the results and the nominal radius was approximately 0.02 mm. Therefore, it can be considered that there was no significant difference between the TLS and the LLS in the accuracy of r or in the sphere-fitting RMSE.

Table A4 Identified initial analytical parameters obtained using different methods with known radius in residual functions.

Fig. A1 Solving time for unknown and known radius in residual functions at the first iteration.

In every TRF-based method, the variables αTS, βTS, and γTS were constrained within (-π, π] and r ∈ [0.75r, 1.25r], and the values solved by the TRF-based methods always met the constraints. The LM-based methods produced αTS, βTS, and γTS whose absolute values were greater than 2π, and a radius r with a negative value, which is not in line with the physical meanings. Besides, the variables αTS, βTS, and γTS of TLS-LINEAR-LM changed greatly between Table A3 and Table A4. Because the LM is an iterative solution method, every LM solution needs initial values; since the true values of the analytical parameters were not known in advance, the initial values of the LM and TRF were set randomly. However, the LM algorithm is also unconstrained, so its solutions may vary considerably with different initial values and may even violate the physical meanings. Conversely, the TRF-based algorithms can constrain the range of the analytical parameters, so their solutions were relatively stable. Admittedly, the LM values could still be used after further processing, but the TRF had certain advantages in terms of convenience and robustness. Regarding the loss function in the TRF-based methods, there was no significant difference in r between the LINEAR loss and the HUBER loss, showing that there was not much noise in the dataset of our experiments. The HUBER-based methods had slightly higher accuracy for r, but only among the LLS-based methods. As for the solutions with known radius (Table A4), their performance was similar to that of the solutions with unknown radius, apart from r itself: most of the analytical parameters were similar, but the RMSE in the known-radius condition was slightly greater.

Since r should be known when estimating the sensor errors in Section 3, the residual function with a known radius was applied in this paper. Furthermore, to deal with potentially severe working conditions, the TLS-LINEAR-TRF method was set as the default solving method. In fact, in traditional hand-eye calibration, the solutions above would be treated as the final hand-eye calibration results.

In each iteration, based on the present analytical parameters, four feature vectors (xS, dSZ, α, and β) and two target vectors (ΔxS and ΔzS) were estimated via the construction in Algorithm 1. Fig. A2 shows the best estimations of the sensor errors ΔxS and ΔzS with the minimum sphere-fitting RMSE over several iterations with the 250 k sub-sampled data. ΔxS lay mainly within the (-0.005 mm, 0.005 mm) interval, while ΔzS ranged between -0.05 mm and 0.05 mm. Hence, ΔxS was nearly an order of magnitude smaller than ΔzS, because the α of the laser displacement sensor is limited while dSZ is longer.
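As Algorithm 1 itself is not reproduced here, the following sketch only illustrates a plausible data layout for the per-point features and targets; the geometric recipes for α and β below are placeholders and should not be read as the paper's definitions.

import numpy as np

def build_automl_dataset(xs, dsz, normals, rays, residual_xz):
    """Assemble the per-point feature/target arrays for the AutoML kernel.

    xs: in-profile coordinate of each point; dsz: measured distance along
    the sensor's z axis; normals: locally fitted surface normals; rays:
    unit laser directions; residual_xz: per-point (ΔxS, ΔzS) targets.
    """
    # Placeholder incidence angles between the laser ray and the local
    # surface normal (illustrative definitions only).
    cosang = np.einsum("ij,ij->i", normals, rays)
    alpha = np.arccos(np.clip(np.abs(cosang), 0.0, 1.0))
    beta = np.arctan2(normals[:, 0], normals[:, 2])
    X = np.column_stack([xs, dsz, alpha, beta])   # features, shape (N, 4)
    y = np.asarray(residual_xz)                   # targets,  shape (N, 2)
    return X, y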
A.4. Results of Kernel 2 (AutoML kernel)

With the estimated features and targets, the dataset was input into the automatic machine learning pipeline in every iteration. In the pipeline, the original features were first augmented by polynomial features with orders ranging from 1 to 3 to form new features; whether to include interaction terms and whether to include a bias term were also considered when constructing the new features. The candidate machine learning models in the pipeline were then constructed as follows: the Linear model; the ElasticNet model with the hyperparameter l1 ratio in [0.1, 0.5, 0.8]; the SVM model with the linear kernel, C ∈ [0.1, 1], and ε ∈ [0.01, 0.1]; the MLP with hidden layer sizes in [10, 50, 20, 200] and max iterations in [500, 1000], using the rectified linear unit (ReLU) activation for the hidden layer, the "Adam" solver for weight optimization, a learning rate of 0.001, and an l2 regularization strength of 0.0001; and the HGBM model with the l2 regularization parameter in [0, 0.01, 0.1]. All random states were set to 1, all the models were implemented based on Scikit-Learn, and other default parameters can be found in its documentation.5 The MLP and SVM models were exceptions in that they were trained on the original features, owing to their strong nonlinear fitting ability; all the other models were trained on the new polynomial features.

There were 108 candidate models in total, and in every iteration they were all trained via 3-fold cross-validation and ranked for competition. Table A5 shows the full ranking with the minimum sphere-fitting RMSE on the 250 k sub-sampled data. The HGBM had the highest mean test score, which equals the mean negative RMSE on the test dataset in cross-validation. Hence, the model with model Id "107" was selected as the best estimator; its l2 regularization parameter was 0.1, and the order of its polynomial features was 3, without interaction terms or bias. The top 10 models were all HGBMs, with scores ranked as HGBM > MLP > Linear > SVM > ElasticNet. Compared to the RMSE of approximately 0.01, the HGBM, MLP, Linear, and SVM models all effectively reduced its value, and the best HGBM reduced the RMSE to approximately 0.010 mm. However, not all models performed well: the ElasticNet and some of the SVM models yielded worse RMSE values.

The standard deviations (SD) of all the models were within 0.001 mm, showing that their performance was relatively stable and that the quality of the experimental data was good. As for the training time, the fastest model was the ElasticNet, followed by the Linear, HGBM, MLP, and SVM. The best HGBM model consumed about 5 s per fold with the 250 k data, faster than the MLP and SVM models and generally meeting industrial demands. Therefore, the HGBM had comprehensive advantages in terms of both accuracy and efficiency.

Fig. A2 Best estimations of sensor errors ΔxS and ΔzS with minimum RMSE of sphere fitting in several iterations.

Table A5 Leaderboard of AutoML pipeline with 108 candidate machine learning models (trained with 250 k sub-sample).
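A minimal Scikit-Learn sketch of such a competition is given below, mirroring the hyperparameter grids listed above. The exact enumeration of all 108 candidates and the paper's pipeline code are abbreviated here, and y denotes a single target column (e.g., ΔzS), each target being modeled separately.

from sklearn.pipeline import Pipeline
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression, ElasticNet
from sklearn.svm import LinearSVR
from sklearn.neural_network import MLPRegressor
from sklearn.ensemble import HistGradientBoostingRegressor
from sklearn.model_selection import GridSearchCV

poly_grid = {"poly__degree": [1, 2, 3],
             "poly__interaction_only": [True, False],
             "poly__include_bias": [True, False]}
cv_kwargs = dict(scoring="neg_root_mean_squared_error", cv=3)

candidates = {
    "Linear": GridSearchCV(
        Pipeline([("poly", PolynomialFeatures()), ("m", LinearRegression())]),
        poly_grid, **cv_kwargs),
    "ElasticNet": GridSearchCV(
        Pipeline([("poly", PolynomialFeatures()), ("m", ElasticNet(random_state=1))]),
        {**poly_grid, "m__l1_ratio": [0.1, 0.5, 0.8]}, **cv_kwargs),
    # Per the text, SVM and MLP are trained on the original features.
    "SVM": GridSearchCV(
        LinearSVR(random_state=1),
        {"C": [0.1, 1.0], "epsilon": [0.01, 0.1]}, **cv_kwargs),
    "MLP": GridSearchCV(
        MLPRegressor(activation="relu", solver="adam",
                     learning_rate_init=0.001, alpha=0.0001, random_state=1),
        {"hidden_layer_sizes": [(10,), (50,), (20,), (200,)],
         "max_iter": [500, 1000]}, **cv_kwargs),
    "HGBM": GridSearchCV(
        Pipeline([("poly", PolynomialFeatures()),
                  ("m", HistGradientBoostingRegressor(random_state=1))]),
        {**poly_grid, "m__l2_regularization": [0.0, 0.01, 0.1]}, **cv_kwargs),
}

def compete(X, y):
    # Fit every candidate family and return a leaderboard, best first.
    board = [(gs.fit(X, y).best_score_, name, gs.best_estimator_)
             for name, gs in candidates.items()]
    return sorted(board, key=lambda t: t[0], reverse=True)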