
    Exploring 2D projection and 3D spatial information for aircraft 6D pose

    2023-09-02 10:17:56
    CHINESE JOURNAL OF AERONAUTICS, 2023, Issue 8

    Daoyong FU, Songchen HAN, BinBin LIANG, Xinyang YUAN, Wei LI

    School of Aeronautics and Astronautics, Sichuan University, Chengdu 610065, China

    KEYWORDS: 2D and 3D information; 6D pose regression; Aircraft 6D pose estimation; End-to-end network; RGB image

    Abstract: 6D pose estimation of the aircraft from a single RGB image is important for safe take-off and landing. Due to the large scene and large depth range, existing pose estimation methods achieve unsatisfactory accuracy. To achieve precise 6D pose estimation of the aircraft, an end-to-end method using an RGB image is proposed. In the proposed method, the 2D and 3D information of the keypoints of the aircraft is used as intermediate supervision, and the 6D pose information of the aircraft is explored from this intermediate information. Specifically, an off-the-shelf object detector is utilized to detect the Region of Interest (RoI) of the aircraft to eliminate background distractions. The 2D projection and 3D spatial information of the pre-designed keypoints of the aircraft is predicted by the keypoint coordinate estimator (KpNet). The proposed method is trained in an end-to-end fashion. In addition, to deal with the lack of related datasets, this paper builds the Aircraft 6D Pose dataset for training and testing, which captures the take-off and landing process of three types of aircraft from 11 views. Compared with the latest Wide-Depth-Range method on this dataset, the proposed method improves the average 3D distance of model points metric (ADD) and the 5° and 5 m metric by 86.8% and 30.1%, respectively. Furthermore, the proposed method runs in 9.30 ms, 61.0% faster than YOLO6D at 23.86 ms.

    1. Introduction

    The correct pose is an important guarantee for the safe take-off and landing of the aircraft. Excessive elevation angle, excessive speed, and deviation from the runway are all likely to cause irreparable accidents. The translation (x, y, z) (position) and rotation (φ, θ, γ) (attitude angles) are utilized to represent the 6D pose information of the aircraft. Usually, the aircraft obtains its own 6D pose information through the Global Positioning System (GPS) and the inertial navigation system installed on the body. However, when these sensors fail and cannot be used, the safe take-off and landing of the aircraft will face a major challenge.1 Therefore, it is necessary to obtain the 6D pose information of the aircraft during take-off and landing by other means.

    With the rapid development of deep learning, many deep-learning-based solutions have been proposed in various fields, and the field of object pose estimation is no exception. According to the strategy adopted, deep-learning-based 6D pose estimation methods are mainly divided into two categories: one-stage methods and two-stage methods. One-stage methods analyze the image and directly estimate the 6D pose of the object. Two-stage methods first predict 2D-3D correspondences, and then use the Random Sample Consensus-based (RANSAC-based) Perspective-n-Point (PnP) algorithm2 to estimate the 6D pose of the object through these correspondences. Although the latter achieve higher accuracy than the former, they rely on the detection of 2D-3D correspondences. In addition, due to the lack of depth information, the latter yield poor accuracy on the 3D translation, so some methods use RGB-D images as input for 6D pose estimation.3-4 However, for the large scenes and large depths faced by the aircraft 6D pose estimation problem, acquiring a depth map is not feasible. This is mainly because the distance from the camera to the aircraft can reach hundreds of meters or even kilometers, which is beyond the effective range of a depth camera.

    Compared with two-stage methods, one-stage methods pay more attention to the direct estimation of the 6D pose and reduce the dependence on object depth information. In addition, the end-to-end training strategy also makes such methods easier to train. However, this kind of method ignores the feature learning of the object,5 which leads to poor accuracy. Inspired by the two-stage methods, some scholars have tried to replace the PnP algorithm with a learned network,6 whereas such iterative approaches rely heavily on good coordinate initialization.

    To obtain high-precision aircraft 6D pose information, this paper proposes an end-to-end 2D + 3D information based aircraft 6D pose estimation method. When the keypoints of the aircraft are described in 3D space (such as the camera coordinate system), they are called 3D keypoints, whose positions are called 3D coordinates. When the keypoints are described in the 2D image (the pixel coordinate system), they are called 2D keypoints, whose positions are called 2D coordinates. Inspired by the high-precision estimation of the rotation matrix in two-stage methods, the proposed method adopts the strategy of keypoint acquisition followed by pose estimation, which can be regarded as intermediate supervision.7 Compared with two-stage methods, the main difference is that the proposed method not only detects the 2D projection position of each keypoint in the image, but also predicts the 3D spatial position of each keypoint. The 2D projection information yields high-precision rotation, and the 3D spatial position information helps the estimation accuracy of the translation matrix thanks to the supplementary depth information. Different from previous one-stage methods, the proposed method predicts the 2D and 3D information of the keypoints, from which the 6D pose information of the aircraft is explored. Due to the lack of relevant datasets on aircraft take-off and landing, this paper uses Unreal Engine 4 to reproduce aircraft take-off and landing with high fidelity and produces a 6D pose dataset of the aircraft for training and testing.

    The contributions of this paper are summarized as follows: (A) This paper proposes a 2D + 3D information based end-to-end 6D pose estimation method for the aircraft. We guide the network to learn the patterns in the image to recover the 2D projection coordinates of the keypoints in the image and the 3D spatial coordinates of the keypoints in 3D space, and we fuse this 2D projection information and 3D spatial position information to estimate the 6D pose of the aircraft. (B) The Aircraft 6D Pose dataset is built, which can be used to estimate the 6D pose of the aircraft during take-off and landing.

    The rest of this paper is organized as follows: Section 2 introduces the related work, and Section 3 clarifies the method proposed in this paper, detailing how to fuse the 2D and 3D information of aircraft keypoints. Section 4 introduces the dataset built in this paper, as well as the relevant details and results of the experiments. Section 5 draws the conclusion.

    2. Related work

    According to the strategy used, 6D pose estimation methods can be divided into two categories: one-stage methods and two-stage methods. The former directly regress the 6D pose of the object by analyzing a single input RGB image. The latter first detect 2D-3D correspondences, and then use the RANSAC-based PnP algorithm to estimate the 6D pose of the object from these correspondences.

    (1) One-stage methods. Early one-stage methods usually suffer from rotational nonlinearity,8 which can be effectively alleviated by combining the PnP algorithm or the RANSAC method.2 One-stage methods initially aim to take pictures as input (an RGB image, an RGB-D image, or a combination of the two) and directly regress the pose information of the 3D object. However, pose annotation of 3D objects is difficult, especially when depth images are not available. Compared with pose data, the annotation of 2D bounding boxes is relatively simple, so Yang et al.9 used a weakly supervised method10 to learn the segmentation map of the object, and utilized the segmentation mask as prior knowledge for pose estimation. When performing pose estimation, their DSC-PoseNet predicts the pose of the object by comparing the similarity between the segmentation mask map and the rendered visible object mask map; the algorithm is even comparable to several fully supervised methods. To solve for accurate pose information of the 3D model in pictures taken when the extrinsic parameters of multiple cameras are unknown, Labbé et al.11 proposed the CosyPose method, which first matches a single object across different views and uses this object to estimate the relative pose between cameras. The RANSAC method is utilized to optimize consistency across the scene. Finally, a global optimization process is utilized to eliminate object poses from noisy single views. Stevšič and Hilliges6 argued that most existing methods adopt a non-iterative approach, which does not account for the need for high-frequency features when accurately aligning the 3D model with the 2D image. At the same time, iterative optimization methods such as DeepIM12 leave large room for improvement. Therefore, to improve the pose estimation accuracy, Stevšič and Hilliges6 used an iterative method to identify and utilize spatial information to improve the pose refinement process.

    (2) Two-stage methods. Two-stage pose estimation methods based on deep learning usually use point features (or keypoints) as an intermediate step for pose estimation. This kind of method assumes that a deep network can accurately detect the exact positions of the keypoints (markers) of the three-dimensional object in the two-dimensional image, and that the keypoints in these two-dimensional images can be used to regress the pose information of the three-dimensional object.13 The eight vertices of the 3D bounding box (3D bbox) of the object (sometimes together with the centroid of the 3D bounding box) are used by many researchers to build bridges between 2D-3D correspondences, as in YOLO6D14 and BB8.15 There are also methods that use points on the 3D model to construct 2D-3D correspondences, such as PVNet.5 However, this type of pose estimation method usually trains on a regression surrogate (such as keypoints), which does not necessarily reflect the actual pose of the 3D object solved by the PnP algorithm; the reason is that averaging the error may cause some outlier 2D-3D correspondences to be judged correct. At the same time, due to the lack of depth information of the object, the accuracy of this kind of method drops sharply for aircraft in large scenes with large depth.

    Based on the above analysis, the existing methods do not make full use of the 3D information of the object keypoints. However, generating 3D information without a depth map as input is very helpful for estimating the 6D pose of the aircraft.

    3. Method

    The 3D coordinates of the keypoints on the aircraft body contain the position information and the attitude information of the aircraft. To make full use of this information to estimate the 6D pose of the aircraft, this paper proposes an aircraft 6D pose estimation method based on the 2D + 3D coordinates of the keypoints, as shown in Fig. 1. The object detector (DetectNet) is used to obtain the position of the aircraft in the image, and the Region of Interest (RoI) is then sent to the keypoint coordinate estimator (KpNet) to estimate the 2D + 3D coordinates of each keypoint. Finally, this 2D and 3D information is sent to the Perspective-n-Point solver (PnPNet) to estimate the 6D pose of the aircraft, i.e., the rotation matrix (R) and the translation matrix (T).

    3.1. Selection of 3D keypoints

    The liveries on the fuselages of different aircraft differ, which makes it impossible to select keypoints using aircraft texture features. Compared with texture features, the geometric features of the aircraft are more suitable for defining keypoints. For these reasons, this paper selects the corner points and connection points on the aircraft body as the keypoints of the aircraft, namely, the nose of the aircraft, the left-wing tip, the right-wing tip, the right horizontal tail tip, the left horizontal tail tip, the vertical tail tip, the tail of the aircraft, the connection point between the wing and the fuselage, and the connection point between the horizontal stabilizer and the fuselage, as shown with the blue points in Fig. 2. In addition, the body coordinate system origin O is selected to represent the center of the aircraft, as shown with the red point in Fig. 2.

    Fig. 2 Selected keypoints of the aircraft.

    Since the aircraft body is a rigid structure, the relative positional relationship between keypoints is fixed; that is, when the origin of the body coordinate system and the type of the aircraft are known, the position of any point on the aircraft body is known. Thus, the 3D coordinates of each keypoint in the body coordinate system can be determined from the known aircraft type, as illustrated in the sketch below.
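    As a concrete illustration of this rigid-body lookup, the sketch below stores fixed body-frame keypoint coordinates per aircraft type and maps them into the camera frame with the 6D pose. The numeric values and the BODY_KEYPOINTS table are hypothetical placeholders, not the paper's CAD measurements.

    ```python
    import numpy as np

    # Hypothetical per-type keypoint table: aircraft type -> (N, 3) body-frame
    # coordinates in meters, ordered nose, left-wing tip, right-wing tip, ...
    BODY_KEYPOINTS = {
        "A320": np.array([
            [18.8, 0.0, 0.0],     # nose (placeholder value)
            [-3.5, -17.0, 0.0],   # left-wing tip (placeholder value)
            [-3.5, 17.0, 0.0],    # right-wing tip (placeholder value)
            # ... remaining keypoints
        ]),
    }

    def keypoints_in_camera_frame(aircraft_type, R, T):
        """Map the fixed body-frame keypoints into the camera frame.

        R: (3, 3) rotation matrix; T: (3,) translation vector (the 6D pose).
        """
        p_body = BODY_KEYPOINTS[aircraft_type]  # (N, 3), fixed for a rigid body
        return p_body @ R.T + T                 # p_cam = R p_body + T
    ```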

    3.2. Parameterization of 6D pose

    The rotation matrix R encodes the attitude angles of the aircraft. Common representations, such as unit quaternions,16 log quaternions,17 and Lie algebra-based vectors,18 are utilized to represent the rotation matrix. However, they cause errors close to the discontinuities. Thus, this paper adopts the continuous 6D representation method proposed by Zhou et al.19:

    The rotation matrix R can be obtained easily by the following expression:

    where vn(·) represents the normalization operation.
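    For reference, in Zhou et al.'s formulation,19 the 6D representation consists of the first two columns a1 and a2 of R, and the full rotation matrix is recovered by a Gram-Schmidt-like orthogonalization. The standard form of this recovery, matching the normalization operator vn(·) above, is:

    $$
    \mathbf{b}_1 = v_n(\mathbf{a}_1), \quad
    \mathbf{b}_2 = v_n\big(\mathbf{a}_2 - (\mathbf{b}_1 \cdot \mathbf{a}_2)\,\mathbf{b}_1\big), \quad
    \mathbf{b}_3 = \mathbf{b}_1 \times \mathbf{b}_2, \quad
    \mathbf{R} = [\,\mathbf{b}_1 \;\; \mathbf{b}_2 \;\; \mathbf{b}_3\,]
    $$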

    The translation matrix T represents the position of the aircraft. To regress a highly accurate aircraft position, this paper adopts the Scale-Invariant Translation Estimation (SITE) method,20 sketched below.
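    SITE, introduced in CDPN by Li et al.,20 avoids regressing T directly. For a detected RoI with center (cx, cy) and size s that is zoomed by a ratio r before entering the network, the network predicts scale-invariant quantities roughly of the form:

    $$
    \Delta_x = \frac{o_x - c_x}{s}, \qquad
    \Delta_y = \frac{o_y - c_y}{s}, \qquad
    \Delta_z = \frac{t_z}{r}
    $$

    where (ox, oy) is the 2D projection of the object center and tz its depth; T is then recovered by back-projecting (ox, oy, tz) through the camera intrinsics. The exact normalization used in this paper may differ in detail.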

    Fig. 1 End-to-end aircraft 6D pose estimation method based on 2D + 3D information of keypoints.

    3.3. Network architecture

    This paper uses intermediate supervision to guide the neural network to predict the 2D + 3D coordinates of keypoints on the aircraft body, and achieves the 6D pose estimation of the aircraft by analyzing the 2D + 3D information of these keypoints. Inspired by the literature,21 the aircraft pose estimation network based on the 2D + 3D information of keypoints mainly includes three modules, i.e., DetectNet, KpNet, and PnPNet, which are used for object detection, 2D + 3D coordinate estimation of the keypoints, and 6D pose estimation of the aircraft, respectively. Fig. 3 shows the network architecture of the proposed method.

    (1) Object detector (DetectNet). This paper uses an off-the-shelf object detector (Faster R-CNN)22 for aircraft detection to obtain the position of the aircraft in the image, namely the Region of Interest (RoI). The corresponding RoI is enlarged into a square RoI, which is then scaled to a size of 256 × 256 × 3 and sent to KpNet to estimate the 2D + 3D coordinates of the keypoints. To improve robustness during training, this paper uniformly translates the position of the aircraft in the RoI and scales the size of the RoI by up to 25%. During testing, the RoI with the size of 256 × 256 is directly sent to KpNet for 2D + 3D coordinate estimation of the keypoints. The ground truth of the 2D bounding box (2D bbox) of the aircraft in the image is obtained by manual labeling. A minimal sketch of this square-RoI preprocessing is shown below.
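    The sketch below illustrates the square-RoI preprocessing described above, assuming a detector box in [x1, y1, x2, y2] pixel format; the enlargement factor and the border-clamping policy are assumptions.

    ```python
    import cv2

    def square_roi(image, bbox, out_size=256, enlarge=1.25):
        """Crop an enlarged square RoI around a detected box and resize it for KpNet."""
        x1, y1, x2, y2 = bbox                      # detector output, pixel coords
        cx, cy = (x1 + x2) / 2.0, (y1 + y2) / 2.0  # box center
        half = max(x2 - x1, y2 - y1) * enlarge / 2.0
        h, w = image.shape[:2]
        # Clamp the square to the image bounds, then crop and resize.
        xa, ya = int(max(0, cx - half)), int(max(0, cy - half))
        xb, yb = int(min(w, cx + half)), int(min(h, cy + half))
        return cv2.resize(image[ya:yb, xa:xb], (out_size, out_size))
    ```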

    (2) Keypoint coordinate estimator (KpNet). ResNet-34 is utilized as the backbone of KpNet. The 256 × 256 × 3 RoI sent by DetectNet is first analyzed by the backbone, yielding feature maps of size 512 × 8 × 8. After three up-sampling operations, the feature maps are mapped to size 41 × 64 × 64 as the output of KpNet. The output of KpNet includes three parts. The first part is the predicted mask M with the size of 1 × 64 × 64 (its ground truth is obtained by manual labeling); it is utilized to penalize wrongly detected points when calculating the loss, as shown in Eq. (7). The second part is the predicted 2D position of all keypoints of the aircraft in the image, corresponding to the output maps Kp_2D with the size of 10 × 64 × 64. Kp_2D comprises 10 maps, one per keypoint of the aircraft; each element of a map gives the probability that the keypoint occurs at this position, as shown in Fig. 4(a). Kp_2D adopts the Gaussian distribution that is widely utilized to represent the position of a keypoint in the image.23 The last part is the predicted 3D position of all keypoints in the camera coordinate system, corresponding to the output maps Kp_3D with the size of 30 × 64 × 64. Kp_3D comprises 30 maps: the first 10 maps carry the predicted x coordinates of the 3D coordinates of the 10 keypoints, as shown in Fig. 4(b), and the next 20 maps carry the predicted y and z coordinates of the 10 keypoints, respectively. Inspired by depth maps and contour lines, Kp_3D adopts a binary map, since a Gaussian distribution is not suitable for representing discontinuous 3D coordinates. Eq. (5) and Eq. (6) define how the maps encode the 2D and 3D positions of the keypoints; a sketch of this ground-truth construction is shown below.
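    The sketch below builds ground-truth targets consistent with the description above: a Gaussian heatmap of spread σ around each keypoint's 2D projection for Kp_2D, and a binary-style map for Kp_3D whose pixels within radius r of the projection hold one 3D coordinate value and are 0 elsewhere. The exact forms of Eq. (5) and Eq. (6) may differ; this is an illustration, not the paper's definition.

    ```python
    import numpy as np

    def gt_kp2d_map(u, v, size=64, sigma=3.0):
        """Gaussian heatmap for one keypoint's 2D projection (u, v) on the output grid."""
        ys, xs = np.mgrid[0:size, 0:size]
        return np.exp(-((xs - u) ** 2 + (ys - v) ** 2) / (2.0 * sigma ** 2))

    def gt_kp3d_map(u, v, coord_value, size=64, radius=3.0):
        """Binary-style map: pixels near (u, v) carry one 3D coordinate value."""
        ys, xs = np.mgrid[0:size, 0:size]
        near = (xs - u) ** 2 + (ys - v) ** 2 <= radius ** 2
        return np.where(near, coord_value, 0.0)
    ```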

    Fig. 3 Network architecture.

    Fig. 4 Output maps of KpNet.

    Kp_2D carries the accurate 2D projection information of the keypoints, while Kp_3D combines rough 2D projection information with accurate 3D spatial information. The 2D and 3D information in Kp_2D and Kp_3D is concatenated and then sent into PnPNet to estimate the 6D pose of the aircraft.

    (3) Perspective-n-Point solver (PnPNet). Kp_2D and Kp_3D are sent as inputs into PnPNet, which consists of three convolutional layers and four fully connected layers. The last two fully connected layers regress the rotation matrix and the translation matrix, respectively. PnPNet parses the 2D projection information and 3D spatial information in Kp_2D and Kp_3D to obtain the 6D pose of the aircraft, i.e., R and T. The 6D pose of the aircraft describes the transformation between the body coordinate system and the camera coordinate system, i.e., the product of the 6D pose of the aircraft in the world coordinate system and the extrinsic parameters of the camera. A minimal sketch of such a module is shown below.
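    The sketch below follows the stated layer counts (three convolutional layers and four fully connected layers, the last two acting as the rotation and translation heads). The channel widths, kernel sizes, strides, and the 40-channel input (10 Kp_2D maps + 30 Kp_3D maps) are assumptions.

    ```python
    import torch
    import torch.nn as nn

    class PnPNet(nn.Module):
        """3 conv + 4 FC layers; the last two FC layers regress R (6D) and T."""

        def __init__(self, in_ch=40):  # 10 Kp_2D maps + 30 Kp_3D maps
            super().__init__()
            self.conv = nn.Sequential(
                nn.Conv2d(in_ch, 64, 3, stride=2, padding=1), nn.ReLU(inplace=True),
                nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(inplace=True),
                nn.Conv2d(128, 128, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            )  # 64 x 64 input -> 8 x 8 feature maps
            self.fc = nn.Sequential(
                nn.Linear(128 * 8 * 8, 512), nn.ReLU(inplace=True),
                nn.Linear(512, 256), nn.ReLU(inplace=True),
            )
            self.head_rot = nn.Linear(256, 6)    # continuous 6D rotation (Zhou et al.)
            self.head_trans = nn.Linear(256, 3)  # SITE translation parameters

        def forward(self, kp2d, kp3d):
            x = torch.cat([kp2d, kp3d], dim=1)   # (B, 40, 64, 64)
            x = self.fc(self.conv(x).flatten(1))
            return self.head_rot(x), self.head_trans(x)
    ```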

    (4) Loss function. The network predicts the 2D and 3D positions of the keypoints and estimates the 6D pose of the aircraft. Thus, the overall loss includes two parts, Loss_2D-3D and Loss_pose, as shown in Eq. (7) and Eq. (8). In addition, the Mean Squared Error (MSE) loss is utilized to train KpNet; a sketch of this composition is shown below.
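    Only the structure of Eq. (7) and Eq. (8) is described in the text, so the sketch below assumes a plain composition: MSE terms on the mask, Kp_2D, and Kp_3D maps, with the Kp_3D term restricted to the object region via the ground-truth mask (matching the statement that M penalizes wrongly detected points), plus MSE pose terms. The masking and weighting details are assumptions.

    ```python
    import torch.nn.functional as F

    def total_loss(pred, gt, w_pose=1.0):
        """Loss_2D-3D + Loss_pose, assuming MSE throughout (weights are guesses)."""
        loss_2d3d = (
            F.mse_loss(pred["mask"], gt["mask"])
            + F.mse_loss(pred["kp2d"], gt["kp2d"])
            # Restrict the 3D term to the object region with the ground-truth mask.
            + F.mse_loss(pred["kp3d"] * gt["mask"], gt["kp3d"] * gt["mask"])
        )
        loss_pose = F.mse_loss(pred["rot6d"], gt["rot6d"]) \
                  + F.mse_loss(pred["trans"], gt["trans"])
        return loss_2d3d + w_pose * loss_pose
    ```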

    4. Experiments

    4.1. Dataset

    To overcome the lack of a dataset for estimating the 6D pose of the aircraft during take-off and landing, this paper creates a dataset called the Aircraft 6D Dataset. Unreal Engine 4 is utilized to build a 1:1 model of Chengdu Shuangliu International Airport (ICAO: ZUUU) and three high-precision aircraft models, including the Airbus A320 and Airbus A350, as shown in Fig. 5. Eleven cameras are installed at Chengdu Shuangliu International Airport to capture images of the aircraft taking off or landing on the runway; the camera distribution is shown in Fig. 6. The position and attitude angle data of the aircraft during take-off and landing come from the state records of the aircraft when the captain uses the flight simulator for recurrent training. From the 11 viewpoints (cameras), images of the three types of aircraft taking off or landing are recorded, and the mapping relationship between the body coordinate system and the camera coordinate system is recorded at each moment. The Aircraft 6D Dataset contains 17,460 images with the size of 1920 × 1000. All images are split at a ratio of 3:1 for training and testing.

    4.2. Implementation details

    PyTorch24 is utilized to train and test the proposed method, which runs on an Intel Core i7-9700 CPU, 16 GB RAM, and a single RTX 2080Ti with 11 GB of memory. The initial learning rate is 10^-4, and a cosine schedule25 is adopted to adjust it. During training, the Ranger optimizer26-28 is used, and the proposed method is trained for 120 epochs. The proposed method includes three modules. We utilize Faster R-CNN as DetectNet. The backbone of KpNet is ResNet-34, followed by three convolutional layers and three up-sampling layers. PnPNet consists of three convolutional layers and four fully connected layers. The input and output sizes of each module are described in Section 3.3. A sketch of this training configuration is shown below.
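    The sketch below mirrors the stated schedule (initial learning rate 1e-4, cosine decay, 120 epochs). Ranger is a third-party optimizer, so AdamW is swapped in here as a stand-in; model, train_loader, and total_loss are assumed to be defined elsewhere.

    ```python
    import torch
    from torch.optim.lr_scheduler import CosineAnnealingLR

    EPOCHS = 120
    optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)  # stand-in for Ranger
    scheduler = CosineAnnealingLR(optimizer, T_max=EPOCHS)      # cosine schedule

    for epoch in range(EPOCHS):
        for batch in train_loader:
            optimizer.zero_grad()
            loss = total_loss(model(batch["roi"]), batch)  # see the loss sketch above
            loss.backward()
            optimizer.step()
        scheduler.step()  # decay the learning rate once per epoch
    ```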

    4.3. Evaluation metrics

    In this paper, three common metrics are utilized to evaluate the proposed method, i.e., the average 3D distance of model points metric (ADD),3,29 the 5° and 5 cm metric,30-31 and the 2D projection metric (2D Proj).32 Specifically, e_ADD calculates the average 3D error between the 3D coordinates of the keypoints transformed by the estimated pose (R + T) and the ground truth 3D coordinates of the keypoints. Eq. (9) illustrates the ADD metric:
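    Consistent with this description, the standard form of the ADD metric over the m model points x ∈ M is:

    $$
    e_{\mathrm{ADD}} = \frac{1}{m}\sum_{\mathbf{x}\in\mathcal{M}}
    \big\| (\mathbf{R}\mathbf{x}+\mathbf{T}) - (\mathbf{R}^{*}\mathbf{x}+\mathbf{T}^{*}) \big\|_2
    $$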

    where R* and T* denote the ground truth rotation and translation. The ADD metric judges whether this error is less than 5% of the object's diameter.

    Since the aircraft is 100-400 times the size of the objects for which the 5 cm metric is suitable, the 5 cm threshold is too small for the aircraft and is therefore adjusted. In this paper, the symbols 5° and 5 m are used for the 5° and 5 m metric, which requires the rotation error and the translation error to be less than 5° and 5 m, respectively.

    Fig. 5 Three aircraft models, including the Airbus A320 and Airbus A350.

    The 2D projection metric (e_2Dproj) calculates the distance between the 2D projection coordinates of the keypoints under the estimated pose (R + T) and the ground truth 2D projection coordinates of the keypoints. Eq. (10) illustrates the 2D projection metric:
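    With π(·) denoting the perspective division that maps a camera-frame point to pixel coordinates, the standard form consistent with this description is:

    $$
    e_{\mathrm{2Dproj}} = \frac{1}{m}\sum_{\mathbf{x}\in\mathcal{M}}
    \big\| \pi\big(\mathbf{K}(\mathbf{R}\mathbf{x}+\mathbf{T})\big) - \pi\big(\mathbf{K}(\mathbf{R}^{*}\mathbf{x}+\mathbf{T}^{*})\big) \big\|_2
    $$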

    where K is the camera intrinsic matrix.

    4.4. Comparison with state-of-the-art methods

    To demonstrate the effectiveness of the proposed method, a performance comparison between the proposed method and state-of-the-art methods is carried out, including YOLO6D,14 PVNet,5 Single-Stage,33 segmentation-driven,7 and Wide-Depth-Range.34 As shown in Table 1 (bold indicates the best performance), compared with Wide-Depth-Range,34 the 2D + 3D information based method proposed in this paper is 4.1% and 6.5% lower on the 5° and 2D Proj metrics, respectively, but on the ADD, 5° & 5 m, and 5 m metrics, the proposed method outperforms it by large margins of 87.0%, 30.1%, and 35.6%, respectively. In addition, the proposed method is much faster than the other methods: it runs in 9.30 ms, 61.0% faster than YOLO6D at 23.86 ms. The two-stage methods (YOLO6D and PVNet) based on the PnP/RANSAC algorithm have high accuracy in the estimation of the rotation matrix but unsatisfactory performance in the estimation of the translation matrix. This is because the locations of aircraft taking off or landing within the viewing angle of a single camera vary greatly; the two-stage methods only learn the 2D projection information of the aircraft, and due to the lack of depth information, such methods perform badly on the translation matrix in such a large scene. Single-Stage33 has poor performance on these metrics because it only uses multiple 1D convolutional layers to replace the PnP algorithm, which is essentially not much different from a one-stage method. Wide-Depth-Range aims to solve the object 6D pose estimation problem in large scenes with large depth, but it does not make full use of 3D information. Compared with these methods, the proposed method makes full use of the 2D projection information and 3D coordinate information of the aircraft, which greatly improves the accuracy of 6D pose estimation. In addition, the simple end-to-end network greatly improves the efficiency of the method. Fig. 7 shows the visualization results of the methods in Table 1 under the 2D Proj metric, covering the 3 types of aircraft in 11 scenarios. The blue bbox illustrates the projection of the 3D bbox using the estimated 6D pose, and the red one represents the ground truth projection of the 3D bbox. The circles in Fig. 7 highlight some errors. As shown in Fig. 7, YOLO6D and Single-Stage make clearly wrong estimations, while the gap among the proposed method, Wide-Depth-Range, and PVNet is slight.

    Table 1 Performance comparison on the Aircraft 6D Dataset.

    4.5. Ablation study

    In this section, a series of ablation experiments is carried out to explore the impact of 2D and 3D information on aircraft 6D pose estimation. In addition, this section also explores the role of hyperparameters, such as the representation of the 2D and 3D information and the radius and number of the keypoints.

    Table 2 shows the influence of 2D information (Kp_2D) and 3D information (Kp_3D) on aircraft 6D pose estimation. In this set of experiments, the number of keypoints is 10, and the radius of the keypoints is r = 3 pixels.

    Like most methods, the proposed method also predicts the mask of the aircraft, which can eliminate false detections caused by the background. Directly fusing the predicted mask, 2D, and 3D information gives very bad results. Therefore, the translation matrix is normalized to [-1, 1], which is called T Norm, and then the impacts of the mask, 2D information, and 3D information on performance are explored, respectively. The 2nd to 5th sets of experimental results show that 2D information improves performance more than 3D information, and that 3D information has a side effect after merging 2D and 3D information. The reason is the inaccurate estimation of the 3D coordinates of the keypoints. Therefore, all 3D coordinates of the keypoints are also normalized to [-1, 1], which is called Coord Norm (a plausible form is given below). The results of the last four sets of experiments in Table 2 show that the mask, 2D, and 3D information all contribute to the performance improvement of the method after translation matrix and coordinate normalization. Finally, the configuration of the 8th set of experiments, whose settings have the highest accuracy, is adopted.
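    The paper does not spell out T Norm and Coord Norm; a plausible min-max mapping of a value x with known scene bounds [x_min, x_max] onto [-1, 1] is:

    $$
    \hat{x} = 2\,\frac{x - x_{\min}}{x_{\max} - x_{\min}} - 1
    $$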

    A binary map is utilized to represent the 3D coordinates of the keypoints: if a pixel is close to the 2D projection coordinate of a 3D keypoint, the value of this pixel is the value of one coordinate of that 3D keypoint; otherwise, it is set to 0. Does the arrangement of the binary maps affect the performance? Table 3 shows the answer. Based on the 8th set of experiments in Table 2, three arrangements are explored. In the first, the v-th coordinate of the i-th keypoint is stored in the (i + N × (v − 1))-th map, which is called Repre_xxyyzz. In the second, the v-th coordinate of the i-th keypoint is stored in the (v + (i − 1) × 3)-th map, which is called Repre_xyzxyz. In the last, the v-th coordinate of the i-th keypoint is stored in the v-th map (so all keypoints share three maps), which is called Repre_xyz. As shown in Table 3, the difference in representation has little impact on performance. The three sets of experiments in Table 3 also show that the number of input layers does not have much influence on the 6D pose analysis of PnPNet. Finally, Repre_xxyyzz is adopted.

    Table 4 illustrates the effect of the radius of the keypoint on performance, that is, the effect of the parameter σ in Eq. (5). In theory, the smaller the radius, the fewer features the keypoint contains; the larger the radius, the more features are included. The richness of the features affects the 2D and 3D information prediction of the keypoints. It also affects the representation of G_3D(·). Therefore, this paper conducts the experiments in Table 4, gradually increasing the radius of the 2D projection point of the 3D keypoint to explore the optimal radius. As shown in Table 4, the effect of the radius is slight. The reason is that PnPNet analyzes the input 2D and 3D information and has a certain robustness to errors in this information. Finally, the selected radius of the keypoint is r = 4 pixels.

    The proposed method utilizes the 2D and 3D information of keypoints to estimate the 6D pose of the aircraft. Thus, does the amount of 2D and 3D information have an impact on performance? To explore whether the number of keypoints has an impact on performance, the experiments in Table 5 are carried out. According to the symmetry of the aircraft, 14 points are pre-designed in this paper (see Fig. 8), of which point 7 is the origin of the body coordinate system, points 11 and 12 represent the centers of the engines, points 13 and 14 represent the positions of the landing gear, and the remaining points are described in Section 3.1. "N" in Table 5 means taking the first N points in Fig. 8. Some endpoints, connection points, and the origin of the body coordinate system are considered necessary (i.e., the first 9 keypoints), because these points make up the body of the aircraft; these 9 keypoints are regarded as the original keypoints. As shown in Table 5, as the keypoints gradually increase, the accuracy of the aircraft's 6D pose does not change significantly, which illustrates that increasing the number of keypoints has little effect on the estimation of the aircraft's 6D pose. Finally, N = 10 is adopted.

    Fig. 7 Qualitative results of different methods under the 2D projection metric.

    Table 2 Performance of mask, 2D and 3D information w/ or w/o normalization.

    Table 3 Effect of different representations of 3D information.

    Table 4 Effect of the radius of the keypoint on performance.

    Table 5 Effect of the number of keypoints on performance.

    Fig. 8 Candidate keypoints.

    The accuracy of the intermediate supervision plays an important role in the final 6D pose estimation of the aircraft. Thus, we evaluate the accuracy of each step of the proposed method. For the first step (DetectNet), we use Average Precision (AP),22,35 the evaluation metric widely used in object detection tasks. For the second step (KpNet), we use the mask Intersection over Union (mask IoU),36-37 which measures the overlap between the predicted mask and the real mask, to evaluate the predicted mask. The Object Keypoint Similarity (OKS) metric,38 which is widely used in keypoint detection tasks, is adopted to evaluate the predicted Kp_2D. For the predicted Kp_3D, we adopt the same principle as the 5 m metric to calculate the regression error of each coordinate of each keypoint. Specifically, for each map in Kp_3D, a clustering method is used to find the pixels that are close to the predicted keypoint; a mean operation over these pixels yields the final predicted coordinate. Then the L2 norm of the difference between the predicted coordinate and the real coordinate gives the regression error, and we consider the result acceptable if the regression error is less than 5 m (a sketch of this readout is given below). For the third step (PnPNet), as shown in Table 1, ADD, 5° & 5 m, 5°, 5 m, and 2D Proj are utilized as the evaluation metrics. As shown in Table 6, the results of the intermediate supervision (2D bbox, Kp_2D, Kp_3D, and mask) and the final result (6D pose) are satisfactory. The reason why the accuracy of Kp_3D is 81.51% is that the position of the aircraft changes greatly and can reach hundreds or thousands of meters, so it is very difficult to keep the precision within 5 m. The accuracy of the mask is 80.24%, which is mainly caused by wrong predictions of pixels near the landing gear.
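    The sketch below implements the readout just described for one Kp_3D map: select the pixels carrying a coordinate value (a simple magnitude threshold stands in for the paper's clustering step), average them, and test the error against the 5 m threshold. The threshold value is an assumption.

    ```python
    import numpy as np

    def kp3d_regression_error(kp3d_map, gt_coord, active_thresh=1e-3):
        """Read one coordinate out of a Kp_3D map and compare it with ground truth."""
        active = np.abs(kp3d_map) > active_thresh  # pixels holding a coordinate value
        if not active.any():
            return np.inf                          # nothing predicted on this map
        pred_coord = kp3d_map[active].mean()       # mean over the selected pixels
        return abs(pred_coord - gt_coord)          # 1D L2 norm is the absolute difference

    # Acceptable if the error is below the 5 m threshold:
    # ok = kp3d_regression_error(map_x, gt_x) < 5.0
    ```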

    Table 6 Performance of each step of the proposed method.

    5. Conclusions

    Due to the large scene and large depth, the existing 6D pose estimation methods cannot obtain high-precision 6D pose information. This is because the two-stage methods lack depth information, and the existing one-stage methods focus too much on the regression of the final 6D pose; as a result, the 6D pose estimation of the aircraft is inaccurate. In this paper, a 3D position based end-to-end aircraft 6D pose estimation method is proposed. The proposed method utilizes an off-the-shelf object detector to obtain the position of the aircraft in the image, and the region of interest is sent to KpNet to predict the 2D projection positions and 3D spatial positions of the aircraft keypoints. This 2D and 3D information is fused to estimate the 6D pose of the aircraft during take-off and landing. The experimental results show that the proposed method is more accurate and efficient than other methods.

    Declaration of Competing Interest

    The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

    Acknowledgements

    This study was co-supported by the Key Research and Development Plan Project of Sichuan Province, China (No. 2022YFG0153).
