
    MC-LRF based pose measurement system for shipborne aircraft automatic landing

    2023-09-02 10:18:02
    CHINESE JOURNAL OF AERONAUTICS, 2023, Issue 8

    Zhuo ZHANG, Qiufu WANG, Daoming BI, Xiaoliang SUN *, Qifeng YU

    a College of Aerospace Science and Engineering, National University of Defense Technology, Changsha 410073, China

    b AVIC Shenyang Aircraft Design & Research Institute, Shenyang 110850, China

    KEYWORDS: Automatic landing; Data processing and image processing; Laser range finder; Monocular camera; Pose measurement

    Abstract Due to their portability and anti-interference ability, vision-based shipborne aircraft automatic landing systems have attracted the attention of researchers. In this paper, a Monocular Camera and Laser Range Finder (MC-LRF)-based pose measurement system is designed for shipborne aircraft automatic landing. First, the system represents the target ship using a set of sparse landmarks, and a two-stage model is adopted to detect landmarks on the target ship. The rough 6D pose is measured by solving a Perspective-n-Point problem. Then, once the rough pose is measured, region-based pose refinement is used to continuously track the 6D pose in the subsequent image sequences. To address the low accuracy of monocular pose measurement in the depth direction, the designed system adopts a laser range finder to obtain an accurate range value. The measured rough pose is iteratively optimized using the accurate range measurement. Experimental results on synthetic and real images show that the system achieves robust and precise pose measurement of the target ship during automatic landing. The mean measurement error is within 0.4° in rotation and 0.2% in translation, meeting the requirements for automatic fixed-wing aircraft landing. Received 5 July 2022; revised 19 August 2022; accepted 27 September 2022.

    1. Introduction

    Shipborne aircraft are widely used in marine missions such as reconnaissance, surveillance, search, and payload delivery.1 Shipborne aircraft landings are considered one of the most crucial and dangerous types of aircraft missions,2 and obtaining the relative position and attitude of the target ship is essential for the safety of the aircraft. Recently, several guidance methods have been applied to pose measurement to meet the needs of shipborne aircraft automatic landing, including radar landing guidance, optoelectronic landing guidance, and navigation satellite guidance.3,4 These guidance systems need the support of shipborne measurement equipment and communication links. Therefore, when faced with a complex electromagnetic environment, it is challenging to meet the requirements of aircraft landing.

    In view of the above problems, cameras, as light and cheap sensors, have been applied to aircraft automatic landing guidance due to their strong anti-interference ability and rich information5 provision ability. Visual guidance systems for automatic landing have been widely researched throughout the world, including airborne and shipborne systems.6 Without a communication link, the airborne visual guidance system is capable of independently measuring the position, attitude, and motion parameters via airborne imaging and processing equipment.7 Therefore, the core technology must obtain accurate and efficient 6D pose parameters (3D rotation and 3D translation) using monocular images. This paper adopts the monocular visual measurement scheme due to the limited installation space and wide depth measurement range. The visual guidance system employed in this paper captures real-time images with the camera and then processes the images to calculate the relative pose parameters between the aircraft and the ship. Generally, vision-based guidance methods are divided into cooperative and non-cooperative methods. Cooperative methods are designed to provide cooperation targets for landing. However, cooperative methods depend on the stability of the cooperative targets, which are affected by factors such as the surface and manipulation of the ship. Therefore, this paper focuses on the non-cooperative methods, in which the point, line, or contour features of the target ship are extracted to establish the 2D-3D correspondence.8 Then, the pose parameters are recovered by solving Perspective-n-Point (PnP) or Perspective-n-Line (PnL) problems.9–11 In traditional non-cooperative methods, the extracted target texture information changes easily with the environment. As stated in a review,12 the critical advances of the methods based on geometric features are challenging to find in complex objects. Due to the characteristics of the monocular camera, the wide depth range directly suppresses the accuracy of pose estimation.

    Fig.1 MC-LRF-based pose measurement system for shipborne aircraft automatic landing.

    To address these problems, we designed a landing guidance system named the Monocular Camera-Laser Range Finder (MC-LRF) system. Fig.1 shows that this system primarily consists of a monocular camera and a Laser Range Finder (LRF). First, the target ship and landmarks are detected in the images from the monocular camera by PP-tinypose.13 Then, the ship's pose is roughly estimated by solving a PnP problem.14,15 During this process, both synthetic and real images are generated to train the network. In contrast to the general rigid body representation, we innovatively use a set of landmarks found in each image to construct the target representation. In the synthetic images, we apply background and texture randomization of the target while rendering for geometric feature learning. Furthermore, to achieve tracking, the parameters are refined in the subsequent frames via the region-based method. When pose tracking fails, the pose estimation is provided again as a new initial value. Our guidance system takes advantage of the LRF, which provides accurate depth measurements. Finally, the orthogonal iterative algorithm9 uses the accurate depth values to amend the pose parameter errors. The experimental results for both synthetic and real images show that the proposed MC-LRF guidance system achieves high-precision and robust pose measurements when applied to shipborne aircraft landing. The primary contributions of this paper are twofold:

    (1) A shipborne aircraft automatic landing system that integrates a monocular camera and a laser range finder is designed. The system realizes high-precision and robust 6D pose parameter measurements between the aircraft and ship.

    (2) A representation of the target ship using a landmark set is proposed and achieves flexibility, efficiency, and robustness.

    The remainder of this paper is organized as follows: Section 2 presents the related work on aircraft automatic landing system design. Section 3 introduces the 6D pose estimation and refinement algorithm, which is based on MC-LRF block optimization. The experimental details and validation results are presented in Section 4, and Section 5 concludes the paper.

    2. System design

    This section introduces the composition of the MC-LRF guidance system and its working principle, as shown in Fig.1. The red point S indicates the laser spot on the object's surface. The red line represents a laser beam. We will present the system composition, coordinate system definition, and MC-LRF block design.

    2.1. System composition

    Successful shipborne aircraft landing depends on the guidance system and the pose measurement algorithm shown in Fig.2. The MC-LRF guidance system consists of an MC-LRF block and a pose measurement block. The MC-LRF block includes a monocular camera and an LRF, which obtains optimized depth values, and the monocular camera collects RGB images of the target ship to provide reliable features. The LRF is a vital part of this system and is installed beside the monocular camera to measure precise depth values in real time. The missions that require the pose measurement block include object detection, landmark detection, initial pose calculation, pose optimization, and pose refinement. After a mission is complete, the system sends the pose measurement results to the flight control system.

    2.2. Coordinate system definition

    As indicated in Fig.1, the coordinate frames for shipborne aircraft automatic landing are defined as follows: the target ship coordinate frame is defined as OW-XWYWZW, and the monocular camera coordinate frame OC-XCYCZC includes the image coordinate frame O-uv.

    The target ship coordinate frame is the world coordinate frame, which is also known as the global coordinate frame. The origin OW of the target ship's coordinate frame is located at the center of the target ship deck, the YW axis is perpendicular to the base of the target ship, the ZW axis is parallel to the axis of the ship's direction of motion, and the XW axis is perpendicular to the YWZW plane, forming a right-handed coordinate frame. In our experiment, the MC-LRF block coordinate frame needs to be calibrated; in other words, we need to calculate the transformation relationship between the monocular camera's coordinate frame and the LRF's coordinate frame. In the MC-LRF block, the monocular camera and the LRF are considered as a whole and constitute the camera coordinate system. The origin OC of the MC-LRF block coordinate frame is located at the optical centre of the camera. When this system performs pose measurement, the LRF can be adjusted so that the light point hits the target ship's surface. Therefore, the relationship between the aircraft and the MC-LRF block is calibrated in practical applications.
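The frame definitions above can be sketched as a simple rigid transform; the rotation and translation used below are placeholder values for illustration, not calibration results.

```python
import numpy as np

def world_to_camera(X_W, R_CW, T_CW):
    """Map a point from the target-ship (world) frame into the camera frame:
    X_C = R_CW @ X_W + T_CW."""
    return R_CW @ X_W + T_CW

# Right-handedness check for the ship frame: X_W x Y_W should give Z_W.
x_w, y_w, z_w = np.eye(3)
assert np.allclose(np.cross(x_w, y_w), z_w)
```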

    2.3. MC-LRF block design

    As shown at the bottom of Fig.1, the monocular camera and the LRF are fixed together and known as the MC-LRF block. OL represents the LRF's light-emitting point, and S is the light point hit by the LRF on the target ship's surface. l denotes the orientation vector of the laser beam. In our experiment, the 6D pose parameters RCW and TCW can be calculated in real time, and they represent the transformation from the world coordinate system to the camera coordinate system. Before using the MC-LRF guidance system for pose measurement, the laser light-emitting point OL and laser beam l must be calibrated. Then, the measurement results from the LRF coordinate system are converted to the camera coordinate system. The measured distance can be obtained, and the translation value is optimized as TLRF. Notably, the rotation error is difficult to eliminate during monocular camera pose measurement, so we aim to optimize the 6D poses using TLRF.
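As a sketch of the block geometry: the calibrated quantities OL and l place the LRF reading as a 3D point S in the camera frame. The calibration numbers below are invented placeholders, and the range-rescaling rule is one simple way to use the reading, not necessarily the paper's exact update.

```python
import numpy as np

# Placeholder calibration of the MC-LRF block (not real values):
O_L = np.array([0.10, 0.02, 0.0])   # laser-emitting point O_L in the camera frame (m)
l_dir = np.array([0.0, 0.0, 1.0])   # calibrated laser-beam direction l (unit vector)

def laser_point_in_camera_frame(r):
    """Turn an LRF range reading r into the 3D laser spot S in the camera frame."""
    return O_L + r * l_dir

def depth_corrected_translation(T_pnp, r):
    """Rescale a PnP translation so its range matches the LRF-derived spot S,
    giving a T_LRF-style corrected translation (one possible correction scheme)."""
    S = laser_point_in_camera_frame(r)
    return T_pnp * (np.linalg.norm(S) / np.linalg.norm(T_pnp))
```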

    3. MC-LRF-based pose measurement

    Fig.2 MC-LRF guidance system schematic.

    The single RGB image captured by the monocular camera includes the target ship and the background. As shown in Fig.3, the pose measurement block includes three steps: object detection, landmark detection, and pose measurement. The target ship is detected, and the landmarks are regressed. Then, the initial 6D poses are calculated from the detected landmarks by solving the PnP problem and are optimized by orthogonal iteration, which provides the optimized initial value for the region-based pose refinement algorithm.

    First, the target ship is detected by PP-PicoDet,13 and the ship landmarks are regressed using PP-tinypose. Then, the rotation and translation parameters are calculated by solving the PnP problem, which yields a rough initial pose. Finally, the pose is refined based on the region method to realize tracking. During the initial pose estimation and refinement, inaccurate translation values are corrected by the LRF's TLRF, and the rotation value is optimized with the orthogonal iterative algorithm. Notably, the translation value is replaced by TLRF in each iteration.
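A rough PnP pose can be sanity-checked by reprojecting the 3D landmarks with a pinhole model; the intrinsic matrix K below is an assumed example, not the paper's calibration.

```python
import numpy as np

def project(points_w, R, T, K):
    """Pinhole reprojection of 3D world points: x = K (R X + T), then dehomogenize."""
    pc = points_w @ R.T + T          # world -> camera frame
    uv = pc @ K.T                    # apply intrinsics
    return uv[:, :2] / uv[:, 2:3]    # perspective division

# Assumed intrinsics for a 1280 x 1024 image (placeholder values):
K = np.array([[1000.0,    0.0, 640.0],
              [   0.0, 1000.0, 512.0],
              [   0.0,    0.0,   1.0]])
```

The reprojection error between these projections and the detected 2D landmarks is the quantity a PnP solver minimizes.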

    3.1. Landmark set-based ship representation and synthetic training dataset generation

    Deep neural network models make texture and geometric features easy to extract, but texture is often unstable under illumination and other environmental factors, which causes obvious obstacles during neural network training. In contrast, the primary geometric features of the target ship are relatively stable. In the existing research on human pose estimation16 and face detection,17 a discrete landmark set is used to represent the human body. Similarly, in our system, the rigid body target is creatively described as a set of landmarks, which are manually selected from the apparent geometric structure. As shown in Fig.4, the target ship is described with landmarks, which are sets of points or pixels in images that contain rich geometric information. The red points are the major geometric landmarks, which were chosen manually. The blue points are occluded.

    Landmarks are fundamental in our aircraft automatic landing system since they reflect the intrinsic structure and shape of the target ship. In addition, in network landmark regression, two widely-used methods to represent landmarks for deep learning-based semantic localization are coordinate regression and heatmaps.18,19 Rather than directly regressing the numerical coordinates with a fully-connected layer, heatmap-based methods are more accurate and robust since they predict the heatmap of the maximum activation point in the input image. Therefore, a heatmap-based method is used in the proposed automatic aircraft landing system to regress the semantic landmarks.
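The heatmap decoding step can be sketched minimally as an argmax over the predicted map (real implementations typically add sub-pixel refinement):

```python
import numpy as np

def decode_heatmap(heatmap):
    """Return the (u, v) pixel of maximum activation in one landmark heatmap.

    heatmap is indexed [row, col] = [v, u], so unravel_index gives (v, u).
    """
    v, u = np.unravel_index(int(np.argmax(heatmap)), heatmap.shape)
    return int(u), int(v)
```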

    Different from object detection or classification datasets, pose parameter ground truth is difficult to label. Additionally, manual annotation is inaccurate, which causes difficulties in network training. To address the limitations of the target ship, overcome the difficulties of pose dataset generation, and consider the influence of unstable texture features, we propose synthetic images and texture randomization methods to train our network. In addition to synthetic images, a few real images were also used for training in our study. When the synthetic images were generated, as shown in Fig.5, the target ship texture and background were randomly rendered in each image to reduce network interference. The scene of the target ship was simulated by the method of Ref.20, which constructed high-quality images. Random images were taken from the MS COCO21 dataset and used as the background and texture of the target ship's model to make the network focus on the model's geometric features.
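A crude stand-in for this randomization step is sketched below: composite the rendered ship over a random background image and blend a random texture into the foreground, so the network must rely on geometry rather than appearance. Function names and the 50/50 blend are our own choices, not the paper's pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

def randomize_render(ship_rgb, ship_mask, backgrounds, textures):
    """Composite a rendered ship over a random background and blend a random
    texture into the foreground region given by the boolean ship_mask."""
    bg = backgrounds[rng.integers(len(backgrounds))]
    tex = textures[rng.integers(len(textures))]
    fg = 0.5 * ship_rgb + 0.5 * tex            # crude texture randomization
    return np.where(ship_mask[..., None], fg, bg)
```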

    3.2. Initial pose estimation based on corresponding 2D-3D landmarks

    In the MC-LRF guidance system, the initial pose parameters are estimated based on the corresponding 2D-3D landmarks from landmark detection, as shown in Fig.6. The 2D landmarks are detected on the target ship, and then the initial pose parameters are calculated by solving the PnP problem. The accuracy and efficiency of this system in terms of finding the initial pose estimation have attracted our attention, especially for mobile devices. To successfully and efficiently apply this system, we use PP-tinypose22 to obtain landmarks, as shown in Fig.6. ShuffleNetV2, which is used for target detection, is more robust on mobile devices than other structures.23 Therefore, we chose PP-LCNet,24 which includes an enhanced ShuffleNetV2 structure named Enhanced ShuffleNet (ESNet) that addresses the problem of expensive computation on

    Fig.3 Schematic overview of pose measurement.

    Fig.4 Representation of target ship.

    Fig.5 Synthetic dataset preparation (i.e., the pipeline used to render simulation datasets). The target ship's model was rendered with random poses in scenes that used MS COCO images as background and model textures. Middle block: the red points are the major geometric landmarks, which are chosen manually.

    mobile devices.25–27 The landmark choice is an important design decision in our method that occurs after target detection. The landmarks need to be well localized on the object's geometry and spread across the surface to provide more stable inputs for pose calculation.
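One heuristic for picking well-spread landmark candidates on a model surface is farthest-point sampling; the paper selects its landmarks manually, so this is only an illustration of the "spread across the surface" criterion.

```python
import numpy as np

def farthest_point_sampling(points, k):
    """Greedily pick k indices of points that are spread far apart."""
    idx = [0]
    d = np.linalg.norm(points - points[0], axis=1)
    for _ in range(k - 1):
        nxt = int(np.argmax(d))          # farthest point from the chosen set
        idx.append(nxt)
        d = np.minimum(d, np.linalg.norm(points - points[nxt], axis=1))
    return idx
```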

    3.3. Region-based pose refinement

    In the MC-LRF system, the initial pose in continuous images can be obtained through pose measurement and needs to be refined efficiently. Region-based methods perform well compared with other methods for objects distinct from the background in terms of color, texture, etc. These methods typically use color statistics to model the probability that a pixel belongs to the object or to the background. The object pose is then optimized to best explain the segmentation of the image. Region-based pose refinement methods28–30 have gained increasing popularity and achieved state-of-the-art performance. These methods assume that the object and background regions have different statistical properties. Based on the projected silhouette rendered from the 3D model, the probability density functions of the foreground and background can be obtained, which are applied to construct a segmentation energy function. The object pose is estimated by iteratively optimizing the pose parameters to minimize the segmentation energy. Our system follows the method proposed in Ref.30. With a known initial 6D pose and 3D object model, the pose is refined by this method to achieve pose tracking. This process refines the target's pose in the current frame more efficiently. In our system, the initial 6D pose is optimized by the MC-LRF block, which provides the pose parameters that are needed as input values. Then, the iterative pose optimization process proposed in Ref.30 is used to refine the approximately estimated pose parameters. In this paper, the pose is optimized with sparse viewpoint and correspondence line models. The energy function is as follows:

    E(ξ) = −∑(i=1..nc) ln p(di | ωi, li, ξ)  (1)

    where D denotes the data from all correspondence lines, ω the sparse correspondence line domain, d the contour distance, l the pixel colour on the correspondence line, ξ the pose variation vector, and nc the number of randomly sampled points on the contour from the sparse viewpoint model. Then, optimization using the Newton method with Tikhonov regularization is used to calculate the pose variation vector as follows:

    Fig.6 Architecture of top-to-down landmark detection.

    ξ̂ = (H + diag(λrI3×3, λtI3×3))⁻¹ g  (2)

    where H and g are the Hessian matrix and gradient vector, respectively. I3×3 represents the 3 × 3 identity matrix. λr and λt are the regularization parameters, which are used for rotation and translation, respectively. Then, the pose is iteratively refined according to:

    ΔT = exp(ξ̂) ∈ SE(3)  (3)

    With an initial pose, the relative pose can be tracked within the successive monocular images by applying pose refinement to each frame. Notably, the refinement's translation value is optimized by the orthogonal iteration algorithm.
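The update of Eq. (3) maps a twist ξ to an SE(3) increment. Below is a standard exponential-map sketch together with a Tikhonov-regularized Newton step of the form described above; sign and ordering conventions may differ from the implementation in Ref. 30.

```python
import numpy as np

def skew(w):
    """3x3 skew-symmetric matrix of a 3-vector."""
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def se3_exp(xi):
    """Exponential map: twist xi = (w, v) -> 4x4 transform Delta_T in SE(3)."""
    w, v = xi[:3], xi[3:]
    th = np.linalg.norm(w)
    W = skew(w)
    if th < 1e-9:
        R, V = np.eye(3) + W, np.eye(3)
    else:
        A = np.sin(th) / th
        B = (1.0 - np.cos(th)) / th**2
        C = (th - np.sin(th)) / th**3
        R = np.eye(3) + A * W + B * (W @ W)
        V = np.eye(3) + B * W + C * (W @ W)
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, V @ v
    return T

def regularized_newton_step(H, g, lam_r, lam_t):
    """Tikhonov-regularized Newton step for the twist, with rotation and
    translation regularized separately as in the text."""
    reg = np.diag([lam_r] * 3 + [lam_t] * 3)
    return np.linalg.solve(H + reg, g)
```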

    3.4. Orthogonal iteration optimization

    In the proposed system, the rotation and translation of pose estimation and tracking are optimized using the orthogonal iteration algorithm.9

    The orthogonal iteration algorithm defines pose estimation using an appropriate object space error function. This function can be rewritten in a way that admits an iteration based on the absolute orientation problem, which involves determining the rotation value R and translation value T from corresponding pairs qj and pj. Among them, qj represents the 3D camera coordinates and pj represents the noncollinear 3D coordinates, where j is the index of the coordinates. The optimization problem can be expressed as:

    min(R,T) ∑j ‖qj − (Rpj + T)‖²  (4)

    where ‖A‖² = tr(AᵀA) and tr(AB) = tr(BA). When we have the optimal solution of the rotation value R̂, the optimal translation vector can be expressed as:

    T̂ = q̄ − R̂p̄  (5)

    where q̄ and p̄ are the centroids of qj and pj, respectively.

    Rather than depending heavily on the solution to the absolute orientation, we can obtain the predicted rotation value and the corrected T value with the MC-LRF guidance system. The specific flow of the orthogonal iterative algorithm is shown in the right panel of Fig.7. TLRF stands for the optimized translation value, whose depth component is corrected by the LRF system. Starting from the initial rotation value and TLRF, the rotation and translation values are continuously and iteratively updated by repeatedly solving the absolute orientation problem.

    It is noteworthy that the depth value of T is replaced by TLRF in each iteration. The iteration stops when the objective function is sufficiently small or a predetermined upper limit on the number of iterations has been reached.
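The absolute-orientation subproblem inside the orthogonal iteration has a closed-form SVD solution (Kabsch/Umeyama); the depth replacement by TLRF is shown as a simple post-step. This is an illustration of the subproblem, not the paper's exact code.

```python
import numpy as np

def absolute_orientation(p, q):
    """Solve min over (R, T) of sum_j ||q_j - (R p_j + T)||^2 by SVD."""
    pc, qc = p - p.mean(axis=0), q - q.mean(axis=0)
    U, _, Vt = np.linalg.svd(qc.T @ pc)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(U @ Vt))])  # guard reflections
    R = U @ D @ Vt
    T = q.mean(axis=0) - R @ p.mean(axis=0)
    return R, T

def replace_depth(T, t_lrf_z):
    """Replace the depth component of T with the LRF-corrected value,
    as done in each orthogonal iteration."""
    T = T.copy()
    T[2] = t_lrf_z
    return T
```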

    4. Experiments

    To evaluate the performance of our guidance system, we report the MC-LRF guidance system performance on the synthetic and real images.

    4.1. Experimental settings

    In these experiments, the shipborne automatic landing process is simulated from approximately 2.0 km to 1.0 km of distance, expressed as D. In the different depth ranges, we rendered 2000 synthetic images with a resolution of 1280 × 1024 pixels. In the real experiments, the size of the target ship model is about 113 mm × 120 mm × 440 mm. The camera parameters are set according to the actual environment, and an Electronic Total Station is used to simulate a laser range finder. We collected 600 real images with a resolution of 1920 × 1200 pixels under the Electronic Total Station coordinate system using a monocular camera (DAHENG IMAGING MER-231-41U3C). All of the experiments were run on a laptop with an RTX 3060 GPU, an AMD Ryzen 7 5800H CPU, and 32 GB of RAM. In addition, the efficiency experiments were run on an embedded platform with an ARM + AI module. In this section, to evaluate the pose accuracy performance, H3R17 was also used for landmark detection.

    4.2. Evaluation metrics

    Similar to previous research,31 the Normalized Mean Error (NME) relative to the bounding box was used to evaluate the landmark detection.

    Fig.7 Orthogonal iteration algorithm architecture.

    where the rotation and translation ground truth are represented by Rg and Tg, respectively. The rotation error (ER) is the angular error between the ground-truth quaternion and the predicted quaternion. The translation error (ET) is the normalized translation error.
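The two metrics can be written compactly; this is our reading of the definitions above (quaternion angular distance and range-normalized translation error), not code from the paper.

```python
import numpy as np

def rotation_error_deg(q_gt, q_pred):
    """Angular error between unit quaternions, in degrees (sign-invariant)."""
    d = min(1.0, abs(float(np.dot(q_gt, q_pred))))
    return float(np.degrees(2.0 * np.arccos(d)))

def translation_error(T_gt, T_pred):
    """Normalized translation error E_T = ||T_pred - T_gt|| / ||T_gt||."""
    return float(np.linalg.norm(np.asarray(T_pred) - np.asarray(T_gt))
                 / np.linalg.norm(T_gt))
```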

    4.3. Initial pose estimation results and analysis

    To better ascertain the MC-LRF guidance system's performance in terms of initial pose estimation, we report the individual predictions for 1000 synthetic images. These images include targets at different depth ranges with random poses. The initial pose estimation process is divided into two steps: landmark detection and pose calculation. We present the results of landmark detection obtained by the PP-tinypose and H3R methods in Fig.8 and Table 1.

    The red bounding box represents the results obtained by PP-PicoDet, and both methods can detect the landmarks in each image. In Table 1, we compare the NME results obtained by these networks, and the H3R method's regressed landmarks are more precise. The landmark detection error tends to decrease as the distance decreases. Note that there are obvious errors in both networks at a far distance; in these cases, because the target is too small, it occupies few pixels in the image, leading to feature reduction. On the contrary, when the large target in the bounding box is resized before the landmarks are input into the detection network, some of the features are discarded, which may be one of the reasons why the error does not continue to decrease at near distances.

    Table 1 NME results of landmark detection using only RGB images.

    After the corresponding 2D-3D landmarks are predicted, the initial pose parameters are calculated by solving the PnP problem. The predicted poses are reprojected on the original image in Fig.9(a), where the gold and purple objects represent the initial and optimized pose reprojections, respectively. The purple objects are obviously optimized in scale and rotation when compared to the gold objects, which are magnified to display the improvement in the results. In Figs.9(b) and (c), the rotation and translation error curves show that the MC-LRF guidance system has significantly improved in terms of its ET value. Additionally, the translation optimization error demonstrates good robustness to changes in distance. ER is also optimized by the orthogonal iteration algorithm.

    To perform a quantitative analysis, we summarize the pose estimation comparison results according to whether or not the LRF was used in Table 2. The pose error variation trend is related to the accuracy of landmark detection, and the rotation estimation results based on the H3R network are more precise than those produced by PP-tinypose. The translation estimation is similarly optimized by our guidance system, which evaluates the LRF performance when combined with different networks. Although the rotation accuracy becomes higher as the distance decreases, the translation error remains stable during the landing process. In addition, compared with the weak pose optimization in near-target images, the mean ER value rose about 36% for far-target pose estimation. Note that the PP-tinypose network has slightly lower accuracy than the H3R network, but its speed can reach 37 Frames Per Second (FPS) from target ship detection to pose estimation completion, which is significantly more efficient than the other method.

    Fig.8 Exemplary landmark detection results on synthetic images.

    Fig.9 Initial pose estimation results.

    Table 2 Results of pose estimation at different distances.

    4.4. Landing simulation results and analysis

    Another experiment was designed based on 1000 successive synthetic images to evaluate the performance of the MC-LRF guidance system during automatic landing missions. The automatic landing process includes landmark detection, initial pose estimation, pose refinement, and orthogonal iteration optimization. In Fig.10, the landmarks are detected by H3R and PP-tinypose. The group of images shows the simulated process of aircraft automatic landing from 2.0 km to 0.5 km. The gold target represents the initial pose estimation results. The purple target represents the estimation results optimized by the MC-LRF block. The green target represents the refinement results.

    The initial pose is calculated by solving the PnP problem and is reprojected as the gold target. Then, the initial pose value is optimized as the purple target, which shows a noticeable improvement when compared to the gold target. Fig.11 provides the pose error curves obtained during the aircraft landing simulation, and the optimization effect is fully reflected, especially for the H3R method. Significantly, the robustness of the rotation error is better on this network. In comparison, the translation error is stable on both networks, and stable and accurate initial pose estimation is beneficial to pose refinement.

    After the initial value is provided by pose estimation, the system refines the pose to achieve continuous tracking. When tracking fails, the pose estimation again calculates a new initial value. To facilitate an accurate comparison, we estimated the initial value for each frame and used the refinement results from the first frame as an initial input in our experiment. As shown at the bottom of Fig.10, the optimized pose value is refined based on the region method to realize pose tracking, and the first frame of the sequence was used to initialize the tracking algorithm. The details of the reprojection results are emphasized, and there is a significant refinement from the purple target to the green target. To further focus on the MC-LRF guidance system's performance in terms of pose refinement, the initial input pose values were applied without (Fig.12(a)) and with (Fig.12(b)) the system. The pose refinement result for each frame is more precise than the pose estimation result. Additionally, the pose refinement speed is approximately 1000 FPS, which is much higher than that of pose estimation (37 FPS) on the laptop. Therefore, pose refinement is more suitable for automatic landing after the initial pose estimation is obtained. There is an obvious error in the refinement algorithm in the grey area in Fig.12, where the pose value that was not optimized by the proposed system was used as the input value. When the input value was optimized with the LRF, the pose error converged rapidly and more precisely than without optimization.

    Fig.10 Exemplary initial pose estimation results and pose refinement on a synthetic dataset.

    Fig.11 Initial pose estimation error curves of the simulated landing process in a synthetic scene, using MC-LRF optimization vs a monocular camera only.

    Fig.12 Initial pose estimation and pose refinement error curves obtained during landing process when using a synthetic dataset.

    Table 3 Results of pose refinement during the simulated landing.

    Table 3 shows the refinement comparison between different input poses, which were optimized with or without the MC-LRF system. The pose error is confined to a small range when detecting successive motions to achieve pose tracking, which shows that the rotation and translation errors can be refined to 0.4° and 0.2%. These experiments show that our system performs well on synthetic images and meets the aircraft automatic landing requirements.

    4.5. MC-LRF system test in a real scene

    In this section, we describe an experiment in which an equal-proportion reduced model was placed against a complex background and 600 real images were collected. We illustrate the performance of our guidance system on the real images, as in the simulation experiment. The landmark detection and pose measurement results on 300 real images are shown in Fig.13 and Fig.14. The detection region is zoomed in at the top-right. The blue point is the prediction.

    These results prove that our system is also effective in real scenes, especially for inaccurate depths at long distances. In addition, we found that rotation optimization is still affected by different distances. Due to the lack of accurate ground-truth annotation, the metric is the ratio of the difference between the predicted and the measured depth values, which we call the LRF correction value. These values are approximately 9.54% and 25.47% for H3R and PP-tinypose, respectively, which shows that the initial pose estimation is significantly improved in actual experiments in which the MC-LRF system is applied. Moreover, on real images, the pose reprojection result from PP-tinypose demonstrates more precision than that of the H3R network, which is partly because of the different abilities of the deep neural networks. The H3R + LRF method, which has excellent fitting ability, calculated more accurate landmarks using the synthetic images. On the contrary, the better generalization and efficiency of PP-tinypose made the landing process more robust and suitable for engineering applications. Since the training images were primarily composed of synthetic images, PP-tinypose is more suitable for real images.

    Fig.13 Exemplary point detection of results on real images.

    Fig.14 Exemplary pose estimation results including different scales of real ship models.

    Therefore, we used the moving ship model in the real scene to simulate the automatic aircraft landing process. Note that the test images' background contains new scenes that never appeared in the training images. Therefore, we tested the MC-LRF system and chose the PP-tinypose network to detect landmarks on 300 real images. In addition, the embedded ARM + AI module was also chosen to test the efficiency of the MC-LRF system during the guidance landing process. The landmarks detected by PP-tinypose and the initial pose reprojection results are shown in the top and middle panels of Fig.15. The gold target represents the initial pose estimation results, the purple targets represent the estimation results optimized by the MC-LRF block, and the green target represents the refinement results. The blue point is the prediction.

    The MC-LRF system optimizes the initial pose values, represented by the purple targets, with obvious scale accuracy improvements compared with the gold target. Then, the pose refinement reprojection results are shown at the bottom of Fig.15. The details of the reprojection results emphasize the significant refinement from the purple target to the green target.

    The LRF error (E_LRF) curve is shown in Fig.16, which illustrates the optimization process in a real landing experiment. Meanwhile, the experiments show that the guidance system performs well on mobile platforms: on the embedded ARM + AI module, the PP-tinypose-based guidance system reaches 25 FPS from target ship detection to pose estimation, and pose refinement runs at approximately 333 FPS on real images. In practical applications, the translation output of pose estimation and region-based refinement can be replaced by the corrected translation from the MC-LRF block in each iteration. The refinement accuracy and speed meet the requirements of practical applications.

    Fig.16 LRF error values during aircraft automatic landing.

    5.Conclusions

    Fig.15 Exemplary initial pose estimation and pose refinement results on a real dataset.

    In this work, we present a vision-based guidance system for shipborne aircraft automatic landing using a monocular camera and a laser range finder, addressing the problem of inaccurate 6D pose estimation during landing. The system is accurate and robust when estimating relative 6D pose parameters. The MC-LRF guidance system and the 6D pose measurement algorithm are described in detail, and accurate successive 6D pose parameters are computed throughout the landing process. The target ship and its landmarks are detected by a deep neural network to establish 2D-3D correspondences with the object model, from which the initial pose parameters are calculated by solving the PnP problem. A region-based pose refinement method is then proposed to track the pose in successive frames after initialization, and the MC-LRF block provides accurate translation, optimizing both the initial pose estimate and the refinement result using orthogonal iteration. Extensive experiments on both synthetic and real datasets show that the mean 6D pose error can be refined to 0.4° in rotation and 0.2% in translation on the synthetic dataset. The qualitative and quantitative results indicate that the system achieves high accuracy and efficiency during automatic landing guidance in real scenes. In addition, these pose measurement techniques and our guidance system can also be applied to robotics, driverless vehicles, satellite docking and other fields32.

    Declaration of Competing Interest

    The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

    Acknowledgements

    This study was co-supported by the National Natural Science Foundation of China (No. 12272404) and the Postgraduate Research Innovation Project of Hunan Province, China (No. CX20210016).
