
    Grasp Planning and Visual Servoing for an Outdoors Aerial Dual Manipulator

Engineering 2020, Issue 1

Pablo Ramon-Soria*, Begoña C. Arrue, Anibal Ollero

    Robotics, Vision, and Control Group, School of Engineering, University of Seville, Seville 41092, Spain

    Keywords:

    Aerial manipulation

    Grasp planning

    Visual servoing

ABSTRACT This paper describes a system for grasping known objects with unmanned aerial vehicles (UAVs) provided with dual manipulators using an RGB-D camera. Aerial manipulation remains a very challenging task. This paper covers three principal aspects of this task: object detection and pose estimation, grasp planning, and in-flight grasp execution. First, an artificial neural network (ANN) is used to obtain clues regarding the object's position. Next, an alignment algorithm is used to obtain the object's six-dimensional (6D) pose, which is filtered with an extended Kalman filter. A three-dimensional (3D) model of the object is then used to estimate an arranged list of good grasps for the aerial manipulator. The results from the detection algorithm, that is, the object's pose, are used to update the trajectories of the arms toward the object. If the target poses are not reachable due to the UAV's oscillations, the algorithm switches to the next feasible grasp. This paper introduces the overall methodology, and provides the experimental results of both simulation and real experiments for each module, in addition to a video showing the results.

    1. Introduction

The use of unmanned aerial vehicles (UAVs) for industrial applications is on the rise. UAVs are suitable for a wide range of applications that take place in inaccessible locations and are typically dangerous to human operators, such as power-line inspection [1,2], wind-turbine maintenance [3], the inspection of various structures in facilities [4], and photogrammetry [5]. However, these applications merely involve perceptive tasks. In recent years, it has been demonstrated that aerial robots can effectively perform manipulation tasks as well [6].

Aerial manipulation remains a challenging task, as it has extra requirements; the platform is usually delicate and remains in an unstable equilibrium in the air. In order to perform manipulation tasks, it is necessary to provide the robot with various capabilities such as perception, planning, and, in general, smart capabilities. It is also necessary to equip the robot with the appropriate hardware to perform the desired tasks. The purpose of introducing robots in these applications is to provide a more efficient solution to problems, reduce costs (both time and money), and avoid the quality losses caused by human error resulting from the monotony of some of these tasks. Instead, the operator is given a more important role that cannot be performed by robots.

Manipulation involves interaction with physical objects using manipulators, and may include grasping objects, moving them, or maintaining contact with a stiff object. In order to perform such tasks, the robot needs to be able to detect and locate objects, as well as analyze how to interact with them. It is also necessary to plan the actions and movements of the manipulators and prevent them from colliding with the environment and the platform itself.

    To perform any manipulation task, it is first necessary to find and track the object to be manipulated. Many computer vision algorithms can be used for this task; these can be classified depending on the application, computational resources, and sensors used.

Region proposal algorithms for the monocular detection of objects are commonly used. These algorithms can be based on simple color algorithms [7,8], feature detection [9,10], or machine learning. Novel algorithms use artificial neural networks (ANNs) to detect pre-trained objects. ANNs have been exploited thanks to the field of deep learning and the availability of powerful graphics processing units (GPUs) that boost and parallelize ANNs. The latest designs (SSD [11], RCNN [12], F-RCNN [13], Mask-RCNN [14], and YOLO [15]) have overtaken most traditional object-detection algorithms. However, detecting instances of objects in color images is insufficient to perform manipulation tasks. It is also necessary to analyze the three-dimensional (3D) information of the object. With this purpose, recent research on deep neural networks (DNNs) has extended ANNs to compute the six-dimensional (6D) location of the objects as well [16].

Depth sensors such as Kinect or Intel RealSense devices can be used to directly extract 3D information from the environment [17,18]. However, these sensors usually have strict limitations in outdoor environments, as they use infrared structured light, which is usually blinded outdoors. In contrast, passive stereo cameras make it possible to compute a disparity map that can be used to perform a pixel-wise triangulation of points, resulting in a point cloud. This kind of device is usually less accurate than a depth camera, but works well under almost any conditions. However, as it is based on the visible light spectrum, the performance of this device is poor under low-light conditions.

Image features can be used to accurately obtain the position of objects in space [19-21]. These methods show good results and are robust to occlusions. However, these algorithms may present problems if objects are texture-less or if they show reflections. Other authors have used projective algorithms, which are based on geometrical models that are projected onto the image in order to find the object's position and orientation using monocular cameras [22].

With regard to the manipulation itself, robot manipulators have been widely studied for a variety of applications. Mechanical and mathematical models of robot hands and their interaction with objects are a fundamental aspect of the analysis of robotic manipulation. The vast number of combinations of object and hand configurations makes this area of research challenging.

Miller et al. [23] proposed the use of primitive shapes that approximate the object, thus facilitating the generation of grasps. However, this method requires a simple object, and cannot be used for more complex shapes. In Ref. [24], researchers proposed an algorithm that approximates the shape of the objects to a set of planar regions which are used to generate the contact points for the grasps. More recent research has used machine learning methods to generate better grasps; in these methods, the system is trained using synthetic datasets [25-27] or reinforcement learning [28].

Other approaches use the data from sensors to create a probabilistic model of the objects. In Refs. [29-31], researchers proposed an algorithm that uses Gaussian processes for the reconstruction of 3D objects, which is known as Gaussian process implicit surfaces.

Once grasps are generated, they must be evaluated. The quality of grasps [32] can be analyzed according to their geometry, the strength of the fingers, and many other aspects. In the present work, we focus on the common aspects, which are force closure, largest minimum resisted wrench, and task-oriented quality metrics.

Marturi et al. [33] proposed an offline/online planner to control an industrial robotic arm in order to cooperate with human operators. The algorithm uses a pre-learned grasp planner (offline) that generates a set of likely grasps. Later, the object's pose is detected online and the second planner generates a set of feasible grasps according to the current state of the robot and the pose of the object.

However, aerial manipulation is more challenging than ground manipulation. For example, in aerial applications, both the robot and the target objects are moving. Moreover, the arms' dynamics influence the aerial platform's dynamics, compromising its stability. Several publications [34,35] have studied a UAV's stability and control, coupled with its manipulators. In the present work, we assume that the movements are not aggressive and the controller is fast enough to compensate for the semi-static external forces that the arms exert on the platform.

Furthermore, in order to actually perform the grasping task, it is necessary to run a visual servoing algorithm to accurately move the manipulators. Visual servoing, or vision-based control, can be classified into two types depending on the data. If the algorithm uses 3D positions, it is named position-based visual servoing (PBVS) [36,37]; if the algorithm uses just 2D information from images, it is named image-based visual servoing [37,38]. A third option involves a combination of information, the so-called "hybrid algorithms." This article proposes a PBVS method that detects the position of the target object. A control law is designed to reduce the position error of the end-effectors.

Aerial platforms need to locate themselves within the work zone in order to control their position. A standard technique is to use motion-capture (MOCAP) systems, which accurately locate UAVs during research. These systems locate the robots at a high frequency, which is very convenient when testing new technologies and algorithms. Outdoor missions also require a precise and accurate location for the robots. The global positioning system provides a wide-ranging and extensive system for locating robots outdoors. However, standard devices have unacceptable errors for manipulation tasks. To solve this issue, our research group [39] proposed the use of a total station, whose information is fused with the inertial measurement unit of the aerial robot in order to locate the robot anywhere outdoors.

Aerial manipulators can be classified according to the configuration of the arm. This work focuses on the use of serial manipulators. Laiacker et al. [40] developed a helicopter equipped with an industrial robotic arm to perform manipulation tasks. The current increasing interest in aerial manipulation has led researchers to place not one but two manipulators on the same platform in order to allow it to perform dexterous manipulation with both hands. Korpela et al. [41] proposed the use of a pair of manipulators to turn valves in industrial environments. A dual manipulator configuration [42] has been found to be very versatile for the performance of manipulation tasks.

This article proposes the design of a complete aerial manipulation system that includes the development of manipulators for the UAV, a visual perception system to detect the objects to be manipulated, and the visual servoing to guide the arms. This work was carried out under the framework of AEROARMS (SI-1439/2015), an EU-funded project. This project aims to develop a novel aerial platform with dexterous manipulation capabilities to perform inspection and maintenance tasks in industrial environments.

In this work, a mock-up model of a crawler robot that was developed in the AEROARMS project is used as a use case for the detection algorithm and grasping experiments. The same methodology can be applied to other objects.

The remainder of the article is organized as follows. Section 2 introduces the aerial manipulator system that has been developed to perform the manipulation task. Section 3 describes the vision algorithm used to detect, track, and locate the target object. Section 4 shows the grasping and re-planning algorithm, which takes into account the possible movement of both the robot and the target. Section 5 describes the global procedure and the designed state machine to execute the mission. Section 6 shows an experimental analysis of the system, through actual grasping experiments with the flying manipulator. Finally, Section 7 discusses the results and future steps for this research.

    2. Aerial manipulator

Multirotors have been demonstrated to be practical solutions for inspection and maintenance tasks [6]. Unlike fixed-wing aircraft, multirotors can move freely in space and hover at a desired position to perform tasks.

In this project, a coplanar hexacopter was chosen to perform the manipulation. This configuration provides more stability and strength than smaller multirotors, meeting the payload requirements to carry out the mission. Fig. 1 shows the platform used for the experiments. The frame is a hexacopter that was designed and built by DroneTools SL.†

    The rest of the hardware, including the dual manipulators and software, was designed and developed by the present authors at the University of Seville.

The platform has a Pixhawk‡ autopilot that is responsible for the robot's flight control. At the bottom, an Intel NUC computer is installed to run all the systems, from the vision algorithm to the control of the manipulators. Finally, an Intel RealSense camera is built in to perceive the environment.

In order to perform manipulation, the multirotor is equipped with a pair of arms. These arms have three-degree-of-freedom (3-DoF) integrated and exchangeable end-effectors, which give them four or six DoFs, depending on the application. Fig. 1 shows the models of the arms with grippers as the end-effectors. Using two manipulators instead of one is advantageous for various reasons. For one thing, two manipulators make it possible to carry out more complex manipulation tasks, as each arm can perform a different operation. In addition, having two end-effectors (e.g., two grippers) makes the grasps more stable, because the object can be held from two different sides. With just one gripper, the grasps are more limited, as the gravity force acting on the center of mass will be misaligned with the grasp in many situations. Thus, the only stable grasps are those located at the top of the object, if possible, which limits the manipulation to objects that have handles on top.

The arms presented here have been released as the open-source project hecatonquiros§, which aims to provide general-purpose, cheap, and easy-to-use robotic arms. These arms are designed to be lightweight, so relatively small UAVs can carry them. They are 3D printed, which reduces the overall cost of production, as the material is relatively cheap and does not require post-processing, allowing the arms to be assembled directly. The cost of a single arm is about $150 USD (including the smart serial servos). The project also provides an OpenRAVE-based wrapper for the kinematic solvers and has support for simulation with the robot operating system (ROS) before actual experiments are performed.

Fig. 2(a) shows the kinematic reachability of the pair of arms, which is used to estimate the appropriate position of the robot to perform the manipulation tasks. Figs. 2(b) and (c) show the coordinate frames defined for the robot, which are used to transform the coordinates of the detected objects from the camera's coordinates to the arm's coordinate system, to move the end-effectors.

As mentioned earlier, the UAV is equipped with an Intel RealSense D435 device. This device was chosen due to its capability to obtain reasonably accurate depth information outdoors. The depth information is exploited by the visual algorithm to obtain the 6D pose of the object for the manipulation process. This camera also provides excellent quality for short distances, as it is able to capture depth from as close as 0.2 m, which makes it ideal for hand-eye coordination in manipulation tasks in robotics.

To conclude, Table 1 summarizes the specifications of the complete aerial system.

† http://www.dronetools.es.

‡ https://pixhawk.org/.

§ https://github.com/bardo91/hecatonquiros/.

    3. Object detection and pose tracking

Fig. 1. A photograph of the arms mounted on the multirotor.

This section describes the algorithm used to detect and track the target object. Object detection is a challenging task. However, the use of deep learning techniques has revolutionized the way in which perception is conceived. The proposed methodology combines a convolutional neural network (CNN) with a random sample consensus algorithm to detect the object and compute its pose for the aerial manipulator. The algorithm is summarized in Fig. 3.

The algorithm is based on two stages. First, an object-detection CNN is applied to produce object candidates. Then, an alignment algorithm is used to compute the exact location of the target object.

CNNs are widely used nowadays and provide solutions to many difficult problems, as in the case of object detection. If the target object is known, only a labeled database of images is needed to train the network. However, these algorithms are heavy in mathematical operations, albeit highly parallelizable. Thus, GPUs are usually used because they are specifically designed to perform parallel operations efficiently. Some of the most used frameworks for this purpose are TensorFlow [43] and Caffe [44]. These frameworks provide an extensive "model zoo" with the implementation of the newest networks, thus speeding up the testing and development process. However, it is essential to consider that the final algorithm will be carried out by the onboard computer in the drone, which may have some limitations such as compute unified device architecture (CUDA) compatibility. In Section 6, a comparison between different algorithms (F-RCNN, SSD, and YOLO) on three different devices (a laptop computer with a GTX1070, a Jetson TX1 by Nvidia, and an Intel NUC computer) is presented.

The CNN is fed with RGB images provided by the camera. As a result, a set of candidate bounding boxes is obtained. The bounding box with the highest probability is employed to crop both the RGB and the depth images. These images are used to compute a local point cloud, combining the information from the pictures and the calibration matrices of the device. This colored point cloud is used in the next stage.
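To make this step concrete, the following minimal sketch (Python with NumPy; the function name and parameters are illustrative, not the authors' code) back-projects the depth pixels inside the winning bounding box into a colored point cloud using the pinhole model and the camera calibration parameters.

import numpy as np

def bbox_to_point_cloud(depth, rgb, bbox, fx, fy, cx, cy, depth_scale=0.001):
    # Back-project the depth pixels inside a detection bounding box into a
    # colored point cloud using the pinhole model (fx, fy, cx, cy are the
    # camera intrinsics; depth_scale converts raw depth units to meters).
    u_min, v_min, u_max, v_max = bbox
    us, vs = np.meshgrid(np.arange(u_min, u_max), np.arange(v_min, v_max))
    z = depth[v_min:v_max, u_min:u_max].astype(np.float32) * depth_scale
    valid = z > 0                              # discard pixels with no depth reading
    x = (us - cx) * z / fx                     # pinhole back-projection
    y = (vs - cy) * z / fy
    points = np.stack([x[valid], y[valid], z[valid]], axis=1)
    colors = rgb[v_min:v_max, u_min:u_max][valid]
    return points, colors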

    In short, the use of the object-detection algorithm reduces the search for the object during the alignment stage, decreasing the searchable area by 50%, and possibly by as much as 80% if the crawler represents a small fragment of the image.

At this point, a fragment of the point cloud that contains the target object is obtained. In order to locate the object, an alignment algorithm is used. This algorithm filters the object from the background. A point cloud model of the crawler is provided to perform an iterative closest point (ICP)-based algorithm [45,46], which outputs the pose of the target. The ICP method uses all the available information to enhance the alignment process. It includes the 3D location of points (Eq. (1)), the normal information obtained from both the model and the scene (Eq. (2)), and the color data (Eq. (3)). This information is used in the form of outlier rejections during the selection process of points before the iterative calculation of the affine transformation.

Fig. 2. Manipulators' kinematic reachability and computer-aided design models for the online simulations and planning. (a) Kinematic reachability of the aerial manipulator; (b) 6-DoF arms model in OpenRAVE and coordinate frames; (c) 4-DoF arms model in OpenRAVE and coordinate frames.

    Table 1 Platform specifications.

where D_dist is the geometrical distance between any two points x_1 and x_2 of the point cloud.

where D_norm is the cosine of the angle between the normals A_1 and A_2; A_1 and A_2 are the normals of any two points x_1 and x_2 of a point cloud, respectively.

where D_color is the absolute distance between the colors assigned to any two points x_1 and x_2 of a point cloud; R, G, and B are the values of the red, green, and blue channels, respectively.
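As an illustration of how Eqs. (1)-(3) can act as outlier-rejection criteria, the following sketch (with hypothetical thresholds; not the paper's implementation) keeps only the ICP correspondences that pass the distance, normal, and color checks before the transform is re-estimated.

import numpy as np

def reject_correspondences(src_pts, dst_pts, src_n, dst_n, src_rgb, dst_rgb,
                           max_dist=0.02, min_cos=0.8, max_color=60.0):
    # All arrays are Nx3; the i-th source point is paired with the i-th target point.
    d_dist = np.linalg.norm(src_pts - dst_pts, axis=1)                         # Eq. (1): geometrical distance
    d_norm = np.sum(src_n * dst_n, axis=1) / (
        np.linalg.norm(src_n, axis=1) * np.linalg.norm(dst_n, axis=1) + 1e-9)  # Eq. (2): cosine of normal angle
    d_color = np.sum(np.abs(src_rgb.astype(np.float32) -
                            dst_rgb.astype(np.float32)), axis=1)               # Eq. (3): absolute color distance
    return (d_dist < max_dist) & (d_norm > min_cos) & (d_color < max_color)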

It is worth noting that ICP algorithms are sensitive to the initial conditions. For this reason, the pose computed at the previous instant (k - 1), where k denotes the instant of the current cloud, is used to provide a good initial estimate for the algorithm. This fact, together with the cropping and cleaning of the input cloud, maximizes the probability of the algorithm converging to the correct solution.

Finally, the result of the algorithm is filtered by an extended Kalman filter (EKF) [47,48]. The filter helps to reduce the noise in the results and makes it possible to predict the relative speed of the object, which is used to improve the object pose estimation. A simple kinematic model was chosen to estimate the object's state.
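A minimal sketch of such a filter is shown below, assuming a constant-velocity kinematic model on the object's position only (the actual EKF in the paper also handles orientation, and the noise values here are placeholders).

import numpy as np

class PoseFilter:
    # Constant-velocity filter on the object's 3D position: the state is
    # [px, py, pz, vx, vy, vz], and the ICP pose provides the position measurement.
    def __init__(self, q=1e-3, r=1e-2):
        self.x = np.zeros(6)
        self.P = np.eye(6)
        self.Q = q * np.eye(6)                 # process noise
        self.R = r * np.eye(3)                 # measurement noise of the ICP result
        self.H = np.hstack([np.eye(3), np.zeros((3, 3))])

    def predict(self, dt):
        F = np.eye(6)
        F[:3, 3:] = dt * np.eye(3)             # position integrates the estimated velocity
        self.x = F @ self.x
        self.P = F @ self.P @ F.T + self.Q

    def update(self, z):
        y = z - self.H @ self.x                # innovation from the measured position
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ y
        self.P = (np.eye(6) - K @ self.H) @ self.P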

    4. Grasp planning

In order to make it possible to grasp the target object, a grasp planning algorithm was developed, composed of the following steps: First, a set of feasible grasps is generated, taking into account the target's shape; next, these grasps are arranged according to their properties; finally, the best available grasp is chosen according to the computed visual information at each instant. During the grasping procedure, if the pose of the object makes the current grasp unreachable, the planner chooses the closest best grasp.

    4.1. Grasp generation

First, a set of possible contact points is generated on the surface of the target object. Our approach is based on the grasp generation algorithm described in Ref. [49]. This algorithm casts a set of rays from a bounding box toward the target object. Each ray collides once with the surface, and the surface normal at the collision point is computed. An example is shown in Fig. 4.

However, this methodology may fail to generate a feasible grasp for objects with complex shapes. Fig. 5 shows an example of the ray-tracing algorithm for a U-shaped object. It can be seen that if the aperture of the gripper is smaller than the size labeled as "a" in Fig. 5(b), then the object is ungraspable for the robot. However, if the algorithm takes into account all the possible internal nooks of the object's surface, then the robot can grasp the object if the aperture of the gripper is larger than the sizes labeled as "b" or "d" in Fig. 5(b).

    Fig. 3. Scheme of the visual algorithm for the object detection and pose estimation of the target object. EKF: extended Kalman filter.

Fig. 4. Example of the ray-tracing algorithm for contact point calculation. (a) The initial set of rays used to compute the grasp points; (b) the resulting grasp points computed using the ray-tracing algorithm.

Fig. 5. Example of the use of the grasping algorithm for a non-convex object. (a) An isometric view of the rays crossing the object; (b) the front view. Red lines represent the rays used to trace the possible grasp points.

In general terms, let p be a single ray passing through an object. The number of collisions of p with a mesh is M_collisions = M_folds × 2; thus, the number of combinations of opposite contact points that might generate a grasp is

where M_grasps is the number of possible grasps; M_folds corresponds to the folds intersected by the ray; and j is the internal loop variable of the sum operator.
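The following sketch illustrates the idea for a single ray, using the trimesh library as an assumed mesh backend (the paper does not specify its tooling): all intersection points along the ray are collected, and pairs whose normals oppose each other and that fit within the gripper aperture are kept as candidate contacts.

import numpy as np
import trimesh

def contact_pairs_along_ray(mesh, origin, direction, max_aperture):
    # Cast one ray through the mesh and collect every intersection (2 * M_folds points).
    locations, _, tri_idx = mesh.ray.intersects_location(
        ray_origins=[origin], ray_directions=[direction])
    if len(locations) < 2:
        return []
    order = np.argsort(np.dot(locations - np.asarray(origin), direction))  # sort hits along the ray
    locations, tri_idx = locations[order], tri_idx[order]
    normals = mesh.face_normals[tri_idx]
    pairs = []
    for i in range(len(locations)):
        for j in range(i + 1, len(locations)):
            width = np.linalg.norm(locations[j] - locations[i])
            opposing = np.dot(normals[i], normals[j]) < 0    # surfaces face away from each other
            if opposing and width <= max_aperture:           # the gripper can span both contacts
                pairs.append((locations[i], locations[j], normals[i], normals[j]))
    return pairs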

This generation of contact point candidates may be time intensive if the mesh of the object is too complicated. In order to speed up this step of the algorithm, the contact points generated are stored in a binary file so they can be reused in any mission if the object is the same. Once this ray tracing is performed, the untransformed results are stored in a database; if the same object or a different instance of the same object is grasped later, this database is recovered, thus avoiding all the ray-tracing computations.

The set of contact points is then used to generate candidate grasps. The grasps are sampled in an internal simulator according to the pose of the object relative to the aerial manipulator. The algorithm computes the reachability for each grasp, the possible collisions between the manipulator and the target object, and the quality of each grasp. First, each arm samples all the possible candidate solutions to generate a list of feasible grasps for each arm. Next, all the feasible grasps are tested in pairs to obtain a list of dual grasps. The quality of these grasps is analyzed in terms of force closure and largest minimum resisted wrench, as described in Ref. [32]. All these grasps are then stored to be used during the planning process.
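A compact sketch of this pairing and ranking step is given below; collide() and quality() are hypothetical callbacks standing in for the OpenRAVE collision check and the quality metrics of Ref. [32].

from itertools import product

def rank_dual_grasps(left_grasps, right_grasps, collide, quality):
    # Combine per-arm feasible grasps into dual-arm candidates, discard pairs in
    # which the two arms (or grippers) collide, and sort by the quality metric.
    dual = [(g_left, g_right) for g_left, g_right in product(left_grasps, right_grasps)
            if not collide(g_left, g_right)]
    return sorted(dual, key=lambda pair: quality(*pair), reverse=True)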

    4.2. Planning, servoing, and grasping

Once the list of grasps is arranged, the best grasp is chosen based on the current relative target object pose. This is the closest and most stable grasp for the dual manipulator. Before grasping, the arms are sent to a pre-catch position; this prevents the manipulators from performing large and dangerous movements, as it is crucial to prevent the manipulators from colliding with the flying components of the robot.

Let L be a multi-link robot manipulator specified by the scalar variables Θ = {θ_1, ..., θ_i} describing the states of its joints. The end of each link of the body has a certain position S = {s_1, ..., s_i}, some of which are the so-called end-effectors. These positions can be described using the Denavit-Hartenberg (DH) formulation [50] as a chain of transformations between the joints of the manipulator, each computed from a set of parameters DH_i = {α_i, a_i, d_i, φ_i}: α_i is the angle between the z axes of the joints, a_i is the distance between the joints along the x axis of the first joint, d_i is the distance between the two joints along the z axis of the first joint, and φ_i is the angle about the z axis of the first joint between the x axes of the joints. Each DH_i element is a matrix that transforms from link (i - 1) to link i; that is, it performs forward kinematics (FK).
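For reference, under the standard DH convention (which the notation above appears to follow; the paper's exact convention may differ), each DH_i corresponds to the homogeneous transform

A_i = \begin{bmatrix}
\cos\varphi_i & -\sin\varphi_i\cos\alpha_i & \sin\varphi_i\sin\alpha_i & a_i\cos\varphi_i \\
\sin\varphi_i & \cos\varphi_i\cos\alpha_i & -\cos\varphi_i\sin\alpha_i & a_i\sin\varphi_i \\
0 & \sin\alpha_i & \cos\alpha_i & d_i \\
0 & 0 & 0 & 1
\end{bmatrix}

and the end-effector position S(Θ) follows from chaining these transforms along the kinematic chain.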

    where S(Θ) represents the position of the end effector in terms of the joint space Θ.

where δ is the partial derivative operator; θ_n is a specific joint of the joint space; s_m is the position of the m-th link; and m and n are the index iterators over the variables that define the complete Jacobian matrix J(Θ).

The final purpose is to guide the arms toward the target grasps using the visual information from the sensors. In this work, a Jacobian damped least squares (DLS) gradient descent method [53,54] is applied to ensure the convergence of the arms toward the target position. This method has been shown to be more robust than inverse and pseudoinverse methods [54] near instabilities and singularities in the Jacobian matrices. Let X_K and Q_K be the target position and orientation for the end-effector given by the grasp planning and updated by the vision algorithm. The aim is to update the manipulator's pose by updating its joints.

The DLS proceeds as follows: Eqs. (7) and (8) are computed based on the current state of the robot and the target pose of the end-effector; next, Eq. (9) is used to compute a vector that contains an increment in the joint values of the robot that allows the end-effector to move toward the target pose.

where X_target and Q_target are the target positions for the end-effectors; X_K and Q_K are the target position and orientation of the end-effector at instant K.

where J_c, J_X, and J_Q are the corresponding position and orientation Jacobians at the current state.

where Θ_K represents the current joint angles of the arm; λ is the damping coefficient that reduces the issues related to the inversion of the matrix, and it must be large enough to ensure that the algorithm behaves adequately close to singularities, but not so large that it compromises the convergence rate; and I is an identity matrix with the size of J_c. These quantities are calculated based on the position of the target object, which is computed in the vision module.
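As a concrete illustration, one common form of the DLS update is sketched below in Python/NumPy; fk() and jacobian() stand in for the kinematic solvers provided by OpenRAVE, only the position error is handled (the full method also uses the orientation Jacobian J_Q), and the λ² damping term is one usual variant rather than necessarily the paper's exact Eq. (9).

import numpy as np

def dls_step(theta, fk, jacobian, x_target, lam=0.1, gain=0.5):
    # One damped least squares servoing step: move the joints so that the
    # end-effector position approaches the target given by the grasp planner.
    J = jacobian(theta)                     # 3 x N position Jacobian at the current state
    e = x_target - fk(theta)                # task-space position error
    JJt = J @ J.T
    # delta = J^T (J J^T + lambda^2 I)^{-1} e, i.e., the damped pseudo-inverse of J
    delta = J.T @ np.linalg.solve(JJt + (lam ** 2) * np.eye(JJt.shape[0]), e)
    return theta + gain * delta             # updated joint values sent to the arm controller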

Furthermore, working with a pair of manipulators requires additional constraints. The principal constraint is the need to avoid collisions of the manipulators with themselves and with other objects. However, checking the collisions of any non-convex object is not trivial. The use of a convex hull is advantageous in some situations, as it simplifies the collision checks. However, this simplification can be rough in some applications. Modern approaches split the objects into a set of convex hulls [55,56]. This results in a more accurate solution while preserving the advantages and mathematical simplifications of working with convex hulls. In the present work, the implementation given by OpenRAVE is used [57], which integrates these fast methodologies.

Finally, it is possible that due to the oscillations of the aerial robot, or if the target object is mobile, the best grasp may become unreachable. To overcome this issue, the algorithm continuously measures the quality of the current grasp against the database of grasps in a separate thread. If the grasp becomes unreachable or infeasible, the algorithm switches to the new best option.
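A minimal sketch of such a monitoring thread is given below; get_object_pose(), is_reachable(), and set_target() are hypothetical placeholders for the vision and planner interfaces.

import threading
import time

def monitor_grasps(get_object_pose, is_reachable, grasp_db, set_target, period=0.1):
    # Background loop: re-check the sorted grasp database against the latest
    # object pose and switch to the best grasp that is currently reachable.
    def loop():
        while True:
            pose = get_object_pose()
            for grasp in grasp_db:              # grasp_db is sorted by quality
                if is_reachable(grasp, pose):
                    set_target(grasp)           # best feasible grasp right now
                    break
            time.sleep(period)
    thread = threading.Thread(target=loop, daemon=True)
    thread.start()
    return thread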

    5. State machine and complete mission

The work presented in this paper covers the process starting from the take-off of the UAV from the platform until its landing after grasping the target object. Such a complete mission requires a sophisticated system design. The system is composed of five modules, as shown in Figs. 6 and 7. Each module is responsible for a single task in the UAV. In order to simplify the development process, communication between modules is carried out by ROS [58]. Each module starts in an idle state and is triggered by a global start signal.

Fig. 6(a) shows the global state machine, which is responsible for the global behavior of the mission. It is composed of three states: approaching, grasping, and homing. Approaching triggers the take_off signal and waits until the UAV has taken off. After that, the global state machine sends the approaching signal, which carries an initial waypoint close to the target object, to the UAV's controller. Once the UAV is in the target position, it holds until the target object is found (found signal). Then the state machine switches to the grasping state, in which the visual servoing comes into action to grasp the target. Once the object is grasped, the state machine switches to the homing state, which ends when the UAV lands.
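The following sketch renders this behavior as a simple event-driven loop (Python); the found and grasped events come from the text, while landed is an assumed name for the terminal transition.

from enum import Enum, auto

class MissionState(Enum):
    APPROACHING = auto()
    GRASPING = auto()
    HOMING = auto()

def global_state_machine(events):
    # Minimal rendering of the global state machine of Fig. 6(a): approaching
    # until the object is found, grasping until it is grasped, then homing.
    state = MissionState.APPROACHING
    for event in events:
        if state is MissionState.APPROACHING and event == 'found':
            state = MissionState.GRASPING       # target located: start the visual servoing
        elif state is MissionState.GRASPING and event == 'grasped':
            state = MissionState.HOMING         # object grasped: return home and land
        elif state is MissionState.HOMING and event == 'landed':
            break                               # mission complete
    return state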

Fig. 6(b) shows the state machine of the UAV's controller, which consists of six states. It starts with idle. Once the UAV takes off, it keeps hovering until the next signal. If the global state machine sends an approaching signal, the UAV control state machine switches to the waypoint state and moves the UAV to a target position. If the object is found, it then switches to the tracking state. The tracking module is responsible for checking whether the target object is in a reasonable position to be grasped. If so, it sends the near signal. This state finishes when the object is either lost or grasped.

Fig. 7(a) shows the vision state machine, which has the simplest structure. This state machine continuously runs the vision algorithm in order to detect the target object and compute its pose. When the object is found and its pose is computed, the state machine publishes that information together with the found signal. If the object is lost, then the lost signal is sent to cause the appropriate effect.

Fig. 7(b) shows the state machine of the grasp planner module, which is responsible for computing the appropriate grasps of the object. It also runs the servoing algorithm to compute the desired joints of the arms according to the target grasp. It starts by estimating the arranged list of grasps. Later, if the object is close, within the range of the arms, the state machine switches to the servoing state, in which the visual servoing publishes the target joints for the arms. As mentioned in Section 4, the arms are not sent directly to the grasping pose. Instead, they are sent to a position close to it, called the "approaching pose." Once the arms are in the approaching position, the state machine toggles to the grasping state, in which the arms move quickly toward the object and grasp it.

Finally, Fig. 7(c) shows the arms controller state machine. The manipulator controller is responsible for publishing information about the manipulators and providing the interface for moving them. It also limits the speed and trajectories of the arms to ensure that they are safe. An emergency stop (E_S) state has been included to stop the arms externally for the sake of safety. The emergency stop also causes the UAV to keep hovering to prevent it from crashing.

    Fig. 6. First two submodules of the system. (a) Global state machine; (b) UAV control state machine.

    6. Experiments

This section begins by describing the experiments that were performed to evaluate and validate the system and algorithms. Next, a set of experiments are described and discussed to show the performance of the system in carrying out aerial manipulation tasks. As mentioned earlier, this work focuses on grasping a crawler robot that was developed during the AEROARMS project, and that is used for pipe inspection in factories. However, the algorithm can be used equally well for any object, as neither the vision algorithm nor the planning process has any particular restriction to this particular object. A summary video of the experiments can be found online.†

† https://youtu.be/nXYlzqwM8kA.

    6.1. System evaluation

As described in Section 3, an object-detection CNN is used in the first stage of the vision algorithm to detect the object. This provides a good initial solution for the alignment algorithm, which is more computationally expensive.

Three CNNs were tested on three different devices to choose the best option that fits the system requirements for the manipulation task and the payload limitations of the aerial manipulator. The specific CNNs tested were F-RCNN, SSD300, and YOLO (tiny YOLO v2). These were tested on the following three devices: a laptop with a GTX1070, a Jetson TX1, and an Intel NUC with an Iris GPU. The first two devices have CUDA capabilities, as they have Nvidia GPUs. However, the third device cannot use common frameworks due to a lack of CUDA compatibility. Table 2 summarizes the average computational times of the different detection algorithms on the different devices. As the Intel NUC is not CUDA compatible, only YOLO was evaluated on it. To be specific, an OpenCL implementation of YOLO has been used.‡

It can be clearly seen that the performance on the laptop exceeds that of the other devices. However, due to the strict payload limitations, only the other two devices can be considered for operation on UAVs. YOLO runs slightly faster on the Jetson TX1 device; however, the central processing unit (CPU) capabilities of the Intel NUC computer clearly exceed those of the Jetson. As the whole system requires many processes for the rest of the algorithms, the Intel NUC platform was selected.

The main benefit of the region proposal algorithm (the CNN) is that it drastically reduces the size of the cloud to be used in the alignment process (by between 50% and 80%). This has a double benefit: it decreases the computational time of this step, while also allowing the algorithm to converge to a good solution; as gradient descent algorithms are usually sensitive to outliers and initial conditions, this improves convergence to the minimum.

Once the object is detected and its pose is estimated, the algorithm performs the grasp planning process described in Section 4. This process can be split into three stages. The first stage is the computation of contact points; this process can be precomputed, as it only depends on the resolution chosen for the algorithm and the model of the object. The second stage is the computation of feasible grasps for each arm separately. Finally, the grasps are taken in pairs to arrange them by their quality metrics. Table 3 summarizes the average computation times for each stage for various objects. The second column evaluates the time spent in the generation of the initial set of contact points; the grid resolution is the distance between the initial set of rays used during the ray-tracing evaluation, as described in Section 4.1. The third column shows the time spent evaluating the feasibility of the grasps generated in the previous step. Finally, the fourth column shows the time spent evaluating the quality of the grasps.

In order to prevent the arms from colliding with any part of the aerial platform, a safe volume is left above the shoulders of the arms. This volume corresponds to the flying parts of the robot, which the arms must never collide with. For safety reasons, this volume is taken into account in the planning process and during the servoing algorithm.

Fig. 8 shows a sequence of the online simulations run in OpenRAVE for the grasp planning process. Afterwards, the set of feasible grasps is stored for later use during the re-planning stage.

‡ https://github.com/ganyc717/Darknet-On-OpenCL.

Table 2 Computational time (in seconds) of the different detection algorithms on the tested devices.

Table 3 Computational times per mesh face for the grasp planning process.

    Fig. 8. Different samples of the grasp planning process.

    6.2. Simulated and test-bench experiments

In order to verify the performance of the arms and their control before the actual experiments, a set of simulations was carried out. The pose of the objects to be grasped was given with a certain noise. These objects were then animated to stress the arm servoing algorithm. Three objects with different shapes were tested. However, any other object can be chosen at this stage, as the algorithm is capable of computing a grasp for any mesh.

Fig. 9 shows a sequence of snapshots of the simulated experiments to demonstrate the reachability of the arms in different situations. In these simulation experiments, the algorithm proceeds as in the real experiments, but the pose of the object is provided by the simulator, with a certain level of noise. First, the algorithm computes a set of feasible grasps, as described in Section 4.1. Next, according to the position of the object, it chooses the best grasp and guides the arm toward the object, as described in Section 4.2. Once the arms are close to the grasping poses, the system sends a signal to close the gripper.

The purpose of these experiments was to demonstrate that, independently of the vision system, the algorithm is able to handle different objects and shapes. It generates a set of feasible grasps and dynamically chooses the best option based on the estimation of the object pose.

Before the real flights, the complete system, including the vision, was tested on a test-bench (Fig. 10), where the aerial manipulator was mounted on a fixed structure and the target object was moved.

    6.3. Flying experiments

This section describes the experiments that were carried out to demonstrate the performance of the system in real environments. For these experiments, a mock-up of a crawler robot was 3D printed. Two different setups were prepared. In the first setup, which was indoors, the UAV performed a completely autonomous operation, thanks to the position obtained by a MOCAP system. The second setup was intended to test the system outdoors. In both experiments, the 4-DoF gripper was chosen, as it is stronger than the 6-DoF version.

    Fig. 9. Samples of simulation tests with different objects at different times.

Fig. 10. Testing the complete system at the indoor test-bench. (a) The camera view and the 2D detection of the crawler mock-up; (b) the 3D virtualization of the robot, including the estimated position of the crawler and the target grasps; (c) a third view of the experiment.

The first key module is the vision system, which computes the pose of the object to be grasped for the grasp planning, feeds the arms module for the servoing, and provides a reference position for the UAV. Fig. 11 shows the results of the algorithm. Fig. 11(a) shows the result of the object detection in the RGB image using the CNN. Fig. 11(b) shows the fragment of the point cloud computed using the depth image and the estimation of the object's pose from the alignment algorithm, filtered by the Kalman filter. The pose estimation is highlighted with an overlaid red point cloud and a coordinate system, as shown in the figure.

Fig. 12 shows the estimation of the object's pose from the vision system against the ground truth from a MOCAP system. The estimation shows an average root mean square error (RMSE) smaller than 0.02 m. This accuracy is fundamental to ensure that the robot can perform the grasping tasks. Larger errors in the estimated localization would be problematic, causing the robot to miss the grasp and even collide with the environment.

    Fig. 13 shows an additional experiment for testing the visual detection module. In this experiment, the mock-up was moved along a pipe. The position of both the UAV and the crawler was measured by the vision system and a MOCAP system.

The accuracy of the system was first tested indoors in a controlled environment with a MOCAP system. The mission started with the UAV taking off; the UAV then executed the mission pipeline explained in Section 5. Fig. 14 provides a snapshot of the experiment.

Fig. 15 shows the values of the joints during two visual servoing tests. The dashed line corresponds to the target joints and the solid line corresponds to the actual joint values. The figure shows how the system switches to a new grasp when the previous grasp becomes unreachable.

Fig. 11. Object detection and pose estimation results. (a) A bounding box resulting from YOLO; (b) the fragment of the cloud and the pose estimation shown as an overlaid red-dot model and a coordinate frame.

Fig. 12. Estimated pose of the crawler versus the ground truth in the UAV's coordinate system. (a) Estimated distance from the camera vs. ground truth in the X axis; (b) estimated distance from the camera vs. ground truth in the Y axis; (c) estimated distance from the camera vs. ground truth in the Z axis.

Finally, Fig. 16 provides a photograph of the outdoor experimental test-bench. In this experiment, the UAV flew while tied to a structure with a safety rope, for safety reasons. The photograph shows the point of view of the robot while grasping the mock-up of the crawler. This experiment demonstrated that the perception algorithm works outdoors using the RealSense depth camera, regardless of the sunlight conditions.

    7. Conclusions

This paper presents a complete system for performing manipulation operations with an aerial platform. Integrating two manipulators, instead of just one, makes it possible to grasp larger and more complex objects. In addition, having two manipulators helps the UAV to cancel fluctuations due to external disturbances and even choose grasps that are more stable because they enclose the center of mass. Both qualitative and quantitative experimental data have been provided in order to demonstrate that the system is capable of performing grasping operations.

The average RMSE of the vision system is lower than 0.02 m. The use of object-detection networks has been shown to be highly beneficial for the speed of the pose estimation. The main limitations come from the available onboard devices. Most state-of-the-art CNNs have been tested and designed to be used on powerful computers, which makes it challenging to integrate them in aerial platforms. However, after the analysis in Section 6, the chosen algorithm provided sufficiently good results for the mission. The next step will include the estimation of the object pose within the neural network itself.

Fig. 13. Trajectory of the UAV and estimated position of the crawler measured by the vision system in global coordinates (solid lines) and ground truth (dashed lines). (a) 3D trajectory of the UAV and the crawler during a moving experiment; (b) estimated position vs. real position of the crawler using the visual estimation in each axis.

    Fig. 14. Indoor autonomous mission using the MOCAP system.

It has been demonstrated in several environments that the system is able to operate and grasp the target object. In addition, these results have been supported with simulated objects in order to prove that the grasp planning algorithm is able to plan manipulation tasks with other shapes. The vision system has been tested under low-light conditions as well as outdoors under direct sunlight. The algorithm ran correctly under both illumination conditions.

    Acknowledgement

This work was carried out in the framework of the AEROARMS (SI-1439/2015) EU-funded project and the national project ARMEXTENDED (DPI2017-89790-R).

Fig. 15. Joint values during the experiments (in radians). At first, the arms follow the grasp target. After a certain point, the object rotates, so the system switches to another feasible grasp.

    Fig. 16. Outdoor autonomous test with a safety rope.

    Compliance with ethics guidelines

Pablo Ramon-Soria, Begoña C. Arrue, and Anibal Ollero declare that they have no conflict of interest or financial conflicts to disclose.
