
    Attention-based efficient robot grasp detection network


    Xiaofei QIN, Wenkai HU, Chen XIAO, Changxiang HE, Songwen PEI^{3,4}, Xuedian ZHANG^{3,4,5}

    1 School of Optical-Electrical and Computer Engineering, University of Shanghai for Science and Technology, Shanghai 200093, China

    2 College of Science, University of Shanghai for Science and Technology, Shanghai 200093, China

    3 Shanghai Key Laboratory of Modern Optical System, Shanghai 200093, China

    4 Key Laboratory of Biomedical Optical Technology and Devices of Ministry of Education, Shanghai 200093, China

    5 Shanghai Institute of Intelligent Science and Technology, Tongji University, Shanghai 201210, China

    Abstract: To balance the inference speed and detection accuracy of a grasp detection algorithm, which are both important for robot grasping tasks, we propose an encoder-decoder structured pixel-level grasp detection neural network named the attention-based efficient robot grasp detection network (AE-GDN). Three spatial attention modules are introduced in the encoder stages to enhance the detailed information, and three channel attention modules are introduced in the decoder stages to extract more semantic information. Several lightweight and efficient DenseBlocks are used to connect the encoder and decoder paths to improve the feature modeling capability of AE-GDN. A high intersection over union (IoU) value between the predicted grasp rectangle and the ground truth does not necessarily mean a high-quality grasp configuration, and might even cause a collision. This is because traditional IoU loss calculation methods treat the center part of the predicted rectangle as having the same importance as the area around the grippers. We design a new IoU loss calculation method based on an hourglass box matching mechanism, which creates good correspondence between high IoUs and high-quality grasp configurations. AE-GDN achieves accuracies of 98.9% and 96.6% on the Cornell and Jacquard datasets, respectively. The inference speed reaches 43.5 frames per second with only about 1.2 × 10^6 parameters. The proposed AE-GDN has also been deployed on a practical robotic arm grasping system and performs grasping well. Codes are available at https://github.com/robvincen/robot_gradet.

    Key words: Robot grasp detection; Attention mechanism; Encoder-decoder; Neural network

    1 Introduction

    The development of intelligent robots has brought great convenience to people's lives. Intelligent robots can replace humans in labor tasks such as handling, assembly, palletizing, household service (Wang Q et al., 2022), and logistics sorting. In these applications, robot grasping plays a vital role, in which the position of the object relative to the robot's gripper must be calculated. Traditional methods are based on model analysis, such as form-closure and force-closure methods, in which the specific parameters of the object must be known and the calculation process is complicated. Therefore, model analysis methods are difficult to use in unstructured scenarios. In recent years, algorithms based on computer vision and deep learning have become mainstream in the field of robot grasping due to the great progress in artificial intelligence. These algorithms usually use a deep neural network to detect the grasp configuration of the object in a visual image. This process is called grasp detection, and the detected grasp configuration can then be converted into robot commands using the system parameters.

    Jiang et al. (2011) conducted a pioneering work of grasp detection based on computer vision, giving the definition of a correct grasp box, and many subsequent works follow this definition in evaluating performance. Lenz et al. (2015) proposed a two-step grasp detection method based on fully connected neural networks and presented a five-dimensional representation of the grasp configuration. Guo et al. (2017) fused tactile and visual information, and used the sliding window mechanism for grasp detection. Although these early methods achieved good accuracy, they suffered from poor real-time performance, which is important for grasp detection tasks. Therefore, many algorithms optimized for execution speed have emerged subsequently. Redmon and Angelova (2015) proposed a method using the idea of meshing to directly regress multiple grasp configurations of the object. Kumra and Kanan (2017) used ResNet (He et al., 2016) as the feature extraction backbone, extracted features from RGB and depth inputs separately, and fused them in the middle layers. Morrison et al. (2018) proposed a pixel-level grasp detection network with much fewer network parameters and less computation, making it fully meet the real-time requirement of robot grasping. However, although many existing methods have achieved good performance, the balance between the real-time requirement and accuracy still needs improvement.

    To meet the real-time requirement of grasp detection and obtain better accuracy, in this paper we propose an attention-based efficient robot grasp detection network (AE-GDN) with an encoder-decoder structure. Inspired by Asif et al. (2019), this network uses DenseBlocks for feature modeling, which is efficient and requires little computation. In the encoder stages, spatial attention modules are used to improve the detailed information modeling capability. In the decoder stages, channel attention modules are used to improve the semantic information modeling capability.

    In mainstream grasp detection methods, a grasp detection result is regarded as correct if the intersection over union (IoU) between the predicted grasp rectangle and the ground truth (GT) is larger than a threshold. However, as shown in Fig. 1, although the predicted rectangles in the top row seem to meet the IoU threshold, if the grasp is performed, a fingertip on one side of the gripper will collide with the object. For clarity, Fig. 2 presents the top row of Fig. 1. If an IoU threshold of 0.25 is used, as in most methods, the robot will grasp the objects with its fingers at the red bars, which will obviously collide with the objects at the cross points in Fig. 2.

    Fig.1 Visual comparison of grasp detection results and the ground truth

    Fig.2 The collision explanation

    The reason is that in the calculation of IoU, the central area of the predicted grasp rectangle and the area near the fingertips are treated equally, although the area near the fingertips is obviously more important than the central area. Therefore, when the IoU loss is calculated during training, this work parses the predicted grasp configuration into hourglass-shaped boxes, so the impact of the central area of the predicted grasp rectangles is reduced. Note that because the test accuracy given by other methods is based on the IoU calculated with rectangles, for a fair comparison, the test accuracy results of this work are also calculated based on rectangles.

    The main contributions of this work are summarized as follows:

    1. We propose an encoder-decoder structured grasp detection neural network named AE-GDN. The accuracy of this model on the Cornell and Jacquard datasets reaches 98.9% and 96.6%, respectively.

    2. The IoU term in the loss function is improved by using an hourglass-shaped predicted grasp box instead of a rectangular one when the IoU is calculated, which reduces the influence of the central area of the predicted grasp boxes and creates good correspondence between high IoU values and high grasping success rates.

    3. We deploy the proposed AE-GDN in a robotic arm grasp system with an antipodal gripper, and apply it to grasp 25 commonly used tools and daily necessities with considerable shape diversity and complexity. Real-time grasping can be achieved with a fairly high success rate. The proposed AE-GDN algorithm is open-sourced in our GitHub repository at https://github.com/robvincen/robot_gradet.

    2 Related works

    Robot grasp detection methods based on deep learning can be divided roughly into two categories according to the fundamental mechanism: object detection (OD) and heatmap matching (HM). OD-based grasp detection algorithms can be further divided into two-stage and one-stage methods. A two-stage method includes a region generation network and a grasp box regressor: first, the regions of interest (RoIs) in the feature map are obtained through the backbone, and then the grasp boxes are predicted in the second stage. A one-stage method obtains the grasp boxes by directly regressing the features extracted by the backbone. HM-based grasp detection algorithms output a pixel-level grasp configuration instead of the multiple discrete grasp probabilities outputted by OD-based methods, and different post-processing steps need to be executed according to the various output formats.

    2.1 Methods based on object detection

    2.1.1 Two-stage methods

    Lenz et al. (2015) proposed a cascaded network based on a sparse autoencoder structure. At first, potential grasp boxes are exhaustively generated by a simple network to produce a set of rectangular grasp candidates, and in the second stage a deeper network is used to select the best one. Guo et al. (2017) used the network proposed by Zeiler and Fergus (2014) to fuse visual information and tactile information, and discretized the rotation angle for classification. Zhou et al. (2018) proposed an oriented anchor box matching mechanism with ResNet (He et al., 2016) as the backbone. Zhang et al. (2019) extended the method proposed by Zhou et al. (2018), and used an RoI-based method to solve problems when multiple objects to be grasped are stacked together. Dex-Net 2.0, proposed by Mahler et al. (2017), first generates grasp candidates for the depth image, and then chooses the best grasp box by evaluating the grasp quality of the candidate set. Ainetter and Fraundorfer (2021) improved the second stage of the grasp detection network by introducing a semantic segmentation branch and using it to refine the grasp detection boxes.

    Two-stage grasp detection methods tend to output fairly accurate grasp boxes. However, this comes at the cost of computational speed, because generating candidate regions in the first stage is time-consuming, which makes these methods unsuitable for robot grasping tasks with strict real-time requirements.

    2.1.2 One-stage methods

    One-stage methods directly predict a set of grasp configurations that can be parsed into grasp rectangles in the image. Redmon and Angelova (2015) proposed a regression-based grasp detection method, with AlexNet (Krizhevsky et al., 2012) adopted as the backbone. This method assumes that each image contains only one graspable object, divides the image into a set of grids, and predicts a grasp configuration for each grid. Kumra and Kanan (2017) proposed a deep convolutional neural network (CNN) to predict robust grasp configurations, demonstrating the potential of multimodal information fusion. A commonly used representation of the grasp configuration includes the grasp rectangle and the rotation angle, where the rotation angle is the deflection angle of the grasp rectangle relative to the horizontal line. Chu et al. (2018a) discretized the rotation angle into multiple classes and used a classification method to predict it. The reason for discretizing the rotation angle is that the angle is a non-Euclidean value (Hara et al., 2017), so the standard regression loss function is not suitable. Although methods using a discretizing mechanism such as that in Chu et al. (2018a) converge more easily during training, their output angles are limited to the preset classification set. Park et al. (2020) used a spatial transformer network (Jaderberg et al., 2015) and ResNet (He et al., 2016) instead of a sliding window mechanism to meet the real-time requirement.

    The one-stage grasp detection method does not need to generate RoIs, so it has high inference speed but limited grasp detection accuracy.

    2.2 Methods based on heatmap matching

    The heatmap matching method outputs pixel-level grasp configurations. Commonly used grasp configuration primitives include the grasp width, rotation angle, object position, and grasp quality score. The heatmap matching method uses the error between the predicted output of these primitives and their corresponding GT values for supervised training.

    The GG-CNN proposed by Morrison et al. (2018) is a pioneering heatmap matching grasp detection method. It is a CNN without any fully connected layer, takes only depth information as input, and greatly reduces the numbers of parameters and calculations, so real-time performance is guaranteed. Asif et al. (2019) proposed a densely supervised grasp detector with a multi-level CNN structure. This network generates global, regional, and pixel-level grasp configurations, and chooses the best one as the final output. Kumra et al. (2020) proposed a grasp detection network named GR-ConvNet, which can handle different input modes including RGB, RGB-D, and depth. Based on the GG-CNN of Morrison et al. (2018), GR-ConvNet adds multiple ResBlocks (He et al., 2016) to connect the encoder and decoder, which provides better mapping capability from input to grasp configuration. DD-Net (Wang Y et al., 2021) defines the grasp configuration as a pair of fingertips, and presents a specialized loss function to supervise the training process.

    Grasp detection methods based on heatmap matching can achieve a good balance among robustness, inference speed, detection accuracy, etc., and have aroused extensive research enthusiasm in the robot grasping community in recent years. The AE-GDN proposed in this paper is a heatmap matching method. It improves grasp detection performance mainly through modifications in three aspects: the backbone, the network architecture, and the IoU loss. AE-GDN achieves fairly high grasp detection accuracy and real-time execution with an inference time between 23 and 26 ms.

    3 Problem formulation

    To better understand the grasp detection algorithm and its role in the overall practical robot grasping task, like other grasp detection papers, we provide a simple definition of the robot grasping problem. In this study, the robot grasping problem is defined as a planar grasping problem, and the image captured by the camera is used to predict the grasp configurations. Then the grasp configurations are converted into grasp commands executed by the robotic arm.

    First, the grasp detection algorithm is used to detect the pixel-level grasp configuration, which is denoted as G. In this study, G is defined as follows:

    G = (W, Θ, Q),    (1)

    where W, Θ, and Q represent three output images with the same size as the input image. Each pixel in these images can be regarded as the width, rotation angle, and grasp quality score of a grasp rectangle candidate, respectively.

    Second, an argmax operation is performed on Q of G to obtain the pixel coordinates of the highest grasp quality score, and then these coordinates are used as the index to obtain the width, rotation angle, and grasp quality score of the grasp rectangle in the pixel coordinate system, denoted as G_p. In this study, G_p is defined as follows:

    G_p = (p, w_p, θ_r, q),    (2)

    where p denotes the pixel coordinates (u, v) of the grasp center point, w_p is the width of the pixel-level grasp rectangle, θ_r is the deflection angle relative to the horizontal line of the image in the range of [−π/2, π/2], and q is the pixel-level grasp quality score in the range of [0, 1].
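    To make this parsing step concrete, the following is a minimal sketch (not the authors' released code) of how G_p could be extracted from the three output maps; the array names q_map, theta_map, and w_map are assumptions used only for illustration.

```python
import numpy as np

def parse_grasp(q_map, theta_map, w_map):
    """Pick the pixel with the highest grasp quality and read out
    the corresponding rotation angle and grasp width (Eq. (2))."""
    # Index of the maximum grasp quality score in the quality heatmap
    v, u = np.unravel_index(np.argmax(q_map), q_map.shape)
    return {
        "p": (u, v),                        # grasp center in pixel coordinates
        "w_p": float(w_map[v, u]),          # grasp width at that pixel
        "theta_r": float(theta_map[v, u]),  # rotation angle in [-pi/2, pi/2]
        "q": float(q_map[v, u]),            # grasp quality score in [0, 1]
    }
```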

    Third, G_p is converted into the camera coordinate system through the camera's intrinsic parameters and the depth information captured by the stereo camera, and the result is denoted as G_c. In this study, G_c is defined as follows:

    G_c = (P, w_c, θ_r, q),    (3)

    where P denotes the coordinates (x, y, z) of the grasp center point in the camera coordinate system, w_c is the width of the grasp in the camera coordinate system (i.e., the difference between the opening and closing degrees of the gripper), θ_r is the same as that in Eq. (2) because the eye-in-hand robot grasping configuration is used in this study, and q represents the grasp quality score.

    Finally, G_c is converted into the robot coordinate system through the extrinsic parameters of the robot and camera, and the result is denoted as G_r. The definition of G_r is similar to that of G_c, and will not be given here. The conversion process from G_p to G_r can be represented as follows:

    G_r = Trans_rc(Trans_cp(G_p)),    (4)

    where, in Trans_rc, R is the rotation matrix from the camera coordinate system to the robot coordinate system and T is the translation matrix; similarly, in Trans_cp, R is the rotation matrix from the pixel coordinate system to the camera coordinate system and T is the corresponding translation matrix.
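    As an illustration of this chain of transforms (a simplified sketch, not the exact implementation of the paper), a pixel can be back-projected with the pinhole camera model and then mapped into the robot frame with a homogeneous transform; the intrinsic values fx, fy, cx, cy and the 4×4 matrix T_rc below are placeholders.

```python
import numpy as np

def pixel_to_robot(u, v, depth, fx, fy, cx, cy, T_rc):
    """Back-project pixel (u, v) with known depth into the camera frame
    (pinhole model), then map it into the robot frame with the extrinsic
    homogeneous transform T_rc (4x4, camera -> robot)."""
    # Pixel -> camera coordinates (back-projection used for Eq. (3))
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    z = depth
    p_cam = np.array([x, y, z, 1.0])
    # Camera -> robot coordinates (rigid transform used for Eq. (4))
    p_robot = T_rc @ p_cam
    return p_robot[:3]
```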

    4 Method

    We propose an end-to-end grasp detection CNN named AE-GDN, as shown in Fig. 3, in which DenseBlocks (Huang et al., 2017) are used to connect the encoder and decoder. Due to the efficiency of DenseBlock, AE-GDN possesses powerful feature modeling capability and high inference speed. Inspired by CBAM (Woo et al., 2018), spatial and channel attention modules are added to the encoder and decoder stages of AE-GDN, respectively. Because the inputs of the spatial and channel attention modules are different, coming from shallow and deep layers of the network respectively, the capability of AE-GDN to extract detailed and semantic information can be enhanced. During the calculation of the traditional IoU loss, the central area of the predicted grasp rectangle and the area near the fingertips are treated equally, leading to poor correspondence between high IoUs and high grasp success rates, as shown in Fig. 1. In this study, a new IoU loss based on an hourglass-shaped grasp box is designed to reduce the impact of the central area in the predicted grasp box.

    Fig.3 Model architecture

    4.1 Model architecture

    Fig. 3 shows the model architecture of the proposed AE-GDN, which is an encoder-decoder structured pixel-level robot grasp detection network. The output images, namely the grasp quality score image, rotation angle image, and width image, have the same resolution as the input. Because the three output images share the same resolution, once the pixel with the highest grasp quality is located, the corresponding width and angle are known. Note that in this study, we follow the approach of GG-CNN (Morrison et al., 2018) to predict the rotation angle. The rotation angle is not directly outputted; instead, sin(2θ) and cos(2θ) are first predicted and then parsed into the rotation angle image. The reason is that the L2 loss function cannot be used directly in the non-Euclidean space of the angle value (Hara et al., 2017).
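    The sin(2θ)/cos(2θ) parameterization described above can be encoded and decoded as in the sketch below (a minimal illustration of the GG-CNN-style angle encoding, not the authors' exact code).

```python
import numpy as np

def encode_angle(theta):
    """Encode an angle in [-pi/2, pi/2] as the pair (sin 2θ, cos 2θ),
    which is continuous and suitable for an L2-style regression loss."""
    return np.sin(2.0 * theta), np.cos(2.0 * theta)

def decode_angle(sin2t, cos2t):
    """Recover the rotation angle image from the two predicted maps."""
    return 0.5 * np.arctan2(sin2t, cos2t)
```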

    AE-GDN has three encoder and three decoder stages. At the bottom of the encoder-decoder structure, several DenseBlocks are used to enhance the feature modeling capability. DenseBlock is an outstanding computer vision feature extractor; it can facilitate model training, strengthen feature propagation, and encourage feature reuse. Importantly, the number of its parameters and the amount of calculation are small due to its narrow layers, so it can meet the real-time requirement of robot grasping. According to its output channel number, DenseBlock has many variants, and this study adopts the variant with 24 output channels. As the number of DenseBlocks increases, the performance of the model increases, but the inference time and the number of parameters also increase, which is not conducive to real-time robot grasping. Therefore, after balancing the pros and cons between grasp detection performance and inference time, the number of DenseBlocks used in this study is set to 18.
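    For readers unfamiliar with DenseBlocks, the following is a minimal DenseNet-style sketch with a growth rate of 24 output channels per layer; the exact layer composition used in AE-GDN may differ, so this is only an assumption-laden illustration.

```python
import torch
import torch.nn as nn

class DenseLayer(nn.Module):
    """One narrow DenseNet-style layer: BN -> ReLU -> 3x3 conv producing
    `growth_rate` new channels, concatenated with its input."""
    def __init__(self, in_channels, growth_rate=24):
        super().__init__()
        self.body = nn.Sequential(
            nn.BatchNorm2d(in_channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(in_channels, growth_rate, kernel_size=3, padding=1, bias=False),
        )

    def forward(self, x):
        # Feature reuse: new features are stacked onto all previous ones
        return torch.cat([x, self.body(x)], dim=1)

class DenseBlock(nn.Module):
    """A stack of dense layers; the channel count grows by `growth_rate` per layer."""
    def __init__(self, in_channels, num_layers, growth_rate=24):
        super().__init__()
        layers = [DenseLayer(in_channels + i * growth_rate, growth_rate)
                  for i in range(num_layers)]
        self.block = nn.Sequential(*layers)

    def forward(self, x):
        return self.block(x)
```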

    In the downsampling path, the output of each encoder stage is fed into a spatial attention module to enhance the detailed information. A 1×1 convolution layer is applied to the output of the last DenseBlock to reduce the channel number. In the upsampling path, three channel attention modules are used to enhance the semantic information. The outputs of the spatial and channel attention modules are fused together and fed into each decoder stage. The output of the last decoder stage is fed into the grasp prediction layer. The design of the spatial and channel attention modules is inspired by the convolutional block attention module (CBAM) (Woo et al., 2018), but the difference is that AE-GDN uses different inputs for the spatial and channel attention modules, as shown in Fig. 4. To make the network pay more attention to the object in the image instead of the background, this study feeds the features of the encoder into the spatial attention module. To make the network pay more attention to the semantic information in the features, this study feeds the features of the decoder into the channel attention module. Because spatial information is more abundant in the shallower layers and semantic information is richer in the deeper layers (Fang et al., 2022), the spatial attention module is used only in the downsampling path and the channel attention module is used only in the upsampling path.
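    The sketch below shows CBAM-style spatial and channel attention modules of the kind described above; it is written from the CBAM recipe rather than taken from the AE-GDN repository, and the layer sizes (e.g., the reduction ratio) are assumptions.

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Squeeze spatial dims with average/max pooling, then reweight channels."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        avg = self.mlp(x.mean(dim=(2, 3)))
        mx = self.mlp(x.amax(dim=(2, 3)))
        scale = torch.sigmoid(avg + mx).view(b, c, 1, 1)
        return x * scale

class SpatialAttention(nn.Module):
    """Squeeze channels with average/max maps, then reweight spatial locations."""
    def __init__(self, kernel_size=7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x):
        avg = x.mean(dim=1, keepdim=True)
        mx = x.amax(dim=1, keepdim=True)
        scale = torch.sigmoid(self.conv(torch.cat([avg, mx], dim=1)))
        return x * scale
```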

    The parameter number of AE-GDN will change slightly according to different input modes. When the input mode is a depth, RGB, or RGB-D image, there are 1.373×10^6, 1.378×10^6, or 1.380×10^6 parameters, respectively. AE-GDN is smaller in size compared with many works such as GR-ConvNet (Kumra et al., 2020) and DD-Net (Wang Y et al., 2021), and its inference time is between 23 and 26 ms on our graphics processing unit (GPU) workstation, which can meet the real-time requirement of the robot grasping task.

    4.2 Loss function

    The total loss of AE-GDN is defined as follows:
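    One plausible form of this total loss, assuming a simple sum of the per-output regression terms and the IoU term discussed below (the exact composition and weighting used in the paper may differ), is

    L_total = L_quality + L_sin2θ + L_cos2θ + L_width + L_IoU,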

    where L_quality refers to the grasp quality score loss function in GG-CNN (Morrison et al., 2018). It uses the mean square error (MSE) between the predicted grasp quality map Q and its ground truth Q*:

    L_quality = (1/N) Σ_p (Q(p) − Q*(p))²,

    where N is the number of pixels. The IoU term is built on the generalized intersection over union (GIoU):

    GIoU = IoU − (E − U)/E,    (8)

    where E is the area of the smallest box that contains both the predicted grasp box P and the GT rectangle G, and U is the area of the union of the predicted grasp box P and the GT rectangle G.

    Fig. 5 shows examples of the GT rectangle G and the predicted grasp box P for calculating GIoU. The left part of Fig. 5 shows an example in which G and P do not intersect; in this case the intersection I = 0, and we have Eq. (9):

    GIoU = −(E − U)/E.    (9)

    Fig.5 Examples of the ground truth (GT) rectangle and the predicted grasp box for calculating the generalized intersection over union (GIoU)

    When G and P get closer to each other, E will get smaller, so GIoU will get larger, which can provide the gradients needed by model convergence.

    The right part of Fig. 5 shows an example in which G and P have some intersection, and then we have Eq. (10):

    GIoU = I/U − (E − U)/E.    (10)

    When G and P get closer to each other, I will get larger and E will get smaller, so GIoU will get larger, providing converging gradients. Therefore, no matter whether G and P intersect or not, the model can always benefit from the supervision of the GIoU loss.

    As shown in Fig. 1, when the traditional predicted grasp rectangle (the red rectangle in Fig. 6) is used to calculate IoU, a high IoU value does not correspond to a good grasp configuration, because the center part of the predicted grasp rectangle is not as important as the area around the gripper. We parse the predicted grasp configuration into an hourglass (HG) box (the orange shadow in Fig. 6), and use it as P in Eq. (8). Because the maximum IoU between the HG box and the GT rectangle is only around 0.5, a small change is made to the loss function to compensate for this scale difference.
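    As a rough geometric illustration of the idea (not the exact hourglass geometry or loss used in AE-GDN), the sketch below builds a grasp rectangle and an hourglass-shaped region formed by two trapezoids that pinch at the rectangle center, and compares their IoU with a ground-truth rectangle using shapely; the pinch ratio is an assumption.

```python
import numpy as np
from shapely.geometry import Polygon

def grasp_rectangle(cx, cy, w, h, theta):
    """Rectangle of width w (gripper opening direction) and height h,
    centered at (cx, cy) and rotated by theta (radians)."""
    corners = np.array([[-w / 2, -h / 2], [w / 2, -h / 2],
                        [w / 2, h / 2], [-w / 2, h / 2]])
    rot = np.array([[np.cos(theta), -np.sin(theta)],
                    [np.sin(theta), np.cos(theta)]])
    return Polygon(corners @ rot.T + np.array([cx, cy]))

def hourglass_box(cx, cy, w, h, theta, pinch=0.2):
    """Hourglass-shaped region: full height h near the two fingertips,
    pinched to pinch*h at the center, so the central area counts less."""
    pts = np.array([[-w / 2, -h / 2], [0, -pinch * h / 2], [w / 2, -h / 2],
                    [w / 2, h / 2], [0, pinch * h / 2], [-w / 2, h / 2]])
    rot = np.array([[np.cos(theta), -np.sin(theta)],
                    [np.sin(theta), np.cos(theta)]])
    return Polygon(pts @ rot.T + np.array([cx, cy]))

def iou(a, b):
    return a.intersection(b).area / a.union(b).area

gt = grasp_rectangle(0, 0, 60, 20, 0.0)
pred_rect = grasp_rectangle(5, 2, 60, 20, 0.1)
pred_hg = hourglass_box(5, 2, 60, 20, 0.1)
print(iou(pred_rect, gt), iou(pred_hg, gt))  # the HG IoU is noticeably lower
```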

    Fig. 6 The effect of the hourglass box, which drives the grasp detection result to change from the left (collision between the object and gripper despite a high IoU value) to the right during training. References to color refer to the online version of this figure

    In summary, the power of AE-GDN comes mainly from two sources. The first source is the delicately designed model architecture, i.e., the efficient DenseBlock backbone and the spatial and channel attention modules. The number of DenseBlocks is chosen to balance the model accuracy and inference speed. The spatial and channel attention modules take different features from the encoder and decoder as input, which can extract both detailed and semantic information from the input image. The second source is the creative idea of the hourglass-shaped predicted grasp box, which makes the model pay more attention to the area around the fingertips.

    5 Experiments

    To evaluate the effectiveness of the AE-GDN proposed in this study, grasp detection and robot grasping experiments are carried out.

    5.1 Grasp detection

    5.1.1 Datasets and metrics

    Grasp detection experiments are carried out on the Cornell and Jacquard datasets.

    The Cornell dataset (Jiang et al., 2011) contains a total of 885 pairs of RGB and depth images covering 240 objects, so RGB-D images can also be obtained. The resolution of these images is 640×480, and each pair of images is annotated with several grasp rectangles. In this work, only the 5110 graspable configurations in the Cornell dataset are used.

    The Jacquard dataset (Depierre et al., 2018) contains a total of 5.4×10^4 different scenes and 1.1×10^4 different objects, which are all simulation models extracted from a large CAD dataset. The labeled graspable rectangles are generated based on successful grasps in the experimental environment. The Jacquard dataset has more than 1.0×10^6 grasp rectangles.

    The current mainstream grasp detection evaluation follows the criteria proposed by Jiang et al. (2011), which are also adopted in this work for fair comparisons with other methods. The evaluation judges whether a predicted grasp rectangle is valid based on two conditions: the IoU between the predicted rectangle and the GT rectangle is ≥ 0.25, and the angle difference between the predicted rectangle and the GT rectangle is ≤ 30°.
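    The sketch below implements this rectangle metric for a single prediction/GT pair; it is an illustrative version using shapely Polygons for the rotated-rectangle IoU, not the official evaluation script.

```python
import numpy as np

def angle_diff(theta_pred, theta_gt):
    """Smallest difference between two grasp angles, which are ambiguous
    modulo pi because the antipodal gripper is symmetric."""
    d = abs(theta_pred - theta_gt) % np.pi
    return min(d, np.pi - d)

def is_valid_grasp(pred_poly, pred_theta, gt_poly, gt_theta,
                   iou_thresh=0.25, angle_thresh=np.deg2rad(30)):
    """Rectangle metric of Jiang et al. (2011): IoU >= 0.25 and angle <= 30 deg.
    pred_poly and gt_poly are shapely Polygons of the grasp rectangles."""
    iou = pred_poly.intersection(gt_poly).area / pred_poly.union(gt_poly).area
    return iou >= iou_thresh and angle_diff(pred_theta, gt_theta) <= angle_thresh
```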

    5.1.2 Grasp detection implementation details

    The network in this study is built using PyTorch. The operating system is Ubuntu 16.04, the hardware used for network training and inference is an Intel Xeon E5-1660 v3 eight-core processor clocked at 3.00 GHz, and the graphics card is an NVIDIA GeForce GTX 1080Ti.

    The input data mode can be a depth image, an RGB image, or an RGB-D image. Note that the RGB image and the depth image are aligned, so the depth value of each pixel can truly reflect the depth information. The model is trained using the Adam optimizer, the mini-batch size is 32, the initial learning rate is 0.001, the learning rate decays every 15 epochs, and the decay factor α is 0.5.
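    In PyTorch, this training schedule corresponds to a setup like the following minimal sketch; the placeholder model, the input resolution, and the epoch count are assumptions, while the optimizer, learning rate, batch size, and decay settings follow the values stated above.

```python
import torch
import torch.nn as nn

# Placeholder network standing in for AE-GDN (the real model is in the repository)
model = nn.Conv2d(4, 4, kernel_size=3, padding=1)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)   # initial learning rate 0.001
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=15, gamma=0.5)  # halve every 15 epochs

for epoch in range(60):                      # the epoch count is an assumption
    x = torch.randn(32, 4, 224, 224)         # a dummy mini-batch of size 32 (RGB-D input)
    loss = model(x).pow(2).mean()            # stands in for the real grasp detection loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    scheduler.step()                         # learning rate decay applied once per epoch
```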

    Both datasets are divided into two sets with a ratio of 9:1 for training and testing. For the Cornell dataset, there are image-wise (IW) splits and object-wise (OW) splits. The former splits the dataset directly and randomly, while the latter divides the dataset according to the type of object; that is, objects seen in training do not appear in testing.

    Because the Cornell dataset is small, data augmentation is used to alleviate overfitting. In the inference stage, we choose the maximum-value point of the grasp quality score map as the grasp center point, and the corresponding grasp width and angle are then obtained from the other two predicted images.

    5.1.3 Evaluation and analysis on the Cornell dataset

    For the Cornell dataset, Table 1 shows the performance comparison between AE-GDN and state-of-the-art methods with different input modalities. First, when the modality is depth or RGB-D, the grasp detection accuracy of AE-GDN is better than those of the other methods. When the RGB-D input modality is used, AE-GDN achieves the best accuracy, i.e., 98.9% on the IW splits and 97.9% on the OW splits. Furthermore, Table 1 shows that the highest inference speed of AE-GDN is 44.3 frames per second (FPS), and it generally runs between 37.9 and 44.3 FPS, which can meet the real-time requirement of the robot grasping task. Although the accuracy of Zhou et al. (2018) is better than that of AE-GDN when the RGB input modality is used, their parameter number is much larger. It is worth noting that the inference speed of Kumra et al. (2020) in Table 1 is the value reported in the original paper, which is higher than that of AE-GDN. However, the number of parameters and the amount of calculation of Kumra et al. (2020), calculated with the Python package thop, are 1.82×10^6 and 9.47 GFlops, which are larger than the 1.38×10^6 and 9.19 GFlops of AE-GDN (RGB-D input modality), respectively. Therefore, the theoretical inference speed of Kumra et al. (2020) should be lower than that of AE-GDN; the reason why their reported speed is higher might be that their inference hardware platform is more powerful or their deployment optimization is better. The inference speed of Morrison et al. (2018) is higher than ours because the number of network parameters of that method is only about 6×10^4. Finally, the results in Table 1 also demonstrate the effectiveness of the attention modules and the HG box mechanism. At first glance, the spatial and channel attention modules used in this work seem complex, which might influence the speed of the network. In fact, however, the added number of parameters and amount of computation caused by the attention modules are ≤ 3×10^3 and ≤ 0.006 GFlops (as calculated with the Python package thop), and have little impact on the inference speed of the entire network. From Table 1, it can also be seen that no matter whether the attention modules are added or not, the difference in inference speed is no more than 2 FPS; converted into inference time, the difference is no more than 1.3 ms.
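    Parameter and GFlops figures of this kind can be reproduced with a thop call of the following form (a minimal sketch; the placeholder network and the input resolution are assumptions, and thop reports multiply-accumulate counts that are commonly quoted as FLOPs).

```python
import torch
import torch.nn as nn
from thop import profile

model = nn.Conv2d(4, 32, kernel_size=3, padding=1)   # placeholder network for illustration
dummy = torch.randn(1, 4, 224, 224)                  # one RGB-D input; the resolution is an assumption
macs, params = profile(model, inputs=(dummy,))
print(f"{params / 1e6:.3f} M parameters, {macs / 1e9:.3f} GFlops")
```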

    Table 1 Comparison using the Cornell dataset

    The accuracy of the model increases as the number of DenseBlocks increases, as shown in Table 2 and Fig. 7. However, more DenseBlocks mean more computation and parameters, and also a longer inference time. The accuracy reaches a high level when 18 DenseBlocks are used, and real-time performance is important for practical robot grasping applications. When the number of DenseBlocks exceeds 18, the accuracy gain is not proportional to the increase in the number of DenseBlocks, as shown in Fig. 7. In this work, AE-GDN therefore uses 18 DenseBlocks to balance the accuracy and inference speed.

    Table 2 Computation amount, parameter number, and image-wise grasp detection accuracy on the Cornell dataset when different numbers of DenseBlocks are used

    Table 3 Accuracy on the Cornell dataset with different Jaccard indices when the RGB-D input modality is used

    Table 4 Accuracy on the Cornell dataset with different Jaccard indices when the RGB input modality is used

    Fig.7 Image-wise grasp detection accuracy on the Cornell dataset when different numbers of DenseBlocks are used

    As described in Section 5.1.1, a predicted grasp rectangle is considered correct when its IoU with the GT rectangle is ≥ 0.25. This threshold (0.25) is called the Jaccard index. Obviously, the Jaccard index has a great impact on the measured performance of the grasp detection algorithm. The HG box mechanism proposed in this work enhances the robustness of the grasp detection algorithm to the Jaccard index: as shown in Tables 3 and 4, when the Jaccard index becomes larger, the grasp detection accuracy of AE-GDN with HG does not drop significantly.

    5.1.4 Evaluation and analysis on the Jacquard dataset

    For the Jacquard dataset, Table 5 shows the performance comparison between AE-GDN and state-of-the-art methods with different input modalities. When the input modality is depth or RGB-D, the grasp detection accuracy of AE-GDN is better than those of the other methods. When the RGB-D input modality is used, AE-GDN achieves the best accuracy, i.e., 96.6%. Although the accuracy of the method of Wang Y et al. (2021) is better than that of AE-GDN when the RGB input modality is used, its number of parameters is much larger. The results in Table 5 also demonstrate the effectiveness of the HG box mechanism. Similar to the Cornell dataset, the impact of the Jaccard index is tested on the Jacquard dataset, as shown in Table 6; when the Jaccard index becomes larger, the grasp detection accuracy of AE-GDN with HG does not drop significantly.

    Table 5 Comparison on the Jacquard dataset

    Table 6 Accuracy on the Jacquard dataset using RGB with different Jaccard indices

    Some visualization results on the Cornell and Jacquard datasets are given in Fig. 8. For each pair of columns showing the same object, the right column uses the HG mechanism, which tends to output a more graspable rectangle.

    Fig.8 Visualization results on two datasets

    5.2 Robot grasp

    5.2.1 Grasp objects and metrics

    The proposed AE-GDN algorithm is deployed on a practical robotic arm grasping system, which is tested by grasping the 25 tools and daily necessities in our laboratory shown in Fig. 9. In the testing process, each object is tested in eight random poses. The criterion for a successful grasp is that, according to the grasp configuration detected by AE-GDN, the robotic arm can pick up the object without it slipping off.

    Fig.9 Objects for testing the robot grasping system. Each object has eight poses

    5.2.2 System setup and implementation details

    The robotic arm used in the grasp test is a UR5 collaborative robot, the end effector is a Backyard P80 gripper, and the vision sensor is an Intel RealSense D435i camera fixed on the end of the robotic arm. For the data and signals that need to be transmitted, we use ROS (Quigley et al., 2009), which establishes communication between the various program nodes. The MoveIt ROS package is used to communicate with the robotic arm and control its movement. The motion planning algorithm used is ESTkConfigDefault, i.e., the expansive space trees (EST) planner.

    In the practical object grasp test, the position of the object relative to the end effector of the robotic arm needs to be obtained. In this work, the ROS cv_bridge and tf packages are used to transform the position of the object from the camera coordinate system to the robot coordinate system. The grasping experiment procedure is to grasp a group of objects from a designated area and move them to another. For the same object, multiple postures and positions are tried. The purpose is to verify the generalizability of the grasp detection algorithm to unknown objects and the stability of the robot grasping system.
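    A transform of this kind can be done with the ROS tf2 API roughly as follows; this is an illustrative sketch, and the frame names camera_color_optical_frame and base_link as well as the example coordinates are assumptions, not the exact values used in this system.

```python
import rospy
import tf2_ros
import tf2_geometry_msgs  # registers do_transform_point for PointStamped
from geometry_msgs.msg import PointStamped

rospy.init_node("grasp_point_transform")
tf_buffer = tf2_ros.Buffer()
listener = tf2_ros.TransformListener(tf_buffer)

# Grasp center expressed in the camera frame (frame name is an assumption)
pt = PointStamped()
pt.header.frame_id = "camera_color_optical_frame"
pt.point.x, pt.point.y, pt.point.z = 0.05, 0.02, 0.40

# Look up the camera -> robot base transform and apply it to the point
transform = tf_buffer.lookup_transform("base_link", pt.header.frame_id,
                                       rospy.Time(0), rospy.Duration(1.0))
pt_base = tf2_geometry_msgs.do_transform_point(pt, transform)
```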

    5.2.3 Evaluation and analysis

    We carried out 200 robot grasping tests, 8 for each of the 25 objects, 188 of which were successful. Table 7 gives the comparison of the robot grasping success rate between our approach and mainstream methods, and shows that the method in this work has a high success rate for grasping household objects. The approach proposed by Kumra et al. (2020) has a slightly higher success rate, because the objects grasped in their experiments are lighter. Fig. 10 gives some visualization results of the practical robot grasping test, and Fig. 11 gives an example of the robot grasping process. Table 8 shows the numbers of attempted and successful grasps for different objects. It can be seen that the robot grasping system in this work performs well even for irregularly shaped objects, but some items are too heavy, and the grasping force and friction of the gripper are inadequate, leading to grasping failures. For example, the plush stick has a smooth surface and is heavy, so it tends to slip during the grasping process. Another example is the wire strippers: the algorithm can detect a correct grasp rectangle on the handle; however, due to the heavy head, the wire strippers tend to tilt during the movement and may drop.

    Table 7 Comparison of the success rates for grasping household objects among different approaches

    Fig.10 Visualization results in the practical robot grasping test

    The robot grasping test is a systematic experiment, and there are many possible reasons for the 12 grasping failures, such as the object being too large to grasp, the object being too heavy for the end effector, errors in the image acquisition step, low quality of the predicted grasp rectangle, and possible errors in the hand-eye calibration step. Therefore, it is necessary to conduct an independent experimental analysis of each of these possible causes to further improve the success rate of the robot grasping system.

    The proposed AE-GDN can also be used in multi-object scenarios. When there are multiple objects in the input image, the output grasp quality heatmap has multiple local maxima corresponding to the multiple objects. The pixel coordinates of these local maxima can be used as indices to obtain the width and rotation angle for each object. Finally, multiple grasp boxes can be parsed from the combinations of the coordinates, widths, and rotation angles, as shown in Fig. 12. After the grasp boxes are obtained, the actual grasping sequence can be arranged according to certain rules; for example, the robot can always grasp the object with the highest grasp quality score, or the object closest to the upper left corner of the image, until the last object is grasped. Fig. 13 gives an example of a multi-object grasping process. It is worth noting that if the distances between objects are relatively small, although AE-GDN can output correct grasp boxes, the robot may fail to grasp some objects, because the algorithm calculates the grasp box of each object independently and does not consider collisions between the grasp box and the other objects, as shown in Fig. 14. Fig. 15 gives an example of a collision between the gripper and a non-target object in a multi-object scene when the distance between objects is relatively small. In addition, AE-GDN does not give the categories of the objects, so sequential grasping according to a certain category order cannot be realized. Finally, when occlusion occurs, overlapped objects may be treated as a single object, so AE-GDN may not output enough grasp boxes for all objects, as shown in Fig. 16.
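    Extracting several grasps from the quality heatmap can be done with a standard local-maximum search, for example as sketched below; skimage is used here as an assumption, and the repository may post-process the heatmap differently.

```python
import numpy as np
from skimage.feature import peak_local_max

def parse_multi_grasps(q_map, theta_map, w_map, max_grasps=5, min_dist=20):
    """Return one grasp per local maximum of the quality heatmap,
    which roughly corresponds to one grasp per object."""
    peaks = peak_local_max(q_map, min_distance=min_dist,
                           num_peaks=max_grasps, threshold_abs=0.3)
    grasps = []
    for v, u in peaks:                       # peaks are (row, col) pairs
        grasps.append({
            "p": (u, v),
            "w_p": float(w_map[v, u]),
            "theta_r": float(theta_map[v, u]),
            "q": float(q_map[v, u]),
        })
    # Sort so the robot grasps the most confident detection first
    return sorted(grasps, key=lambda g: g["q"], reverse=True)
```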

    Fig.12 Visualization results of grasp detection in a multi-object scenario

    Fig.13 A multi-object grasp process example. The grasp process is from left to right, and then from top to bottom

    Fig.14 Example of a collision between the predicted grasp box and another object

    Fig.15 Example of a collision between the gripper and a non-target object in a multi-object scenario when the distance between objects is relatively small

    Fig.16 Example of the algorithm not outputting enough grasp boxes for all objects

    Grippers also play a significant role in successful robot grasping. Liu et al. (2023) used a soft gripper that can handle objects of various shapes. In the future, research can also be focused on grippers.

    6 Conclusions

    This paper proposes an efficient pixel-level grasp detection network, AE-GDN, based on an attention mechanism and an hourglass box matching mechanism. The attention mechanism enhances both the detailed and semantic information. The hourglass box matching mechanism creates good correspondence between high IoUs and high-quality grasp rectangles. The accuracy of AE-GDN on the Cornell and Jacquard datasets reaches 98.9% and 96.6%, respectively. The inference speed of the proposed AE-GDN can meet the real-time requirement of robot grasping tasks. The proposed AE-GDN has also been deployed on a practical robot grasping system and performs grasping tasks well.

    Contributors

    Xiaofei QIN and Wenkai HU proposed the design. Xiaofei QIN and Chen XIAO conducted the experiments. Xiaofei QIN and Changxiang HE analyzed the results. Wenkai HU drafted the paper. Xiaofei QIN, Songwen PEI, and Xuedian ZHANG helped organize the paper. All authors revised and finalized the paper.

    Compliance with ethics guidelines

    Xiaofei QIN, Wenkai HU, Chen XIAO, Changxiang HE, Songwen PEI, and Xuedian ZHANG declare that they have no conflict of interest.

    Data availability  The data that support the findings of this study are openly available in the repository at https://github.com/robvincen/robot_gradet.
