
    3D Instance Segmentation Using Deep Learning on RGB-D Indoor Data

2022-11-11 10:49:02
Computers Materials & Continua, 2022, Issue 9

Siddiqui Muhammad Yasir, Amin Muhammad Sadiq and Hyunsik Ahn

1 Department of Robot System Engineering, Tongmyong University, Busan, 48520, Korea

2 Department of Information & Technology, University of Central Punjab, Lahore, Pakistan

3 Department of Electronics Engineering, Tongmyong University, Busan, 48520, Korea

Abstract: 3D object recognition is a challenging task for intelligent and robot systems in industrial and home indoor environments. It is critical for such systems to recognize and segment the 3D object instances that they encounter on a frequent basis. The computer vision, graphics, and machine learning fields have all given it a lot of attention. Traditionally, 3D segmentation was done with hand-crafted features and engineered approaches that did not achieve acceptable performance and could not be generalized to large-scale data. Deep learning approaches have lately become the preferred method for 3D segmentation challenges, owing to their great success in 2D computer vision. However, the task of instance segmentation is currently less explored. In this paper, we propose a novel approach for efficient 3D instance segmentation using red-green-blue and depth (RGB-D) data based on deep learning. The 2D region-based convolutional neural network (Mask R-CNN) deep learning model with a point-based rendering module is adapted to integrate with depth information to recognize and segment 3D instances of objects. To generate 3D point cloud coordinates (x, y, z), segmented 2D pixels (u, v) of recognized object regions in the RGB image are merged into the (u, v) points of the depth image. Moreover, we conducted an experiment and analysis to evaluate our proposed method from various viewpoints and distances. The experiments show that the proposed 3D object recognition and instance segmentation are sufficiently beneficial to support object handling in robotic and intelligent systems.

Keywords: Instance segmentation; 3D object segmentation; deep learning; point cloud coordinates

    1 Introduction

3D instance segmentation is a challenging and fundamental computer vision problem with applications in robotics, autonomous driving, augmented reality, and industrial and home environments. Significant progress in 2D image segmentation and object detection has been made with the advancement of deep learning. 3D segmentation has traditionally been performed using hand-crafted features and engineered methods that fail to achieve acceptable accuracy and cannot be universally applied to large datasets [1,2]. Owing to their breakthrough in 2D computer vision, deep learning algorithms have lately become the preferred tool for 3D segmentation challenges.

The goal of 3D instance segmentation is to develop computational approaches that can predict the fine-grained labels of objects in a 3D scene for a variety of applications. Additionally, compared to 2D segmentation, 3D segmentation provides more detailed and specific information, such as point clouds, voxels, and meshes with 3D coordinates, for practical applications. Deep learning approaches have recently dominated a number of research domains, including computer vision, speech recognition, and natural language processing. 3D deep learning approaches, on the other hand, still face a number of unsolved issues. For example, because point clouds are irregular, it is difficult to merge features from the RGB and depth channels, and translating them to high-resolution voxels imposes a considerable processing overhead [1,2].

The capacity of 2D instance segmentation to mask the pixels of a single or multiple objects in an image has the potential to improve perception in a variety of applications. The deep learning-based approach to 2D instance segmentation has attracted much attention. In particular, convolutional neural network (CNN) based deep learning models are providing many advanced results. The common detection and region-based deep learning models known as Fast, Faster, and Mask R-CNN have shown impressive results in 2D instance segmentation. In these approaches, a backbone predicts regions of interest, which are then segmented into foreground and background instances as masks [2,3]. Numerous researchers have proposed 3D scene understanding with 2D segmentation using RGB-D data. However, such methods are insufficient to support advanced intelligent and robotic system applications, as the robot requires more precise 3D geometric information about objects [4,5].

Moreover, 3D instance segmentation, which extracts the 3D areas of objects in the (x, y, z) coordinate system, is necessary for multiple industrial areas. It is needed to identify target objects for applications such as robotic object handling, situation perception for autonomous vehicles, and vision systems for factory automation [4-6]. With the advent of RGB-D sensors that capture color and depth data, 3D segmentation and 3D object identification have become far more attainable. Using RGB-D data, multiple approaches are used to infer bounding boxes in 3D point clouds, transform 2D depth maps to 3D point clouds, and apply machine learning techniques to detect objects in RGB images [5,7-9]. However, these networks require point clouds as input, and color information cannot be considered properly [10]. Processing with a point cloud as input is time-consuming and requires a significant amount of computation. As a result, our goal is to solve the preprocessing problem and lower the high demand for resources toward a speedy solution. Therefore, our work takes RGB and depth-map data as the input to deep learning processing and later uses the output of the deep learning stage for 3D instance segmentation [11].

The depth map in RGB-D data provides geometric information about the scene or real environment that can be used to distinguish foreground objects from their background, allowing for better segmentation accuracy [12,13]. Moreover, this simple framework is powerful enough to reduce post-processing and improve 3D instance segmentation accuracy. Methods for understanding 3D scenes must also distinguish between distinct instances of the same class. In this research, the 3D mask backbone predicts the regions and final instance masks: proposal-based techniques construct final instance masks by first predicting and then refining object proposals. A novel method for recognizing and segmenting 3D objects in a 3D indoor environment using RGB-D data is proposed in this paper. The Mask R-CNN deep learning model is adapted for 2D segmentation using the RGB data. Furthermore, the point-based rendering (PointRend) module is applied to obtain precise geometric information. To achieve 3D instance segmentation, the (x, y, z) coordinates are computed from the 2D segmented pixels by utilizing the RGB-D data. The main contributions of this paper are as follows:

1. A novel 3D instance segmentation method is proposed that integrates 2D object recognition using deep learning with depth data.

2. The integration of 2D segmented RGB data with depth data to achieve 3D instance segmentation for foreground objects and their background information is presented.

The remainder of this paper is structured as follows. Section 2 outlines the work related to the proposed approach. Section 3 goes through the specifics of the proposed 3D instance segmentation. Section 4 explores the experimental results of the proposed 3D instance segmentation method. Section 5 concludes with remarks on future work.

    2 Related Work

    This section discusses prior findings on object recognition and instance segmentation in the areas of computer vision and robotics.

    2.1 Deep Learning-Based 2D Instance Recognition

In the computer vision area, object recognition, including classification and segmentation, had been a main challenge for a long time before the appearance of deep learning approaches. Image segmentation is typically used to locate the regions and boundaries of objects in images. Instance segmentation recognizes objects of interest in an image efficiently while also generating a high-quality individual segmentation mask for each instance [7,13].

To recognize objects in images, researchers utilize deep learning techniques. Extraordinary progress has been made in 2D object recognition with the improvement of convolutional neural networks (CNNs) [14]. A convolutional neural network is a multi-layered feed-forward neural network that allows learning of hierarchical features; its design stacks several hidden layers sequentially on top of each other. A well-known model for object recognition with bounding boxes, called "You Only Look Once" (YOLO), involves a neural network that takes an image as input and predicts the bounding boxes and class labels [14].

R-CNN is a neural network whose name stands for "Region-based CNN". The techniques were designed and demonstrated as the region-based convolutional neural network (R-CNN), Fast R-CNN, and Faster R-CNN for object classification and object recognition. In general, the R-CNN models may be more accurate, but the YOLO family of models is fast, far faster than R-CNN, achieving real-time object detection. Selective search methods are implemented in R-CNN to generate region proposals with less complexity. Features are extracted in the CNN layers and passed to several binary classifiers to classify the classes in particular regions [11,14]. The R-CNN model architecture in Fig. 1 uses feature hierarchies for accurate object detection and semantic segmentation. The approach R-CNN follows to classify and recognize objects is relatively simple and straightforward. Fig. 1 illustrates the "selective search" approach used to suggest candidate regions or bounding boxes of possible objects.

Mask R-CNN is a computer vision approach based on CNNs that can recognize, classify, and mask object instances in 2D images. Mask R-CNN consists of two phases. First, the model generates proposals for regions of an input image where an object may exist. Second, it estimates the class of the object, refines the bounding box, and constructs an object-level mask based on the first-stage proposal [15]. In recent challenges, Mask R-CNN is state-of-the-art in region-based image and instance segmentation regardless of object size. These region-based systems typically predict masks on a 28×28 grid. This is sufficient for small objects, but it generates an unpleasantly "blobby" output for large objects, excessively smoothing the fine-level details. An alternative sliding-window method, which employs a sophisticated network design to predict sharp high-resolution masks for large objects, falls slightly behind in accuracy. A region-based segmentation model integrated with point-based rendering (PointRend) can produce masks with fine-level information while boosting the accuracy of region-based techniques. Furthermore, PointRend is used to segment objects and scenes in high-quality RGB data quickly and effectively, treating pixel labeling as a rendering problem solved by sampling. The approach is based on iterative subdivision, and refined results are achieved with the PointRend neural network module for regional detection. It performs point-based segmentation predictions in adaptively selected regions. PointRend can be applied to both instance and semantic segmentation tasks, and the general concept can be implemented in a variety of approaches [16]. In places where prior approaches over-smoothed the object borders, PointRend produces crisp object boundaries. Instance segmentation with PointRend uses a new point-based feature representation to make predictions at adaptively sampled points on the image. When used to replace Mask R-CNN's default mask head, PointRend produces substantially more detailed results and computes its prediction iteratively. Each step uses bilinear interpolation in smooth regions and produces higher-resolution predictions at a small number of adaptively chosen points that are likely to lie on object boundaries.

Figure 1: General description of the convolutional neural network (CNN) deep learning model

The PointRend module plays an initial role in this research, providing 2D object recognition and segmentation of targeted object regions in 2D RGB data for further processing.

    2.2 3D Object Recognition

3D object recognition is an ultimate goal of computer vision and can be applied in private areas like homes as well as in industrial areas. It extracts meaningful 3D information from various types of data, such as point clouds, RGB-D data, scanned data from a range of sensors, and even monocular images [12,16]. The key distinctions between semantic and instance segmentation are as follows: while the semantic segmentation job can be regarded as a classification task with a fixed number of known labels, the object indices for instance segmentation are permutation invariant, and the total number of objects is unknown a priori [17]. Currently, there are two primary approaches to instance segmentation: proposal-based methods seek out interesting regions and then divide them into foreground and background, while proposal-free methods discover a feature embedding space for the pixels in the image [18] and then group the pixels based on their feature vectors. In our work, we adopt the first approach, since it performs instance segmentation in the scene both robustly and quickly.

The 3D object detector of [19] produces 3D box proposals directly from the raw point cloud as bottom-up data, which are then optimized by the proposed bin-based 3D box regression loss in canonical coordinates. Such 3D bounding boxes are not suitable for robotic systems, particularly in object-grasping scenarios. A vehicle shape segmentation approach was proposed in which predicted 2D confidence maps are projected to 3D and combined [9,20]. PointIT, a fast tracking framework based on 3D instance segmentation, was recently proposed for object tracking; it first predicts an instance mask for each class and then uses the MobileNet convolutional neural network as its primary encoder [20].

Qi et al. [21] used a sequence of multi-layer perceptrons (MLPs) and max-pooling to perform feature learning directly on raw point clouds. The subsequent work [22] added hierarchical features. Both works, however, necessitated extensive processing and resources. The preprocessing and input of a point cloud is a complex process and a burden for speedy instance segmentation.

Similarly, the Generative Shape Proposal Network (GSPN) introduces a 3D object proposal network that reconstructs object shapes from noisy shape observations to enforce geometric understanding. GSPN is embedded into a 3D instance segmentation network named region-based PointNet (R-PointNet) to reject, receive, and refine proposals. Step-by-step training of these networks is required, and object proposal refinement requires a complex and costly suppression operation [1,23]. 3D environment understanding with point clouds has become a vital topic, since it can benefit numerous applications such as autonomous driving and augmented and virtual reality [24]. Current 3D segmentation methods for recognizing objects rely on post-processing of the point clouds and require a lot of computation, making them unsuitable for real-time and robotic applications [4,25].

Nevertheless, numerous approaches to object detection utilizing RGB-D data have been proposed. Shao et al. merged RGB-D data and clustered the features to produce instance segmentation masks on simulated RGB-D data [26]. A 2D-driven 3D object detection approach using synthetic data [27] has also been proposed; its fast processing makes it suitable for real-time and robotic applications.

    3 Deep Learning-Based 3D Instance Segmentation

A new approach for 3D instance segmentation and recognition using RGB-D data is presented in this section. For 3D instance segmentation, the proposed approach revolves around 2D-to-3D segmentation methods, using camera calibration together with 2D instance segmentation in 3D space. The proposed approach is divided into three steps; the output of each step is the input of the following step.

Fig. 2 illustrates the overview of the proposed method for 3D instance segmentation from RGB-D data. Fig. 3 demonstrates the masking of each targeted object in an indoor environment. In our approach, we assume that the RGB-D data has the same aligned resolution in pairs of a color image and depth data, as shown in Figs. 3a and 3b.

First, the 2D Mask R-CNN model is adapted to mask and recognize the targeted objects. A masking process is required to identify the region of interest (ROI) in RGB for further processing. Point-based rendering (PointRend) then produces crisp object boundaries for a more accurate object ROI. The output of the first step is the masking: segmented objects isolated in the images, with distinct color grading between the targeted objects and their background.

Figure 2: Processing pipeline for identifying regions of interest (ROI) and detaching objects in an indoor environment from their background

Figure 3: First row: the RGB image (a); the corresponding depth image (b); the refined masked image for multiple objects, identifying the area of each object with its class label in an environment (c). The second row shows the distinct masked objects detected from the RGB images

Second, contour detection on the masks, applied to the RGB and depth data separately, is used to segment the region of interest (ROI) in the RGB-D data. Contours connect all continuous points that have the same hue or intensity. The RGB-D data obtained from the sensor contained unknown or garbage information; therefore, we applied a threshold on the range of distances from the sensor to the object to deal with it, as multi-level thresholding is an important consideration in many real-time pattern recognition applications [28]. Third, the ROI is detached from its background and converted to 3D pixels with the help of a perspective transformation. The following section describes 2D instance segmentation with Mask R-CNN in detail.
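As a concrete illustration of the contour-and-threshold step, the sketch below assumes OpenCV, a Boolean instance mask `mask` from the 2D segmentation stage (detailed in the next passage), and a 16-bit aligned depth image in millimeters; the file name and the 0.5-4.0 m distance band are our illustrative choices, not the paper's calibrated values.

```python
import cv2
import numpy as np

# Boolean instance mask (H x W) from the 2D segmentation stage.
mask_u8 = mask.astype(np.uint8) * 255

# Contours connect the continuous boundary points of the masked region.
contours, _ = cv2.findContours(mask_u8, cv2.RETR_EXTERNAL,
                               cv2.CHAIN_APPROX_SIMPLE)

# Fill the contour to obtain the ROI area used for pixel extraction.
roi = np.zeros_like(mask_u8)
cv2.drawContours(roi, contours, -1, color=255, thickness=cv2.FILLED)

# 16-bit aligned depth image in millimeters (file name is illustrative).
depth_mm = cv2.imread("depth.png", cv2.IMREAD_UNCHANGED)

# Threshold on sensor-to-object distance to drop garbage depth readings;
# the 0.5-4.0 m band is an assumed example range.
valid = (depth_mm > 500) & (depth_mm < 4000)
roi_depth = np.where((roi > 0) & valid, depth_mm, 0).astype(np.uint16)
```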

The 2D instance segmentation is important for recognizing the class and ROI from RGB images. Mask R-CNN plays the initial role in this research; it was designed to solve problems like instance segmentation in machine learning and computer vision. Faster R-CNN first utilizes a convolutional network to extract feature maps from the images [13,28]. These feature maps are then passed through a Region Proposal Network (RPN), which returns the candidate bounding boxes. An ROI pooling layer on these candidate bounding boxes brings all the candidates to the same size. The flattened proposals are passed to a fully connected layer to classify them and output the bounding boxes for objects. Mask R-CNN can determine the object class, bounding box, and pixel-level segment, as shown in Fig. 3c. It produces proposals for regions where objects are placed in an input image. Later, it determines the target class, refines the bounding box, and generates a refined mask with PointRend based on the first-stage proposal at the pixel level of the object.
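A minimal sketch of instantiating such a Mask R-CNN + PointRend predictor, assuming the Detectron2 library with its PointRend project installed; the config path, checkpoint file name, score threshold, and input file name are illustrative assumptions rather than the exact settings used in this work:

```python
import cv2
from detectron2.config import get_cfg
from detectron2.engine import DefaultPredictor
from detectron2.projects import point_rend

cfg = get_cfg()
point_rend.add_pointrend_config(cfg)  # register PointRend-specific options
# Config shipped with Detectron2's PointRend project (path is illustrative).
cfg.merge_from_file(
    "projects/PointRend/configs/InstanceSegmentation/"
    "pointrend_rcnn_R_50_FPN_3x_coco.yaml")
cfg.MODEL.WEIGHTS = "model_final_pointrend_coco.pkl"  # assumed local checkpoint
cfg.MODEL.ROI_HEADS.SCORE_THRESH_TEST = 0.5           # example threshold

predictor = DefaultPredictor(cfg)
rgb = cv2.imread("indoor_scene.png")        # aligned RGB frame
outputs = predictor(rgb)

instances = outputs["instances"].to("cpu")
masks = instances.pred_masks.numpy()        # (N, H, W) Boolean masks
classes = instances.pred_classes.numpy()    # COCO class id per instance
```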

All the masks predicted by PointRend are stored in a mask array in the form of Boolean labels. Boolean labels are always 0s and 1s: a 0 represents no object at that particular pixel, and a 1 represents the existence of a detected object pixel. The shape of the array gives the number of segmented objects in an image. Each mask is multiplied by the original image so that each segment of the image can be used in a loop. Obtaining the segments from the whole image reduces the cost of computing, as later stages preprocess only the segments rather than the whole image.
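A minimal sketch of this masking loop, reusing the `rgb` frame and `masks` array from the predictor sketch above; `handle_segment` is a hypothetical downstream step:

```python
import numpy as np

for i, m in enumerate(masks):           # one Boolean (H, W) mask per object
    instance_rgb = rgb * m[..., None]   # zero out every background pixel
    # Only the masked segment is handed to later stages, so the whole
    # image is never reprocessed per object.
    handle_segment(instance_rgb, m)     # hypothetical downstream step
```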

The pixel-level mask is essential for this research because it targets the exact ROI. The mask of each object is generated by isolating the individual segmented instances, as shown in Fig. 3d. Each mask is established from the isolated area and a binary coloring of the ROI together with its background information for further processing. To obtain 3D pixel values, the distinct binary masks are applied to the regions of interest in the corresponding depth information.

The next section explains the process of 3D instance segmentation involving masked images with distinct colors. The 3D instance segmentation is accomplished with the assistance of the PointRend-refined masks. Fig. 2 demonstrates both phases of the process. The first phase of the research focuses on recognition of objects by masking the region of interest. The second phase is segmentation, which aims at obtaining sufficient matching 3D pixels from the RGB-D data.

This section explains how the target area is separated and transformed into three-dimensional information for each object using RGB-D data that is already aligned by the sensor. The 2D key points, defined by fitting contours to the RGB images, are extracted and combined with the corresponding depth values from the associated depth image to obtain the corresponding 3D pixel points. The required target is obtained by marking the masked area in the RGB image with the help of the contour boundary. The area within the boundary is calculated, and the targeted RGB-D pixel values are acquired from the corresponding aligned color and depth images.

The depth values are extracted from the depth image using the pixel locations obtained from the area of interest in the aligned RGB image. In addition, the average depth distance is calculated to ignore the environmental background and to fill the non-targeted pixels with nulls to achieve the best results, as demonstrated in Fig. 4b. The information obtained from the RGB-D data is then merged for further processing. A 3D environment is drawn once the desired color and depth pixels are generated, using Open3D, a computer vision library for 3D data.
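The projection into 3D can be sketched with Open3D as below, reusing the masked color and depth arrays from the earlier snippets; the 640x480 pinhole intrinsics shown are illustrative defaults, not the calibrated D415 values used in this work:

```python
import numpy as np
import open3d as o3d

# Wrap the masked RGB and depth arrays as Open3D images.
color = o3d.geometry.Image(instance_rgb.astype(np.uint8))
depth = o3d.geometry.Image(roi_depth.astype(np.uint16))

# Depth is in millimeters (depth_scale=1000); truncate beyond 4 m.
rgbd = o3d.geometry.RGBDImage.create_from_color_and_depth(
    color, depth, depth_scale=1000.0, depth_trunc=4.0,
    convert_rgb_to_intensity=False)

# Illustrative pinhole intrinsics for a 640x480 stream:
# width, height, fx, fy, cx, cy.
intrinsic = o3d.camera.PinholeCameraIntrinsic(
    640, 480, 600.0, 600.0, 320.0, 240.0)

pcd = o3d.geometry.PointCloud.create_from_rgbd_image(rgbd, intrinsic)
o3d.io.write_point_cloud("segmented_object.pcd", pcd)
```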

Compared to the other two networks, Mask R-CNN has high accuracy. Even without occlusion, images containing several objects are the most complex and the hardest to detect. Mask R-CNN with pretrained weights obtained the best results, although the masked region still lacks certain pixels that belong to the objects. Tab. 1 reports the object detection accuracy for single and multiple objects.

Figure 4: RGB data with a masked area of an object (a). Corresponding targeted pixel values of the targeted region from both the RGB and depth images (b). The region of interest (ROI) was differentiated on the basis of individual masked object pixels by utilizing the contour methods, applied to both the RGB and depth data

Table 1: Comparison of Fast R-CNN, Faster R-CNN, and Mask R-CNN on the object detection task, using the same dataset for all three models. The numbers show the summed accuracy for multiple-object detection for the three models

Mask R-CNN was chosen for the single- and multi-object detection experiments due to its accuracy in both settings. Moreover, PointRend refined the masked area of interest for more accurate detection.

This subsection describes the proposed process of 3D instance segmentation and recognition of single as well as multiple objects in an indoor environment. Fig. 4 illustrates the second (segmentation) phase described in Fig. 2 of our proposed research. Recognition of the object is achieved with PointRend, and instance segmentation of the region of interest (ROI) in RGB-D pixels is achieved with contour techniques [29].

In addition, contour techniques are used to measure the area of masked pixels. The standard formulation of contours is used to find the markers and measure the region of interest (ROI) for the selected object. Classic computer vision techniques focus on the use of bottom-up signals to produce boxes and regions from images. Such techniques usually detect contours in images to produce a hierarchical segmentation [29].

The regions of this hierarchical segmentation are combined to generate a list of areas with low-level object indices that represent objects in the image.

The architecture of the proposed method, illustrated in Fig. 5, revolves around the two steps of the research. The first step of our approach is to recognize and highlight the region of interest (ROI) of the target objects, which is accomplished by utilizing the PointRend method. The output of the first phase, a masked RGB image of each object in the indoor environment, becomes the input of the second phase, as illustrated in Fig. 5. The second phase determines the area of each object by applying contours, as explained in Fig. 4. Pixel values can be used to separate the various objects: as the contrast increases, the pixel values of the objects and of the image's background diverge. As shown in Fig. 4, 2D three-channel images with high contrast give a clear separation between the region of interest (ROI) and the background. Furthermore, the RGB values and the corresponding depth values are acquired by providing the depth image as an input for calculating 3D coordinates with a perspective transformation.

Figure 5: The architecture of the proposed 3D instance segmentation using 2D images, showing the process for constructing aligned RGB and depth pixels in the region of interest (ROI)

The process of calculating 3D coordinates maps a 2D pixel location (u, v) in an image, together with a depth d, into 3D space; this mapping is called a perspective transformation. Given a point (u, v, d), where u and v are locations in the image geometry and d is the depth, we can reconstruct the x and y coordinates with the perspective transformation of Eq. (1) for the given depth [30]:

x = (u - cx) d / fx,  y = (v - cy) d / fy,  z = d    (1)

Here, fx and fy are the focal lengths along the X and Y axes and (cx, cy) is the principal point, all computed from the intrinsic matrix of the camera calibration [30].
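In code, Eq. (1) is only a few lines; the sketch below assumes calibrated intrinsics fx, fy, cx, cy and a depth d in the same unit as the desired output coordinates:

```python
import numpy as np

def backproject(u, v, d, fx, fy, cx, cy):
    """Perspective transformation of Eq. (1): map pixel (u, v) with
    depth d to camera-frame coordinates (x, y, z)."""
    z = d
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.array([x, y, z])
```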

The depth and color key points are obtained by segmenting and aligning pixels from the RGB-D data and projecting them into 3D space. The key steps performed in this research are explained below in simple steps.

The RGB-D data should be aligned to produce 3D pixels exactly as in the experiment. Algorithm 1 explains the segmentation process on the RGB-D data; the same process is applied to both the RGB and depth images. RGB-D camera sensors usually include some extra pixels, due to environmental noise or lighting factors, that are not meant to be included in 3D space. To tackle this problem, a threshold based on the mean distance of the depth pixel values was introduced.
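A minimal sketch of this mean-distance threshold, assuming the `mask` and `depth_mm` arrays from the earlier snippets; the ±300 mm band is our illustrative tolerance, not a value from the paper:

```python
import numpy as np

# Mean depth over the valid masked pixels of one object.
obj_depths = depth_mm[mask & (depth_mm > 0)]
mean_d = obj_depths.mean()

# Keep only pixels near the object's mean depth; noise and stray
# background readings outside the band are zeroed out.
near_mean = np.abs(depth_mm.astype(np.int32) - mean_d) < 300
obj_depth = np.where(mask & near_mean, depth_mm, 0).astype(np.uint16)
```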

Algorithm 1: 3D instance segmentation from an indoor environment using RGB-D data
Input: Aligned RGB-D image pair: RGB color image SR, depth image SD
Output: 3D instance segmentation: segmented 3D objects, segmented 3D background
1. Initialize SM := Mask R-CNN + PointRend(SR), i.e., the 2D segmented image(s)
2. Set Threshold := moderate, i.e., to avoid unnecessary pixels
3. while Si <= SM do
4.   MaskedObjectArea := FindContour(Si), i.e., identify the object area
5.   if Segment := Objects then
6.     DepthObjectArea := SD - Threshold(MaskedObjectArea)
7.     ColoredObjectArea := SR - MaskedObjectArea
8.     PCD := CreatePointCloud(ColoredObjectArea, DepthObjectArea, CameraIntrinsic)
9.     Save segmented 3D objects as PCD
10.  else if Segment := Background then
11.    DepthObjectArea := Threshold(MaskedObjectArea) - SD
12.    ColoredObjectArea := MaskedObjectArea - SR
13.    PCD := CreatePointCloud(ColoredObjectArea, DepthObjectArea, CameraIntrinsic)
14.    Save segmented 3D background as a point cloud
15.  end if
end while
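Putting the pieces together, Algorithm 1 can be sketched in Python as below, reusing the ideas of the earlier snippets; `in_range`, `make_pcd`, and `save_pcd` are hypothetical helpers standing in for the distance threshold, the Open3D point-cloud construction, and the saving steps:

```python
import numpy as np

def segment_scene(rgb, depth_mm, masks, intrinsic):
    """Split an aligned RGB-D frame into per-object point clouds (the
    'Objects' branch of Algorithm 1) and one background point cloud
    (the 'Background' branch)."""
    background = np.ones(depth_mm.shape, dtype=bool)
    for m in masks:                          # Boolean (H, W) mask per object
        obj_depth = np.where(m & in_range(depth_mm), depth_mm, 0)
        obj_rgb = rgb * m[..., None]
        save_pcd(make_pcd(obj_rgb, obj_depth, intrinsic))   # 3D object
        background &= ~m                     # carve the object out of the scene
    bg_depth = np.where(background, depth_mm, 0)
    bg_rgb = rgb * background[..., None]
    save_pcd(make_pcd(bg_rgb, bg_depth, intrinsic))         # 3D background
```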

    4 Experimental Results

In a controlled setup, an Intel RealSense D415 sensor is used for capturing RGB-D images, with the sensor positioned 3-4 meters away from the objects in the indoor environment. The Intel RealSense D415 performs stereo depth estimation with a pair of depth sensors, an RGB sensor, and an infrared (IR) projection system. The experiment revolves around the combination of a few technologies used to segment 3D objects from a 3D point-cloud scene of an indoor environment. To project the 3D points calculated after RGB-D segmentation, Open3D was utilized. Open3D is an open-source library that supports 3D data processing and applications; it can be built and compiled from source on various platforms with minimal effort.

To obtain the detections and masked objects, the Mask R-CNN deep learning model was used with the Common Objects in Context (COCO) dataset. Microsoft released the MS COCO dataset, a large-scale object detection, segmentation, and captioning dataset that provides dense pixel-wise contextual annotations for images of high complexity. COCO is a tool for studying thing-thing interactions, and it includes images of complicated scenes with many small items labeled with very detailed outlines. Regarding the annotations and their high level of correctness, the COCO dataset contains 164 K images over 91 classes, divided into four subsets: training (118 K), validation (5 K), test (20 K), and test-challenge (20 K) [31].

The contributions and findings of this research in the field of 3D instance segmentation are as follows. In Fig. 6, findings from real-world indoor scenes, both without the recognized objects (as background) and with the separated recognized objects, are projected into 3D space. The first two rows show the RGB-D data; the third row shows the first phase of the research, i.e., the masked images after recognizing the targeted objects. The fourth row of Fig. 6 shows the point cloud in 3D space generated from the RGB-D data. The fifth and sixth rows represent the second phase of the research, after segmenting the RGB-D data into separate 3D point-cloud objects and their corresponding backgrounds. Finally, all matching 3D key points are projected into 3D space with the support of the Open3D library. The process of 3D instance segmentation can be followed in Fig. 6 from top to bottom in each column, handling single and multiple objects in an indoor environment.

Figure 6: The aim of this research is to classify and segment objects from their context in 3D space using RGB-D segmentation. The rows are: an RGB image (first row); the corresponding aligned depth image (second row); the masked area of an object using Mask R-CNN (third row); the 3D view generated from the RGB-D images (fourth row); the 3D view of the indoor environment without the objects (fifth row); and the 3D view of multiple objects without the indoor (background) environment (last row), following the process from top to bottom

The proposed method was evaluated on a PC with an NVIDIA GeForce 1080Ti GPU and an Intel Core i7 CPU. All the images were captured with the Intel RealSense D415 camera. The RGB-D data in the first row was captured at two meters of distance from the camera, the second row at three meters, and the third row at four meters. The RGB-D data is used to segment objects placed at different viewpoints.

In this research, the aim was to segment 3D instances of targeted objects and to separate them from their background within a given indoor environment using RGB-D data. To fully recognize and separate a 3D instance, both a semantic label and an instance label are required. We adopted a simple approach: segment and recognize instances in the RGB images first, and then generate and separate 3D instances from their background according to the 2D instance segmentation. We benefit from the real distances in 3D scenes, where the sizes of and distances between objects are vital to the final instance segmentation. Our approach consistently performs 3D object recognition and instance segmentation with background separation across all three difficulty levels, where the difficulty levels take the form of single and multiple objects in an indoor environment. The result is adequate and clean, which makes it suitable for simulations and real-time applications.

The goal of detecting and segmenting simple objects in Fig. 6 was to study whether the proposed approach can recognize, segment, and separate various objects in 3D indoor environments, even though their shapes are rather similar. Image (c) in Fig. 6 illustrates how the objects are segmented in the 2D RGB image, where different colors represent separate target instances. The method depicted in Fig. 3 selects each masked instance from the scene so that it can be processed later. Each distinct identified object may be segmented individually by applying the technique of Algorithm 1 in Section 3 to the individual masked segmented images. Finally, we can visualize the 3D environment in Fig. 6, where the 3D instances are effectively extracted separately from the 3D point cloud. In Fig. 6 we visualize representative outputs of our proposed method. For simple cases of non-occluded objects at a reasonable distance (so that a sufficient number of points is obtained), our approach produces remarkably accurate 3D instance segmentations and separates their backgrounds. For 3D instance segmentation with nearby or occluded objects, mixed segmentation results appear in some challenging cases. Because our method is based on the Mask R-CNN and PointRend 2D instance segmentation models, we could potentially mitigate this problem in future work.

Quantitative assessment of real-world scenes is challenging, since no ground-truth model is available for this specific scenario. A comprehensive comparison of all segmented objects in each scene's reconstructed 3D instances is collected in Fig. 6. The exactness of our proposed method is demonstrated by comparing the average width measurements of the segmented 3D objects to the original real-world measurements in Tab. 2.

Table 2: A controlled assessment, in millimeters, of each 3D instance against the corresponding real-world object, ordered from left to right as in Fig. 6, column (c)

The calculated difference in the last column of Tab. 2 clearly demonstrates the accuracy of our proposed method between the segmented 3D objects and the real-world objects. The calculated difference is nominal for every object and can be neglected by the different applications used in various systems, such as intelligent and robot systems.

Currently, five objects in one frame of RGB-D data are used for the segmentation experiment. Width and height are calculated for the different objects to observe the difference between the real-world and 3D segmented objects. Due to the nominal measurement difference between the real-world and segmented objects, the 3D instance segmentation presented in this paper for RGB-D data can be deployed on a variety of systems with minimal effort and resources.

In Fig. 6, we demonstrate the outcomes of our suggested 3D instance segmentation method employing RGB-D data, with the margin of error given in Tab. 2. The presented method successfully distinguishes between many objects and their surroundings. To the best of our knowledge, there is currently no method that separates objects from their backgrounds using segmentation: most of the proposed approaches can segment items in a scene, but none of them can separate objects from their backgrounds. Many investigations employ a specific dataset of 3D objects without considering their context or background; our study, however, focuses on segmenting objects and their backgrounds as two distinct outcomes. As robotics applications require information about the surroundings and the context of items, such an output is useful for intelligent robot system applications. As demonstrated in Tab. 2, the error margin in our scenario is quite small (1-3 millimeters), which is nominal for the movement or grasping of robot systems. In addition, as a follow-up to our current research, we plan to conduct experiments and comparisons with existing methods after making structural adjustments to our method to match its output with theirs. Another future project is to segment under obstructed or congested conditions with a 360-degree view of the surroundings. As demonstrated in this study, segmented object measurements are critical for intelligent and robot system applications to overcome margins of error in real-world settings.

    5 Conclusion

In this paper, a novel approach to 3D instance segmentation and detachment of multiple objects and their context from an indoor environment has been presented, which has not been explored before. The approach uses RGB-D data from an Intel RealSense sensor and recognizes targeted objects in the RGB data by utilizing PointRend with Mask R-CNN. An effective mechanism has been developed and evaluated for segmenting 3D instances and detaching them from their background. Segmentation of the 3D instances is achieved by identifying masked pixels in the RGB and depth data. Furthermore, a comprehensive comparison of the proposed 3D instance segmentation approach for the indoor environment was collected and assessed. The outcome of this research covers 3D instance segmentation for single as well as multiple objects in an indoor environment. The same approach can be applied to multiple overlapping RGB-D data frames by utilizing registration approaches.

This research demonstrates effective segmentation and separation of objects in an indoor environment for robot vision. Different applications, e.g., intelligent and robot systems, may use the separated objects and their backgrounds. Since robots use precise xyz coordinates to grasp objects, it matters that our proposed approach has only a small discrepancy between the actual measurements and the 3D object measurements, one that robot systems may neglect.

Although the research output is adequate, certain aspects of the proposed approach still have potential for improvement, for instance the identification of occluded objects and their 3D instance segmentation, where noise may occur due to different camera sensors and environmental factors in the 3D view of scenes. Furthermore, the research intends to bring this work forward to the next stage, involving a 360° registration process over multiple distinct overlapping scenes. After obtaining a registered view of several overlapping RGB-D data frames, 3D instances can be segmented and separated from their background in 360°.

Funding Statement: This research was supported by the BB21 plus program, funded by Busan Metropolitan City and the Busan Institute for Talent & Lifelong Education (BIT), and by a grant from the Tongmyong University Innovated University Research Park (I-URP), funded by Busan Metropolitan City, Republic of Korea.

    Conflicts of Interest:The authors declare no conflict of interest.
