
    Avoiding Non-Manhattan Obstacles Based on Projection of Spatial Corners in Indoor Environment

    2020-08-05 09:40:22
    IEEE/CAA Journal of Automatica Sinica, 2020, Issue 4

    Luping Wang and Hui Wei

    Abstract—Monocular vision-based navigation is a considerable ability for a home mobile robot. However, due to diverse disturbances, helping robots avoid obstacles, especially non-Manhattan obstacles, remains a big challenge. In indoor environments, there are many spatial right-corners that are projected into two-dimensional projections with special geometric configurations. These projections, which consist of three lines, make it possible to estimate their position and orientation in 3D scenes. In this paper, we present a method for home robots to avoid non-Manhattan obstacles in indoor environments using a monocular camera. The approach first detects non-Manhattan obstacles. Through analyzing geometric features and constraints, it is possible to estimate posture differences between the orientation of the robot and non-Manhattan obstacles. Finally, according to the convergence of posture differences, the robot can adjust its orientation to keep pace with the pose of detected non-Manhattan obstacles, making it possible to avoid these obstacles by itself. Based on geometric inferences, the proposed approach requires no prior training or any knowledge of the camera's internal parameters, making it practical for robot navigation. Furthermore, the method is robust to errors in calibration and image noise. We compared the errors from corners of estimated non-Manhattan obstacles against the ground truth. Furthermore, we evaluated the validity of the convergence of differences between the robot orientation and the posture of non-Manhattan obstacles. The experimental results showed that our method is capable of avoiding non-Manhattan obstacles, meeting the requirements for indoor robot navigation.

    I. Introduction

    With the aging population and the growing number of disabled people, the development of home service robots is becoming an increasingly urgent issue. Visual navigation in an indoor environment has considerable value for monitoring and mission planning. However, there exists a multitude of disturbances from clutter and occlusion in an indoor environment, making obstacle avoidance, especially for non-Manhattan obstacles (e.g., shelves, sofas, chairs), a difficult challenge for vision-based robots.

    In contrast to current methods (e.g., 3D laser scanners), visual navigation using a single low-cost camera draws more attention because it is advantageous in cost and efficiency. Regarding the human visual system, Gibson's "visual cliff" experiment showed that perception of depth is inborn and does not require additional knowledge [1]. It was once believed that humans recover three-dimensional structure using binocular parallax. However, it has been indicated that the human ability to estimate the depth of isolated points is extremely weak, and that we are more likely to infer the relative depths of different surfaces from their joint points [2]. This suggests that binocular features are not that important, and that it is possible to understand scenes using only monocular images. Meanwhile, it was reported that humans are sensitive to surfaces of different orientations, allowing us to extract surface and orientation information for understanding a scene [3]. Accordingly, it can be assumed that there are some simple rules that can be used to infer 3D structure over a short period of time. Methods have been presented to understand indoor scenes based on projections of rectangles and right angles, but non-Manhattan obstacles remain an undiscussed issue [4], [5].

    In this paper, we present a method which allows for understanding of non-Manhattan obstacles in an indoor environment from a single image, without prior training or internal calibration of the camera. First, straight lines are detected, and projections of spatial corners consisting of three lines are extracted. Second, through geometric inferences, it is possible to understand the non-Manhattan obstacles. Finally, through the convergence of differences in geometric features, it is possible to adjust the robot orientation to keep pace with the posture of non-Manhattan obstacles, allowing for the avoidance of such objects.
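As a rough illustration of the first step, the sketch below screens three detected line segments as a candidate projection of a spatial corner by testing whether one endpoint of each segment lies near a common junction point. The segment representation and the pixel tolerance are our own assumptions for illustration, not the paper's exact extraction procedure.

```python
import itertools
import math

def corner_junction(segments, tol=5.0):
    """Given three line segments, each a pair of 2D endpoints
    ((x1, y1), (x2, y2)), try every choice of one endpoint per
    segment; if some triple of endpoints is mutually within `tol`
    pixels, return their centroid as the junction, else None."""
    def dist(p, q):
        return math.hypot(p[0] - q[0], p[1] - q[1])

    # One endpoint from each segment -> 2^3 = 8 candidate triples.
    for trio in itertools.product(*segments):
        if all(dist(a, b) <= tol
               for a, b in itertools.combinations(trio, 2)):
            cx = sum(p[0] for p in trio) / 3.0
            cy = sum(p[1] for p in trio) / 3.0
            return (cx, cy)
    return None
```

For example, three segments whose endpoints cluster around pixel (100, 100) yield a junction near that point, while three mutually distant segments yield None and are discarded as a corner candidate.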

    In contrast to data-driven methods, such as those using deep learning, the proposed approach requires no prior training. With the use of simple geometric inferences, the proposed algorithm is robust to changes in illumination and color. Despite disturbances, the method can understand non-Manhattan obstacles with neither knowledge of the camera's intrinsic parameters nor the relation between the camera and the world, making it practical and efficient for a navigating robot. Besides, requiring no other external devices, the method has the advantage of lower required investment.

    For classic benchmarks, our algorithm is capable of describing details of non-Manhattan obstacles. We compared the corners estimated by the proposed approach against the corner ground truth, measuring the error as a percentage of pixels by summing up all Euclidean distances between estimated corners and the associated ground-truth corners. Furthermore, the experimental results demonstrated that robots can understand non-Manhattan obstacles and avoid them via the convergence of the posture difference between the robot orientation and the non-Manhattan obstacle, meeting the requirements of indoor robot navigation.

    II. Related Work

    There are previous works which have made impressive progress, including structure-from-motion [6]–[9] and visual SLAM [10]–[14]. Through a series of visual observations, they produce a scene model in the form of a 3D point cloud. One method combined three-dimensional point clouds and image data for semantic segmentation [15]. Nevertheless, only a fraction of the information in the original images can be provided via point clouds and geometric cues, so aspects such as edge textures are sometimes lost.

    Also, 3D structures can be reconstructed by inferring the relationships between connected superpixels. Saxena et al. assigned each pixel of an image to grass, trees, sky, or something else through heuristic knowledge [16]. But these methods hardly work in indoor settings with different levels of clutter and incomplete surfaces and coverage.

    Furthermore, there are approaches that model geometric scene structures from a single image, including approaches for geometric label classification [17] and for finding vertical/ground fold-lines [18]. In other work [19], local image properties were linked to a classification system of local surface orientation, and walls were extracted based on their joint points with the floor. However, due to a great dependence on precise floor segmentation, these methods may fail in an indoor environment with clutter and covers. There has been renewed interest in 3D structures in restricted domains such as the Manhattan world [20], [21]. Based on vanishing points, a method detected rectangular surfaces aligned with major orientations [5]. But only dominant directions were discussed, and object surface information was not extracted.

    Additionally, a top-down approach for understanding indoor scenes was presented by Pero et al. [22]. However, it was difficult to explain room box edges when there were no additional objects. Although Pero's algorithm [23] can understand the 3D geometry of indoor environments, it required objects and prior knowledge such as relative dimensions, size, and locations. Also, a comprehensive Bayesian generative model was proposed to understand indoor scenes [24], but it relied on more specific and detailed geometric models, and suffered greatly from hallucinating objects. Conversely, parameterized models of indoor environments were developed [25]. However, this method sampled possible spatial layout hypotheses without clutter, was prone to errors because of occlusions, and tended to fit rooms where walls coincided with object surfaces. Meanwhile, the relative depth order of rectangular surfaces was inferred by considering their relationships [26], [27], but this only provided depth cues for partial rectangular regions in the image, not the entire scene.

    Approaches that estimate which parts of the 3D space are free and which are occupied by objects model the scene either in terms of clutter [28], [29] or bounding boxes [30], [22]. Significant work has combined 3D geometry and semantics in the scope of outdoor scenes. Hedau proposed a method that identified beds by combining image appearances and 3D reasoning made possible by estimating the room layout [31].

    In Dasgupta's work [32], the indoor layout can be estimated by using a fully convolutional neural network in conjunction with an optimization algorithm. It evenly sampled a grid over a feasible region to generate candidates for vanishing points. Nevertheless, the vanishing point may not lie in the feasible region when the robot faces certain layout scenarios, such as a two-wall layout. Additionally, because of the iterative refinement process, optimization took approximately 30 seconds per frame, with a step size of 4 pixels for sampling lines and a grid of 200 vanishing points. Hence, the efficiency of this method cannot meet the requirements of robot navigation in an indoor environment. Also, a method was presented to predict room layout from a panoramic image [33]. Meanwhile, other methods using convolutional neural networks were proposed to infer indoor scenes from a single image [34]–[38]. Since these methods take no account of non-Manhattan structures, it is difficult for them to understand non-Manhattan obstacles.

    Recently, a method was presented to detect horizontal vanishing points and the zenith vanishing point in man-made environments [39]. Also, another method was proposed to estimate the camera orientation and vanishing points through nonlinear Bayesian filtering in a non-Manhattan world [40]. However, it is difficult for these methods to understand non-Manhattan obstacles. In previous work, the proposed algorithm could estimate the layout of an indoor scene via projections of spatial rectangles, but had difficulty handling non-Manhattan structures [5]. Also, a method can provide understanding of indoor scenes that satisfy the Manhattan assumption [4]; however, it fails to understand non-Manhattan obstacles because structures that do not satisfy the Manhattan assumption were not discussed. Therefore, it is necessary to develop an algorithm that understands non-Manhattan obstacles for visual navigation using a single low-cost camera on a robot. Furthermore, a method with low consumption and high efficiency meets the requirements of robot navigation.

    III. Inference

    Fig. 5. An example of avoiding an obstacle of non-Manhattan structure.

    TABLE II Turn Motion Mode in the Camera Coordinate System

    Based on its understanding of the indoor scene, the robot can turn its orientation in order to keep pace with the posture of different structures (Manhattan or non-Manhattan). The turning of its orientation can be modeled as the convergence of the function Dt, as shown in Fig. 6. With a converging value of the posture difference, the robot can adjust its orientation step by step. As Dt → 0, the robot's orientation is in accordance with the posture of the obstacle, allowing it to avoid the obstacle by itself.
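The convergence of the posture difference can be sketched as a simple control loop: at each step the robot turns toward the obstacle pose by at most a preset maximum angle, so the difference shrinks until it falls below a threshold. The linear yaw model, the 5-degree turn cap, and the 0.5-degree stopping threshold below are illustrative assumptions, not the paper's actual controller.

```python
def align_with_obstacle(robot_yaw, obstacle_yaw, max_turn=5.0, eps=0.5):
    """Step-wise orientation adjustment: at each step the robot turns
    toward the obstacle pose by at most `max_turn` degrees, so the
    posture difference shrinks until it drops below `eps`.
    Returns the sequence of absolute differences (a converging trace)."""
    history = []
    d = obstacle_yaw - robot_yaw
    while abs(d) >= eps:
        history.append(abs(d))
        # Clamp the per-step turn to the preset maximum angle.
        step = max(-max_turn, min(max_turn, d))
        robot_yaw += step
        d = obstacle_yaw - robot_yaw
    history.append(abs(d))
    return history

# Starting 76.43 degrees off (the eta value quoted in Section IV),
# the trace decreases monotonically toward zero.
trace = align_with_obstacle(0.0, 76.43)
```

The clamped step mirrors the "prelimited" maximum turning angle mentioned in Section IV: large differences produce a sequence of maximum turns, and the final small residual is corrected in one step.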

    IV. Experimental Results

    We designed experiments to evaluate the performance of the robot in avoiding non-Manhattan obstacles through the proposed approach. The focus of the experiments is to evaluate the algorithms underlying the execution of a real robot mounted with only one camera. The goals of the experiments are to evaluate not only the performance in detecting non-Manhattan obstacles in indoor settings, but also the ability to avoid such non-Manhattan obstacles by turning the robot's orientation via Dt.

    A. Performance of Detecting Non-Manhattan Obstacles

    For an input image that contains many occlusions and much clutter, our method copes with the clutter without prior training. Based on the geometric constraints of spatial corners, our approach not only detects obstacles satisfying the Manhattan assumption, but can also estimate the pose of obstacles, especially non-Manhattan obstacles.

    Fig. 6. The decreasing posture difference between the obstacle and the robot orientation.

    We compare the obstacles estimated by our algorithm against the ground truth, measuring the corner error by summing up all Euclidean distances between the estimated corners and the associated ground-truth corners. The performance on the LSUN dataset [44] is compared in Table III. Although the corner errors appear lower for the methods of [32], [34], they only measured the error of corners belonging to the layout of the indoor setting, without the ability to understand and estimate the corners of non-Manhattan obstacles. In contrast, our method estimates the error of corners of non-Manhattan obstacles, which plays an important role in navigation, allowing the robot to avoid non-Manhattan obstacles in the indoor setting.
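The corner-error metric described above can be sketched as follows. The paper reports the error as a percentage of pixels but the exact normalization is not spelled out in this section; dividing the summed distances by the image diagonal is our assumption for illustration.

```python
import math

def corner_error(estimated, ground_truth, image_size):
    """Sum the Euclidean distances between matched estimated and
    ground-truth corners (given as (x, y) pixel pairs) and report the
    total as a percentage of the image diagonal. The diagonal
    normalization is an assumption, not the paper's stated formula."""
    h, w = image_size
    total = sum(math.hypot(ex - gx, ey - gy)
                for (ex, ey), (gx, gy) in zip(estimated, ground_truth))
    return 100.0 * total / math.hypot(w, h)
```

For instance, with one corner off by a 3-4-5 pixel displacement and one exact match in a 100 x 100 image, the summed distance is 5 pixels, and the reported error is 5 divided by the diagonal, as a percentage.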

    TABLE III Performance on the LSUN Dataset

    Experimental comparisons were conducted between our method and Wei's method [4], as shown in Fig. 7. In Fig. 7, the image sizes (height and width) for Wei's scene understanding and for our non-Manhattan obstacle detection are the same. For example, the scene understanding (fifth row, fourth column) and the non-Manhattan obstacle detection (sixth row, fourth column) are from the same group of line segments, with the same number of line segments. The difference is that some line segments which do not satisfy the constraint of angle projections are eliminated in the scene understanding (fifth row, fourth column), resulting in fewer lines being displayed. Since Wei's method only considers lines belonging to the vanishing points of the layout of indoor scenes, it is prone to failure in detecting non-Manhattan obstacles. In contrast, our method can deal with clutter and can efficiently detect details, especially non-Manhattan obstacles, without any prior training.

    Experimental comparisons were also conducted between our method and Wang's method [5], as shown in Fig. 8. Since Wang's method only considers rectangular projections that belong to the vanishing points of the layout of indoor scenes, it is also difficult for it to detect non-Manhattan obstacles.

    Fig. 7. Experimental comparisons. (a) Input frames from the UCB dataset [26]; (b) understanding of indoor scenes by Wei's method [4]; (c) non-Manhattan obstacles estimated by our method; (d) images from the Hedau dataset [42]; (e) results estimated by Wei's method [4]; (f) non-Manhattan structures estimated by our method.

    B. Avoiding Non-Manhattan Obstacles

    Fig. 8. Experimental comparisons. (a) Input images from the LSUN dataset [44]; (b) understanding of indoor scenes by Wang's method [5]; (c) non-Manhattan obstacles estimated by our method.

    Fig. 9. The unmanned aerial vehicle with a two-megapixel fixed camera used for capturing video.

    Here, as shown in Fig. 9, an unmanned aerial vehicle with a two-megapixel fixed camera was used for capturing video. The vision information was transmitted to a computer with an Intel Core i7-6500 CPU at 2.50 GHz. Our method can then be efficiently applied to identify non-Manhattan obstacles in a scene without any prior training. Take Ft1 as an example in Fig. 10 (first column); there is an obvious difference between the orientation of the robot and the pose of the non-Manhattan obstacle. Based on the equation above, Dt1 can be approximately estimated (ηt1 = 76.43). According to Table II, the robot understood the scene, identified the non-Manhattan obstacle, and turned right by a preset angle (the maximum turning angle preset in the robot controller).

    Then, the robot captured Ft2 (Fig. 10, second column) and entered the next understand-turn loop. For the frames shown in Fig. 10, non-Manhattan obstacles are detected, and the pose differences between the robot's orientation and the non-Manhattan obstacles are estimated in Table IV.

    Through successive understanding of non-Manhattan obstacles and orientation turning, the decrease of the differences (Mx and η) can be seen as a convergence process, as shown in Fig. 11. In the left image of Fig. 11, the horizontal axis indicates the frame index in Table IV, and the vertical axis represents the value Mx. Meanwhile, in the right image of Fig. 11, the horizontal axis indicates the frame index in Table IV, and the vertical axis represents the value η. With the decreasing values of Mx and η, the orientation of the robot can be adjusted, step by step, to keep pace with the pose of the detected non-Manhattan obstacles, avoiding the non-Manhattan obstacles by itself.

    Obviously, when facing a non-Manhattan obstacle, the pose difference between the robot orientation and the obstacle can be approximately estimated so as to determine whether and how to change the robot orientation in order to avoid the obstacle.

    V. Conclusion

    Fig. 10. Pose difference estimation. (a), (c) Input frames; (b), (d) pose differences (Mx and η) between the robot orientation and the non-Manhattan obstacles.

    The current work presents an approach for home mobile robots to avoid non-Manhattan obstacles in indoor environments using a monocular camera. The method first detects projections of spatial right-corners and estimates their position and orientation in three-dimensional scenes. Accordingly, it is possible to model non-Manhattan obstacles via the projections of corners. Then, based on understanding such non-Manhattan obstacles, the difference between the robot orientation and the posture of the obstacles can be estimated via geometric features and constraints. Finally, according to this difference, the robot can determine whether and how to turn its orientation so as to keep pace with the posture of the detected non-Manhattan obstacles, making it possible to avoid such obstacles. In contrast to data-driven approaches, the proposed method requires no prior training. With the use of geometric inference, the presented method is robust against changes in illumination and color. Furthermore, without any knowledge of the camera's internal parameters, the algorithm is more practical for robotic application in navigation. In addition, using features from a monocular camera, the approach is robust to errors in calibration and image noise. Without other external devices, this method has the advantages of lower investment and energy efficiency. The experiments measure the corner error by comparing the corners of non-Manhattan obstacles estimated by our algorithm against the ground truth. Moreover, we demonstrated the validity of obstacle avoidance via the convergence of the difference between the robot orientation and the non-Manhattan obstacle posture. The experimental results showed that our method can understand and avoid non-Manhattan obstacles, meeting the requirements of indoor robot navigation.

    Fig. 11. Convergence. (a) Convergence curve of Mx; (b) convergence curve of η.

    TABLE IV Difference Between the Robot Orientation and the Non-Manhattan Obstacles
