
    Visual Navigation Method for Indoor Mobile Robot Based on Extended BoW Model

    2018-01-12 08:30:44

    Xianghui Li, Xinde Li, Mohammad Omar Khyam, Chaomin Luo, and Yingzi Tan

    1 Introduction

    In the research field of mobile robots, navigation is very important and necessary; its aim is to make a robot move to a desired destination while completing specified tasks as expected.

    The first step in navigation is to build a model of the environment, which can be a grid, geometric, topological or 3D one. Thrun proposed a hybrid method that combines grid and topological models [1,2], using the grid to represent local features and a Voronoi graph [3] to represent topological structures. Wu Xujian et al. also proposed a visual navigation method for mobile robots based on a hand-drawn map [4], which provides the start and end points, the approximate distance between them, the approximate positions of landmarks the robot might meet during navigation, etc. Rather than a traditional map-building model, such as a grid, geometric or topological one, we use the convenient hand-drawn map [5]. Although some researchers have used artificial [6-10] and quasi-artificial landmarks (photos of natural ones taken in advance [4,5]), recognizing natural landmarks is essential for navigating successfully in real environments. However, unlike an artificial landmark, a natural one is not labeled.

    Also, we should not restrict natural landmarks to specific objects due to the changeable environment; in other words, even if they change to some extent or adopt another form, a robot should still be capable of recognizing them using a visual sensor. In order to achieve our goal, we solve the problem of general object recognition in a real environment.

    The BoW (Bag of Words) model is an effective method for general object recognition because of its simple strategy and its robustness to an object’s position and deformation in an image. However, as each feature in it is independent of the others, no spatial relationships are considered, although such relationships among features could be useful for describing the internal structures of objects or highlighting the importance of their contextual visual information. Although research on this theme is becoming more and more popular [11-19], there is still room for improvement.
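    Before extending it, the plain BoW representation itself can be sketched in a few lines of Python. This is a minimal illustration with toy 2-D "descriptors" standing in for SIFT features; note how the histogram keeps no information about where the features lie.

```python
def bow_histogram(features, vocabulary):
    """Plain BoW: assign each local feature to its nearest visual word
    (by squared Euclidean distance) and count occurrences; all spatial
    layout of the features is discarded."""
    hist = [0] * len(vocabulary)
    for f in features:
        nearest = min(range(len(vocabulary)),
                      key=lambda j: sum((a - b) ** 2
                                        for a, b in zip(f, vocabulary[j])))
        hist[nearest] += 1
    return hist

# Two toy visual words; three features are counted as [1, 2].
vocab = [(0.0, 0.0), (1.0, 1.0)]
print(bow_histogram([(0.1, 0.2), (0.9, 1.0), (1.1, 0.8)], vocab))  # [1, 2]
```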

    In this paper, we propose a novel extended BoW method which works in the following statistical way. A multi-dimensional vector is used to describe an image, with its elements divided into two parts: one describes the image’s local features, and the other the spatial relationships among them. Then, we use the SVM (Support Vector Machine) classifier to train our model and obtain discriminant functions and, to meet real-time requirements, GPU (Graphics Processing Unit) acceleration to speed up image processing. Finally, this method is successfully applied to indoor mobile robot navigation.

    2 Building Model of Environment

    Building a model of the environment aimed at planning a path for a mobile robot involves feature extraction and information representation.

    Many biological systems, such as those of human beings, butterflies and bees, do not need precise distance information when perceiving their environments through their visual systems; they navigate by remembering some key landmarks based on qualitative analysis.

    Based on the method proposed in [5], we design a hand-drawn map for guiding the robot’s navigation. Its advantage is that we do not need to input detailed environmental information into the robot, which enables our navigation model to handle dynamic situations, such as landmarks that change often or a person walking around the robot without stopping. We require only the starting point and orientation of the robot, the route and its approximate physical distance, and rough estimates of the locations of landmarks the robot might encounter during navigation.
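    The qualitative information listed above can be captured in a small data structure; the field names below are illustrative, not taken from the paper.

```python
from dataclasses import dataclass, field

@dataclass
class HandDrawnMap:
    """Sketch of what a hand-drawn map carries: only coarse, qualitative
    data, never a precise metric model of the environment."""
    start: tuple            # robot's rough starting point (x, y)
    orientation_deg: float  # initial heading
    route: list             # ordered rough waypoints of the sketched route
    route_length_m: float   # approximate physical distance of the route
    landmarks: list = field(default_factory=list)  # (category, rough (x, y))

m = HandDrawnMap(start=(0, 0), orientation_deg=90.0,
                 route=[(0, 0), (3, 0), (3, 4)], route_length_m=7.0,
                 landmarks=[("chair", (1, 0)), ("fan", (3, 2))])
```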

    Our novel extended BoW model for general object recognition, proposed to overcome the challenge of recognizing natural landmarks, is discussed in the next section.

    3 Recognition of Landmarks

    Recognizing different objects which belong to the same category and filtering out complex backgrounds are the most difficult problems faced in the course of general object recognition in a real environment. Every determinate object has its own characteristics, such as local parts and spatial relationships with others. While a human being can understand the advanced semantic features of a picture, a computer can comprehend only the raw information in an image. However, it is still very helpful to refer to the human vision system because the course of general object recognition is analogous to a human’s judgment, that is, firstly, descriptors of general objects are established, then their categories determined through machine learning and, finally, the learned model used to classify and recognize new objects [20].

    3.1 Overview of recognition algorithm

    The method proposed in this paper, the framework of which is presented in Fig.1, follows the principle of general object recognition, that is, firstly, it describes an image, then learns the object model and, finally, classifies objects.

    Fig.1 Framework of extended BoW model.

    3.2 Building vision code library

    In 1999, D. G. Lowe [21] proposed the SIFT (Scale-Invariant Feature Transform) algorithm based on the scale space, which is invariant to translation, rotation and zoom, and improved it in 2004 [22]. It is widely used for object recognition and has a very strong image-matching capability with the following general characteristics.

    a) A SIFT feature is a local feature of an image which remains invariant to translation, rotation, zoom, illumination, occlusion and noise, and even maintains some degree of stability under changes in viewpoint and affine transformation.

    b) It has an abundance of information which is very helpful for fast and accurate image matching.

    c) It is fast and, after optimization, may satisfy the real-time requirements of image matching.

    d) It has strong extendibility and may be combined with other feature vectors.

    Therefore, in this paper, we choose the SIFT algorithm to detect key points; for example, if we want to construct a code library of cars, first we choose pictures of different cars from different views and then detect key points using this algorithm.

    After completing feature detection, we need to establish a vision code library from the large number of ‘words’ obtained by detecting many images. However, as many SIFT descriptors are similar to one another, it is necessary to cluster these vision words. K-means is a common clustering method [23] which groups elements around K centers according to the distance between each element and a center, obtaining the cluster centers through continuous iteration; for example, if the elements for clustering are (x1, x2, x3, …, xn-1, xn) and every element is a d-dimensional vector, they are expected to be clustered into K sets s = {s1, s2, s3, …, sk-1, sk} by minimizing the sum of within-cluster variances:

    arg min_s Σi=1..K Σx∈si ||x − μi||²  (1)

    where μi is the average of si [25].

    Based on experience, we adopt K = 600 and complete the construction of a vision code library in which, after K-means clustering, every code is a 128-dimensional SIFT description vector.
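    The clustering step above can be sketched in Python. The toy 2-D points and deterministic initialization below are illustrative stand-ins for the 128-dimensional SIFT descriptors and K = 600 used in the paper.

```python
def kmeans(points, k, iters=20):
    """Minimal k-means: group d-dimensional points around k centers by
    alternating nearest-center assignment and mean updates."""
    centers = list(points[:k])  # deterministic initialization for illustration
    for _ in range(iters):
        # Assignment step: attach each point to its nearest center.
        clusters = [[] for _ in range(k)]
        for p in points:
            j = min(range(k),
                    key=lambda c: sum((a - b) ** 2
                                      for a, b in zip(p, centers[c])))
            clusters[j].append(p)
        # Update step: move each center to the mean of its cluster.
        for j, cl in enumerate(clusters):
            if cl:  # leave an empty cluster's center where it was
                centers[j] = tuple(sum(xs) / len(cl) for xs in zip(*cl))
    return centers

# Toy data: two well-separated blobs standing in for SIFT descriptors.
pts = [(0.0, 0.1), (0.2, 0.0), (0.1, 0.2), (5.0, 5.1), (5.2, 4.9), (4.9, 5.0)]
codes = kmeans(pts, k=2)
```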

    3.3 Pre-processing images

    As a computer can understand only raw information, acquiring an advanced meaning that reflects an object’s appearance is the major difficulty of general object recognition.

    As previously mentioned, we obtain a vision code library of certain objects and, before describing an image, compute the similarity between each local feature and every word in the library. If a similarity meets the threshold, the local feature is considered a key point that belongs to this image; for example, if there are N vision codes in the library and M local features in an image, the pseudo-code is as follows.

    While i < M
        While j < N
            If similarity(Pi, Qj) meets the threshold
                Return true;
            Else return false.

    where Pi denotes the SIFT descriptor of the i-th local feature in an image and Qj that of the j-th vision code in the library. We define

    (2)

    After performing local feature extraction, we normalize every SIFT descriptor. Although the remaining local features belong to the image, some are produced by the background. Therefore, an extra operation is required to delete them if: (1) the number of local features obtained from the object is much greater than that from the background after computing similarities; or (2) we want to further reduce the background disturbance based on the density distribution of the local features.
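    The similarity filtering described above can be sketched as follows. Since equation (2) is not reproduced here, plain Euclidean distance between descriptors is assumed as the similarity measure, and the toy 2-D points stand in for 128-dimensional SIFT descriptors.

```python
def euclidean(p, q):
    """Euclidean distance between two equal-length vectors."""
    return sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5

def filter_features(features, codes, threshold):
    """Keep only the local features whose nearest vision code lies within
    `threshold`; features far from every code are treated as background."""
    return [f for f in features
            if min(euclidean(f, q) for q in codes) < threshold]

codes = [(0.0, 0.0), (1.0, 1.0)]
feats = [(0.1, 0.0), (0.9, 1.1), (3.0, 3.0)]
print(filter_features(feats, codes, threshold=0.5))  # drops the outlier (3.0, 3.0)
```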

    Fig.2 shows the results when we obtain T local features from the original M ones after computing similarities.

    Obviously, if we want to obtain the object in the rectangle despite some disturbance, we may use RANSAC to reduce the negative influence on the later image description. For convenience, we use a circle to cover the area where the density of features is very high, using the following pseudo-code.

    While iteration < the maximum number of iterations

    The points inside the model are initialized as the key points randomly selected from the T data, with the possible center of the circle taken as their mean:

    (xc, yc) = ((1/n) Σi=1..n xi, (1/n) Σi=1..n yi)  (3)

    where n is the number of randomly selected key points.

    The radius (R) can be defined as the maximum of the distances between the key points in the model and the possible center of a circle.

    For every key point that doesn’t belong to the model, if its distance to the possible center is less than 1.2R, consider that it does belong and add 1 to the number of key points in the model.

    If the number of key points in the model is larger than E (E = 80% × T), consider this model correct and save the possible center of the circle and the key points.

    For each saved model j

    (4)

    If the distance (r, r ∈ [1, T]) is the minimum of all distances, save the model’s possible radius (r) and the 80% of its key points closest to the possible center, and consider that they belong to the object.

    Fig.2 Feature points filtered from background.
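    The RANSAC-style steps above can be sketched as follows. This is an illustrative reconstruction, not the paper's exact procedure: the candidate center is taken as the centroid of the sampled key points, a model is accepted when at least 80% of all T points fall within 1.2R, and, following the final step, the smallest-radius accepted model is kept.

```python
import math
import random

def ransac_cover_circle(points, sample_size=3, iterations=200, seed=1):
    """RANSAC-style search for a circle covering the dense region of key
    points: sample a few points, take their centroid as a candidate center,
    let R be the farthest sample's distance to it, accept the model if at
    least 80% of all points fall within 1.2R, and finally keep the accepted
    model with the smallest radius."""
    rng = random.Random(seed)
    T = len(points)
    accepted = []
    for _ in range(iterations):
        sample = rng.sample(points, sample_size)
        cx = sum(x for x, _ in sample) / sample_size
        cy = sum(y for _, y in sample) / sample_size
        R = max(math.hypot(x - cx, y - cy) for x, y in sample)
        inliers = [p for p in points
                   if math.hypot(p[0] - cx, p[1] - cy) <= 1.2 * R]
        if len(inliers) >= 0.8 * T:
            accepted.append((R, (cx, cy), inliers))
    return min(accepted, default=None)  # smallest-radius accepted model

# A 3x3 grid of key points plus one far background outlier.
pts = [(float(x), float(y)) for x in (0, 1, 2) for y in (0, 1, 2)] + [(50.0, 50.0)]
best = ransac_cover_circle(pts)
```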

    3.4 Describing images

    In our extended BoW model, a multi-dimensional vector is used to describe an image, with its elements grouped in two parts: one describes the image’s local features and the other the spatial relationships among them.

    a) The local features are described by the numbers of times words from the vision code library appear; for example, if there are (x0, x1, x2, …, xP-2, xP-1) vision words in the library, the dimension of this part of the vector is P, with each dimension denoting the number of times the corresponding vision code appears.

    b) To describe the spatial relationships among the local features, we use the distance between each key point and the center of a circle, together with a relative angle. The new center of the key points is given by equation (5), where m indicates the number of key points after processing, and this geometric center is taken as the center of the circle. As shown in Fig.3, the marks around it denote key points; for example, for the five-pointed star in the upper right corner, the corresponding distance and angle are L and θ, respectively.

    (xc, yc) = ((1/m) Σi=1..m xi, (1/m) Σi=1..m yi)  (5)

    The Euclidean distances between every key point and the geometric center (xc, yc) are calculated as the distances (L1, L2, L3, …, Lm-1, Lm). We take the median distance as the unit length L and divide all distances into the ranges 0–0.5L, 0.5L–L, L–1.5L and 1.5L–MAX according to the ratio Li/L.

    A random key point is chosen and the angles between its half-line to the center of the circle and those of all the other key points are calculated, i.e., θ in Fig.3. After simple mathematical transformations we obtain these angles (θ1, θ2, θ3, …, θm-1, θm), each measured counter-clockwise from the chosen key point. In fact, each angle is not very large, and we divide them into five ranges: 0°–30°, 30°–60°, 60°–90°, 90°–120° and 120°–MAX.

    Fig.3 Spatial relationships of feature points.

    Therefore, 4 × 5 = 20 spatial relationships are generated, and every image can be described by the vector (x0, x1, x2, …, xP-1, y0, y1, y2, …, yQ-1). The dimensions P and Q stand for the numbers of times each vision word and each spatial relationship appear in an image, respectively. Since the distances and angles are relative, the descriptions of the spatial relationships are invariant to translation, rotation and zoom; finally, the vector is normalized.
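    Putting the two parts together, the extended descriptor can be sketched as below. The bin edges follow the text (four distance ranges, five angle ranges); taking the first key point as the angle reference is an assumption made here for illustration.

```python
import math

def describe_image(assigned_words, keypoints, num_words):
    """Extended BoW vector: a word-count histogram (P = num_words entries)
    followed by 4 x 5 = 20 spatial-relationship counts, normalized at the
    end. `assigned_words[i]` is the vision-code index of keypoint i."""
    m = len(keypoints)
    # Part 1: how often each vision word appears.
    word_hist = [0.0] * num_words
    for w in assigned_words:
        word_hist[w] += 1
    # Geometric center of the key points (equation (5)).
    cx = sum(x for x, _ in keypoints) / m
    cy = sum(y for _, y in keypoints) / m
    dists = [math.hypot(x - cx, y - cy) for x, y in keypoints]
    L = sorted(dists)[m // 2]  # median distance as the unit length
    # Angles measured relative to the first key point's half-line.
    ref = math.atan2(keypoints[0][1] - cy, keypoints[0][0] - cx)
    spatial = [0.0] * 20
    for (x, y), d in zip(keypoints, dists):
        di = min(int(d / (0.5 * L)), 3)  # bins 0-0.5L, 0.5L-L, L-1.5L, 1.5L-MAX
        ang = math.degrees(math.atan2(y - cy, x - cx) - ref) % 360
        ai = min(int(ang / 30), 4)       # bins 0-30, ..., 120-MAX degrees
        spatial[di * 5 + ai] += 1
    vec = word_hist + spatial
    norm = sum(v * v for v in vec) ** 0.5 or 1.0
    return [v / norm for v in vec]

vec = describe_image([0, 1, 1, 2],
                     [(0.0, 0.0), (2.0, 0.0), (0.0, 2.0), (2.0, 2.0)],
                     num_words=3)
```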

    3.5 Obtaining discriminant function

    During navigation, the camera continually acquires images and arrives at judgments according to the discriminant function which is trained offline and obtained as follows.

    Generally speaking, there are two kinds of classifiers, depending on the degree to which a human participates during learning: supervised and unsupervised. As the SVM has attracted a great deal of attention and recently achieved good results, we use it as a supervised classifier to train our models. Its aim is to separate two kinds of patterns as widely as possible, according to the principle of structural risk minimization, by constructing a discriminant function.

    The training set is supposed to be {xi, yi}, i = 1, …, l, xi ∈ R^n, yi ∈ {−1, 1}, which can be separated by a hyperplane (w·x) + b = 0 with a margin Δ from the samples:

    (w·x) + b = 0, ||w|| = 1

    if (w·x) + b ≥ Δ, then y = 1

    if (w·x) + b ≤ −Δ, then y = −1
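    The margin conditions above can be made concrete with a toy trainer; this hinge-loss subgradient descent is a simplified stand-in for a real SVM solver, shown on a small linearly separable set.

```python
def train_linear_svm(X, y, lam=0.01, epochs=200, lr=0.1):
    """Toy primal SVM: minimize hinge loss plus L2 regularization by
    subgradient descent (a stand-in for an SVM library)."""
    n = len(X[0])
    w, b = [0.0] * n, 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            margin = yi * (sum(wj * xj for wj, xj in zip(w, xi)) + b)
            if margin < 1:   # inside the margin: move toward the sample
                w = [wj + lr * (yi * xj - lam * wj) for wj, xj in zip(w, xi)]
                b += lr * yi
            else:            # correctly classified: only regularize
                w = [wj * (1 - lr * lam) for wj in w]
    return w, b

def predict(w, b, x):
    """Sign of (w.x) + b, i.e., the learned discriminant function."""
    return 1 if sum(wj * xj for wj, xj in zip(w, x)) + b >= 0 else -1

X = [(2.0, 2.0), (3.0, 3.0), (-2.0, -1.0), (-3.0, -2.0)]
y = [1, 1, -1, -1]
w, b = train_linear_svm(X, y)
```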

    3.6 Recognizing landmarks online

    The discriminant functions obtained offline, including those of many different kinds of objects, are used to build a database which a robot uses to recognize landmarks.

    While the robot is running, its camera continually acquires image information, and every image is processed using the SIFT algorithm to obtain feature points. After computing similarities, the feature points that meet the specified threshold are saved and processed by the RANSAC algorithm to reduce background disturbance. Every image is then represented as a (P+Q)-dimensional vector and recognized by the offline discriminant functions in the database. A series of recognition results is thus obtained, which the robot uses to localize itself; the framework is shown in Fig.4.

    Fig.4 Framework of online landmark recognition.
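    The online loop in Fig.4 can be summarized with a bit of hypothetical glue code; `describe` stands for the whole front end (SIFT detection, similarity filtering, RANSAC and vector building), and each entry of the database is a discriminant function (w, b) obtained offline.

```python
def recognize_landmarks(frames, discriminants, describe):
    """For every incoming frame, build its (P+Q)-dimensional vector and
    report which offline discriminant functions (w, b) accept it."""
    results = []
    for frame in frames:
        vec = describe(frame)
        hits = [name for name, (w, b) in discriminants.items()
                if sum(wi * vi for wi, vi in zip(w, vec)) + b >= 0]
        results.append(hits)
    return results

# Toy stand-ins: 2-D "descriptors" and two linear discriminants.
describe = lambda frame: frame
db = {"chair": ((1.0, 0.0), -0.5), "fan": ((0.0, 1.0), -0.5)}
out = recognize_landmarks([(1.0, 0.0), (0.0, 1.0)], db, describe)
print(out)  # [['chair'], ['fan']]
```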

    We can summarize our recognition algorithm in two parts, i.e., offline and online.

    (1) Recognition of landmarks offline: in the training and learning phase, I = {I1, I2, I3, …, Im} is a set of images for generating visual words; A = {A1, A2, A3, …, Aw} is a set of w training images containing target objects, with every image marked as +1 when trained; and B = {B1, B2, B3, …, Bt} is a set of training images not containing target objects, with every image marked as −1 when trained. This phase involves the following three steps:

    a) a visual code library is generated from the image set I;

    b) every image in A and B is represented as a multi-dimensional vector with background disturbance reduced;

    c) the SVM is used to train on these images and finally obtain the discriminant functions.

    (2) Recognition of landmarks online:

    a) an image is obtained, its background disturbance is reduced, and it is represented as a multi-dimensional vector;

    b) landmarks are recognized using the discriminant functions.

    4 Robot Navigation

    4.1 Navigation algorithm

    The navigation flowchart shown in Fig.5 is explained in detail in [5]. However, the method in [5] requires photos of landmarks to be taken manually in advance and used as image-matching templates, whereas our proposed method recognizes natural landmarks through general object recognition.

    Fig.5 Navigation flowchart.

    4.2 GPU acceleration during image processing

    In the course of general object recognition, the SIFT algorithm is used mainly to detect key points, each described by a 128-dimensional vector.

    These key points are local extrema containing orientation information, detected in different scale spaces of an image, and can be described in terms of scale, size and orientation. As this process takes a long time, it is necessary to use GPU acceleration to speed up the SIFT algorithm, which occupies most of our algorithm’s running time.

    The GPU has a great advantage over the Central Processing Unit (CPU) for image processing, and NVIDIA released an official development platform, CUDA, on which we can run the SIFT code in parallel. We test images of different sizes with different numbers of key points using the following configuration: operating system: 32-bit Windows 7; memory: 2 GB; CPU: Intel(R) Core(TM) 2 Duo E7500 @ 2.93 GHz; GPU: NVIDIA GeForce 310; dedicated video memory: 512 MB; shared system memory: 766 MB; compiler environment: Visual Studio 2010.

    As the test results in Table 1 show, the acceleration is obvious when an image is large and has many key points.

    Table 1 Comparisons with and without GPU.

    5 Experimental Results

    5.1 Experimental environment

    A Pioneer 3-DX mobile robot equipped with a PTZ monocular color camera, 16 sonar sensors, a speedometer, an electronic compass, etc., is chosen for our experiments, which are conducted in the SEU mobile robot laboratory shown in Fig.6. The laboratory is approximately 10 m × 8 m.

    5.2 Specific experiments

    5.2.1 Experiment 1

    As our proposed algorithm aims to recognize general objects, the main goal of this experiment is to recognize different objects of the same category. The experiment is conducted three times with five kinds of key landmarks each time, i.e., chair, guitar, wastebasket, umbrella and fan; the specific instances may be changed each time while their categories remain the same. The robot always starts from the lower left-hand corner and finishes at the upper right-hand one. The hand-drawn map and the three paths it runs during its navigation are shown in Fig.7.

    As can be seen, even if the landmarks are replaced by other instances of the same categories, our algorithm still works well. Therefore, the robot always reaches its destination successfully.

    5.2.2 Experiment 2

    The robustness of the navigation algorithm is tested when the landmarks are moved slightly. In the first navigation, all the positions of the landmarks and the hand-drawn route remain unchanged; in the second, the landmarks are moved 1 meter to the left; and in the third, they are moved 1 meter to the right.

    As can be seen in Fig.8, the robot can still reach its intended destination even if the landmarks are moved slightly from their original positions.

    Fig.6 Experiment settings.

    Fig.8 Experiment 2: navigation with the movement of landmarks in different directions.

    5.2.3 Experiment 3

    The robot’s navigation performance is tested by reducing the number of landmarks from 5 the first time to 4 the second time and 3 the third time.

    As can be seen in Fig.9, there is almost no effect on the path when the number of landmarks is reduced. However, if the environment is too large and the number of landmarks too small, the robot might not navigate very well according to the sketched map due to the lack of necessary information. In this case, the odometer plays an important role.

    Fig.9 Experiment 3: navigation with the reduced number of landmarks.

    5.2.4 Experiment 4

    By setting obstacles in the path the robot might cover according to the hand-drawn map, we aim to test its capability to avoid them by recording its reactions when facing them.

    As shown in Fig.10, the robot tries to avoid the obstacles it might encounter according to the navigation algorithm by using sonar sensors.

    Fig.10 Experiment 4: navigation with the obstacles.

    5.2.5 Experiment 5

    To further test the robot’s navigational performance, in the first navigation the landmarks on the hand-drawn map correspond to those in the real environment, while in the second the fourth landmark on the map is a rubbish bin although the real one is a chair.

    As can be seen in Fig.11, during its second navigation the robot cannot find the fourth landmark because it does not correspond to what is shown on the hand-drawn map. However, the robot still arrives at its destination based on the other landmarks, the odometer and the map.

    Fig.11 Experiment 5: navigation with missing landmarks.

    6 Conclusions and Future Work

    In this paper, we proposed a novel general object recognition algorithm using offline training and learning and online recognition, which is helpful for recognizing natural landmarks and even for human-robot interaction. We successfully applied it to robot navigation based on a hand-sketched map and natural landmarks, with the experimental results demonstrating its advantages. Because the world changes rapidly over time, categories of objects have become increasingly complex. As our current recognition algorithm is still limited, our next work will focus on online learning and training for recognition.

    Acknowledgment

    This work was supported in part by the National Natural Science Foundation of China under Grant 61573097 and 91748106, in part by the Qing Lan Project and Six Major Top-talent Plan, and in part by the Priority Academic Program Development of Jiangsu Higher Education Institutions.

    [1] S. Thrun, Robotic Mapping: A Survey. Pittsburgh: CMU-CS-02-111, School of Computer Science, Carnegie Mellon University, 2002.

    [2] S. Se and D. Lowe, Mobile robot localization and mapping with uncertainty using scale-invariant visual landmarks, The International Journal of Robotics Research, vol. 21, no. 8, pp. 735-738, 2002.

    [3] M. I. Shamos and D. Hoey, Closest-point problems, in 16th Annual Symposium on Foundations of Computer Science, 1975.

    [4] X. Wu, Visual Navigation Method Research for Mobile Robot Based on Hand-drawn Map. Nanjing: Southeast University, 2011.

    [5] X. Li, X. Zhang, and X. Dai, An interactive visual navigation method using a hand-drawn route map in an unknown dynamic environment, International Journal of Fuzzy Systems, vol. 13, no. 4, pp. 311-322, 2011.

    [6] K. Kawamura, R. A. Peters II, D. M. Wilkes, A. B. Koku, and A. Sekmen, Toward perception-based navigation using EgoSphere, in Proceedings of SPIE 4573, Mobile Robots XVI, Boston, MA, United States, October 2001.

    [7] K. Kawamura, A. B. Koku, D. M. Wilkes, R. A. Peters II, and A. Sekmen, Toward egocentric navigation, International Journal of Robotics and Automation, vol. 17, no. 4, pp. 135-145, 2002.

    [8] G. Chronis and M. Skubic, Sketch-based navigation for mobile robots, in Proceedings of the 12th IEEE International Conference on Fuzzy Systems (FUZZ-IEEE), 2003, vol. 1, pp. 284-289.

    [9] M. Skubic, C. Bailey, and G. Chronis, A sketch interface for mobile robots, in Proceedings of the IEEE International Conference on Systems, Man and Cybernetics (SMC), 2003, vol. 1, pp. 919-924.

    [10] M. Skubic, S. Blisard, C. Bailey, J. A. Adams, and P. Matsakis, Qualitative analysis of sketched route maps: Translating a sketch into linguistic descriptions, IEEE Transactions on Systems, Man and Cybernetics, vol. 34, no. 2, pp. 1275-1282, 2004.

    [11] T. Li, T. Mei, I. Kweon, and X. S. Hua, Contextual bag-of-words for visual categorization, IEEE Transactions on Circuits and Systems for Video Technology, vol. 21, no. 4, pp. 381-392, 2011.

    [12] T. Liu, J. Liu, S. Liu, and H. Q. Lu, Expanded bag of words representation for object classification, in Proceedings of the 2009 16th IEEE International Conference on Image Processing, 2009, pp. 297-300.

    [13] S. Agarwal, A. Awan, and D. Roth, Learning to detect objects in images via a sparse, part-based representation, IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 26, no. 11, pp. 1475-1490, 2004.

    [14] X. L. Liu, Y. H. Lou, A. W. Yu, and B. Lang, Search by mobile image based on visual and spatial consistency, in 2011 IEEE International Conference on Multimedia and Expo (ICME), 2011, pp. 1-6.

    [15] X. Cheng, J. Wang, L. Chia, and X. Hua, Learning to combine multi-resolution spatially-weighted co-occurrence matrices for image representation, in 2010 IEEE International Conference on Multimedia and Expo (ICME), 2010, pp. 631-636.

    [16] M. Sun and V. Hamme, Image pattern discovery by using the spatial closeness of visual code words, in 2011 18th IEEE International Conference on Image Processing, 2011, pp. 205-208.

    [17] I. Elsayad, J. Martinet, T. Urruty, and C. Djeraba, A new spatial weighting scheme for bag-of-visual-words, in 2010 International Workshop on Content-Based Multimedia Indexing, 2010, pp. 1-6.

    [18] S. Lazebnik, C. Schmid, and J. Ponce, Beyond bags of features: Spatial pyramid matching for recognizing natural scene categories, in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, New York, USA, 2006, pp. 2169-2178.

    [19] L. Zhang, C. Wang, B. Xiao, and Y. Shao, Image representation using bag-of-phrases, Acta Automatica Sinica, vol. 38, no. 1, pp. 46-54, 2012.

    [20] H. Chen, Research on Object Classifier Based on Distinguished Learning. Hefei: University of Science and Technology of China, 2009.

    [21] D. G. Lowe, Object recognition from local scale-invariant features, in Proceedings of the Seventh IEEE International Conference on Computer Vision, Kerkyra, Greece, 1999, pp. 1150-1157.

    [22] D. G. Lowe, Distinctive image features from scale-invariant keypoints, International Journal of Computer Vision, vol. 60, no. 2, pp. 91-110, 2004.

    [23] J. A. Hartigan, Clustering Algorithms. New York, NY, USA: John Wiley and Sons, 1975.

    [24] Z. Niu, Research of Key Technology in Image Recognition. Shanghai: Shanghai Jiao Tong University, 2011.

    [25] J. Sivic and A. Zisserman, Video Google: A text retrieval approach to object matching in videos, in Proceedings of the IEEE International Conference on Computer Vision, Nice, France, 2003, vol. 2, pp. 1470-1477.
