
    Ship Local Path Planning Based on Improved Q-Learning

Journal of Ship Mechanics, 2022, No. 6


(1. Key Laboratory of High Performance Ship Technology (Wuhan University of Technology), Ministry of Education, Wuhan 430000, China; 2. School of Transportation, Wuhan University of Technology, Wuhan 430000, China)

Abstract: Local path planning is an important part of intelligent ship sailing in an unknown environment. In this paper, based on the reinforcement learning method of Q-Learning, an improved Q-Learning algorithm is proposed to solve the problems existing in local path planning, such as slow convergence speed, high computational complexity and the tendency to fall into local optima. In the proposed method, the Q-table is initialized with the artificial potential field, so that the algorithm has prior knowledge of the environment. In addition, considering the heading of the ship, the two-dimensional position information is extended to three dimensions by adding the angle information. Then, the traditional reward function is modified by introducing the forward information and the obstacle information obtained by the sensor, and by adding the influence of the environment. Therefore, the proposed method is able to obtain the optimal path while reducing the ship energy consumption to a certain extent. The real-time capability and effectiveness of the algorithm are verified by simulation and comparison experiments.

Key words: Q-Learning; state set; reward function

    0 Introduction

Path planning plays an important role in the navigation of autonomous vehicles such as the unmanned surface vehicle (USV). Path planning methods include global path planning and local path planning. In global path planning, a safe path is planned for the USV in a static environment, where the obstacles are assumed to be known. Local path planning deals with the problems of identifying the dynamic conditions of the environment and avoiding obstacles in real time. For local path planning, several algorithms have been proposed in the literature, such as the artificial potential field[1], genetic algorithm[2], fuzzy logic[3], neural network[4-6] and so on. These algorithms can plan a safe path in a partially known or partially unknown environment, but their adaptability to a completely unknown or rapidly changing environment is limited.

At present, reinforcement learning is a very active research area, and there have been many studies on reinforcement learning for path planning[7-9]. Common reinforcement learning algorithms include Q-Learning, SARSA, TD-Learning and adaptive dynamic programming. Sadhu[10] proposed a hybrid algorithm combining the Firefly algorithm and the Q-Learning algorithm. In order to speed up convergence, the flower pollination algorithm was utilized to improve the initialization of Q-Learning[11]. Cui[12] optimized the cost function through Q-Learning to obtain the optimal conflict-avoidance action sequence of a UAV in a motion-threat environment, and considered the maneuvers in order to create a path plan that complies with UAV movement limitations. Ni[13] proposed a joint action selection strategy based on tabu search and simulated annealing, and created a dynamic learning rate for the Q-Learning-based path planning method so that it can effectively adapt to various environments. Ship maneuverability has also been integrated into the Q-Learning algorithm as prior knowledge, which shortens the model training time[14].

In this paper, we propose an improved Q-Learning algorithm for a ship sailing in an unknown environment. Firstly, the algorithm augments the state variable with the ship's heading information, which improves the path smoothness, and enlarges the action set to increase the path diversity. Then, we introduce the potential field attraction to initialize the Q table and thus speed up the convergence of the algorithm. After that, we modify the reward function by adding the angle of the environmental force and the forward guidance towards the target point, so as to reduce the number of exploration steps and the computation time of the algorithm. Finally, the search performance of the traditional algorithm and that of the improved algorithm in different environments are compared in simulation experiments.

    1 Classical Q-Learning algorithm

Q-Learning is a typical reinforcement learning algorithm developed by Watkins[15], and it is a model-free reinforcement learning method. Q-Learning applies the concepts of reward and penalty in exploring an unstructured environment, and the terms used in Q-Learning are shown in Fig.1. The agent in Fig.1 represents the unmanned vehicle, which in this paper is the USV. The state is the position of the USV in the local environment, and the action is the movement that takes the USV from the current state to the next state. The reward is a positive value that increases the Q value for a correct action, while the penalty is a negative value that decreases the Q value for a wrong action.

In general, the idea of Q-Learning is not to estimate an environment model, but to directly optimize a Q function that can be iterated. The convergence to the optimal Q values does not depend on the policy being followed. The Q values of the Q-Learning algorithm are updated using the expression:

Fig.1 Interaction between agent and environment
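In its standard one-step form, consistent with the notation below, the update is

$$ Q(s,a) \leftarrow Q(s,a) + \alpha \left[ r + \gamma \max_{a'} Q(s',a') - Q(s,a) \right] \qquad (1) $$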

where s indicates the current state of the ship, a indicates the action performed in state s, r indicates the reinforcement signal received after a is executed in s, γ indicates the discount factor (0<γ<1), and α indicates the learning coefficient (0<α<1).

The action to be taken in the next state is determined by the following expression:
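Under the usual ε-greedy rule, and assuming that ε denotes the probability of exploiting the current Q values (an interpretation consistent with the adaptive greed of Section 2.4), this expression can be written as

$$ a = \begin{cases} \arg\max_{a' \in A} Q(s',a'), & c \le \varepsilon \\ \text{a random action from } A, & c > \varepsilon \end{cases} \qquad (2) $$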

where ε indicates the degree of greed, which is a constant between 0 and 1, c is a random value, and A represents the set of actions.

    2 Improved Q-Learning algorithm

    2.1 Improved state set

The state in the traditional Q-Learning algorithm is represented by the grid position, and the corresponding four actions of each state are up, down, left and right[16]. However, ship path planning not only needs the position information, but also needs to consider the heading information. Therefore, the following improvements are made in this paper: (1) the angle information introduced on the basis of the position information is discretized into eight directions; in other words, the state is extended from two degrees of freedom (DOF) to three DOF, as shown in Fig.2; (2) four additional actions are introduced, i.e., left front, right front, left back and right back, as shown in Fig.3; with the enlarged action set, the planned path is more diverse than that of the traditional method. Besides, some actions, such as the backward actions, are punished, because the ship cannot perform a large bow turn and is unlikely to retreat.

    Fig.2 Improved status

    Fig.3 Improved action
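To make the extended representation concrete, the following minimal Python sketch shows one way to encode the three-DOF state and the eight-action set; the identifiers are illustrative and are not taken from the original implementation.

```python
# Illustrative sketch (not the paper's code): the extended three-DOF state and
# the enlarged eight-action set described in Section 2.1.
from enum import IntEnum
from typing import NamedTuple

class Action(IntEnum):
    FRONT = 0
    LEFT_FRONT = 1
    RIGHT_FRONT = 2
    LEFT = 3
    RIGHT = 4
    LEFT_BACK = 5
    RIGHT_BACK = 6
    BACK = 7

class State(NamedTuple):
    x: int        # grid column
    y: int        # grid row
    heading: int  # heading discretized into 8 directions, 0..7 (0 = +x axis)

# The eight discretized headings correspond to 45-degree increments.
HEADINGS_DEG = [i * 45 for i in range(8)]

# Backward-type actions are discouraged through the reward function (Section 2.3).
PUNISHED_ACTIONS = {Action.LEFT_BACK, Action.RIGHT_BACK, Action.BACK}
```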

The ship motion coordinate system established is shown in Fig.4, in which θv represents the ship heading, θgoal represents the angle between the line from the ship's current position to the target point and the x axis, and θenv represents the direction of the environmental force.

    In order to quantitatively analyze the degree of path smoothness planned by the algorithm, a path angle function is introduced as follows:
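Assuming, for illustration, that ζ simply accumulates the absolute turning angles along the planned path (the exact expression used in the paper may differ), the function can be written as

$$ \zeta = \sum_{i=2}^{n} \left| \theta_i - \theta_{i-1} \right| $$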

where θi − θi−1 indicates the included angle between the point i and its previous point, and n represents the number of path points. Obviously, the smaller the value of ζ, the smoother the path curve, which is more suitable for ship navigation.

    Fig.4 Ship motion coordinate system

    2.2 Prior information

We know that the artificial potential field method is widely used in path planning. Its basic idea is to construct an artificial potential field in the surrounding environment of the ship, which includes a gravitational (attractive) field and a repulsive field. The target node generates an attractive field acting on the ship, and each obstacle generates a repulsive field within a certain range around it, so that the resultant force in the working environment pushes the ship towards the target. The traditional Q-Learning algorithm has no prior information, which means that the Q value is set to the same value or a random value in the initialization process, so the convergence speed of the algorithm is slow. In order to solve this problem, the attraction information of the potential field is introduced when the Q table is initialized; that is, we add environmental prior knowledge to speed up the convergence of the algorithm.

The gravitational function follows the attractive potential of the artificial potential field.
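Assuming the common quadratic form, with k_att denoting the attraction gain (a symbol introduced here for illustration) and p_goal the target position, the attractive potential at position p is

$$ U_{att}(p) = \frac{1}{2} k_{att} \left\| p - p_{goal} \right\|^{2} $$

The Q table can then be seeded so that state-action pairs whose successor positions are closer to the target receive larger initial values, for example Q_0(s,a) = −U_att(p'), where p' is the position reached by taking action a in state s; this is one way of encoding the prior knowledge described above.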

    2.3 Improved reward function

The reward function in Q-Learning maps the perceived state to a reinforcement signal to evaluate the advantages and disadvantages of the actions, and its definition determines the quality of the algorithm. The traditional Q-Learning algorithm has a simple definition of the reward function and does not contain any heuristic information, which leads to a slow convergence speed and a high complexity of the algorithm. In this paper, the reward function is divided into three stages: reaching the target point, encountering obstacles, and the remaining case, as shown in Eq.(6).
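Collecting the quantities introduced in the next paragraph, and assuming that f3 acts as a small per-step penalty in the third branch (an interpretation based on the parameter values given in Section 3), a plausible piecewise reading of Eq.(6) is

$$ r = \begin{cases} f_1, & p = \text{Goal} \\ f_2, & \left\| p - d_{obs} \right\| \le d_o \\ f_3 + m_1 d_1 + m_2 d_2 + m_3 d_3, & \text{otherwise} \end{cases} $$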

where f1, f2 and f3 are predefined numbers, p is the current point, Goal is the target point, d_obs is the obstacle position, d_o is the obstacle influence range, m1, m2 and m3 are positive weight coefficients whose sum is 1, and d1, d2 and d3 are the evaluation factors defined as follows.
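Judging from Step 3 of Section 2.5, d1 serves as the forward-guidance factor; one illustrative choice that rewards progress towards the target (p_prev denoting the previous position, a symbol introduced here) is

$$ d_1 = \left\| p_{prev} - \text{Goal} \right\| - \left\| p - \text{Goal} \right\| $$

which is positive whenever the chosen action brings the ship closer to the target point.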

Here d2 is the angle evaluation factor, and k1 and k2 are its weight coefficients. It can be seen from Fig.4 that when the ship carries out collision avoidance while being influenced by the environmental force, the direction of the environmental force matters: following the direction of the environmental resultant force is beneficial for saving energy. If the angle between the direction of the environmental force and the heading of the ship lies between 120° and 240°, sailing is relatively energy-consuming. At the same time, it is necessary to consider the angle between the heading and the bearing of the target position: the smaller this angle, the better. When the direction of navigation coincides with that of the environmental force, energy is conserved; however, the ship should not deviate excessively from the target point, so the punishment becomes larger as the deviation angle of the ship increases.
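One form of d2 that matches this description, assumed here purely for illustration, rewards alignment with both the environmental force and the bearing to the target:

$$ d_2 = k_1 \cos\left( \theta_{env} - \theta_v \right) + k_2 \cos\left( \theta_{goal} - \theta_v \right) $$

With this choice the first term becomes strongly negative when the heading opposes the environmental force (the 120°-240° band mentioned above), and the second term decreases as the heading deviates from the bearing to the target.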

The obstacle evaluation factor d3 penalizes proximity to obstacles, where b is the total number of obstacles in the sensor detection range and d_h(b) is the distance from the current point to the b-th obstacle. The closer the ship is to an obstacle, the greater the penalty.
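A simple choice with this behaviour, assumed here for illustration, sums the inverse distances to all detected obstacles:

$$ d_3 = -\sum_{j=1}^{b} \frac{1}{d_h(j)} $$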

Since the heading change of the ship is limited and we do not want the ship to retreat, when the ship moves from one state to another with an action other than front, left front or right front, the reward value is set to a negative value, so as to avoid large changes in the angle of the path planned by the algorithm.

    2.4 Modification of greedy selection

The balance between the exploration and exploitation of the algorithm depends on the greedy setting, and the greed of the traditional algorithm is set to a constant, which makes the algorithm dependent on the quality of the chosen value. If the value is set too large, the algorithm will fall into a local optimum from which it cannot escape; if the value is set too small, the algorithm will keep exploring other paths, even after the shortest path has been found, resulting in a slow convergence speed and a long calculation time. In this paper, we propose an adaptive greed: the algorithm focuses on path exploration in the early stage to improve the path diversity, and with the increase of the number of iterations, it focuses on the exploitation of the results of the previous exploration. The formula is as follows:
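One simple increasing schedule with this behaviour, assumed here for illustration (ε0 denoting the initial greed and λ a growth rate per iteration, both symbols introduced for this sketch), is

$$ \varepsilon(h) = \min\left( \varepsilon_0 + \lambda h,\; \varepsilon_{max} \right) $$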

where h is the current number of iterations and εmax is the maximum greedy value. As can be seen from Eq.(10), with the increase of the number of iterations, the greedy value increases until the maximum value is reached. Therefore, the probability of randomly selecting an action is gradually reduced, and the convergence of the algorithm is accelerated.

    2.5 Algorithm pipeline

The improved algorithm pipeline is shown in Fig.5 and is described as follows:

Step 1: Establish a Q table formed by states and actions, and initialize the Q value table, the start state S and the iteration counter h. The state S is a three-DOF vector composed of the grid position and the heading angle; determine whether S is the target point.

Step 2: At the beginning of the iteration, the greed degree ε is set according to Eq.(10) for the action selection policy. If the random value c is larger than ε, an action is selected at random; if c is less than ε, the action with the maximum Q value is selected according to Eq.(2).

Step 3: Obtain the reward value of the action according to Eq.(6); the reward value takes into account the direction of the environmental force, whether the ship advances towards the target point, and the distance from the obstacles.

Step 4: Update the Q value table according to Eq.(1). If the next state is the target, the loop ends; otherwise, return to Step 2. If the results of 15 successive iterations are consistent, the output is considered to have converged in advance; otherwise, if the number of iterations h is greater than 1000, the result is output directly.
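As a concrete illustration of Steps 1 to 4, the following simplified Python sketch trains a Q table on a small grid. It uses a two-DOF grid state, an assumed linear growth for the adaptive greed, and a reduced reward (goal, obstacle and per-step penalty only); the heading dimension, the sensor model and the full reward shaping of Section 2.3 are omitted, and all identifiers are illustrative rather than taken from the original code.

```python
# Simplified, illustrative sketch of the training pipeline in Fig.5
# (2-DOF grid state, 8 actions; heading dimension and full reward shaping omitted).
import math
import random
from collections import defaultdict

GRID = 10
GOAL = (9, 9)
OBSTACLES = {(3, 3), (3, 4), (6, 7), (7, 2)}
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1),
           (-1, -1), (-1, 1), (1, -1), (1, 1)]   # 8 grid moves

ALPHA, GAMMA = 0.4, 0.95               # learning rate and discount factor
EPS0, EPS_MAX, LAM = 0.5, 0.95, 0.005  # illustrative adaptive-greed parameters
MAX_ITER, STABLE_RUNS = 1000, 15
K_ATT = 0.05                            # attraction gain used to seed the Q table

def q_init(state, action):
    """Prior knowledge (Section 2.2): larger initial Q for moves ending nearer the goal."""
    nx, ny = state[0] + action[0], state[1] + action[1]
    return -K_ATT * math.hypot(nx - GOAL[0], ny - GOAL[1])

Q = defaultdict(float)
for x in range(GRID):
    for y in range(GRID):
        for i, a in enumerate(ACTIONS):
            Q[((x, y), i)] = q_init((x, y), a)      # Step 1: initialize Q table

def reward(state):
    if state == GOAL:
        return 10.0      # f1
    if state in OBSTACLES:
        return -10.0     # f2
    return -0.02         # f3, small per-step penalty

def step(state, action_idx):
    dx, dy = ACTIONS[action_idx]
    nx = min(max(state[0] + dx, 0), GRID - 1)
    ny = min(max(state[1] + dy, 0), GRID - 1)
    return (nx, ny)

history = []
for h in range(1, MAX_ITER + 1):
    eps = min(EPS0 + LAM * h, EPS_MAX)              # Step 2: adaptive greed
    state, steps = (0, 0), 0
    while state != GOAL and steps < 10 * GRID * GRID:
        if random.random() <= eps:                  # exploit with probability eps
            a = max(range(len(ACTIONS)), key=lambda i: Q[(state, i)])
        else:                                       # otherwise explore randomly
            a = random.randrange(len(ACTIONS))
        nxt = step(state, a)
        r = reward(nxt)                             # Step 3: reward of the transition
        best_next = max(Q[(nxt, i)] for i in range(len(ACTIONS)))
        Q[(state, a)] += ALPHA * (r + GAMMA * best_next - Q[(state, a)])  # Step 4
        state, steps = nxt, steps + 1
    history.append(steps)
    if len(history) >= STABLE_RUNS and len(set(history[-STABLE_RUNS:])) == 1:
        break                                       # converged: 15 identical episodes
print(f"stopped after {h} episodes, last path length {history[-1]} steps")
```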

    Fig.5 Flow chart of the improved Q-Learning algorithm

    3 Simulation results and analysis

In order to verify the validity of the algorithm, the improved Q-Learning algorithm and the traditional Q-Learning algorithm are compared on a simulation platform of Python 3.6 with an Intel(R) Core(TM) i3 CPU at 3.9 GHz and 8 GB of memory. The grid map is established and the sensor measurement range is set to four grids, as shown in Fig.6. The simulation is divided into two common scenarios: in one, there are many obstacles near the starting point of the ship, as when entering a port; in the other, there are obstacles near the target point, as when the ship leaves a port. One of the maps is a small grid of 10×10, and the other is a large grid of 20×20. The related parameters of the algorithm are as follows: the maximum number of iterations h=1000, the learning rate α=0.4, the discount factor γ=0.95, the greedy policy ε=0.5, εmax=0.95; the parameters m1, m2 and m3 are 0.7, 0.2 and 0.1 respectively, and f1, f2 and f3 are +10, -10 and -0.02. If the same value is output for 15 consecutive rounds, the algorithm is regarded as having converged in advance; otherwise the algorithm ends when the number of iterations reaches the maximum.

    Fig.6 Simulation environment

Fig.7 and Fig.8 show the path lengths of the two algorithms in the small map and the large map respectively, where Fig.7(a) and Fig.8(a) are for the traditional algorithm and Fig.7(b) and Fig.8(b) are for the improved algorithm. As can be seen from Fig.7, the maximum number of exploration steps of the traditional algorithm is 1600, while that of the improved algorithm is only 15, a reduction of 99.06%. When the map becomes larger, the number of exploration steps increases. The number of exploration steps in the early stage of the traditional algorithm is much larger than that of the improved algorithm, mostly above 500 steps, with a maximum of 3000 steps. Compared with the traditional algorithm, the number of exploration steps of the improved algorithm is quite small, with a maximum of 32 steps, only 1.07% of that of the traditional algorithm. We can therefore conclude that the improved algorithm has better exploration performance than the traditional algorithm.

    Fig.7 Length of path versus the iterations in small map

    Fig.8 Length of path versus the iterations in large map

Fig.9 and Fig.10 show the planned paths of the two algorithms in the small map and the large map, where Fig.9(a) and Fig.10(a) are for the traditional algorithm and Fig.9(b) and Fig.10(b) are for the improved algorithm. From Fig.9, we can see that the improved algorithm enlarges the action set and improves the diversity of paths. In addition, the improved algorithm extends the two-DOF state to three DOF by adding the heading information and considers the limitations of ship motion. As can be seen from Fig.10, the improved algorithm reduces the rotation angle and improves the smoothness of the path.

    Fig.9 Planned path in small map

In Fig.9, there are more obstacles at the end point, whereas in Fig.10 there are more obstacles at the starting point, which correspond to the cases of the ship entering and leaving port. We can see that the improved algorithm plans a better route from the starting point to the target node in both cases.

    Fig.10 Planned path in large map

Tab.1 compares the performance of the above-mentioned algorithms in path planning, including important indicators such as running time, total number of steps, path length and path angle. The data in the table are averaged over 10 runs. In the simple environment, compared with the traditional algorithm, the total number of steps of the improved algorithm is reduced by 91.19%, the final path is shortened by 38.89%, the smoothness of the planned path is improved by 79.17%, and the running time is shortened by 83.22%. In the complex environment, the number of steps of the improved algorithm is reduced by 98.59%, the path length is shortened by 42.11%, the path smoothness is improved by 75%, and the running time is reduced by 95.98%. The simulation results show that the improved algorithm is superior to the traditional algorithm, and when the map is enlarged, the running time of the improved algorithm remains smaller than that of the traditional algorithm, indicating its effectiveness and real-time performance.

    Tab.1 Performance comparison of different algorithms in different environments

    4 Concluding remarks

This paper presented an improved Q-Learning algorithm for local path planning in an unknown environment. In the proposed algorithm, the Q value table was initialized with the potential field attraction to reduce the number of exploration steps. The two-DOF state variable was extended to three DOF by adding the direction information, which reduces the curvature of the route. The path was diversified by enlarging the action set, while the angle limit was considered to meet the requirements of ship driving. The reward function was modified by introducing the distance information and the environmental force direction, so that the convergence speed of the algorithm is accelerated and the path length is shortened. The comparison between the traditional Q-Learning algorithm and the improved algorithm in the simulation experiments shows that the improved algorithm is effective and feasible.
