
    Navigation Method Based on Improved Rapid Exploration Random Tree Star-Smart (RRT*-Smart) and Deep Reinforcement Learning

2022-12-09 14:22:50

    ZHANG Jue(張 玨), LI Xiangjian(李祥健), LIU Xiaoyan(劉肖燕)*, LI Nan (李 楠), YANG Kaiqiang(楊開(kāi)強(qiáng)), ZHU Heng(朱 恒)

    1 College of Information Science and Technology, Donghua University, Shanghai 201620, China

    2 Engineering Research Center of Digitized Textile & Fashion Technology, Ministry of Education, Shanghai 201620, China

    Abstract: A large number of logistics operations are needed to transport fabric rolls and dye barrels to different positions in printing and dyeing plants, and rising labor costs are making it difficult for plants to recruit workers for these manual operations. Artificial intelligence and robotics, which are rapidly evolving, offer potential solutions to this problem. This paper presents a navigation method that addresses two practical issues: the inability to pass smoothly around corners and local obstacle avoidance. In the system, a Gaussian fitting smoothing rapid exploration random tree star-smart (GFS RRT*-Smart) algorithm is proposed for global path planning, improving performance when the robot makes sharp turns around corners. For local obstacle avoidance, a deep reinforcement learning determiner, the mixed actor critic (MAC) algorithm, makes the avoidance decisions. The navigation system was implemented in a scaled-down simulation factory.

    Key words: rapid exploration random tree star smart (RRT*-Smart); Gaussian fitting; deep reinforcement learning (DRL); mixed actor critic (MAC)

    Introduction

    In printing and dyeing plants, the logistics task is to transport fabric rolls and dye barrels. Workers use trolleys to load fabric rolls and dye barrels and transport the material between printing and dyeing machines.

    Intelligent robot navigation systems must perform two tasks in printing and dyeing plants: plan the best route and maneuver around obstacles.

    The rest of the paper is organized as follows. In section 1, historical literature related to the methods is recalled. In section 2, the Gaussian fitting smoothing rapid exploration random tree star-smart (GFS RRT*-Smart) algorithm and the determiner mixed actor critic (MAC) method are introduced. In section 3, experimental results of global path planning and local obstacle avoidance strategies are demonstrated.

    The main contributions of this paper are a navigation method that uses the GFS RRT*-Smart algorithm for global path planning, addressing the inability to pass smoothly around corners in practice, and a deep reinforcement learning (DRL) determiner, MAC, that solves local obstacle avoidance problems.

    1 Related Work

    Simultaneous localization and mapping (SLAM) systems based on 2D light detection and ranging (LiDAR) devices are unable to recognize 3D targets in the environment, while 3D mapping achieved by fusing 2D LiDAR and 3D ultrasound sensors is highly influenced by ambient light [1-2]. DRL methods for autonomous navigation are developing rapidly, and excellent methods are continuously proposed. Francis et al. [3] presented a deep deterministic policy gradient (DDPG) network to obtain a better local DRL policy and then parallelized the computed DRL policy with probabilistic roadmaps over a given large map to obtain a network; after deployment, the best path can be searched in this network with the A star (A*) algorithm. Savva et al. [4] developed the indoor navigation platform Habitat, in which the visual properties during navigation were evaluated using DRL, and found that depth maps helped to improve the visual performance of navigation. Sax et al. [5] used mid-level feature data with migration invariance for DRL of robot vision to solve the problem of training results failing to generalize under illumination changes. Researchers [6-7] designed a neurotopological SLAM for visual navigation whose main approach utilized image semantic features for nodes and a spatial topological representation that provided approximate geometric inference, together with a goal-oriented semantic exploration module, to solve the problem of navigating to a given object category in unseen environments. The above methods suggest that DRL autonomous navigation combined with semantic segmentation techniques can help to solve such navigation problems. Random sampling algorithms, for example the probabilistic roadmap (PRM) and the rapid-exploration random tree (RRT), are primarily used for solving path planning problems in low-dimensional spaces.
The RRT* algorithm is based on RRT and uses re-selecting parent node and rewiring operations to optimize the path. The RRT*-Smart algorithm further optimizes the path by converting curves into straight lines as much as possible [8]. Further research improved the RRT* path planning method by using a sampling-based approach [9]. Wang et al. [10] optimized kinematic planning by using a sampling approach.
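The re-selecting-parent and rewiring operations of RRT* mentioned above can be sketched as follows. This is a minimal illustration under our own naming (collision checks, sampling, and tree growth are omitted), not an implementation from the cited works.

```python
import math

def dist(p, q):
    """Euclidean distance between two 2D points."""
    return math.hypot(p[0] - q[0], p[1] - q[1])

def choose_parent_and_rewire(nodes, costs, parents, new_idx, radius):
    """One RRT* bookkeeping step for a freshly inserted node `new_idx`:
    first re-select the parent that minimizes the cost-to-come, then
    rewire near nodes through the new node whenever that shortens their
    paths. Collision checks are omitted for brevity."""
    p_new = nodes[new_idx]
    # Re-select parent: any near node offering a cheaper route wins.
    for i, p in enumerate(nodes):
        if i == new_idx or dist(p, p_new) > radius:
            continue
        c = costs[i] + dist(p, p_new)
        if c < costs[new_idx]:
            parents[new_idx], costs[new_idx] = i, c
    # Rewire: reroute near nodes through the new node if it is cheaper.
    for i, p in enumerate(nodes):
        if i == new_idx or dist(p_new, p) > radius:
            continue
        c = costs[new_idx] + dist(p_new, p)
        if c < costs[i]:
            parents[i], costs[i] = new_idx, c
```

In a full planner these two loops run once per sampled node, which is what drives the path cost toward the optimum as sampling continues.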

    Heiden et al. [11] designed an open-source sampling-based motion planning benchmark for wheeled mobile robots. The benchmark provided a diverse set of algorithms, post-smoothing techniques, steering functions, optimization criteria, complex environments similar to real-world applications, and performance metrics. Kulhnek et al. [12] designed DRL-based visual assistance tasks, customized reward schemes, and simulators; the strategy was fine-tuned on images collected from real-world environments. Kontoudis and Vamvoudakis [13] presented an online kinodynamic motion planning algorithmic framework using asymptotically optimal RRT* and continuous-time Q-learning. Shi et al. [14] proposed an end-to-end navigation method based on DRL that translated sparse laser ranging results into movement actions, accomplishing map-less navigation in complex environments.

    2 Method

    The logistics paths of printing and dyeing plants have the following characteristics: little overall environmental change and frequent local obstacles. According to these characteristics, we designed a navigation method with global planning of the overall path and local dynamic obstacle avoidance decisions (shown in Fig. 1). The GFS RRT*-Smart algorithm is used for global path planning, and the MAC decision method combined with semantic segmentation is used for avoiding obstacles.

    Fig. 1 Overview of the navigation system

    2.1 GFS RRT*-Smart algorithm

    By using intelligent sampling and path optimization techniques, the RRT*-Smart algorithm solves the slow convergence of the RRT* algorithm near the optimal value. However, the paths generated by the RRT*-Smart algorithm are not suitable for robots, especially for two-wheeled differential-drive chassis. The RRT*-Smart algorithm generates curves that steer at small angles, so the robot cannot steer accurately when the turning radius is small. Smoothing the straight steering segments into a curved path benefits the actual operation of the robot.

    The GFS RRT*-Smart algorithm is outlined in Fig. 2. The algorithm traverses the path points derived from RRT*-Smart to confirm whether the path is legal (step 21), and chooses those points in the path where the turning angle is smaller than a threshold angle (step 24). In step 25, the algorithm intercepts line segments on both sides of these points and samples randomly near the segments according to a certain standard deviation. In steps 26 and 27, the algorithm fits a Gaussian curve to these samples and chooses points within the confidence interval to replace the original points.

    Fig. 2 GFS RRT*-Smart algorithm
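The corner test and Gaussian fitting of steps 24-27 can be illustrated with the sketch below; the threshold, sampling scheme, and confidence-interval handling in the actual algorithm may differ, and the function names are ours.

```python
import numpy as np

def turn_angle(p_prev, p, p_next):
    """Interior angle (rad) at waypoint p; values near pi mean a nearly
    straight path, small values mean a sharp corner (cf. step 24)."""
    v1 = np.asarray(p_prev, float) - np.asarray(p, float)
    v2 = np.asarray(p_next, float) - np.asarray(p, float)
    c = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return float(np.arccos(np.clip(c, -1.0, 1.0)))

def gaussian_fit(x, y):
    """Least-squares fit of y = a*exp(-(x-mu)^2/(2*sigma^2)), obtained
    by fitting a parabola to log(y) (requires y > 0), as used
    conceptually in steps 26-27 to produce the smoothing curve."""
    A, B, C = np.polyfit(x, np.log(y), 2)  # log y = A x^2 + B x + C
    mu = -B / (2 * A)
    sigma = np.sqrt(-1 / (2 * A))
    a = np.exp(C - B * B / (4 * A))
    return a, mu, sigma
```

A corner flagged by `turn_angle` would then be replaced by points sampled from the central part of the fitted curve, yielding the smoothed local path.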

    2.2 MAC

    The DDPG algorithm is a deterministic policy learning algorithm, but it carries considerable overestimation bias in its value estimates, which tends to cause large bias in policy learning and reduces its exploration ability.

    Twin delayed deep deterministic policy gradient (TD3) effectively reduces the value estimation bias of DDPG with techniques such as a twin-critic structure and delayed updates, but its deterministic policy learning and decision making still limit exploration.


    The soft actor critic (SAC) algorithm considers the entropy of the policy in addition to the rewards inherent in the task itself, correlates the entropy evaluation with the dimensions of the agent's actions, and employs a stochastic policy; all of these techniques help to improve the agent's exploration ability. However, stochastic policies can introduce a lack of certainty in action selection, which can easily lead to task failure.
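The three algorithms discussed above differ mainly in how they form the bootstrap target; a schematic scalar comparison (our own simplified form, with `alpha` as an assumed entropy temperature) is:

```python
import numpy as np

def ddpg_target(r, gamma, q_next):
    """DDPG: a single critic's next-state estimate, prone to overestimation."""
    return r + gamma * q_next

def td3_target(r, gamma, q1_next, q2_next):
    """TD3: clipped double-Q, the minimum of two critics curbs the bias."""
    return r + gamma * np.minimum(q1_next, q2_next)

def sac_target(r, gamma, q_next, logp_next, alpha):
    """SAC: entropy-regularized target that rewards stochastic exploration
    by subtracting alpha * log pi(a'|s')."""
    return r + gamma * (q_next - alpha * logp_next)
```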

    The proposed MAC algorithm is a deep fusion of state-of-the-art DRL algorithms and techniques, namely TD3, DDPG, and SAC, with a fusion ratio adapted to the task. MAC integrates the advantages of these algorithms to perform the task more efficiently.

    MAC adopts six critic networks: the SAC networks Qα1, Qα2, and Qβ to increase early exploration; the TD3 networks Qθ1 and Qθ2 to obtain mid-term certainty; and the DDPG network Qη to obtain late-stage certainty. The target value is given by

    (1)

    where y is a substitute for the long formula that follows (to facilitate substitution into the formulas below); f, g, and k refer to functions whose values sum to 1 at the same moment t; s′ denotes the state at moment t + 1; a′ denotes the action at moment t + 1; x denotes the input at any moment of the loss; r denotes the reward value; and γ indicates the decay factor (generally between 0 and 1). The fused expectations of the TD3, DDPG, and SAC errors are used to update the critic Q networks, and the fusion ratio is constrained by

    f(t) + g(t) + k(t) = 1.

    (2)
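The paper does not give the concrete forms of f, g, and k; one illustrative way to satisfy Eq. (2) while letting the SAC weight dominate early, the TD3 weight mid-training, and the DDPG weight late is a softmax over simple schedules (the schedules and sharpness below are entirely our assumption):

```python
import numpy as np

def fusion_weights(t, total_steps, sharpness=4.0):
    """Return (f, g, k) with f + g + k = 1 at every step t.
    Illustrative schedules only: f (TD3) peaks mid-training, g (DDPG)
    grows toward the end, k (SAC) is largest at the start."""
    x = t / total_steps
    scores = np.array([
        -abs(x - 0.5),  # f: TD3
        x - 1.0,        # g: DDPG
        -x,             # k: SAC
    ])
    w = np.exp(sharpness * scores)
    w = w / w.sum()     # softmax normalization enforces Eq. (2)
    return float(w[0]), float(w[1]), float(w[2])
```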

    The loss function is given as

    (3)

    where t is the index of steps in each episode, and β and ρβ are the distributions (with noise) of actions and states, respectively.
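Because the exact renderings of Eqs. (1) and (3) were lost in extraction, the sketch below shows one plausible scalar reading consistent with the surrounding text: the target y is a convex combination (weights f, g, k, Eq. (2)) of the three algorithms' bootstrap values, and the critic loss is the mean-squared TD error. The variable names and the exact combination are our assumptions, not the paper's verbatim formulas.

```python
import numpy as np

def mac_target(r, gamma, y_td3, y_ddpg, y_sac, f, g, k):
    """Fused bootstrap target: r + gamma * (f*TD3 + g*DDPG + k*SAC),
    where y_td3, y_ddpg, y_sac are the per-algorithm next-state values
    and the weights satisfy f + g + k = 1 (Eq. (2))."""
    assert abs(f + g + k - 1.0) < 1e-9
    return r + gamma * (f * y_td3 + g * y_ddpg + k * y_sac)

def critic_loss(y, q):
    """Mean-squared TD error between targets y and critic outputs q."""
    y, q = np.asarray(y, float), np.asarray(q, float)
    return float(np.mean((y - q) ** 2))
```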

    The goal of the actor network is to increase the values of Qθ1,2(si, ai), Qη(s, a), Qα1,2(si, ai), and Qβ(s, a). The actor network can be updated through the deterministic policy gradient algorithm:

    J(φ) ≈ E_{at~β, st~ρβ}[(Qθ(st, at)f(t) + Qη(st, at)g(t) − (Qα(st, at) + Qβ(st, at))k(t))|a=π(s) πφ(s)],

    (4)

    where J stands for the objective used to update the actor network, φ denotes the full set of parameters in the actor network, and π refers to the policy, realized concretely by the actor neural network. The results are calculated by using the MAC algorithm to exploit the advantages of the TD3, DDPG, and SAC algorithms at different stages, which reduces the bias caused by using a single algorithm.

    As shown in Fig. 3, MAC is composed of the actor network and several critic networks. The target actor network and the target critic networks store the parameters of the previous states of their corresponding networks. The replay buffer stores the information from the agent's interactions with the environment, and this information is sampled to update the actor network and the critic networks. The update formulas for the critic networks and the actor network use the target critic networks' output value Q′ and the critic networks' output value Q as inputs.

    Fig. 3 Flow chart of MAC
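The replay buffer of Fig. 3 is a standard component; a minimal uniform-sampling version (our own sketch, not the paper's code) looks like:

```python
import random
from collections import deque

class ReplayBuffer:
    """Fixed-capacity store of (state, action, reward, next_state, done)
    transitions; the oldest transitions are evicted first, and minibatches
    are drawn uniformly at random for the actor/critic updates."""

    def __init__(self, capacity=100_000):
        self.buf = deque(maxlen=capacity)

    def push(self, state, action, reward, next_state, done):
        self.buf.append((state, action, reward, next_state, done))

    def sample(self, batch_size):
        return random.sample(list(self.buf), batch_size)

    def __len__(self):
        return len(self.buf)
```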

    3 Experiments

    In this section, we demonstrate the experimental results of the GFS RRT*-Smart algorithm for global planning and its smooth turns. In a virtual scenario created by the virtual robot experimentation platform (V-REP), the performances of the DRL methods DDPG, TD3, SAC, and MAC for obstacle avoidance direction selection are also compared.

    3.1 Global path planning

    The GFS RRT*-Smart algorithm is executed in a 2D environment simulating a printing and dyeing plant (shown in Fig. 4). The upper left side is the blank fabric warehouse, from which the intelligent robot departs to transport the fabric rolls to the inlet or delivery side of the mechanical equipment in the center and right.

    Fig. 4 Global path planning in a simulated printing and dyeing plant environment by using the GFS RRT*-Smart algorithm: (a) path planning with the end of the bleacher as the target; (b) optimized path planning near the back of the bleacher; (c) path planning with the front of the dyeing machine as the target; (d) optimized path planning near the front of the dyeing machine

    The GFS RRT*-Smart algorithm explores the simulation plant, and comes up with the best path from the blank fabric shelves to the back of the bleacher as shown in Fig. 4 (a), and the optimized path to the dyeing machine as shown in Fig. 4 (c), respectively.

    Then the algorithm calls the Gaussian fitting smoothing subroutine to determine whether a corner of the path affects the steering of the intelligent robot, and conducts coordinate point sampling and Gaussian fitting to obtain a curve and corresponding coordinate points shaped like a Gaussian distribution, part of which is intercepted and added to the original path. As shown in Figs. 4 (b) and 4 (d), the red path is the initial path, and the blue path is the local path after being processed by the GFS sub-algorithm.

    In order to compare the smoothness of the turning points generated by the two algorithms, we implemented route planning tests with the RRT*-Smart algorithm and the GFS RRT*-Smart algorithm for different targets. The tests for each target were performed 50 times.

    As shown in Fig. 5, the curvature comparison of the two algorithms at the points shown in Figs. 4 (b) and 4 (d) indicates that the GFS RRT*-Smart algorithm generates smaller curvature than the RRT*-Smart algorithm, confirming that the GFS RRT*-Smart algorithm produces steering more favorable to the robot at turning points in the real environment.

    Fig. 5 Curvature comparison for RRT*-Smart algorithm and GFS RRT*-Smart algorithm at turning points: (a) curvature comparison at point shown in Fig. 4 (b); (b) curvature comparison at point shown in Fig. 4 (d)
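The turning-point curvature compared in Fig. 5 can be estimated from three consecutive waypoints; a common discrete estimate is the Menger curvature (the inverse radius of the circumscribed circle), sketched here under our own naming since the paper does not specify its curvature formula:

```python
import math

def menger_curvature(p1, p2, p3):
    """Discrete curvature at p2 from three consecutive 2D waypoints:
    4 * triangle_area / (|p1p2| * |p2p3| * |p1p3|). Returns 0 for
    collinear points; larger values mean sharper turns."""
    a = math.dist(p1, p2)
    b = math.dist(p2, p3)
    c = math.dist(p1, p3)
    # Twice the signed triangle area via the 2D cross product.
    cross2 = ((p2[0] - p1[0]) * (p3[1] - p1[1])
              - (p2[1] - p1[1]) * (p3[0] - p1[0]))
    denom = a * b * c
    return 0.0 if denom == 0 else 2.0 * abs(cross2) / denom
```

Sliding this over a waypoint list gives a curvature profile of the whole path, which is the kind of per-point comparison Fig. 5 plots.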

    3.2 Local decision for avoiding obstacles

    In order to train the obstacle avoidance strategy of DRL, the segmented image is used as an input to the agent. According to this image, the agent needs to learn to transfer from the current direction to the target direction. Therefore, the obstacle avoidance policy is considered equivalent to a point-to-point path exploration policy in a two-dimensional plane at this time.

    The research team built a scaled-down printing and dyeing plant model and its 3D simulation environment (shown in Figs. 6 (a) and 6 (b)). In the V-REP simulation environment (shown in Fig. 6 (b)), we implemented direction decision tests for DDPG, TD3, SAC, and MAC, with randomly generated obstacles placed at different locations. The orientation decision tests for obstacle avoidance at each location were performed ten times, with 500 interactions between the agent and the environment each time. The total cumulative return was used as the evaluation criterion.

    Fig. 6 Scaled-down printing and dyeing plant model, its 3D simulation, and robot perspectives: (a) scaled-down physical model of a printing and dyeing plant with 7 target sites; (b) 3D simulation test environment in V-REP; (c) robot perspective when the cart was driving around a corner

    As shown in Fig. 7, the performance of MAC and SAC was significantly better than that of DDPG and TD3 in the 500 agent-environment interactions, and the performance of MAC was also better than that of SAC. The superior performance shown by MAC within the 500 interactions set in the test satisfies the requirement for the timeliness of obstacle avoidance decision when the robot encounters an obstacle.

    Fig. 7 Experimental results of obstacle avoidance strategies in three different scenarios: (a) MAC performs better than other algorithms; (b) MAC performs slightly better than other algorithms; (c) MAC performs significantly better than other algorithms

    3.3 Application in a mobile system

    A cart of size 28 cm × 23 cm × 9 cm, equipped with a Jetson Nano controller (NVIDIA, USA) and a RealSense camera (Intel, USA), was used in place of the intelligent robot for the real-world test (shown in Fig. 6 (a)).

    To evaluate the effectiveness of the GFS RRT*-Smart algorithm, seven alternative target sites were chosen for testing in the scenario as shown in Fig. 6 (a). The algorithm was tested on the navigation program of the cart, and the results showed that the cart took less time to adjust its posture for smooth passage when driving around a corner (shown in Fig. 6 (c)).

    In the obstacle avoidance test, a robot moved through the simulated scene with its monocular camera, collecting about 2 000 pictures of the scene during the movement. The research team finely annotated the objects in the pictures and trained semantic segmentation on the annotated data by using the DeepLab v3 [15] network.

    The trained parameters of DeepLab v3 were deployed on the Jetson Nano controller, combined with a binocular camera depth estimation algorithm that detects obstacle distances to dynamically derive complete obstacle data. The DRL method MAC implements the local obstacle avoidance strategy by rapidly processing these data.

    4 Conclusions

    This navigation system takes intelligent robot logistics in a printing and dyeing plant as its application context, using an improved RRT*-Smart algorithm to plan motion paths globally and flexible obstacle avoidance actions locally. The GFS RRT*-Smart algorithm improves RRT*-Smart by making its steering paths smoother to suit the operation of intelligent robots in practice. In the local obstacle avoidance strategy, new obstacles on the planned path are detected with the help of image segmentation, and local obstacle avoidance combined with DRL achieves flexible detouring to obtain a feasible optimal path in practice.
