
    Deep Reinforcement Learning Based Unmanned Aerial Vehicle (UAV) Control Using 3D Hand Gestures

    2022-11-11 10:48:56
    Computers, Materials & Continua, 2022, Issue 9

    Fawad Salam Khan, Mohd Norzali Haji Mohd, Saiful Azrin B. M. Zulkifli, Ghulam E Mustafa Abro, Suhail Kazi and Dur Muhammad Soomro

    1 Faculty of Electrical and Electronics (FKEE), Universiti Tun Hussein Onn Malaysia, Parit Raja, 81756, Malaysia

    2 Department of Electrical & Electronic Engineering, Universiti Teknologi PETRONAS, Seri Iskandar, 32610, Malaysia

    3 Faculty of Engineering Science and Technology, Isra University, Hyderabad, 71000, Pakistan

    4 Department of Innovation, CONVSYS (Pvt) Ltd., 44000, Islamabad, Pakistan

    Abstract: Evident changes in autopilot system design have greatly helped the aviation industry, and such systems require frequent upgrades. Reinforcement learning delivers appropriate outcomes in a continuous environment, where controlling an Unmanned Aerial Vehicle (UAV) demands maximum accuracy. In this paper, we design a hybrid framework based on reinforcement learning and deep learning in which the traditional electronic flight controller is replaced by 3D hand gestures. The algorithm takes 3D hand gestures as input and integrates them with the Deep Deterministic Policy Gradient (DDPG) to obtain the best reward and take actions according to the gesture input. The UAV consists of a Jetson Nano embedded testbed, a Global Positioning System (GPS) sensor module, and an Intel depth camera. The collision avoidance system, based on the polar mask segmentation technique, detects obstacles and decides the best path according to the designed reward function. The analysis of the results shows the best accuracy and computational time for the novel framework when compared with a traditional Proportional Integral Derivative (PID) flight controller. Six reward functions are estimated for 2500, 5000, 7500, and 10000 training episodes, normalized between 0 and −4000. The best observation is captured at 2500 episodes, where the rewards reach their maximum value. The achieved training accuracy of polar mask segmentation for collision avoidance is 86.36%.

    Keywords: Deep reinforcement learning; UAV; 3D hand gestures; obstacle detection; polar mask

    1 Introduction

    The establishment of 3D hand gesture recognition systems has enabled massive applications with the advent of artificial intelligence. Controlling a UAV that makes its own flight decisions from 3D hand gesture input gives the user an operator-less mechanism, replacing the electronic remote with 3D hand gestures. A state-of-the-art segmentation and classification method [1] segments the input image and classifies it with the designed optimized algorithm. This technique is used to recognize and classify 3D hand gestures of six different types (up, down, forward, backward, left, and right), which serve as the input for the designed framework.

    The hardware of any UAV is its most important part because it must be designed to hover and remain controllable during flight. Proportional Integral Derivative (PID) and fuzzy controllers have helped the aviation industry build these technologies, but with certain limitations, such as the professional knowledge required for control, electronic noise from remote controllers, and sensor-based collision avoidance. The hardware design in [2] uses an Inertial Measurement Unit (IMU) with sensors for yaw, pitch, and roll to supply the values that formulate the reward functions for deciding how to move the UAV. The best reward is estimated; after each episode a reset function is invoked so that the best path is learned during training.

    There are several approaches to reinforcement learning, such as policy-based, model-based, and value-based methods. A model-free approach, in which a policy is designed for the UAV controller, is used for path planning, navigation, and control [3]. Path smoothing methodologies such as Grey Wolf optimization [4] provide the best path planning during flight. Various simulators, such as GymFC [5], are available for implementing reinforcement learning, especially for UAV attitude control. LSTM-based hand gesture recognition may also be useful for UAV control [6]. ResNet101 and Inception V3 are commonly used as backbone networks when selecting the best features for image segmentation [7].

    Sensor-free obstacle detection, using only a camera for detection and recognition, reduces the cost and maintenance of UAVs. The image instance segmentation technique [8] used in this work for obstacle detection during flight provides a prediction mechanism that exploits the contours of objects in front of the UAV: in a polar coordinate system, rays are constructed to the edge of the object, and the Intersection over Union (IoU) between different bounding boxes is calculated.

    Research on UAV control is intriguing, not only because of the several improved or newly proposed DRL algorithms, but also because of the wide range of applications and the control problems, previously considered virtually intractable, that they resolve. The DRL learning process was built on knowledge collected from images in [9,10]. As a result, improvements in single- and multi-UAV control using various communication and sensing techniques open the door to widespread real-world application in activities such as monitoring, first response in disasters, transportation, and agriculture. Each of these studies shows that selecting a reward function is just as essential as selecting a DRL algorithm. Every study presents a unique reward function depending on its scenario and the control algorithm's goal. This necessitates a comprehensive examination of reward functions, and testing alternative reward functions under the same control algorithm can be fruitful, resulting in greater control efficacy.

    1.1 UAV Navigation with RL

    A test was carried out on a UAV using various reinforcement learning algorithms, with the goal of classifying the algorithms into two groups: discrete action space and continuous action space. Reinforcement learning is predicated on agents being trained through trial and error to navigate and avoid obstacles. This feature is advantageous because the agent begins learning on its own as soon as the training environment is complete. The research began with RL, which was used to derive equations for sequential decision making, wherein the agent engages with the surrounding visual world by splitting it into discrete time steps. Some parameters are tuned in the state to obtain the best action provided by the actor network, while the resulting Temporal Difference (TD) errors are normalized by the critic network for UAV control.

    In a discrete action space, the suggested agent implements a greedy learning strategy by selecting the best action for the given state value. A deep Q network (DQN) may be used to determine this value from high-dimensional data such as images. To address the resulting concerns, a new approach was developed in which the suggested algorithm, dubbed Double Dueling DQN (D3QN), integrates the Double DQN with the Dueling DQN. In tests, the algorithm shows a strong capacity to remove correlation and enhance the quality of the states. The study uses a simulation platform named AirSim, which creates images with the Unreal Engine, to construct a realistic simulation environment with a discrete action space. The simulation, while imposing certain constraints on the environment, does not provide intricate pathways for the UAV because all obstacles are situated on plain terrain. To address this problem, the researchers designed a new environment comprising a variety of impediments, such as solid cube and sphere surfaces. RGB and depth sensors, with a CNN as input to the RL network, are used to calculate the best route for the drone [3]. The system is compatible with the off-policy SAC algorithm and other RL algorithms. Various datasets containing multiple types of 3D hand gestures are available, but in our case study only six types of hand gestures are required to control the UAV in six different directions.

    1.2 Research Contributions

    1. The design of a framework based on hybrid modules, consisting of 3D hand gesture recognition using deep learning and reinforcement learning to control the UAV.

    2. The development of an algorithm for an embedded platform to recognize 3D hand gestures and activate the reward functions that control the UAV.

    3. The design of a collision avoidance system for the UAV using the polar mask technique, which calculates the least distance from the center of the obstacle for collision avoidance.

    1.3 Research Objective

    The objective of this study is to design a novel framework to control UAVs with 3D hand gestures and a state-of-the-art collision avoidance system that does not use sensors.

    1.4 Structure of the Article

    The article is organized as follows: (2) Related Work, (3) Proposed Framework, (4) Results, (5) Analysis and Discussion, and (6) Conclusions.

    2 Related Work

    Deep reinforcement learning has changed the traditional design of flight controllers. Two versatile adaptive controllers for unmanned aerial vehicles (UAVs) have been proposed: the first was a fuzzy-logic-based robust adaptive PID controller, and the second was an intelligent flight controller built on ANFIS. The results showed that both controllers are robust. Moreover, in the presence of external wind disruptions and UAV parametric uncertainties, the ANFIS-based intelligent flight controller outperformed the fuzzy-based stable adaptive PID controller [11]. An exhaustive study has surveyed open-source flight controllers that are freely accessible and can be used for research purposes. The drone's central feature is the flight controller, an integrated electronic component whose aim is to carry out the drone's main functions, such as autonomous control and navigation. There are many categories of flight controllers, each with its own set of characteristics and functions. The paper presents the fundamentals of UAV design and its elements; it investigates and contrasts the processing capacities, sensor composition, interfaces, and other features of open-source UAV platforms, and also lists discontinued open-source UAV platforms [12]. In some flight controllers, timing assurances are critical: embedded and cyber-physical systems must bound the time between sensing, encoding, and actuation. One study discusses a modular pipe model for sensor data processing and actuation. The pipe model was used to investigate two end-to-end semantics: freshness and response time. The paper provides a statistical method for calculating feasible assignment cycles and budgets that meet both schedulability and end-to-end timing criteria. It shows the applicability of the design strategy by porting the CleanFlight flight controller firmware to Quest, the authors' in-house real-time operating system. Experiments demonstrated that CleanFlight on Quest can attain end-to-end latencies within the time bounds expected by observation [13].

    A new framework has been proposed for 3D flight path tracking control of UAVs in windy conditions. The new design paradigm simultaneously meets three goals: (i) 3D path tracking error representation in wind environments using the Serret-Frenet frame, (ii) guaranteed cost management, and (iii) simultaneous stabilization via a single controller for various 3D paths with a similar interval parameter configuration in the Serret-Frenet frame. To realize these three points, a path tracking error scheme based on a 3D kinematic model of UAVs in wind conditions was built in the Serret-Frenet frame. Inside the considered operation domains, the Takagi-Sugeno (T-S) fuzzy model accurately represented the path tracking error system. As a benefit of the T-S fuzzy model construction, a guaranteed cost controller design that reduces the upper bound of a given output function was examined. The guaranteed cost controller problem was expressed in terms of Linear Matrix Inequalities (LMIs). As a result, the developed controller ensured not only path stability but also cost management and path tracking control for a suitable 3D flight path in wind environments. A simultaneous stabilization problem, posed as finding a common solution to a series of LMIs, was also considered. The simulation findings demonstrated the effectiveness of the proposed 3D flight path tracking control in windy conditions [14].

    A monitoring flight control scheme has been proposed for a quadrotor with external disturbances, dependent on a disturbance observer. Certain harmonic disturbances were assumed in order to handle potential time-varying disturbances. A disturbance observer was then proposed to quantify the uncertain disturbance, and a quadrotor flight controller was designed using its output to track the reference signals produced by the reference model. Finally, the proposed control system was used to fly the Quanser Qball 2 quadrotor, and the experimental findings illustrated the efficacy of the control technique [15]. Another work covered the design and development of a quadrotor using low-cost components and a Proportional Integral Derivative (PID) control system as the controller, also explaining the PID control system as a flight controller. A basic economic analysis was provided to explain the cost of developing this quadrotor. According to the experimental trials, the quadrotor could fly stably with a PID controller, but there was still overshoot in the attitude responses [16]. A new full-duplex (FD) confidential communication system for UAVs has been explored, with an optimal configuration to maximize the UAV's Energy Efficiency (EE). In particular, the UAV collected sensitive information from a ground channel while also sending jamming signals to disrupt a possible ground eavesdropper. Since the UAV has minimal onboard energy in practice, this research aimed to optimize the EE of the secret communications by jointly optimizing the UAV trajectory as well as the source/UAV transmit/jamming powers over a finite flight time. Although the formulated problem is difficult to solve optimally due to its non-convexity, the study proposed an effective iterative algorithm that reaches a good suboptimal solution. The simulation results demonstrated that major EE gains can be obtained by joint optimization, and that the EE benefits depend strongly on the capacity of the UAV's self-interference cancellation [17].

    A novel Integral Sliding Mode Control (ISMC) technique has been proposed for quadrotor waypoint tracking control in the presence of model inconsistencies and external disturbances. The proposed controller uses an inner-outer loop configuration: the outer loop generates the reference signals for the roll and pitch angles, whereas the inner loop uses the ISMC technique to make the quadrotor track the desired x, y positions as well as the roll and pitch angles. A Lyapunov stability study demonstrated that the detrimental effects of bounded model uncertainty and external disturbances can be greatly reduced. To solve the consensus challenge, the engineered controller was applied to a heterogeneous Multi-Agent System (MAS) comprising quadrotors and two-wheeled mobile robots (2WMRs), and control algorithms for both were presented. As long as the switching graphs contain a spanning tree, the heterogeneous MAS achieves consensus. Finally, laboratory experiments validated the efficacy of the proposed control methods [18].

    A collision avoidance problem involving multiple Unmanned Aerial Vehicles (UAVs) in high-speed flight has been studied, allowing cooperative UAV formation flight and mission completion. The key contribution was a collision avoidance control algorithm for a multi-UAV system using a bidirectional network connection structure. To efficiently prevent collisions between UAVs, as well as between UAVs and obstacles, a consensus-based "leader-follower" control technique was used in tandem for UAV formation control to ensure formation convergence. In the horizontal plane, each UAV had the same forward velocity and heading angle, and they held a constant relative distance in the vertical direction. Building on an enhanced artificial potential field method, the paper proposed a consensus-based collision avoidance algorithm for multiple UAVs. Simulation tests including several UAVs were conducted to verify the proposed control algorithm and to provide guidance for engineering applications [19].

    Because of their long-range connectivity, fast maneuverability, versatile operation, and low latency, unmanned aerial vehicle (UAV) communications play a significant role in developing the space-air-ground network and achieving seamless wide-area coverage. Unlike conventional ground-only communications, control methods have a direct effect on UAV communications and may be developed jointly to improve data transmission efficiency. One paper examined the benefits and drawbacks of integrating communications and control in UAV systems. A new frequency-dependent 3D channel model was presented for single-UAV scenarios. Channel monitoring was then demonstrated with a flight control system, along with mechanical and electronic transmission beamforming. New strategies were proposed for multi-UAV scenarios, such as cooperative interactions, self-positioning, trajectory planning, resource distribution, and seamless coverage. Finally, connectivity protocols, confidentiality, 3D complex-topology heterogeneous networks, and low-cost models for realistic UAV applications were explored [20].

    A hybrid vertical takeoff and landing (VTOL) unmanned aerial vehicle (UAV), of the kind known as a dual-system or extra-propulsion VTOL UAV, has also been studied [21]. That research covered the entire system construction of such VTOL UAVs, including the aircraft model and implementation, onboard computer integration, ground station service, and long-distance communication. Aerodynamics, mechanical design, and controller creation were also explored. Finally, a hybrid VTOL UAV was tested to ensure that it had the necessary aerodynamic efficiency, flight stability, durability, and range. Furthermore, with the built-in flight controller, the VTOL UAV could fly fully autonomously in a real-world outdoor environment. It provides an excellent foundation for future research in areas such as vision-based precise landing, motion planning, and fast 3D imaging, as well as service applications like medication delivery [22].

    Another design uses a motion controller to control a drone through basic human movements. For this implementation, the Leap Motion Controller and the Parrot AR.Drone 2.0 were used. The AR.Drone communicated with the ground station through Wi-Fi, while the Leap communicated with the ground station via a USB connection [23]. The hand signals were recognized by the Leap Motion controller and relayed to the ground station, which ran ROS (Robot Operating System) on Linux as the implementation's base. Python was used to communicate with the AR.Drone and express basic hand signals. In execution, Python code decoded the Leap-captured hand gestures and relayed them in order to control the motion of the AR.Drone [24].

    The Leap Motion gesture-sensing system has also been used to control a drone in a simulated world created with the Unity game engine. Four swiping movements and two static gestures, such as face up and face down, were tested. According to the experiments, static gestures were more identifiable than dynamic gestures [25]. Across different users, the drone responded to gesture control with an average accuracy of more than 90% [26]. Due to their basic mechanical structure and propulsion philosophy, quadrotor UAVs are among the most common types of small unmanned aerial vehicles. However, due to their nonlinear dynamic behavior, these vehicles require specialized stabilizing control. Using a learning algorithm to train appropriate control behavior is one potential approach to easing the tough challenge of nonlinear control design [27].

    Reinforcement learning was used as a form of unsupervised learning in one case study. A nonlinear autopilot based on feedback linearization was first suggested for quadrotor UAVs. This controller was then compared, in terms of design commitment and efficiency, with an autopilot learned by reinforcement learning with fitted value iteration; the effect of this comparison was highlighted by initial simulation and experimental findings [28]. Another study compared the performance and accuracy of the inner control loop providing attitude control, using intelligent flight control systems trained with cutting-edge RL algorithms such as Deep Deterministic Policy Gradient (DDPG), Trust Region Policy Optimization, and Proximal Policy Optimization. To explore these unknown parameters, an open-source high-fidelity simulation system was first created for training a quadrotor flight controller's attitude control with RL. The environment was then used to compare the RL output to a PID controller in order to determine whether using RL is sufficient for high-precision, time-critical flight control [5].

    3 Proposed Framework

    The framework consists of a hybrid module based on deep learning and deep reinforcement learning. The deep learning module is responsible for 3D hand gesture recognition, segmentation, and classification. A private dataset containing 4200 images of 3D hand gestures of six types (up, down, backward, forward, left, and right) is used to train the deep learning module, whose output is fed into the deep reinforcement learning module. The DRL agent (the UAV) takes the state information from the environment and calculates the reward function depending on the gesture output and the sensor data from the environment. Once the hand gestures are segmented and classified with high accuracy, the skeletal information is converted into the required signals. Fig. 1 shows the hybrid modules, where the Deep Reinforcement Learning (DRL) agent (UAV) activates the DRL algorithm after receiving the state values (pitch, yaw, roll) and the best reward functions to identify which action to perform.

    Figure 1: A framework based on RL to control UAVs using 3D hand gestures

    3.1 Reward Function

    The framework based on deep reinforcement learning calculates the maximum reward during flight for its decision to move from left to right or right to left, down to up or up to down, and backward to forward or forward to backward. The reward functions are mathematical formulations of the different values of velocity, yaw, pitch, and roll. The hand gesture input determines which of these reward functions is initialized, so that the UAV takes its decision according to the given hand gesture. These reward functions can be described mathematically as follows:

    Vy describes the velocity of the UAV in the Y direction, Vx the velocity in the X direction, and Vz the velocity in the Z direction. x is the initial position of the UAV on the X-axis when the forward gesture is initiated, z is the initial position on the Z-axis when the downward gesture is initiated, and y is the initial position on the Y-axis when the right-hand gesture is initiated; y′, x′, z′ are the corresponding positions for the opposite directions. θr is the angular value for roll, φp for pitch, and δy for yaw. k is a constant that minimizes the motion of the drone along the x and z axes, k′ is a constant that minimizes the Euler angles, and k″ is a constant that pushes the drone in the y direction.
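    The reward shaping described above can be sketched as follows. This is a minimal illustration for the right-hand gesture only; the function name, the gain values for k, k′, and k″, and the exact combination of terms are assumptions that stand in for the paper's equations rather than reproducing them.

```python
def reward_right_gesture(y, y0, vx, vz, roll, pitch, yaw,
                         k=0.5, k_prime=0.1, k_dprime=1.0):
    """Illustrative reward for the 'right' hand gesture.

    Rewards displacement along +Y from the start position y0 while
    penalizing drift in X/Z (gain k) and non-zero Euler angles
    (gain k'), and scaling the Y progress by gain k''. The gain
    values here are assumed, not the paper's.
    """
    progress = k_dprime * (y - y0)             # push the drone in +Y
    drift_penalty = k * (abs(vx) + abs(vz))    # minimize motion in X and Z
    attitude_penalty = k_prime * (abs(roll) + abs(pitch) + abs(yaw))
    return progress - drift_penalty - attitude_penalty
```

Analogous functions for the other five gestures would swap the axis being rewarded and the axes being penalized.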

    The velocities (Vy, Vx, Vz) of the brushless DC electric motors are adjusted close to the minimum value for hovering purposes. Initially, when the UAV starts, it hovers directly to 6 feet above the ground position. Fig. 2 demonstrates different positions of the UAV in the coordinate system for pitch, roll, and yaw.

    Figure 2: UAV attributes (roll, pitch, yaw) with coordinate description

    3.2 Algorithm for 3D Hand Gestures Recognition for UAV

    The algorithm is designed for the embedded system platform to control the UAV and can scale to any non-embedded system.
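    As a rough illustration of such an algorithm, a classified gesture label could be mapped to a velocity setpoint before the corresponding reward function is activated. The label names match the six gesture classes used in this work, but the setpoint magnitudes, the confidence threshold, and the function names are hypothetical:

```python
# Hypothetical mapping from a classified 3D hand gesture to a velocity
# setpoint (vx, vy, vz) in m/s; the six labels follow the classes used
# in the paper, but the magnitudes are assumed.
GESTURE_TO_SETPOINT = {
    "up":       (0.0, 0.0, 1.0),
    "down":     (0.0, 0.0, -1.0),
    "forward":  (1.0, 0.0, 0.0),
    "backward": (-1.0, 0.0, 0.0),
    "left":     (0.0, -1.0, 0.0),
    "right":    (0.0, 1.0, 0.0),
}

def gesture_to_setpoint(label, confidence, threshold=0.8):
    """Return a setpoint only when the classifier is confident;
    otherwise hold position (hover)."""
    if confidence < threshold or label not in GESTURE_TO_SETPOINT:
        return (0.0, 0.0, 0.0)  # hover
    return GESTURE_TO_SETPOINT[label]
```

On the embedded board, the returned setpoint would drive the DDPG reward function selected for that gesture.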

    3.3 Collision Avoidance Using Polar Mask

    The GPS sensor is used for fencing the area, which covers 10 meters from the center of the origin, as shown in Fig. 3. The objects (obstacles) in the UAV's path can have various sizes and shapes. The UAV carries a camera with a field of view of 3 meters. The test scenario consists of four obstacles (A, B, C, D); we consider obstacle A to be a tree. The image of the obstacle (tree) captured during the UAV's flight is fed into the backbone network, where convolution layers with different stride sizes create the feature pyramid network (FPN); the mask segmentation process then rebuilds the captured image in a polar-coordinate plane. To depict an obstacle center, each bounding box, with its annotated area for the center and mass, and the upper bound of the segmented mask are examined for efficiency.

    A sample is treated as a center sample if it falls within a specific distance of the obstacle's mass center. Distance regression of rays is performed over the complete mask, and a network generates confidence scores for the center and the ray lengths. After mask construction, Non-Maximum Suppression (NMS) is used to eliminate superfluous masks over the same image.
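    The NMS step can be sketched with a standard greedy implementation over bounding boxes; this is a generic sketch of the technique, not the authors' exact code:

```python
def iou(a, b):
    """IoU of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def nms(boxes, scores, iou_thresh=0.5):
    """Greedy NMS: keep the highest-scoring boxes, dropping any box
    whose IoU with an already-kept box exceeds iou_thresh."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    for i in order:
        if all(iou(boxes[i], boxes[j]) <= iou_thresh for j in keep):
            keep.append(i)
    return keep
```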

    The minimal bounding boxes with masks are computed, and Non-Maximum Suppression (NMS) is then applied based on the IoU of the resulting bounding boxes. The shortest distance is calculated from the origin to the boundary of the mask; once the shortest distance is computed, the reward function is activated and decides how to move the UAV to avoid a collision with the obstacle.

    A Feature Pyramid Network was used to build the mask from the highest-scoring predictions by combining the best predictions of all levels using Non-Maximum Suppression (NMS). In the mask assembly and NMS steps, given the center location (wd, vd) and the lengths of the rays (b1, b2, ..., bn), the position of each corresponding contour point (wj, vj) can be calculated.
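    Assuming the rays are emitted at equally spaced angles from the center, as in the PolarMask formulation (which uses 36 rays by default), each contour point can be recovered from the center and the ray lengths as follows; the helper name is illustrative:

```python
import math

def contour_points(center, ray_lengths):
    """Recover contour points (wj, vj) from a mask center (wd, vd) and
    n ray lengths emitted at equally spaced angles around the center."""
    wd, vd = center
    n = len(ray_lengths)
    pts = []
    for j, b in enumerate(ray_lengths):
        theta = 2.0 * math.pi * j / n  # angle of ray j
        pts.append((wd + b * math.cos(theta), vd + b * math.sin(theta)))
    return pts
```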

    Figure 3:Obstacle avoidance scenario

    For obstacle detection, centerness was developed to suppress poor bounding boxes. Nevertheless, merely implementing centerness in a polar plane is insufficient, as it was intended for conventional bounding boxes, not masks. Polar Centerness can be defined given the lengths of the rays (b1, b2, ..., bn):
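    Since the equation image is not reproduced in this text, the standard PolarMask definition of Polar Centerness, which weights samples by how uniform their ray lengths are, is quoted here for reference:

```latex
\text{Polar Centerness} = \sqrt{\frac{\min(b_1, b_2, \ldots, b_n)}{\max(b_1, b_2, \ldots, b_n)}}
```

A sample whose rays are nearly equal in length (close to the true mass center) scores near 1, while off-center samples score lower and are suppressed.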

    Polar ray regression provides a convenient and straightforward approach for computing the mask IoU in a polar plane and the Polar IoU loss function, in order to enhance the modeling and attain competitive results. The Polar IoU is calculated as:
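    Following the PolarMask formulation, the Polar IoU between predicted and target ray lengths is the ratio of the summed minima to the summed maxima over all ray angles, which a short sketch makes concrete:

```python
def polar_iou(pred_rays, target_rays):
    """Polar IoU between predicted and target ray lengths, following
    the PolarMask formulation: the ratio of summed minimum to summed
    maximum ray lengths over all ray angles."""
    num = sum(min(p, t) for p, t in zip(pred_rays, target_rays))
    den = sum(max(p, t) for p, t in zip(pred_rays, target_rays))
    return num / den
```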

    To maximize the length of each ray, the Polar IoU loss function is described as the Binary Cross Entropy (BCE) loss of the Polar IoU: the negative log of the Polar IoU gives the Polar IoU loss. The Polar Mask architecture, consisting of a backbone plus FPN combined with the head network, is shown in Fig. 4. The polar loss function is computed as:
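    For reference, since the equation image is missing from this text, the PolarMask Polar IoU loss is the negative log of the Polar IoU over predicted rays d_i and target rays d_i*:

```latex
\mathcal{L}_{\text{Polar IoU}}
  = -\log \frac{\sum_{i=1}^{n} \min(d_i, d_i^{*})}
               {\sum_{i=1}^{n} \max(d_i, d_i^{*})}
```

The loss is zero when every predicted ray matches its target and grows as the predicted contour diverges from the ground-truth mask.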

    Figure 4:Architecture for collision avoidance using polar mask

    Integrating the differential Intersection over Union (IoU) with respect to differential angles yields the mask IoU in polar coordinates. The Polar IoU loss improves the mask regression as a whole rather than improving every ray individually, resulting in higher efficiency. The mask IoU is found by polar integration.

    The architecture of the collision avoidance system, shown in Fig. 4, consists of the backbone network and the feature pyramid network (FPN), with eight convolution layers of different stride sizes. The designated polar rays are marked from the center using Eq. (9), and the least distance is calculated for each ray from all directions.

    The major effect of using polar mask segmentation is that the lengths of the predicted rays must be similar to the target rays; once the rays match, the IoU yields the minimized mask in polar space. The Feature Pyramid Network (FPN) used in the backbone can also be refined by re-scaling to different levels of feature maps enriched with contextual information.

    Eqs. (9), (10), and (12) provide the mechanism to calculate the least distance from the center of the obstacle object to the boundary marked while the rays are regressed. Once the least distance is calculated, whether from left to right, down to up, or in the inverse directions, the reward function mentioned above is activated to avoid a collision with the object (tree).
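    A sketch of this distance test, with an assumed pixel-to-meter conversion factor and the 5 ft (≈1.52 m) threshold mentioned later in the experiments; the function names and the conversion step are illustrative, not the paper's implementation:

```python
def min_obstacle_distance(ray_lengths, pixels_to_meters):
    """Shortest center-to-boundary distance of the segmented obstacle,
    converted from pixels to meters. The conversion factor is assumed
    to come from camera calibration; it is not given in the paper."""
    return min(ray_lengths) * pixels_to_meters

def should_avoid(ray_lengths, pixels_to_meters, threshold_m=1.52):
    """Activate the avoidance reward when the obstacle is closer than
    the threshold (the experiments use a 5 ft ~ 1.52 m threshold)."""
    return min_obstacle_distance(ray_lengths, pixels_to_meters) < threshold_m
```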

    Once an obstacle is detected inside the GPS coordinate circle shown in Fig. 3, the velocities of the brushless DC electric motors are minimized to take the hover position.

    4 Results

    The results of the proposed framework are divided into three parts: (i) the reward estimation for six different hand gestures using the Deep Deterministic Policy Gradient with an actor-critic network, (ii) PID-based controller results for comparison with the RL-based controller, and (iii) accuracy and loss results for the polar mask segmentation.

    4.1 Experimental Setup

    An Nvidia Jetson Nano with an Intel D435i depth camera is used for the experimentation. The UAV consists of four brushless DC electric motors, an F450 UAV chassis, four Electronic Speed Controllers (ESCs), four 10-inch fiber propellers, a power distribution box (PDB) for connecting the wires from the motors, batteries, landing gear, and an Inertial Measurement Unit (IMU). The 40 General Purpose Input/Output (GPIO) pins of the Jetson Nano embedded board comprise 4 I2C pins, 4 Universal Asynchronous Receiver-Transmitter (UART) pins, one 5 V pin, two 3V3 pins, and 3 ground pins, with the other 26 as GPIO pins. Pin 3 (Serial Data, SDA) on the Jetson Nano is connected to pin 27 (SDA) on the IMU, and pin 5 (Serial Clock, SCL) on the Jetson Nano to pin 28 (SCL) on the IMU. We send the Pulse Width Modulation (PWM) signal from pin 33, which operates at 3.3 V, to the ESC, which sends the 3-phase supply to the brushless DC electric motors.

    In the environment created on Ubuntu 18.04, deep learning libraries including NumPy, Pandas, TensorFlow, and Keras were installed. For the DDPG agent, we used an actor-critic network followed by a replay buffer for the storage of reward values during training. A reset function, self.reset(), was created; it is activated when the agent follows a wrong path during training. Multiple training runs were considered, and the maximum reward was achieved at 2500 episodes by trial and error, which stabilized the six different reward functions shown in Fig. 5.
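    A minimal sketch of the replay buffer and reset behavior described above; the buffer capacity, the state layout, and the hover altitude handling are assumptions, and the full DDPG actor-critic update is omitted:

```python
import random
from collections import deque

class ReplayBuffer:
    """Minimal experience replay buffer of the kind used with DDPG:
    stores (state, action, reward, next_state, done) tuples and samples
    uniform random minibatches for the actor-critic update."""
    def __init__(self, capacity=100_000):
        self.buffer = deque(maxlen=capacity)

    def add(self, state, action, reward, next_state, done):
        self.buffer.append((state, action, reward, next_state, done))

    def sample(self, batch_size):
        return random.sample(list(self.buffer), batch_size)

    def __len__(self):
        return len(self.buffer)

class UAVEnvSketch:
    """Sketch of the environment's reset behavior: when the agent
    follows a wrong path during training, reset() restores the hover
    state and the episode starts over. The state layout (x, y, z in
    meters, hovering at ~6 ft) is an assumption."""
    def reset(self):
        self.steps = 0
        return (0.0, 0.0, 1.83)  # hover at roughly 6 ft above ground
```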

    Fig.5 describes the reward estimation for the implementation of six different hand gestures.The best reward estimation was observed during the training of 2500 episodes.


    Figure 5: Reward estimation with six different 3D hand gestures

    The training cycle for the polar mask used for collision avoidance ran for 30 epochs with 1560 iterations (52 iterations per epoch) at a learning rate of 3.2e-08. Fig. 6 below demonstrates the accuracy and loss, where the black dots show the validation cycle. The achieved accuracy is 86.36%.

    Figure 6:Estimations of accuracy and loss for polar mask segmentation
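The training schedule above is internally consistent: 1560 total iterations spread over 30 epochs gives the stated 52 iterations per epoch.

```python
# Verifying the reported schedule arithmetic.
total_iterations = 1560
epochs = 30
iters_per_epoch = total_iterations // epochs
print(iters_per_epoch)  # 52
```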

    5 Analysis and Discussion

    A PID controller may also be used to tune and train the Reinforcement Learning (RL) algorithm. The controller updates the reward values and the next action based on the inputs and observations of the UAV's current state, receiving data from the onboard sensors together with the three gain values used to assess the system's robustness. Comparing the PID and RL-based controllers, it is evident that after training for 2500 episodes, the reward functions for the six different hand gestures provide the best accuracy and control using the proposed framework. Figs. 7 and 8 below show the pitch and roll values from the PID-based controller. The PID controller requires fewer iterations to tune, whereas the RL-based controller was trained for a minimum of 2500 episodes.

    Figure 7:PID controller initialized state without 3D hand gestures

    Figure 8:PID controller initialized state without 3D hand gestures
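For reference, a minimal discrete PID update of the kind used for the baseline attitude controller is sketched below; the gains are placeholders, not the tuned values from the experiments.

```python
class PID:
    """Discrete PID controller: u = Kp*e + Ki*integral(e) + Kd*de/dt."""

    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt                     # accumulate integral term
        derivative = (error - self.prev_error) / self.dt     # finite-difference derivative
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative
```

One such loop per axis (pitch and roll) would produce the responses plotted in Figs. 7 and 8, with the setpoint held at level attitude.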

    It was also observed that the polar mask technique used for collision avoidance provided better results without any dedicated ranging sensors to stop the UAV: the system calculates the center location of the segmented obstacle, marks its edges, and constructs rays at different angles. Once the segmented image is matched by IoU, the least distance from the center location is computed and used to activate the reward functions that move the UAV away from the obstacle. The initial threshold for the distance between the UAV and the obstacle (a tree) was set to 5 feet and marked before the experimentation. Figs. 9 and 10 show the attitude control obtained during collision avoidance, with the UAV's back-and-forth movement calculated through roll and pitch. After the IMU is calibrated and the accelerometer and gyroscope offsets are removed from the initial values, only the pitch graph changes while the roll graph smooths out near zero.
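A hedged sketch of the polar representation described above is given below: from a binary obstacle mask, the centroid is taken as the center and rays are cast at fixed angles, recording the distance to the last foreground pixel along each ray. The choice of 36 rays (10-degree spacing) follows the common PolarMask convention and is an assumption; the paper does not state its ray count here.

```python
import numpy as np

def polar_rays(mask, n_rays=36, max_r=None):
    """Return the mask centroid and ray lengths at n_rays evenly spaced angles."""
    ys, xs = np.nonzero(mask)
    cy, cx = ys.mean(), xs.mean()                 # centroid as the ray origin
    h, w = mask.shape
    max_r = max_r or int(np.hypot(h, w))          # no ray can exceed the diagonal
    lengths = []
    for k in range(n_rays):
        theta = 2 * np.pi * k / n_rays
        dy, dx = np.sin(theta), np.cos(theta)
        length = 0
        for r in range(1, max_r):
            y, x = int(round(cy + r * dy)), int(round(cx + r * dx))
            if not (0 <= y < h and 0 <= x < w) or mask[y, x] == 0:
                break                              # left the mask: ray ends here
            length = r
        lengths.append(length)
    return (cy, cx), lengths
```

The minimum of these ray lengths, scaled to metric units, would serve as the "least distance" that triggers the avoidance reward once it falls below the 5-foot threshold.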

    Figure 9:Attitude control after calibration of IMU

    Figure 10:Attitude control before calibration of IMU
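The offset removal behind the difference between Figs. 9 and 10 can be sketched as follows, assuming the usual procedure of averaging raw samples while the UAV sits stationary and level; the gravity handling on the z axis is an assumption about the setup, not detailed in the paper.

```python
import numpy as np

GRAVITY = 9.81  # m/s^2; a level, stationary z accelerometer should read +g

def estimate_offsets(accel_samples, gyro_samples):
    """Average stationary samples to estimate constant sensor biases."""
    accel_off = np.mean(accel_samples, axis=0)
    accel_off[2] -= GRAVITY           # keep gravity out of the z-axis bias
    gyro_off = np.mean(gyro_samples, axis=0)
    return accel_off, gyro_off

def correct(sample, offset):
    """Subtract the estimated bias from a raw reading."""
    return np.asarray(sample) - offset
```

After this correction the roll channel, which should be near zero in steady flight, smooths out as seen in Fig. 9.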

    6 Conclusion

    Deep reinforcement learning has revolutionized the area of UAV route planning, navigation, and control. Advances in DRL controller design and UAV mechanical architecture are continually being developed and evaluated, and as a result, new challenging tasks and applications have emerged for various types of UAVs.

    The proposed state-of-the-art reinforcement learning UAV control with 3D hand gestures makes an evident contribution to the field of robotics. Environmental factors such as wind speed, rainfall, and dust must still be addressed when improving the whole system, as they introduce ambiguity into the outcomes; they should be classified as system disturbances and handled accordingly. The limitation that 3D hand gestures can only be detected within the camera's field of view (FOV) of up to 3 meters can be removed by using a camera with a longer detection range.

    The reward function, which is defined by the UAV's behaviors, is central to applying RL to UAV navigation. The designed reward functions showed the best stability during training with 2500, 5000, 7500, and 10000 episodes, with the maximum reward observed at 2500 episodes. The computational time on the Nvidia Jetson Nano was 15 microseconds per episode during training. The system works by continuously updating the UAV state from data produced by onboard sensors and calculating the best course of action and the associated reward values.

    For future work, the collision avoidance system may be improved by replacing the GPS sensor with the camera's Field of View (FOV), avoiding the limitations of GPS and its accessories.

    Acknowledgement:This research was funded by Yayasan Universiti Teknologi PETRONAS(YUTP),grant number 015LC0-316, and the APC was funded by Research Management Centre, Universiti Teknologi PETRONAS,Malaysia under the same grant.

    Funding Statement:The hardware used in this research, consisting of the Jetson Nano, brushless DC electric motors, sensors, camera, UAV body, and batteries, was made available by CONVSYS (Pvt.) Ltd., Islamabad, Pakistan.

    Conflicts of Interest:The authors declare that they have no conflicts of interest to report regarding the present study.
