
    A Mode-Switching Motion Control System for Reactive Interaction and Surface Following Using Industrial Robots

IEEE/CAA Journal of Automatica Sinica, 2018, Issue 3

Danial Nakhaeinia, Pierre Payeur, and Robert Laganière

Abstract—This work proposes a sensor-based control system for fully automated object detection and exploration (surface following) with a redundant industrial robot. The control system utilizes both offline and online trajectory planning for reactive interaction with objects of different shapes and colors using RGB-D vision and proximity/contact sensor feedback, where no prior knowledge of the objects is available. The RGB-D sensor is used to collect raw 3D information of the environment. The data is then processed to segment an object of interest in the scene. In order to completely explore the object, a coverage path planning technique is proposed that uses a dynamic 3D occupancy grid method to generate a primary (offline) trajectory. However, RGB-D sensors are very sensitive to lighting and provide only limited accuracy on the depth measurements. Therefore, the coverage path planning is further assisted by real-time adaptive path planning using a fuzzy self-tuning proportional-integral-derivative (PID) controller. The latter allows the robot to dynamically update the 3D model via a specially designed instrumented compliant wrist and to adapt to the surfaces it approaches or touches. A mode-switching scheme is also proposed to efficiently integrate the interaction modes and smoothly switch between them under certain conditions. Experimental results using a CRS F3 manipulator equipped with a custom-built compliant wrist demonstrate the feasibility and performance of the proposed method.

I. INTRODUCTION

SENSOR-BASED control in robotics can contribute to the ever-growing industrial demand for improved speed and precision, especially in uncalibrated working environments. But compared with the kinematic capabilities of common industrial robots, their sensing abilities are still underdeveloped. Force sensors are mainly used in the context of classical constrained hybrid force/position control [1]-[3], where force control is added for tasks that involve contact with surfaces. Tactile sensors determine different physical properties of objects and allow the assessment of properties such as deformation, shape, surface normal, curvature, and slip detection through their contact with the world [4], [5]. However, these sensors are only applicable where prior knowledge of the environment is available and there is physical interaction between the robot and objects. In addition, the execution speed of the task is limited by the limited bandwidth of the sensors, and some planned trajectories must be prescribed to prevent loss of contact and information [6].

Using vision sensors to control a robot is commonly called visual servoing, where visual features such as points, lines and regions are used in various object manipulation tasks such as aligning a robot manipulator with an object [7]. In contrast to force/tactile control, vision-based control systems require no contact with the object, allow non-contact measurement of the environment and do not need geometric models. Vision measurement is usually combined with force/tactile measurement as hybrid visual/force control [8], [9] for further manipulation. 3D profiling cameras, scanners, sonars or combinations of them have been used for the manipulation of objects, which often results in lengthy acquisition and slow processing of massive amounts of information [10]. An alternative for obtaining 3D data is to replace the high-cost sensors with an affordable Kinect sensor [11], [12]. The Kinect technology is used to rapidly acquire RGB-D data over the surface of objects.

Beyond selecting proper sensors, a reliable interaction control strategy and reactive planning algorithms are required to direct the robot motion under consideration of the kinematic constraints and environment uncertainties [13]. Although conventional proportional-derivative/proportional-integral-derivative (PD/PID) controllers are the most widely applied controllers for robot manipulators, they are not suitable for nonlinear systems and lack robustness to disturbances and uncertainties. A general solution to this problem is to design and develop adaptive controllers, where adaptive laws are devised based on the kinematic and dynamic models of the robot systems [14]. Furthermore, to achieve better performance and smoothly switch between different motion modes, switched control systems were proposed [15] to dictate the switching law and determine which motion mode(s) should be active.

Aiming at bringing dexterous manipulation to a higher level of performance, dexterity and operational capability, this work proposes a unique mode-switching motion control system. It details the development of a dexterous manipulation framework and algorithms, using a specially designed compliant wrist, for complete exploration and live surface following of objects with arbitrary surface shapes, endowed with an adaptive motion planning and control approach coupled with vision, proximity and contact sensing.

Fig. 1. Block diagram of the proposed mode-switching control system.

In order to enable industrial robots to interact with the environment smoothly and flexibly, adaptable compliance should be incorporated into them. A review of traditional and recent robotics technologies shows that in many cases the required modifications are technically challenging or not feasible, and call for hardware modifications, retrofitting, and new components, which result in high costs. In this work, the flexibility problem is addressed by using a low-cost, specially designed compliant wrist mounted on the robot end-effector to increase the robot's mechanical compliance and reduce the risk of damage. In contrast to other works, the adaptive controllers solve the force control problem in the form of a position control problem, where infrared sensors provide the information required for the interaction instead of touch sensors (force/torque, tactile and haptic). The proposed adaptive controllers require no learning procedure and no precise mathematical model of the robot and the environment, and do not rely on force/torque calculation. As such, the controllers enable the robot to interact with objects both with and without contact.

The paper is organized as follows: Section II gives an overview of the control system design. Section III presents the offline trajectory planning, and Section IV details the design of the online proximity and contact path planning. Section V describes the proposed switching scheme. Experimental results are presented in Section VI, and a final discussion concludes the paper in Section VII.

II. MODE-SWITCHING MOTION CONTROL SYSTEM OVERVIEW

The robotic interaction with objects, or surface following problem, consists of the complete exploration and alignment of a robot's end-effector with a surface while accommodating its location and curvature. This work proposes a switched control approach for fast and accurate interaction with objects of different shapes and colors. In order to guide the robot's end-effector towards the surface, and then control its motion to execute the desired interaction with the surface, the problem is divided into three stages: free, proximity and contact motion. Each of the three control modes uses specific sensory information to guide the robot in different regions of the workspace.

The main components and interconnections, as well as how they interact with each other, are shown in Fig. 1.

The first step to achieve interaction with an object is the detection and localization of that object in the robot workspace. For that matter, a Kinect sensor is used to rapidly acquire color and 3D data of the environment and to generate a primary (offline) trajectory as a general guidance for navigation and interaction with the object (free motion mode). The proximity control and contact control modes are also designed and integrated into the system. They dynamically update the 3D model using a specially designed instrumented compliant wrist, and modify the position and orientation of the robot based on online sensory information when it is in close proximity of (proximity interaction) or in contact with the object (contact interaction). The following sections detail the design and implementation of the system and its main components.

III. OFFLINE (FREE MOTION) TRAJECTORY PLANNING

In the free motion phase, which refers to the robot movement in an unconstrained workspace, a 3D model of the object of interest is constructed using 3D data collected by a Kinect sensor. For that matter, the Kinect sensor is positioned behind the robot with its viewing axis pointing towards the surface of the object for collecting data over the object, as shown in Fig. 2. Based on the surface shape acquired by this peripheral vision stage, a unique coverage path planning (CPP) strategy is developed using a dynamic 3D occupancy grid to plan a global trajectory and support the early robotic exploration stage of the object's surface.

Fig. 2. The RGB-D sensor positioning.

A. Object Detection and Segmentation

In order to efficiently navigate a robotic manipulator and interact with objects of various shapes, identification and localization of the object of interest in the robot workspace must first be achieved. For that matter, a Kinect sensor is used to acquire RGB images of the workspace along with depth information. When multiple objects are present in the working environment, objects of interest must be identified and discriminated from the scene. Three filters are used to extract and segment an object of interest from the workspace: a depth filter, a color filter and a distance filter. The data acquired by the Kinect sensor contains an RGB image and a collection of vertices (point cloud). Each point, p_i, with (x, y, z) coordinates is defined with respect to the Kinect (K) reference frame and carries corresponding (r, g, b) color values, defined as follows:
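The defining equation did not survive transcription; a plausible reconstruction from the surrounding definitions, treating each vertex as a position-color tuple in the Kinect frame, is

$$ P^{K} = \{\, p_i \,\}_{i=1}^{v}, \qquad p_i = (x_i,\, y_i,\, z_i,\, r_i,\, g_i,\, b_i) \quad (1) $$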

where v is the number of vertices in the point cloud. The depth of field covered by the Kinect sensor is typically between 0.5 m and 6 m, and the error in depth rises considerably with distance [16]. Therefore, the noisy background is discarded and the maximum depth range is limited to that of the robot workspace. To filter out the vertices that are not in the desired depth range (D_z), a thresholding filter is applied (see Algorithm 1) which eliminates all the vertices with a depth value (Z coordinate) over a pre-set desired range (Fig. 3).
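Algorithm 1 itself is not reproduced above. A minimal Python sketch of such a depth-thresholding filter, assuming the point cloud is stored as an N x 6 NumPy array of (x, y, z, r, g, b) rows and taking a hypothetical depth limit d_z_max, could look as follows:

```python
import numpy as np

def depth_filter(cloud: np.ndarray, d_z_max: float) -> np.ndarray:
    """Keep only vertices whose Z coordinate lies within the workspace depth range.

    cloud: (N, 6) array of (x, y, z, r, g, b) vertices in the Kinect frame.
    d_z_max: maximum admissible depth (hypothetical pre-set range), in meters.
    """
    # Discard the noisy background: drop every vertex deeper than d_z_max.
    return cloud[cloud[:, 2] <= d_z_max]

# Example: limit the depth of field to a 2 m robot workspace.
# filtered = depth_filter(raw_cloud, d_z_max=2.0)
```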

Fig. 3. Depth filter: (a) original data from Kinect, (b) after applying the depth filter.

In order to optimize the robot movement, the desired object information should be extracted from the RGB-D data using color and distance segmentation. After acquiring the data, the RGB image (Fig. 4(a)) is presented to the operator. A mouse event callback function waits for the operator to click on any points of the desired object and returns the RGB-D values of the corresponding pixels. The RGB values of the points selected over the desired object (Fig. 4(b)) are used as a filter for the color segmentation. To increase the accuracy of the color segmentation, the operator can choose a number of points (μ) over the object surface according to the object's size and shape. The RGB values of the operator-selected points of interest (2) are used to define a range of color for the object of interest. The vertices in the RGB image that are within the same color range and for which the Euclidean color distance (ΔE) is less than a specific pre-set threshold (λ) are extracted from the point cloud, while the other vertices are eliminated.
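As a companion to the depth filter above, the color test just described can be sketched as follows; the function name and the stacked reference-color array are illustrative assumptions, not the authors' code:

```python
import numpy as np

def color_filter(cloud: np.ndarray, selected_rgb: np.ndarray, lam: float) -> np.ndarray:
    """Keep vertices whose color lies within distance lam of an operator-selected color.

    cloud: (N, 6) array of (x, y, z, r, g, b) vertices.
    selected_rgb: (mu, 3) array of RGB values clicked by the operator.
    lam: pre-set Euclidean color distance threshold.
    """
    # Distance from every vertex color to every reference color: (N, mu) matrix.
    dE = np.linalg.norm(cloud[:, None, 3:6] - selected_rgb[None, :, :], axis=2)
    # A vertex survives if it is close enough to at least one reference color.
    return cloud[dE.min(axis=1) < lam]
```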

Fig. 4. (a) RGB image of the working environment, and (b) RGB values related to the selected points over the object of interest (the white square board).

The color segmentation makes it possible to group parts of similar color. However, if there are multiple objects of similar color in the scene, a single object of interest cannot be identified using color segmentation alone. Therefore, a distance filter is also applied on the point cloud to extract only the object of interest. Since each pixel in the RGB image corresponds to a particular point in the RGB-D point cloud, a primary estimate of the object of interest's position (3) in the workspace is extracted from the vertices corresponding to the points selected by the operator. By clicking on points respectively close to the corners and to the center of the object of interest, the maximum distance (ℓ_max) between the center and the corners is calculated by (4). Then, the distance (ℓ) between each vertex in the point cloud and the center point selected by the user is compared with the maximum distance (ℓ_max), and the vertices for which that distance is higher than ℓ_max are dropped.

where [X_c, Y_c, Z_c] are the coordinates of the object center and μ is the number of points selected by the operator. This filter ensures that objects sharing similar color will be segmented individually, unless they overlap in space.
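Equations (3) and (4) were lost in transcription; one plausible reconstruction consistent with the surrounding definitions is

$$ [X_c,\, Y_c,\, Z_c] = \frac{1}{\mu} \sum_{j=1}^{\mu} [x_j,\, y_j,\, z_j] \quad (3) $$

$$ \ell_{max} = \max_{1 \le j \le \mu} \big\| [x_j,\, y_j,\, z_j] - [X_c,\, Y_c,\, Z_c] \big\| \quad (4) $$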

Eventually, the remaining set of m vertices (O_i) forms the object of interest's RGB-D point cloud (5). However, the vertices are defined with respect to the Kinect reference frame and must be transformed to the robot's (R) base frame (6).

B. Coverage Path Planning

The key task of coverage path planning (CPP) is to guide the robot over the object surface and guarantee complete coverage using the preprocessed RGB-D data. The proposed CPP is developed using the dynamic 3D occupancy grid method. The occupancy grid is constant when the environment (object) is static. In most of the previously proposed methods, the occupancy grid is built in an offline phase and is not updated during operation, which requires very accurate localization. However, in the proposed method the occupancy grid is initially formed using the information provided by the Kinect sensor and is dynamically refined during interaction using the data acquired by proximity sensors embedded in the compliant wrist mounted on the robot.

1) Occupancy Grid

As shown in Fig. 5(a), once the object of interest is identified and modeled in 3D, the object surface is discretized into a set of uniform cubic cells (7). The resolution of the occupancy grid is adapted according to the volume of the object of interest:

(Z_max − Z_min) mm × (X_max − X_min) mm × (Y_max − Y_min) mm, where Y_min, Y_max, X_min, X_max, Z_min and Z_max correspond respectively to the minimum and maximum values of the Y, X and Z coordinates in the point cloud of the object of interest. The size of the cells can be selected based on the end-effector size or the desired resolution of the grid for a particular application. Therefore, the object of interest is initially mapped into an occupancy grid with cubic cells of the same size as the robot's tool plate, in the present case 130 mm × 130 mm × 130 mm.

A major issue arises when cells are partially occupied. This usually happens where the object has acute edges, and also near the borders of the object of interest.

where C_{I,J,K} represents a cell in the object model.

Fig. 5. (a) Object of interest 3D model, and (b) object discretized into a set of uniform cubic cells.

Fig. 6. (a) Smaller cell and larger cell representation, and (b) merging smaller cells to reduce data volume.

In order to solve the problem, the grid resolution is increased and two different cell sizes (larger and smaller) are used (Fig. 5(b)). The smaller cells (C_S) are cubes of size 65 mm × 65 mm × 65 mm, with each larger cell (C_B) containing 8 smaller cells in two layers of four (Fig. 6(a)). Another problem with 3D occupancy grids is that they require large memory space, especially when fine resolution is required [18]. Considering that the RGB-D sensor perceives the scene from a point of view that is perpendicular to the surface of the object, the regions in the back are essentially occluded. Therefore, to achieve computational efficiency and decrease the complexity and the number of cells, the front and back layers of the larger cells are merged into one layer and the smaller cells are replaced with cuboid cells of size 65 mm × 65 mm × 130 mm (Fig. 6(b)), leading to a 2.5D occupancy grid model. The occupancy grid is formed by determining the state of each cell. The smaller cells are classified with two states, occupied or free. A cell is called occupied if there exist at least some vertices in the cell. If the smaller cell is occupied, the cell value is 1; otherwise it is 0. The state variables associated with the larger cells are occupied, unknown and free, which are identified based on the smaller cells' values. If more than two smaller cells of a larger cell are occupied, the value of the associated larger cell is 1; if one or two of the smaller cells are occupied, the value is 0.5; and if all the smaller cells are free, then it is 0 (9). The occupancy grid corresponding to the larger cells is represented by an H × V × P matrix (10). Initially, all cells are considered unknown. Therefore, the occupancy matrix OccMatrix is initialized with a value equal to 0.5, where each entry contains the occupancy information associated with the larger cell (I, J, K), and cell (0, 0, 0) is located in the upper left corner of the grid.
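Equation (9) did not survive transcription; from the rules just stated, the larger-cell state plausibly reads

$$ C_B = \begin{cases} 1, & n_{occ} > 2 \\ 0.5, & 1 \le n_{occ} \le 2 \\ 0, & n_{occ} = 0 \end{cases} \quad (9) $$

where n_occ denotes the number of occupied smaller cells inside the larger cell.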

$$ \mathrm{OccMatrix} = \big[\, occ(I, J, K) \,\big] \in \{0,\, 0.5,\, 1\}^{H \times V \times P} \quad (10) $$

The occupancy matrix provides the information required to plan an offline trajectory to explore the object surface. Therefore, in the next step the occupancy matrix is used for global path planning. In addition, to preserve the dynamic characteristic of the occupancy grid and achieve a more accurate and efficient exploration, the occupancy grid is updated using extra sensory information when the robot is in close proximity of the object.

2) Global Trajectory Generation

In order to explore the object completely and optimize the trajectory, the robot should explore all the occupied cells and avoid the free ones. The robot's end-effector pose at each time step is defined by a set of points:

$$ \mathrm{Trajectory}_{Global} = \{\, pos_G[k] \mid k = 1, \ldots, occ \,\} $$

    where occ is the number of occupied cells in the occupancy grid.

In this work, the start point is always the first occupied cell on the front upper row of the occupancy grid, and the initial moving direction is horizontal. The robot position at each moving step (pos_G[k]) is defined by extracting the point in the object point cloud (ObjCloud_Robot) closest to the center of the occupied larger cell in the robot's moving direction. However, if the larger cell is not completely occupied (cell value = 0.5), pos_G[k] is determined based on new measurements from the proximity sensors (detailed in Section IV-B). As shown in Fig. 7(a), the robot can move in 5 directions (horizontally left/right, vertically down, and diagonally left/right down) and it must move through all the points to cover the target area. The target area is covered using a zigzag pattern (Fig. 7(b)), which is also called Seed Spreader motion [19]. The robot begins from a start point and moves horizontally to the right towards the center of the next occupied cell. If there is no occupied cell on the right, the motion direction is changed and the robot moves vertically (diagonally) down to the next point. Then, the robot again moves horizontally but in the opposite direction until it reaches the last occupied cell on the other side of the object. The process continues until the robot has explored the entire area.
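As an illustration only (not the authors' implementation), the zigzag visit order over one layer of the occupancy matrix can be sketched in a few lines of Python, assuming a 2D array occ of larger-cell values:

```python
import numpy as np

def zigzag_waypoints(occ: np.ndarray):
    """Yield (row, col) indices of cells to visit in zigzag (boustrophedon) order.

    occ: 2D array of larger-cell occupancy values (0 free, 0.5 unknown, 1 occupied).
    Rows are swept alternately left-to-right and right-to-left, skipping free cells;
    unknown cells are visited so the proximity sensors can resolve them online.
    """
    for i in range(occ.shape[0]):
        cols = range(occ.shape[1]) if i % 2 == 0 else reversed(range(occ.shape[1]))
        for j in cols:
            if occ[i, j] >= 0.5:  # visit occupied and unknown cells, avoid free ones
                yield i, j

# Example: occ = np.array([[1, 1, 0], [0.5, 1, 1]]); list(zigzag_waypoints(occ))
# -> [(0, 0), (0, 1), (1, 2), (1, 1), (1, 0)]
```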

Fig. 7. (a) Accessible directions of motion, (b) global trajectory planning to ensure coverage, and (c) vertex normal calculation at the center of a larger cell.

The set of points forming a path for the robot, defined in the previous step, determines the positions of the end-effector in Cartesian coordinates over the object surface. In addition to the robot position, to ensure proper alignment of the end-effector with the surface, it is required to estimate the surface normal and calculate the end-effector orientation at each point. The local normal at each point is calculated using the neighboring cells (Fig. 7(c)). Given that four smaller cells form a larger cell, five points situated at the centers of these cells are obtained in three dimensions. Each set of three points forms a triangle, for a total of four triangles that can be generated. First, the normal to each triangle (N_i) is calculated and normalized. The normal of each triangle is computed as the cross product between the vectors representing two sides of the triangle. The following equations define the normal vector, N, calculated from the coordinates of a set of three vertices, E, F, G:

The resulting normals are then normalized so that the length of the edges does not come into account. The normalized vector N_norm is computed as:

where N_nx = N_x/norm, N_ny = N_y/norm, N_nz = N_z/norm.
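The displayed equations were lost in transcription; from the description, they plausibly read

$$ N = (F - E) \times (G - E) $$

$$ norm = \sqrt{N_x^2 + N_y^2 + N_z^2}, \qquad N_{norm} = \frac{N}{norm} $$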

Then, taking the average of the four triangle normals provides the estimated surface normal of the object at the center of the larger occupied cell, and the object's surface orientation is deduced from the normal vector to the surface (Fig. 7(c)).

IV. ONLINE (PROXIMITY/CONTACT) PATH PLANNING

The acquisition speed of the Kinect and its low cost have been major criteria in selecting this sensor for 3D modelling. However, the information from a Kinect sensor is neither sufficient nor accurate enough for developing a fully reactive surface following approach, which also requires close and physical interaction with an object [20]. Furthermore, the object may move or deform under the influence of the physical interaction. Therefore, to compensate for errors in the rough 2.5D profile of the surface provided by the RGB-D sensor and to further refine the path planning, a model-free adaptive fuzzy self-tuning PID position controller and an orientation controller are developed using real-time proximity and contact sensory information provided by a custom-designed compliant wrist [21] attached to the end-effector of the robot, as shown in Fig. 8(a). The compliant wrist provides a means of detecting objects both in contact with and in proximity to the end-effector, as well as adding a degree of compliance to the end-effector, which enables the latter to slide on the object without damaging it and to adapt to its surface changes when contact happens. As shown in Fig. 8(b), the compliant wrist is equipped with eight infrared distance measurement sensors. The infrared sensors are arranged in two arrays of four sensors each: an external array and an internal array. The external sensor array allows distances to be measured at multiple points between the wrist and the object located in front of the end-effector. The internal sensor array is situated between the base of the compliant wrist structure and a moveable plate, as shown in Fig. 8(a), which allows the device to determine surface orientation and distance to an object when the robot is in contact with it. The sensing layers estimate an object pose in the form of a 3D homogeneous transformation matrix. The rotation and translation parameters are obtained using the distance measurements from either array of four infrared (IR) sensors. Equation (18) shows how the transformation matrix is calculated using distances provided by the internal sensors (contact sensing layer). A similar calculation can be made from the external sensors using the values A_0, B_0, C_0 and D_0 measured by the external sensors (Fig. 8(c)). The transformation matrix determines the object pose with respect to the compliant wrist frame, which is transferred to the robot's base frame (19).
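Equations (18) and (19) are not reproduced above. As a purely illustrative reconstruction, assume the four sensors of one array sit at the corners of a square of half-width s on the wrist plane and return distances d_A, d_B, d_C, d_D to the surface; a small-angle estimate of the surface pitch α, yaw β and normal offset t_z would then be

$$ \alpha \approx \arctan\frac{d_A - d_C}{2s}, \qquad \beta \approx \arctan\frac{d_B - d_D}{2s}, \qquad t_z = \frac{d_A + d_B + d_C + d_D}{4} $$

from which the homogeneous transformation is assembled in the usual way; the authors' exact formulation in (18) may differ.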

Fig. 8. (a) Compliant wrist, (b) internal and external sensors arrangement, and (c) distance measurements from internal and external sensors.

Fig. 9. Block diagram of the real-time position and orientation controller.

A. Adaptive Position/Orientation Control

The surface following and tracking of objects can be accomplished either with contact (for applications such as polishing, welding, or cleaning) or without contact (for applications like painting and inspection). In order to guide the robot under consideration of kinematic motion constraints, an adaptive online trajectory generation is proposed to generate continuous command variables and modify the position and orientation of the robot based on online sensory information when the robot end-effector is in close proximity of (proximity control mode) or in contact with (contact control mode) an object. The robot position over the object's surface is controlled using a fuzzy self-tuning PID controller, and its orientation is corrected whenever there is an error between the surface orientation estimated via the proximity/contact sensors and the end-effector's orientation, making the system adaptive to the working environment (Fig. 9).

1) Adaptive Position Control

The typical PID controller has been widely used in industry because of its simplicity and its performance in reducing the error and enhancing the system response. To design a PID controller, it is required to select the PID gains K_P, K_I, and K_D properly (20). However, a PID controller with fixed parameters is inefficient for controlling systems with uncertain models and parameter variations.

where E_k is defined in (21).
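Equations (20) and (21) were lost in transcription; a standard discrete PID form consistent with the surrounding text is

$$ \Delta_k = K_P E_k + K_I \sum_{j=0}^{k} E_j + K_D \left( E_k - E_{k-1} \right) \quad (20) $$

$$ E_k = D_k - d_z \quad (21) $$

where D_k is the measured distance to the object at step k and d_z the desired distance.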

Fuzzy control is ideal for dealing with nonlinear systems where there is a wide range of operating conditions, only an inexact model exists and accurate information is not required. It performs better than a PID controller in dealing with uncertainty and changes in the environment, but PID control has a faster response and minimizes the error better than fuzzy control. Therefore, to take advantage of both controllers, a fuzzy self-tuning PID controller is designed to control the robot's movement and to smoothly follow an object's surface from a specific distance, or alternatively while maintaining contact.

The fuzzy self-tuning PID controller's output is a real-time adjustment value (Δ) of the position, based on the error (E) between the current robot distance from the object (D) and the desired distance to the object, d_z (this value is zero when contact with the object is required). D is obtained from the proximity sensors or from the contact sensors on the compliant wrist when the robot is in close proximity of or in contact with the object, respectively. The PID gains are tuned using the fuzzy system rather than the traditional approaches. The Δ value is then sent to the robot controller to eliminate the error and make the system adaptive to changes in the working environment.

The fuzzy controller's input variables are the distance error (E_k) and the change of error (ΔE_k), and its outputs are the PID controller parameters. Standard triangular membership functions and Mamdani-type inference are adopted for the fuzzification due to their low computational cost, which enhances the speed of the system's reaction.

The error (E_k) in distance is represented by five linguistic terms: large negative (LN), small negative (SN), zero (Z), small positive (SP), and large positive (LP). Similarly, the fuzzy set of the change of error (ΔE) is expressed by negative big (NB), negative (N), zero (Z), positive (P), and positive big (PB), both defined over the interval from −50 mm to 50 mm. When the robot is in close proximity of the object, it can move backward, move forward or not change its position, depending on the robot's distance to the surface. The output linguistic terms for each PID gain estimate are (Fig. 10): large forward (LF), forward (F), no change (NC), backward (B), large backward (LB), over the interval [−1, 1]. Tables I-III define the fuzzy rules for the K_P, K_I and K_D gain estimates. The center of gravity (COG) method is used to defuzzify the output variables (22).
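Equation (22) was lost; the center-of-gravity defuzzification takes the standard form

$$ u^{*} = \frac{\sum_{i} \mu(u_i)\, u_i}{\sum_{i} \mu(u_i)} $$

where the u_i are the output samples and μ(u_i) their aggregated membership degrees.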

Fig. 10. Fuzzy membership functions for: (a) error, (b) change of error, and (c) PID gain estimates.

In order to obtain feasible and optimal values for the PID parameters, the actual gains are computed from the outputs of the fuzzy inference system using the range of each parameter, determined experimentally (23).

where K_P ∈ [0, 1], K_I ∈ [0.01, 0.1], and K_D ∈ [0.003, 0.01].
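Equation (23) was lost in transcription; one plausible mapping from a fuzzy output k' ∈ [−1, 1] to its experimentally determined range [K_min, K_max] is the affine rescaling

$$ K = K_{min} + \frac{k' + 1}{2} \left( K_{max} - K_{min} \right) $$

applied independently to K_P, K_I and K_D.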

TABLE I
FUZZY RULE BASE OF THE K_P ESTIMATE (rows: change of error ΔE; columns: error E)

ΔE \ E |  LN   SN   Z    SP   LP
-------+------------------------
NB     |  LB   LB   LB   B    NC
N      |  LB   B    B    B    NC
Z      |  LB   B    NC   F    LF
P      |  NC   F    F    F    LF
PB     |  NC   F    LF   LF   LF

TABLE II
FUZZY RULE BASE OF THE K_I ESTIMATE (rows: change of error ΔE; columns: error E)

ΔE \ E |  LN   SN   Z    SP   LP
-------+------------------------
NB     |  LB   LB   LB   F    LF
N      |  LB   B    B    NC   NC
Z      |  B    B    NC   F    LF
P      |  NC   B    F    F    LF
PB     |  NC   F    LF   LF   LF

TABLE III
FUZZY RULE BASE OF THE K_D ESTIMATE (rows: change of error ΔE; columns: error E)

ΔE \ E |  LN   SN   Z    SP   LP
-------+------------------------
NB     |  LF   LF   LB   B    LB
N      |  LF   F    F    NC   LB
Z      |  F    F    NC   B    LB
P      |  NC   F    B    B    LB
PB     |  NC   B    LB   LB   LB

2) Adaptive Orientation Control

When the robot is in close proximity of (or in contact with) the object, the orientation of the object surface is estimated by the proximity/contact sensors in terms of the pitch and yaw with respect to the wrist plane. Let R_S denote the rotation matrix of the surface's generic orientation with respect to the wrist plane, and let R_E describe the rotation of the end-effector obtained from the initial global trajectory; the desired rotation of the end-effector, R_D, to align the end-effector with the surface can then be determined by (24).
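Equation (24) itself was lost; with the notation above (R_S, R_E and R_D are symbols introduced here), one plausible form composes the current end-effector rotation with the relative surface rotation:

$$ R_D = R_E\, R_S \quad (24) $$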

In a surface following task, R_D is the orientation to be tracked by the robot. However, in order to obtain a smooth motion, it is not sent directly to the controller. Instead, the orientation of the end-effector is updated using an adaptive orientation signal (δθ) at each moving step to compensate for the orientation error between the end-effector and the surface. The error decreases exponentially, driving the orientation error to zero rapidly.

where η is a proportional gain used to decrease the error and minimize the convergence time. A large gain value reduces the convergence time but may lead to tracking loss or task failure due to fast motion, while a small gain value increases the convergence time. Therefore, in order to reduce the convergence time, reduce oscillation near the equilibrium point and preserve the system's stability, the gain value is varied according to the orientation error value (27). The gain value is small initially (η_0) while the error is larger, and the error is reduced exponentially until it reaches a specific threshold (ε) near the equilibrium point (error = 0). Then, as the orientation error gets smaller, the gain switches to a larger value (η_0 < η_1) to eliminate the remaining orientation error as fast as possible [22]. Using the adaptive orientation signal, the robot end-effector orientation angles are calculated by (28).
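Equations (25)-(28) were lost; a reconstruction consistent with the description, writing e_k for the orientation error at step k, is

$$ \delta\theta_k = \eta\, e_k, \qquad \eta = \begin{cases} \eta_0, & |e_k| > \epsilon \\ \eta_1, & |e_k| \le \epsilon \end{cases}, \qquad \eta_0 < \eta_1 \quad (27) $$

$$ \theta_{k+1} = \theta_k + \delta\theta_k \quad (28) $$

so that, for 0 < η < 1, the error contracts geometrically (exponentially in k) toward zero.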

B. Updating the Occupancy Grid

The object of interest might not be acquired with sufficient accuracy by the Kinect sensor alone, due to its limited depth resolution, or might not be segmented completely from the point cloud. In order to update the occupancy grid previously described, the compliant wrist provides closer and more accurate feedback about the object's location. As shown in Fig. 11, the compliant plate is able to cover four smaller cells (one larger cell), given the selected initial resolution of the occupancy grid. The external sensors, which protrude on the four sides of the compliant plate, can provide real-time look-ahead information while the robot is moving. As the robot moves along the global trajectory to its new pose, the existing occupancy map is updated using the new measurements. For example, when the robot is moving to the right, the sensor situated on the right side allows an update of the state of the two cells located ahead on the right, especially when the next cell's value is 0.5 (unknown cell). If the sensor detects the object, the cell value changes to 1 (occupied); otherwise it changes to 0 (empty). The robot keeps moving in the previous direction if the next cell is occupied; otherwise it stops there and moves to the next pose along the global trajectory.
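As an illustration of this look-ahead rule (not the authors' code; the function and its arguments are hypothetical), the resolution of a single unknown cell can be sketched as follows, applied once per cell ahead of the wrist:

```python
def update_next_cell(occ, i, j, direction, sensor_detects_object):
    """Resolve the state of the next larger cell along the moving direction.

    occ: 2D occupancy array (0 free, 0.5 unknown, 1 occupied).
    (i, j): indices of the cell the robot currently covers.
    direction: (di, dj) unit step, e.g. (0, 1) when moving right.
    sensor_detects_object: reading of the external IR sensor facing that direction.
    """
    ni, nj = i + direction[0], j + direction[1]
    # Only unknown cells (0.5) inside the grid are resolved by the look-ahead sensor.
    if 0 <= ni < occ.shape[0] and 0 <= nj < occ.shape[1] and occ[ni, nj] == 0.5:
        occ[ni, nj] = 1.0 if sensor_detects_object else 0.0
    return occ
```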

Fig. 11. Cell coverage by the wrist-embedded sensors.

V. SWITCHING CONTROL SCHEME

To achieve a seamless and efficient integration of vision and real-time sensory information, and to switch smoothly between the different interaction modes, a mode switching scheme is also proposed. It provides the manipulator with the capability to operate independently in each of the three stages (free motion, proximity, and contact) or in transition between stages, to perform more complicated tasks as required by different applications. Since this work focuses on task space control, a discrete-time switching system is proposed in order to switch smoothly between global trajectory following and sensor-based motion control. The proposed switched-control system consists of continuously working subsystems and a discrete system that supervises the switching between the subsystems [23]. The proposed switching control system is modeled as follows:

The value δ determines which function f_δ(E_k, D_k) controls the system behavior at each moving step (k), where f_F, f_P and f_C correspond to the free motion, proximity and contact control modes respectively. The switching signal δ (30) is determined based on the distance error (values are in mm). The error values shown in (30) were estimated experimentally based on the extensive performance analysis of the compliant wrist sensors detailed in [21]. For a compliant wrist equipped with different IR sensors, or other robot-mounted distance measurement devices, re-calibration of (30) would be necessary, but the general framework would remain valid as such:
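The displayed equation (30) was lost in transcription, including its experimentally estimated millimeter thresholds; denoting the proximity and contact switching distances by d_P and d_C, its structure is plausibly

$$ \delta = \begin{cases} F, & D_k > d_P & \text{(free motion)} \\ P, & d_C < D_k \le d_P & \text{(proximity)} \\ C, & D_k \le d_C & \text{(contact)} \end{cases} \quad (30) $$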

In the free motion phase, which refers to the robot movement in an unconstrained workspace, the 3D model of the object is used to plan a global trajectory (detailed in Section III) and guide the robot from an initial position towards the object, where the robot pose is given by (31).

The proximity phase enables the robot to refine the reference (global) trajectory using the proximity sensing information, to track the desired trajectory within a specific distance without contact, or to adjust the robot pose before contact happens (transition) in order to damp the system and align the robot with the surface. It also enables the robot to update the cells of the occupancy grid while moving along the global trajectory.

where Δ_P is the position control signal generated by the self-tuning fuzzy PID controller and δθ_P is the adaptive orientation control signal obtained using the proximity sensors.

Contact with the surface is detected by the internal sensors when the contact plate is deflected under externally applied forces. The contact motion mode enables the robot to adapt more precisely to the changes and forces generated by the surfaces with which it comes into contact. When contact is detected, the internal sensors provide the fuzzy inputs (error and change of error) instead of the external sensors, to ensure that the robot remains in contact with the object and to prevent the robot from being pushed too far into the object.

where Δ_C and δθ_C are the adaptive position and orientation signals obtained from the contact sensory information.

VI. EXPERIMENTAL RESULTS

To validate the feasibility and assess the accuracy of the proposed method, experiments were carried out with a 7-DOF CRS F3 manipulator, which consists of a 6-DOF robot arm mounted on a linear track. The algorithms used for this work were developed in C++ and run on a computer with an Intel Core i7 CPU and Windows 7.

A. Object Segmentation

Table IV shows the efficiency of the proposed segmentation algorithm in extracting the object of interest from the scene shown in Fig. 3, where there are four objects of different shapes and colors. The original point cloud provided by the Kinect sensor after applying the depth filter includes 41204 points, of which over 99% of the points corresponding to the object of interest have been successfully recovered.

TABLE IV
OBJECT SEGMENTATION RESULTS

B. Contact Interaction

In this experiment, it is desired to closely follow the surface of a real automotive door panel (Fig. 12) while maintaining contact and accommodating its curvature. As shown in Fig. 12(a), the Kinect sensor is located facing the door panel and the robot moves on a two-meter linear track to fully reach the surface. The Kinect collects raw 3D information on the environment. The RGB image captured by the Kinect (Fig. 12(b)) is presented to the operator. The operator selects the object of interest and its point cloud is automatically extracted from the scene, except for the window area, since glass is not imaged well with the Kinect sensor technology and the window area is also beyond the robot workspace. A global trajectory is then generated using the proposed coverage path planning method (Section III-B) to completely explore and scan the surface of the object of interest (Fig. 13(a)).

Fig. 12. (a) Automotive door panel set up with respect to the robot and the Kinect sensor, and (b) front view of the real automotive door panel.

Fig. 13(a) shows the global (offline) trajectory generated using the proposed coverage path planning method, which consists of a set of 3D points (Fig. 13(b)). Once the global trajectory is generated, the free motion mode is initially activated to guide the robot towards the start point. When the robot is in close proximity of the object, the proximity (external) sensors detect the object and the motion control mode is switched to the proximity mode, where the adaptive pose controllers adjust the robot position and orientation using the external sensors in preparation for a safe contact with the object. When the initial contact happens, the control mode switches to contact mode, and the adaptive pose controllers generate the depth and orientation control signals to maintain contact with the object. The fuzzy inputs and the depth control signal are shown in Fig. 13(c). The position control signal (Δ) varies from −20 mm to 25 mm, which shows the smooth and stable motion of the robot over the object surface. The Δ signal changes according to the error (E) and the change of error (ΔE). When the robot moves horizontally, the change in the Δ signal is small, but when the robot moves vertically on the object surface, E and ΔE change more substantially (due to the change in the door's curvature). As a result, the Δ signal is larger to compensate for the error as desired (Fig. 13(c)).

The internal sensors detect contact when the compliant wrist is deflected. In order to make sure that the robot is firmly in contact with the object and that the contact is detected by the internal sensors, the desired distance to the object is set to d_z = −3 mm. Therefore, the average distance obtained from the sensors at each moving step (k) should be around −3 mm to eliminate the error (E_k = D_k − d_z). The average distance measured by the internal sensors (Fig. 13(d)) from the object during the surface following process was about −3.42 mm, which shows that the robot maintained contact during the interaction (Fig. 14). The robot orientation error is compensated using the adaptive orientation signal (δθ). The robot moves to the next pose defined by the global trajectory, and the same process repeats for each moving step until the robot has explored the entire surface. Fig. 15 shows the system's performance in refining the position and orientation of the robot during contact interaction. The depth adjustment over the object surface using the adaptive position controllers (depicted in red) is shown in Fig. 15(a), and compared with the depth measurements estimated from the RGB-D data (depicted in blue). The orientation correction is represented in Fig. 15(b): the global object surface orientation with respect to the robot base frame around the y-axis (in blue) and z-axis (in red), estimated as part of the offline path planning process, is updated locally using the online sensory information (in green and purple) from the compliant wrist.

Fig. 13. (a) Trajectory planning to ensure full coverage over the region of interest. (b) Global trajectory. (c) Fuzzy self-tuning PID controller performance. (d) Internal sensors dataset.

Fig. 14. Robot performance at following the door panel's curved surface.

C. Proximity Interaction

In this experiment, it is desired to scan and follow the object's surface (Fig. 12(b)) at a fixed distance from it. Fig. 16(a) shows the performance of the adaptive self-tuning fuzzy PID controller in controlling the robot movement over the surface of the object while maintaining a desired distance (50 mm) from the object and still accommodating its curvature (Figs. 16(c) and (d)). In Fig. 16(c), the blue curve represents the pre-planned trajectory, which is locally refined (orange curve) using the sensory information provided by the external sensors. The global trajectory is also refined locally in orientation (Fig. 16(d)). The average of the distance measurements (Fig. 16(b)) from the surface was 56.8 mm, which verifies that the robot remained close to the desired distance from the object during the surface following.

Fig. 15. System performance in refining the robot pose in contact mode.

Fig. 16. The system performance in refining the robot pose in proximity interaction.

VII. CONCLUSION

This work presented the design and implementation of a switching motion control system for automated and efficient proximity/contact interaction and surface following of objects of arbitrary shape. The proposed control architecture enables the robot to detect and follow the surface of an object of interest, with or without contact, using the information provided by a Kinect sensor, augmented locally by a specially designed instrumented compliant wrist. A tri-modal discrete switching control scheme is proposed to supervise the robot motion continuously and to switch between the different control modes and their related controllers when the robot is found within certain regions of the workspace relative to the object surface. The proposed method was implemented and validated through experiments with a 7-DOF robotic manipulator. The experimental results demonstrated that the robot can successfully scan an entire region of an object of interest while closely following the curved surface of the object, with or without contact.

The adaptive controllers and the switching control scheme can be improved in future work. This includes designing new adaptive controllers that are less sensitive and have a faster but smoother response to changes in the environment, and using a hybrid switched control system to allow actual data fusion from multiple sensing sources, which would result in smoother switching between the interaction modes.

    ACKNOWLEDGMENTS

The authors wish to acknowledge the contribution of Mr. Pascal Laferrière to this research through the development of the instrumented compliant wrist device.
