
    Data-Driven Human-Robot Interaction Without Velocity Measurement Using Off-Policy Reinforcement Learning

2022-10-26 12:23:38
IEEE/CAA Journal of Automatica Sinica, 2022, Issue 1

Yongliang Yang, Zihao Ding, Student Member, Rui Wang, Hamidreza Modares, Senior Member, and Donald C. Wunsch

Abstract—In this paper, we present a novel data-driven design method for the human-robot interaction (HRI) system, where a given task is achieved through cooperation between the human and the robot. The presented HRI controller design is a two-level approach consisting of a task-oriented performance optimization design and a plant-oriented impedance controller design. The task-oriented design minimizes the human effort and guarantees perfect task tracking in the outer loop, while the plant-oriented design achieves the desired impedance from the human to the robot manipulator end-effector in the inner loop. Data-driven reinforcement learning techniques are used for performance optimization in the outer loop to assign the optimal impedance parameters. In the inner loop, a velocity-free filter is designed to avoid the requirement of end-effector velocity measurement. On this basis, an adaptive controller is designed to achieve the desired impedance of the robot manipulator in the task space. Simulations and experiments on a robot manipulator are conducted to verify the efficacy of the presented HRI design framework.

    I. INTRODUCTION

ROBOTIC manipulators have played an important role in engineering applications, such as master-slave teleoperation systems [1], construction automation [2], and space engineering [3], to name a few. When the human is restricted by physical limitations or when operation in a harsh environment is required, robot manipulators can handle strenuous and dangerous work that is difficult or impossible for human operators. However, robot manipulators cannot completely replace the human, because robots lack high-level learning and inference ability. Therefore, interaction between the human and the robot can take the best of both human and robot capabilities. Successful applications of human-robot interaction (HRI) system design include physiotherapy [4], human amplifiers [5], and assistive haptic devices [6]. In this paper, we employ the two-level design framework [7] for HRI systems with novel outer-loop and inner-loop designs to obviate the requirement of velocity measurement and complete knowledge of the system dynamics.

Related Work: The interaction of the robot with the human is different from that between the robot and the environment. When the robot only interacts with the environment, the control system is task-oriented and is designed based on feedback from the environment. Classical manipulator controller design mainly focuses on stability considerations [8], [9], or performance optimization with a given objective index [10]. In contrast, HRI is more useful since human intention and experience can be taken into account and exploited to guide the controller design in cooperation with the robot [11]. For example, in some cases, the reference cannot be measured directly by the sensors of the robot; human vision and tactile sensation can then provide guidance for the robot controller design. In this paper, we exploit the human to perceive the task-oriented reference, which is allowed to be unknown to the robotic manipulator.

Another motivation for human-robot collaboration is the high-level performance planned by the human. Stability is a basic-level requirement for controller design. High-level performance specifications, such as the desired reference and constraints, might not be specified directly to the robot manipulator but can be perceived by the human and transmitted to the robotic manipulator via interaction. In robot manipulator controller design, the impedance model is widely used to achieve the desired spring-damping dynamics from the environment interaction to the reference tracking error [12]–[14]. In this way, robustness, feasibility, and safety of the robot control system can be further guaranteed [15]–[17]. Robot impedance control using advanced methods can be found in [18], [19]. In impedance control design, the impedance model is usually known a priori. However, the desired impedance model is usually not unique. In this paper, the human is involved in the performance specification to determine the optimal impedance parameters with respect to the target of human effort minimization and perfect reference tracking. In contrast to existing results, such as [11], [20], the presented manipulator torque input design does not require measurement of the robotic manipulator end-effector velocity while still guaranteeing satisfactory results.

Reinforcement learning (RL), a methodology that investigates performance optimization through the interaction between an intelligent agent and its environment [21], has been widely used in sequential decision problems, such as optimal control [22]–[25], robust stabilization [26], [27], and differential games [28]–[30], as well as real-world applications, including transportation systems [31] and cruise control [32], to name a few. The off-policy RL method, a model-free version of the RL algorithm, allows one to learn and improve a policy different from the policy applied to the system [33], [34]. This feature obviates the requirement of knowledge of the system dynamics and makes it possible to learn about the consequence of a policy without even applying it to the system. In addition, in some applications, the task for the robotic manipulator might not be repetitive, so iterative methods that rely on data resampling are not suitable for online operation. The off-policy RL technique is able to efficiently reuse the sampled data for learning purposes [35]. That is, the learning phase and the data generation process are completely decoupled, and resampling is no longer required. In this paper, we exploit the off-policy RL technique for the task-oriented impedance optimization design to optimize the impedance parameters for a given performance.

Presented Design: In this paper, we develop a novel two-level HRI system with task-oriented impedance optimization and plant-oriented adaptive robot controller design. For task-oriented impedance optimization, the human effort is minimized with unknown human dynamics, and the impedance optimization is achieved using the data-driven off-policy RL technique. In the plant-oriented manipulator controller design, adaptive control is developed to compensate for the unknown robotic dynamics, with an additional velocity-free filter to avoid the requirement of end-effector velocity measurement. The interaction between the human and the robot is taken into account for the robot manipulator design. The overall design scheme is given in Fig. 1; it assists the human in achieving a specific task with minimized effort and reduced measurement demand, and without requiring knowledge of the HRI model.

Fig. 1. The hierarchical human-robot interaction algorithm. 1) Inner-loop design: The unknown robot dynamics (8) from the human force xh to the Cartesian end-effector position x behave like the prescribed robot impedance model (11). 2) Outer-loop design: The red dotted line represents the interaction between the human and the robot.

    Fig. 2. Impedance following in plant-oriented inner-loop design.

Contributions: The contribution of this paper is two-fold. First, the off-policy RL method is used for the HRI task-oriented controller design, where the behavior policy generates the online data for learning, and the target policy is updated in the learning phase. In contrast to the partially model-free RL method developed in [7], knowledge of the human and impedance dynamics can be completely obviated while optimizing the impedance parameters to minimize the human effort and achieve perfect reference tracking. Second, in the HRI plant-oriented inner loop, a nonlinear sliding mode synchronization error is defined for the adaptive impedance controller design. Differently from [11], the plant-oriented adaptive impedance design does not require velocity measurement of the manipulator end-effector. In addition, the presented adaptive velocity-free robot controller guarantees asymptotic impedance-following performance.

The remainder of this paper is organized as follows. In Section II, we briefly review the dynamic model of the HRI system in the task space. In Section III, we define a novel nonlinear sliding mode synchronization error and present the velocity-free controller design for the robotic manipulator. In Section IV, the data-driven off-policy RL algorithm is used to avoid the requirement of complete system dynamics for the impedance optimization, aiming at minimizing the human effort and achieving perfect reference tracking. The presented inner- and outer-loop designs are verified in Section V using a two-link planar robot manipulator. Experimental results using the developed HRI framework are investigated in Section VI. Concluding remarks are drawn in Section VII.

    II. PROBLEM FORMULATION

    A. Preliminaries

    The following notations and inequalities from [36] are required for subsequent analysis.

Consider x = [x1 ··· xn]^T ∈ R^n and y = [y1 ··· yn]^T ∈ R^n. First, tanh(·) and cosh(·) are the scalar hyperbolic functions. Define the following two vector/matrix functions

The following facts about Tanh(x) and Cosh(x) hold.
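The displayed definitions of the vector/matrix hyperbolic functions did not survive extraction, but the usual convention in the cited literature is elementwise: Tanh(x) stacks tanh(xi) into a vector and Cosh(x) collects cosh(xi) on a diagonal. A minimal sketch assuming those standard definitions:

```python
import numpy as np

def Tanh(x):
    """Vector hyperbolic tangent: Tanh(x) = [tanh(x1), ..., tanh(xn)]^T."""
    return np.tanh(x)

def Cosh(x):
    """Diagonal hyperbolic cosine matrix: Cosh(x) = diag(cosh(x1), ..., cosh(xn))."""
    return np.diag(np.cosh(x))

x = np.array([0.5, -1.0, 2.0])
v = Tanh(x)        # 3-vector, each entry in (-1, 1)
D = Cosh(x)        # 3x3 diagonal matrix, diagonal entries >= 1
```

Under these definitions, Tanh(0) = 0 and Cosh(0) = I, which is consistent with the boundedness facts typically invoked in the subsequent Lyapunov analysis.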

    B. System Description

    1) Robot Manipulator Models in Joint and Task Spaces

Consider the dynamical model of an n-link robot manipulator in the joint space

where q ∈ R^n is the position of the robot arm in the joint space, n is the degree of freedom, Mq(q) ∈ R^{n×n} is the symmetric positive definite inertia matrix, Cq(q, q̇) denotes the Coriolis-centrifugal matrix, Fq(q̇) is the Coulomb friction term, Gq(q) is the vector of gravitational forces, τq is the generalized input torque acting at the joints, Kh is the gain, xh is the human control effort, and J is the Jacobian matrix.

It is desired to have a description of the end-effector position x in the task space rather than the joint variable q in the joint space. Consequently, define the mapping

where x ∈ R^n denotes the position of the end-effector in the n-dimensional task space and φ(·): R^n → R^n is a nonlinear transformation. Using the chain rule, one has

where J(q) ∈ R^{n×n} is the Jacobian matrix defined as

Suppose that the task space and the joint space have the same dimension. Then, the Jacobian matrix is a square matrix. Further assume that the Jacobian matrix is non-singular. Thus, the dynamics of the end-effector in the task space can be written as [17]
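The chain rule ẋ = J(q)q̇ underlying the task-space model can be made concrete for a two-link planar arm. The link lengths below are illustrative, not the paper's values; the mapping φ(·) plays the role of the transformation in (5):

```python
import numpy as np

L1, L2 = 1.0, 0.8  # illustrative link lengths

def phi(q):
    """Forward kinematics x = phi(q) of a two-link planar arm."""
    q1, q2 = q
    return np.array([L1*np.cos(q1) + L2*np.cos(q1 + q2),
                     L1*np.sin(q1) + L2*np.sin(q1 + q2)])

def jacobian(q):
    """Analytic Jacobian J(q) = d(phi)/dq; square, singular only at q2 = 0 or pi."""
    q1, q2 = q
    return np.array([[-L1*np.sin(q1) - L2*np.sin(q1+q2), -L2*np.sin(q1+q2)],
                     [ L1*np.cos(q1) + L2*np.cos(q1+q2),  L2*np.cos(q1+q2)]])

q, qdot = np.array([0.3, 0.7]), np.array([0.1, -0.2])
xdot = jacobian(q) @ qdot   # chain rule: task-space velocity
```

Note that det J(q) vanishes at the stretched-out configuration q2 = 0, which is exactly the singular case excluded by the assumption above.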

    According to [36], [37], the robot manipulator dynamics in the joint space and the task space have the following properties.

Property 1: For the inertia matrix Mx(q), there exist positive constants m1 and m2 such that

    Additionally, the following common assumption in existing robot manipulator controller designs is utilized in this paper.

Assumption 1: There exist constants βm, βg, βc such that

hold, ∀ qa, qb, qc ∈ R^n, where xa = φ(qa) and xb = φ(qb), with the mapping φ(·) defined in (5).

Remark 1: The task space robot manipulator model (8) considered in this paper has been widely investigated in the existing literature [7], [17], [37]. However, this task space model requires the task space and the joint space to have the same dimension, and the Jacobian matrix is also assumed to be non-singular. Special cases with a singular Jacobian matrix should be taken into account [39]. Future work aims to deal with the ill-conditioned Jacobian matrix.

    2) Robot Impedance Model

    Consider the robot impedance model in the Cartesian space

where um and xm are the input and output of the prescribed robot impedance model, respectively, and Mm, Bm, and Km are the desired inertia, damping, and stiffness parameter matrices, respectively. Augmenting the impedance model with the state yields the dynamics as
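The augmented equation itself was lost in extraction, but the standard construction stacks z = [xm; ẋm] so that Mm ẍm + Bm ẋm + Km xm = um becomes ż = Az + B um. A minimal sketch with illustrative impedance matrices (not the optimized values learned later in the paper):

```python
import numpy as np

n = 2
Mm, Bm, Km = np.eye(n), 4.0*np.eye(n), 4.0*np.eye(n)  # illustrative parameters

Minv = np.linalg.inv(Mm)
# Augmented state z = [xm; xm_dot]:  z_dot = A z + B um
A = np.block([[np.zeros((n, n)), np.eye(n)],
              [-Minv @ Km,       -Minv @ Bm]])
B = np.vstack([np.zeros((n, n)), Minv])

# Forward-Euler simulation of the impedance response to a constant input um
z, um, dt = np.zeros(2*n), np.array([1.0, 0.5]), 1e-3
for _ in range(20000):
    z = z + dt * (A @ z + B @ um)
xm = z[:n]   # approaches the static response Km^{-1} um for this stable choice
```

For this spring-damper choice the model is critically damped, which is the kind of well-behaved target dynamics the inner loop is later asked to reproduce.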

    4) Human Model

Human control characteristics are complex because various kinds of compensators, such as oculomotor control, proprioceptive control, and neuromuscular control, are related to each other [40]. The neuromuscular dynamics are modeled as a first-order lag [41] with a proportional-derivative (PD) controller [42]. In this paper, we employ the human transfer function from [7]

    with the equivalent state space representation

where Kd and Kp are the human dynamics parameters, xh is the human effort, and uh is the input to the human, to be designed later.

    C. Human-Robot Interaction Problem

Problem 1 (HRI with Prescribed Performance): Consider the HRI systems (8)–(19) and the reference signal xd. It is desired to design the input torque of the robot manipulator to assist the human operator in performing a given task with minimum effort and to optimize the overall HRI performance.

1) Find the torque input τ in (8), without measurement of the velocity ẋ, such that the model-following error e(t) → 0 as t → ∞, where the model-following error e is defined as e = xm − x.

2) Find the optimal impedance model parameters {Mm, Bm, Km} for (11) and design the impedance control input um in (11) such that the performance index

    is minimized.

Remark 2: Note that the control objective of achieving x → xd is separated into two subtasks, x → xm and xm → xd, as in Problem 1. The human plays a vital role in the hierarchical framework and can improve the robot tracking performance and significantly reduce the system requirements, which is important in many scenarios. For example, high-accuracy sensing devices are costly, and sensors cannot completely avoid measurement errors. Therefore, it is difficult for the robot to perceive and achieve the original task directly. The involvement of the human can greatly help the robot to perceive the environment.

In the following, Problem 1 is solved by using a hierarchical human-robot interaction design scheme, which consists of

a) an inner-loop design in Section III to guarantee that the unknown robot dynamics (8) from the human force xh to the Cartesian end-effector position x behave like the prescribed robot impedance model (11);

b) an outer-loop design in Section IV to help the human minimize the effort needed to complete the task through interaction with the robot.

The overall hierarchical human-robot interaction design is illustrated in Fig. 1.

    III. INNER LOOP DESIGN USING PRESCRIBED PERFORMANCE ADAPTIVE CONTROL

In this section, the inner-loop design on the robot level is presented to guarantee that the model following from the human effort xh to the robot manipulator end-effector x behaves like the impedance model from xh to xm, as shown in Fig. 2. In the inner loop, the interaction between the human and the robot is exploited for the robotic manipulator input torque design.

    Fig. 3. Task-oriented outer-loop design.

    A. Velocity-Free Filter Design

    In order to obviate the requirement of the velocity measurement, inspired by [36], we design the velocity-free filter
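The filter equations (21) themselves did not survive extraction, so the idea is illustrated here only in generic form: a velocity surrogate can be recovered from position measurements alone by passing them through a first-order "dirty derivative" filter. This is a conceptual sketch, not the paper's filter; the gain k is an illustrative choice:

```python
import numpy as np

def dirty_derivative(x_samples, dt, k=50.0):
    """First-order velocity surrogate using only position samples x.

    Internal state w follows  w_dot = -k*w + k^2 * x, and the estimate is
    v_hat = k*x - w, i.e., x filtered through k*s/(s + k). No differentiation
    of the measurement is ever performed.
    """
    w = k * x_samples[0]               # initialize so v_hat(0) = 0
    v_hat = np.zeros(len(x_samples))
    for i, x in enumerate(x_samples):
        v_hat[i] = k * x - w
        w += dt * (-k * w + k**2 * x)  # forward-Euler update of the filter state
    return v_hat
```

For signal frequencies well below k, the output closely tracks ẋ, which is the property a velocity-free inner loop exploits in place of a tachometer or numerical differentiation.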

    B. Synchronization Error Dynamics

    From (22), one has

    C. Robot Dynamics With Matched Condition

    In the following, we investigate the case when the robot dynamics satisfies the matched condition.

Assumption 2: The robotic dynamics (8) can be parameterized as

Assumption 2 indicates that the robot dynamics can be linearly parameterized, which is a commonly used assumption [1], [14], [37], [43]. By adding and subtracting the terms Cx(q, q̇)r and Ymθ in (32), one has

From (22) and the fact that e = xm − x, adding and subtracting [Tanh(e) + Tanh(ε)] to π, and applying Property 5 to π yield

    Adding and subtracting the following term

Inserting (40) into (38), the dynamics of the synchronization signal r can be determined as

Then, the input torque design (33) and the adaptive parameter update (64) guarantee that the model-following error e converges to the origin asymptotically.

Proof: Inserting the input torque (33) into (41), one can obtain

Applying Property 2 and expanding the terms in the bracket in (49), one has

    After some manipulations and applying Property 1, the Lyapunov function candidate derivative can be determined as

With the parameter design (45), one has −kn(αη + αχ)² = 1 − km1. Then, by completing the squares on the right-hand side of (52), one can obtain

    D. Robot Dynamics with Unmatched Condition

    In the following, we investigate the case when the robot dynamics does not satisfy the matched condition.

Assumption 3: The robotic dynamics (8) can be parameterized as

Theorem 2: Consider the parameter design condition

Then, the input torque design (33) and the adaptive parameter update (64) guarantee that the model-following error e converges to a small neighborhood of the origin.

Proof: Inserting the input torque (33) into (56) yields

Consider the Lyapunov function candidate defined in (47). Differentiating Vz yields

    Considering the fact (57) and using similar manipulations as in Theorem 1, one can obtain

    Following the steps (52) and (53), one has

where B is defined in (53). Following the discussions in Theorem 1, the boundedness of e can be guaranteed.

Remark 3: For implementation purposes, the parameter estimation can be determined by the calculation as

where Ym and the remaining terms are defined in (36). As shown in (34), the adaptive parameter learning depends on the synchronization signal r, which is determined by the velocity tracking error ė (22). However, from (64), by using the velocity-free filter (21), only the filter output y and the model-following error e are required to update the parameter estimation. On this basis, the input torque design (33) does not require the velocity measurement ẋ. Therefore, the filter design (21) is completely velocity-free.

Remark 4: It is worthwhile to discuss the inner-loop design in terms of the following aspects.

1) The results in this section are mainly concerned with the scenario where the Cartesian task space and the joint space have the same dimension. For under-actuated and redundant robots, the joint space dimension mismatches the Cartesian space dimension, and the Jacobian matrix is not a square matrix. In these cases, typical approaches, such as direct kinematic inversion in the position regime and the Jacobian transformation in the velocity regime [44], can be used to determine the Jacobian matrix.

2) For the Cartesian space robot manipulator dynamics (8), both the matched parameterization (Assumption 2) and the unmatched parameterization (Assumption 3) are investigated, in Sections III-C and III-D, respectively.

    IV. OUTER LOOP DESIGN USING OFF-POLICY REINFORCEMENT LEARNING

In this section, we present the feedback and feedforward designs for the impedance model and the human input such that the closed-loop system is a linear dynamical system with an external control input in feedback form, as shown in Fig. 3.

Fig. 4. The simulation results of the outer loop. xd represents the target trajectory, and xm is the output of the impedance model. ed = [ed1 ed2]^T and ėd are the position tracking error and the velocity tracking error of the presented design, respectively. xh = [xh1 xh2]^T denotes the human force input. (a) The convergence process of the iterative matrix P(k). (b) The convergence process of the iterative matrix K(k). (c) The trajectory of xm1. (d) The trajectory of xm2. (e) The tracking error of xm1. (f) The tracking error of xm2. (g) The human force input. (h) The human force total input.

    A. Outer Loop Closed-Loop Dynamics

1) Impedance Input Design: Recall the impedance model (11), which can be rewritten as

    B. Performance Optimization for Outer Loop

Denote the augmented state as X. From (70) and (17), the dynamics of X can be expressed as

Inserting now (76) into (17), one has the impedance control input ue in the feedback form

    with the feedback gain

Theorem 3: Consider the closed-loop system (77) with the impedance model (12) and human operator dynamics (19). Then, the optimal gain K* which minimizes the performance index (20) is given by

where P is the solution to the algebraic Riccati equation (ARE)

with Q = diag(γ1 In, γ2 In, γ3 In) and R = γ4 In. Then, the optimal feedback control that solves Problem 1 is determined by

Proof: First, the outer-loop design in Section IV-A yields the closed-loop dynamics as a linear dynamical system (77) with linear feedback (78). Based on linear quadratic (LQ) optimal control theory [45], the ARE solution provides the optimal feedback policy that minimizes the performance index (20). Therefore, the optimal feedback policy for the closed loop takes the form in (81). ■
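When the augmented dynamics {A, B} are known, the ARE of Theorem 3 and the gain K* = R⁻¹BᵀP can be computed directly; the model-free learning of Section IV-C removes exactly this model requirement. A sketch on an illustrative system (not the paper's augmented dynamics), using SciPy's continuous-time Riccati solver:

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Illustrative dynamics and weights (not the paper's exact {A, B}); the
# Q/R magnitudes mirror the gamma-weighted structure of (20).
A = np.array([[0.0, 1.0], [0.0, 0.0]])   # double integrator
B = np.array([[0.0], [1.0]])
Q = np.diag([700.0, 1.0])
R = np.array([[600.0]])

P = solve_continuous_are(A, B, Q, R)     # solves A^T P + P A - P B R^-1 B^T P + Q = 0
K = np.linalg.solve(R, B.T @ P)          # optimal gain K* = R^{-1} B^T P

# Riccati residual should vanish at the solution, and A - B K should be Hurwitz
residual = A.T @ P + P @ A - P @ B @ np.linalg.solve(R, B.T @ P) + Q
```

The same P and K are what the off-policy iteration of Section IV-C converges to from data alone, which is why the residual check is a useful sanity test for any learned solution.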

Remark 5: Note that in (79), once we obtain the optimal feedback gain K*, the impedance parameters Bm and Km appear in bilinear form. To simplify the impedance parameter calculation, without loss of generality, we fix the parameter Mm and determine the impedance parameters Bm and Km according to (79).

Remark 6: The overall design target x → xd is achieved by the hierarchical outer-loop design (which guarantees xm → xd) and the inner-loop design (which guarantees x → xm) as follows.

1) From (16), one has ed = xd − xm. Therefore, the human input uh = Ke ed only depends on the desired reference signal xd and the impedance model output xm. It should be noted that the robot end-effector position x is not required for the human input design uh = Ke ed. As discussed in Theorem 3, the outer-loop design guarantees the optimality of the performance V (20) and the convergence of the impedance tracking error ed.

2) From (33) and the fact that e = xm − x, the robot torque input design τx depends on {ε, xm, x}. As discussed in Theorems 1 and 2, the inner-loop design can guarantee the boundedness of the model-following error e.

    C. Model-Free Learning With Complete Unknown Dynamics


Corollary 1: Algorithms 1 and 2 are equivalent to each other in the sense that the off-policy Bellman equation (84) provides the same iterative value function P(k) as the solution to the on-policy Bellman equation (82), and the same iterative policy gain update K(k+1) as the on-policy RL case in (83).

Proof: The proof is similar to that in [29] and is omitted here. ■

Remark 7: In [7], the integral RL method is developed for the outer-loop design, which is a partially model-free method; i.e., A is not needed, but B is still required for the policy improvement. In contrast, in this paper, the off-policy RL based outer-loop design does not require the augmented system dynamics {A, B}. From the definitions of {A, B} in (77), one can conclude that neither the complete impedance dynamics in (13) nor the human dynamics {Ah, Bh} in (70) are required in this paper, which makes the outer-loop design completely model-free.

Remark 8: To guarantee the existence and uniqueness of the solution to the LS equation (92), probing noise is added to the behavior policy ue to guarantee that the data matrix Γ contains sufficiently rich information and is nonsingular. Such a requirement is closely related to the persistent excitation condition in adaptive parameter estimation methods [29].
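The role of the probing noise can be illustrated numerically: without excitation, the collected feature rows are linearly dependent and the LS data matrix is rank-deficient, while a sum of sinusoids makes it full rank. The dynamics, feature basis, and noise below are illustrative stand-ins, not the paper's Γ from (92):

```python
import numpy as np

def quad_features(x):
    """Quadratic basis [x1^2, x1*x2, x2^2] that parameterizes a 2x2 value matrix P."""
    return np.array([x[0]**2, x[0]*x[1], x[1]**2])

def data_matrix(inputs, dt=0.01):
    """Collect feature rows from a double integrator driven by the input sequence."""
    x, rows = np.array([1.0, 0.0]), []
    for u in inputs:
        x = x + dt * np.array([x[1], u])   # simple dynamics, for illustration only
        rows.append(quad_features(x))
    return np.array(rows)

t = np.arange(2000) * 0.01
probing = np.sin(3*t) + 0.5*np.sin(7*t) + 0.2*np.sin(11*t)  # sum of sinusoids
silent = np.zeros_like(t)                                    # no excitation

rank_probing = np.linalg.matrix_rank(data_matrix(probing))
rank_silent = np.linalg.matrix_rank(data_matrix(silent))
```

With no excitation the state never moves, every feature row repeats, and the LS problem is singular; sinusoids at several distinct frequencies are a common practical choice for restoring persistent excitation.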

    V. SIMULATION STUDY

In this section, the presented design scheme is verified under a real-world scenario of a two-link planar robot manipulator following a target reference signal. The reference signal is designed as a periodic square wave with an amplitude of 0.5. The simulation is carried out using the Runge-Kutta method with an adaptive step size in MATLAB.

In the outer loop, xd is the reference trajectory, and xm is the impedance model output. The target in the outer-loop design is to achieve xm → xd. Once this target is achieved, the impedance model output xm provides the reference for the robotic manipulator to follow. The reference signal xd is assigned as a square wave, as shown in Figs. 4(c) and 4(d). According to equation (70), the human force is calculated with uh being the input and xh being the output.

    A. Outer Loop Design

In the outer-loop design, the system matrix is defined as in (77). The first 0.5 s of the simulation is the time for data collection. The human input is calculated according to (70).

Next, we consider the performance function (20) with γ1 = 700, γ2 = 1, γ3 = 1, γ4 = 600. Here γ1–γ3 represent the state regulation performance weights, including the weight on ‖xh‖², and γ4 denotes the control cost penalty weight on ‖ue‖². The design of γ1–γ4 is a trade-off between the control performance and the control cost penalty. Then, the performance optimization in the outer loop can be formulated as the linear quadratic regulation (LQR) problem

where Q = diag(γ1 In, γ2 In, γ3 In) and R = γ4 I2. Based on LQ optimal control theory [45], the necessary and sufficient condition for the above LQR problem is the ARE (80), which can be solved using complete model knowledge as

In the outer-loop design, we employ the data-driven method to obviate the requirement of complete model knowledge for solving the ARE. For the off-policy RL method, the online data are generated using the behavior policy, which is chosen as follows for this case,

After enough data are collected, the learning phase is launched by solving the LS equation (92) iteratively, as shown in Algorithm 2. After 15 iterations, the approximate solution to the ARE is

which is close to the model-based solution {P*, K*} of the ARE (80).

The evolution of the signals in the outer loop is shown in Fig. 4. The convergence processes of the iterative matrices P(k) and K(k) are shown in Figs. 4(a) and 4(b). It can be seen that both P(k) and K(k) converge toward the optimum within 3 iterations. Figs. 4(c) and 4(d) reflect the evolution of xm. The signals ed and ėd are shown in Figs. 4(e) and 4(f), respectively, from which it can be seen that probing noise is added to the behavior policy during [0, 0.5] s for the off-policy RL method to guarantee the existence and uniqueness of the solution to the LS equation (92). The human force in the HRI is displayed in Fig. 4(g). Finally, in Fig. 4(h), it can be seen that the outer-loop state X in (77) converges to the origin and the learned policy stabilizes the closed-loop system.

    B. Inner Loop Design

In the inner-loop design, xm is the impedance model output and x is the robotic manipulator end-effector position. The inner-loop design aims to achieve x → xm. The impedance model output xm is shown in Figs. 4(c) and 4(d).

Next, we consider the robot manipulator [37] with joint-space dynamics as in (4)

where s1 = sin(q1), s12 = sin(q1+q2), c1 = cos(q1), and c12 = cos(q1+q2). Following (9), one can obtain the Cartesian space manipulator dynamics (8). For more details about the transformation from the joint-space model to the Cartesian space model, interested readers are referred to [37]. Finally, we select the torque parameter in (33) as k = [75 75]^T and the adaptive parameter in (34) as Γ = [5 5 5 5 5 5 5]. The value of k affects the convergence of the tracking error: increasing k accelerates the convergence, but a large value of k leads to a large control effort in the manipulator torque input. Therefore, the selection of k should guarantee the boundedness of the closed-loop signals and a satisfactory control effort. Additionally, the presented inner-loop design is compared with the existing robot manipulator input torque design [20].
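The numeric joint-space model from [37] did not survive extraction. As a stand-in, the standard point-mass two-link planar dynamics can be sketched with illustrative masses and lengths (not the paper's values), using the same s1, s12, c1, c12 shorthand:

```python
import numpy as np

# Illustrative parameters (not the values used in the paper's simulation)
m1, m2, l1, l2, g = 1.0, 1.0, 1.0, 0.8, 9.81

def Mq(q):
    """Joint-space inertia matrix of a two-link planar arm with point-mass links."""
    c2 = np.cos(q[1])
    a = m2 * l1 * l2 * c2
    return np.array([[(m1+m2)*l1**2 + m2*l2**2 + 2*a, m2*l2**2 + a],
                     [m2*l2**2 + a,                   m2*l2**2]])

def Cq(q, qd):
    """Coriolis/centrifugal matrix in one standard (skew-symmetry-preserving) factorization."""
    h = m2 * l1 * l2 * np.sin(q[1])
    return np.array([[-h*qd[1], -h*(qd[0] + qd[1])],
                     [ h*qd[0],  0.0]])

def Gq(q):
    """Gravity vector, written with s1 = sin(q1) and s12 = sin(q1+q2)."""
    s1, s12 = np.sin(q[0]), np.sin(q[0] + q[1])
    return np.array([(m1+m2)*g*l1*s1 + m2*g*l2*s12,
                     m2*g*l2*s12])
```

This sketch satisfies the structural properties the analysis relies on: Mq(q) is symmetric positive definite, and Ṁq − 2Cq is skew-symmetric for this factorization of the Coriolis terms.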

1) Case 1 (Matched Condition): In this case, we investigate task space robotic dynamics satisfying the matched condition, as discussed in Section III-C. That is, the robotic dynamics can be linearly parameterized under Assumption 2. The simulation results of the inner-loop design are shown in Fig. 5. Figs. 5(a) and 5(b) reflect the tracking results of the manipulator end-effector. Figs. 5(c) and 5(d) show the tracking errors of the presented design and the method in [20], respectively. The average tracking accuracy of the proposed HRI design is 2.4%, in contrast to 4.5% using the method in [20], from which one can observe that, with the presented HRI design, the manipulator tracking error has fewer fluctuations. In addition, compared to [20], our feedback design does not require velocity measurement. These results indicate that the presented inner-loop design under Assumption 2 can guarantee satisfactory tracking performance for the manipulator.

Fig. 5. The simulation results of Case 1. xd represents the target trajectory, and xm is the output of the impedance model. x and xc are the trajectories of the method presented in this paper and the method developed in [20], respectively. em = [em1 em2]^T and emc = [emc1 emc2]^T are the tracking errors of the presented design and the method in [20]. τ = [τ1 τ2]^T denotes the manipulator torque input. (a) The trajectories of x1 and xm1. (b) The trajectories of x2 and xm2. (c) The tracking error of the manipulator end-effector in one dimension. (d) The tracking error in the other dimension. (e) The manipulator torque input in one dimension. (f) The manipulator torque input in the other dimension.

2) Case 2 (Unmatched Condition): This case verifies the situation described in Section III-D, in which the robotic task space model cannot be linearly parameterized, as stated in Assumption 3. The manipulator model is considered as

where τd is the selected disturbance term. From the above equation, one can observe that the manipulator dynamics cannot be linearly parameterized due to the existence of τd.

The signals in the inner-loop design are shown in Fig. 6. Figs. 6(a) and 6(b) show the tracking performance of the manipulator end-effector. In Figs. 6(c) and 6(d), the tracking errors generated by the presented design and the method in [20] are displayed, respectively. The average tracking accuracy of the proposed HRI design is 2.1%, in contrast to 8.4% using the method in [20]. It can be seen that, with the proposed HRI design, satisfactory tracking performance is achieved in the presence of τd. In contrast, for the method in [20], the existence of τd results in large fluctuations. The manipulator input τ = [τ1 τ2]^T is shown in Figs. 6(e) and 6(f). From Fig. 6, the presented HRI design handles the unmatched manipulator dynamics more effectively than the method in [20].

Fig. 6. The simulation results of Case 2. xd represents the target trajectory, and xm represents the output of the impedance model. x and xc are the trajectories of the method presented in this paper and the compared method in [20], respectively. em = [em1 em2]^T and emc = [emc1 emc2]^T are the tracking errors of the presented design and the method in [20]. τ = [τ1 τ2]^T represents the manipulator input. (a) The trajectories of x1 and xm1. (b) The trajectories of x2 and xm2. (c) The tracking error of the manipulator end-effector in one dimension. (d) The tracking error in the other dimension. (e) The manipulator torque input in one dimension. (f) The manipulator torque input in the other dimension.

3) Case 3 (Robustness Against Disturbance): In this case, we verify the robustness of the proposed HRI design against disturbance, where an additional noise signal is added to the manipulator input. The noise under consideration is a square wave signal with an amplitude of 50 during 10 s – 15 s.

The simulation results are shown in Fig. 7. Figs. 7(a) and 7(b) show the tracking results of the presented HRI design and the method in [20], respectively. Figs. 7(c) and 7(d) display the tracking errors, where the average tracking accuracy of 2.1% for the proposed HRI design outperforms the 4.3% of the method in [20]. Therefore, the presented HRI design is more robust than the method in [20]. Figs. 7(e) and 7(f) show the evolution of the manipulator input. These results verify the robustness of the presented HRI design against the disturbance.

    Fig. 7. The simulation results of Case 3. xd is the target trajectory, and xm is the impedance model output. x and xc are the trajectories obtained by the presented method and by the compared method in [20], respectively. em = [em1 em2]^T and emc = [emc1 emc2]^T are the tracking errors of the presented design and of the method in [20]. τ = [τ1 τ2]^T is the manipulator input. (a) The trajectories of x1 and xm1. (b) The trajectories of x2 and xm2. (c) The tracking error of the manipulator end-effector in one dimension. (d) The tracking error of the manipulator end-effector in the other dimension. (e) The manipulator torque input in one dimension. (f) The manipulator torque input in the other dimension.

    VI. EXPERIMENT STUDY

    In this section, an experiment is conducted to validate the presented HRI design, which is implemented on the Universal Robot 5 (UR5) platform shown in Fig. 8. The UR5 has a working-space range of 0.85 m and an accuracy of 0.3 mm at the actuator. The PC configuration is an Intel(R) Core(TM) i7-7700K 4.20 GHz CPU, 16 GB memory, a 2 TB mechanical hard disk, and an NVIDIA GeForce GTX 1080 Ti discrete graphics card with 8 GB video memory. The PC is connected to the UR5 robot controller through a local area network. To be consistent with the two-link planar robot manipulator in Section V, in this experiment we keep only two joints under control and fix the remaining joints during the operation task.

    Fig. 8. The Universal Robot 5.

    In the experiment, the manipulator under the HRI system control assists the human operator in accomplishing the task of moving between two points in the plane. The experimental task is the same as in the simulation, as shown in Fig. 9. Fig. 9(a) shows the task setting, where the end-effector is specified to move between point A and point B. Fig. 9(b) shows the starting location and Fig. 9(c) shows the terminal location. Figs. 9(d) and 9(e) display the tracking performance of the manipulator end-effector in the experiment, where xd = [xd1 xd2]^T is the desired position, xm = [xm1 xm2]^T denotes the output of the impedance model, and x = [x1 x2]^T denotes the measured position of the robot manipulator. Figs. 9(d) and 9(e) show the position of the robot's end-effector in the two directions, respectively. The average tracking accuracy of the manipulator during the whole operation is 2.2%.

    Fig. 9. The process and the result of the experiment. (a) Task setting. (b) Starting position of the task. (c) Terminal position of the task. (d) The position of the robot's end-effector in one direction. (e) The position of the robot's end-effector in the other direction.

    VII. CONCLUSIONS

    This paper investigates the controller design for the HRI system with consideration of both performance optimization and reference tracking. The HRI design consists of two counterparts: the task-oriented outer-loop design and the plant-oriented inner-loop design. In the outer loop, the human effect is taken into account in the impedance model parameter optimization to minimize the human effort. In the inner loop, the robot manipulator controller is developed to guarantee that the robot manipulator behaves like the optimized impedance model. The requirement of model knowledge in the outer-loop design is obviated by using the data-driven off-policy RL method. To avoid the requirement of end-effector velocity measurement, we design a velocity-free filter and an adaptive controller to achieve the desired impedance of the robot manipulator in the task space. Numerical simulation and an experiment on a robot manipulator verify the efficacy of the presented HRI design.

    APPENDIX PROOF OF LEMMAS 1 AND 2

    Proof of Lemma 1: 1) Considering the term χ defined in (42), with Properties 1 and 3 and the properties of hyperbolic functions, one has
