
    Optimal confrontation position selecting games model and its application to one-on-one air combat

Defence Technology, 2024, Issue 1

Zekun Dun, Genjiu Xu b,*, Xin Liu, Jiyun Ma, Liying Wang

    a School of Mathematics and Statistics, Northwestern Polytechnical University, Xi'an 710072, China

    b International Joint Research Center on Operations Research, Optimization and Artificial Intelligence, Xi'an 710129, China

    c AVIC Xi’an Aeronautics Computing Technique Research Institute, Xi'an 710065, China

    d Unmanned System Research Institute, Northwestern Polytechnical University, Xi'an 710072, China

Keywords: Unmanned aerial vehicles (UAVs); Air combat; Continuous strategy space; Mixed strategy Nash equilibrium

ABSTRACT In the air combat process, the confrontation position is the critical factor determining the confrontation situation, attack effect and escape probability of UAVs. Therefore, selecting the optimal confrontation position becomes the primary goal of maneuver decision-making. By taking the position as the UAV's maneuver strategy, this paper constructs the optimal confrontation position selecting games (OCPSGs) model. In the OCPSGs model, the payoff function of each UAV is defined by the difference between the comprehensive advantages of both sides, and the strategy space of each UAV at every step is defined by its accessible space determined by its maneuverability. Then we design the limit approximation of mixed strategy Nash equilibrium (LAMSNQ) algorithm, which provides a method to determine the optimal probability distribution of positions in the strategy space. In the simulation phase, we assume the motions in the three directions are independent and the strategy space is a cuboid to simplify the model. Several simulations are performed to verify the feasibility, effectiveness and stability of the algorithm. © 2023 China Ordnance Society. Publishing services by Elsevier B.V. on behalf of KeAi Communications

1. Introduction

With the development of technology, Unmanned Aerial Vehicles (UAVs) have come into a wide variety of applications and have played an indispensable role in modern air combat. UAV combat is also emerging as an important research issue in modern warfare [1]. Generally, the decisions of UAVs in air combat mainly include tactical decisions [2,3] and maneuver decisions [4]. The tactical decision is a kind of exogenous decision focusing on offensive and defensive behaviors such as fire attack, while the maneuver decision can be regarded as an endogenous decision focusing on gaining an advantage in the confrontation situation by controlling the maneuvers of UAVs. In this paper, we mainly study the latter. In the maneuver decision problem, the confrontation relationship between UAVs is regarded as a pursuit and evasion scenario in which UAVs aim to constantly take the initiative on the battlefield by formulating the optimal flight position. Therefore, the key to this problem is how the aircraft should select the best maneuver strategy.

Common research methods used in decision-making for air combat maneuvers include the optimization theory method [5-8], the expert system method [9-11] and the reinforcement learning method [12-15].

The optimization theory method uses genetic algorithms, Bayesian preference and statistical theory to make the best maneuver decision from a finite and discrete strategy space by optimizing the objective function of the aircraft. However, the real-time performance of many optimization algorithms is poor in large-scale problems. The expert system method uses artificial intelligence and computer technology to summarize a rule base, in which UAVs can find the best strategies for different battle scenarios. Nevertheless, it is difficult and time-consuming to build a complete rule base that can cover the diverse combat situations. The reinforcement learning method continually learns strategies to maximize revenues or achieve specific goals through the interaction between agents and the environment. This approach has been used extensively in air combat decision-making, but it lacks interpretability and stability.

In the combat process, the attack effect and the interception capacity of UAVs are mainly linked to the situation of the UAVs. Therefore, how should the aircraft make the best maneuver decision for maintaining the dominant situation without knowing the opponent's plan [16]? Obviously, this is a game process. Game theory is a mathematical method for studying problems of interactive optimization of decision-making [17]. In comparison with the methods mentioned above, the most important point of game theory is that it takes the uncertainty of the opponents' decision-making into account. From this point of view, the game theory approach is better suited to the study of the optimal interactive maneuver decision of UAVs.

The matrix game is a basic and useful tool to study the maneuver decision problem for UAVs. In 1987, Austin et al. [18] pioneered the framework of the matrix game model to analyze the optimal interactive maneuver decision for air-to-air combat. Subsequently, many researchers [19-22] use matrix games to model the maneuver decision-making process of air combat. This method can only be used to solve the optimal decision in the current situation, but the result is not necessarily the optimal decision over the whole process of air combat. For farsighted decisions, Virtanen et al. [23,24] combine the influence diagram method with the game method, proposing the multistage influence diagram game model for modeling the maneuver decisions of aircraft in one-on-one air combat. Based on this model, some scholars [25,26] expand the air combat scenario from one-on-one to multi-UAV. Compared with other maneuver decision models based on the game approach, this method can reflect the preference of the pilot and performs well under uncertain information, but it is difficult to obtain reliable prior knowledge [27].

In all the above maneuver decision methods, the strategy space of the UAV is generally discretized into a given finite set of actions according to the basic air combat maneuvers defined by NASA [18,28], where the UAV's strategy is composed of dynamic maneuver variables such as overload and rolling angle. However, the finite strategy space limits the flexibility of the UAV's maneuvers and leads to a gap between the simulation environment and the actual environment.

In essence, the confrontation is about finding the best position, and the position is observable compared with the maneuver actions. Additionally, the strategy space should be continuous, given the ever-changing and dynamic character of the battle situation. Therefore, we use the position as the maneuver strategy of UAVs in this paper. Based on the strategy space composed of the positions that the UAV can reach, we introduce a new optimal confrontation position selecting games (OCPSGs) model. In order to solve this game model with a continuous strategy space, we propose the limit approximation of mixed strategy Nash equilibrium (LAMSNQ) algorithm.

Situation assessment plays a vital role in the decision-making process. Huang et al. [6] define a maneuver decision factor to evaluate the situation, which is composed of an angle factor, a height factor, a distance factor and a velocity factor. Austin et al. [28] and Park et al. [29] define scoring functions to evaluate the situation, which are both composed of a distance factor, an angle factor and a velocity factor, but the former has an additional terrain factor. Özpala et al. [30] and Chen et al. [31] define the total superiority and the index of situation, respectively, to evaluate the air combat situation, which also include distance, angle and speed. In the OCPSGs model, we assume the confrontation situation is affected by the relative distance, the angles between the speed vectors and the line of sight (lag angle and lead angle), and the speeds of the UAVs, which are all determined by the positions of both sides. Based on these factors, we define the comprehensive advantage function to evaluate the confrontation situation. The payoff function is defined as the difference between the comprehensive advantages of both sides. This leads to a two-person zero-sum game model, where the strategy is the UAV's position. The strategy space of the UAV at every moment is the accessible space determined by the maneuverability of the UAV. In the decision-making process, the UAV is not able to predict the precise position of the opponent at the next stage. Based on this uncertainty, we can find an optimal probability distribution over all positions in the accessible space to maximize the expected utility of the UAV. Such a probability distribution is actually a mixed strategy in the strategy space. Consequently, the OCPSGs model is transformed into the problem of solving the mixed strategy Nash equilibrium (MSNE).

Generally, linear programming and intelligent search algorithms can be used to solve the MSNE of finite and discrete strategy spaces. However, these two methods are no longer suitable for nonlinear objective functions and continuous strategy spaces. For this reason, the LAMSNQ algorithm is based on integrating the nonlinear objective function. First, we divide the strategy space of each step of the UAV into several small spaces of the same size, and then calculate the optimal probability distribution over the small spaces by maximizing the expected utility. This optimal probability distribution is the approximate mixed strategy Nash equilibrium (AMSNE) of the OCPSGs model. Then, we select the center of the small space corresponding to the maximum probability of the AMSNE as the optimal confrontation position of the UAV. Finally, we run several simulations under different conditions and confrontation scenarios, and the results verify the feasibility, effectiveness and stability of the algorithm.

The remainder of this paper is organized as follows. Section 2 introduces the problem description and related notations. Section 3 proposes the OCPSGs model and designs the LAMSNQ algorithm. Section 4 conducts the simulation analysis. Section 5 concludes the paper.

2. Problem formulation and preliminaries

2.1. Problem description

Let us consider a one-on-one confrontation scenario in a three-dimensional space; the schematic diagram of the UAVs is displayed in Fig. 1. We assume that each UAV knows the other side's position and speed at the current moment, and that one side will launch an attack once the other enters its attack range.

Fig. 1. Geometric relationship of UAVs in the three-dimensional space.

There are two UAVs in the maneuver decision model. Specifically, the confrontation sides are labeled as the red aircraft and the blue aircraft, denoted by r and b, respectively. Each UAV is fully described by its location and velocity: P denotes the position vector and v denotes the speed vector. The maneuver decision process of r and b includes modeling the confrontation situation assessment, constructing the OCPSGs model to describe the interactive process of decision-making, and solving the proposed game model.

Fig. 2. Diagram of variable relationships.

2.2. Kinematics characterization of the UAV and related notations

In this paper, we regard the UAV as a point mass rather than a specific airframe model. For the red force, the position vector and speed vector are given by

$$\mathbf{P}_r = (x_r, y_r, z_r)^{\mathrm{T}}, \qquad \mathbf{v}_r = \dot{\mathbf{P}}_r = (\dot{x}_r, \dot{y}_r, \dot{z}_r)^{\mathrm{T}}$$

where x_r and y_r are horizontal coordinates, z_r is the altitude of r, and Ṗ_r is the derivative of the position vector of r with respect to time.

For the blue, the position vector and the speed vector can be expressed as

$$\mathbf{P}_b = (x_b, y_b, z_b)^{\mathrm{T}}, \qquad \mathbf{v}_b = \dot{\mathbf{P}}_b = (\dot{x}_b, \dot{y}_b, \dot{z}_b)^{\mathrm{T}}$$

Let d_rb denote the distance between r and b along the line of sight, that is

$$d_{rb} = \lVert \mathbf{P}_b - \mathbf{P}_r \rVert = \sqrt{(x_b - x_r)^2 + (y_b - y_r)^2 + (z_b - z_r)^2}$$

Considering that the persistence of maneuvers causes delays in the implementation of strategies, the aircraft needs to make decisions step by step according to the time interval, denoted by Δt, which reflects the dynamic nature of the strategy. Accordingly, we propose the time delay constraint to describe this process, that is, r makes a decision at time t and b reacts at time t + 1.

At time t, the speed vector of aircraft i ∈ {r, b} is given by

$$\mathbf{v}_i^t = \frac{\mathbf{P}_i^{t+1} - \mathbf{P}_i^t}{\Delta t}$$

And the scalar speed of the aircraft is given by

$$v_i^t = \lVert \mathbf{v}_i^t \rVert$$

For reference, a graphical representation of θ_r and θ_b can be seen in Fig. 2; the angles are shown from the point of view of the red. We denote by θ_r the angle between P_b − P_r and v_r, named the lag angle, and by θ_b the angle between P_b − P_r and v_b, named the lead angle. At time t, the lag angle and the lead angle are given by

$$\theta_r^t = \arccos\frac{(\mathbf{P}_b^t - \mathbf{P}_r^t)\cdot \mathbf{v}_r^t}{\lVert \mathbf{P}_b^t - \mathbf{P}_r^t \rVert\,\lVert \mathbf{v}_r^t \rVert}, \qquad \theta_b^t = \arccos\frac{(\mathbf{P}_b^t - \mathbf{P}_r^t)\cdot \mathbf{v}_b^t}{\lVert \mathbf{P}_b^t - \mathbf{P}_r^t \rVert\,\lVert \mathbf{v}_b^t \rVert}$$

By analyzing the equations of the relative distance, the angles between the speed vectors and the line of sight, and the speeds of the aircraft, it can be found that they are related only to the position vectors of both sides. Therefore, we will utilize this property to design the decision-making model of air combat.
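To make this position-only dependence concrete, the following minimal sketch (ours, not from the paper; Python is the language the authors report using for their simulations) computes the relative distance, lag angle and lead angle directly from the position and speed vectors defined above.

```python
import numpy as np

def relative_geometry(P_r, P_b, v_r, v_b):
    """Distance, lag angle (red) and lead angle (blue) per Section 2.2.

    Both angles are measured against the line of sight P_b - P_r, so
    theta_r = theta_b = 0 means the red is tailing the blue.
    """
    los = P_b - P_r                      # line-of-sight vector
    d_rb = np.linalg.norm(los)           # relative distance
    theta_r = np.arccos(np.clip(los @ v_r / (d_rb * np.linalg.norm(v_r)), -1.0, 1.0))
    theta_b = np.arccos(np.clip(los @ v_b / (d_rb * np.linalg.norm(v_b)), -1.0, 1.0))
    return d_rb, theta_r, theta_b

# Example: red behind and below blue, both flying along +X.
P_r, P_b = np.array([0.0, 0.0, 1000.0]), np.array([2000.0, 0.0, 1200.0])
v_r, v_b = np.array([200.0, 0.0, 0.0]), np.array([200.0, 0.0, 0.0])
print(relative_geometry(P_r, P_b, v_r, v_b))  # small theta_r and theta_b: red tails blue
```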

2.3. Assessment of confrontation situation

In this subsection, we define a comprehensive advantage function of the UAV on the basis of the relative distance, the angles between the speed vectors and the line of sight, and the speeds, to evaluate the current situation and guide the maneuver strategy at the next moment.

2.3.1. Distance advantage function

The distance advantage is related not only to the distance between the two sides but also to the attack range of the weapons carried by the UAV [31]. In the confrontation process, the attack probability of the weapons is reduced if the distance between the two sides is too large, while there are security problems if the distance is too small. Therefore, there exists an optimal distance that maximizes the attack probability on the premise of the UAV's security.

For the red, the distance advantage function [32] can be configured as

where K_r is a coefficient denoting the maneuverability of the red and D_r is the optimal attack distance. For the aircraft, a higher K_r value indicates better maneuverability of the UAV. The value of the distance advantage lies in the range (0, 1], reaching its maximum of 1 when d_rb is equal to D_r.
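The exact expression of the distance advantage is given by the paper's Eq. (8); since only its qualitative properties are stated here, the sketch below uses one plausible bell-shaped form, exp(−((d_rb − D_r)/(K_r·D_r))²), chosen solely because it lies in (0, 1], peaks at d_rb = D_r, and flattens for larger K_r. The function name and the exact formula are our assumptions.

```python
import numpy as np

def distance_advantage(d_rb, D=1500.0, K=2.0):
    """Assumed distance advantage: peaks at the optimal attack distance D.

    exp(-((d - D) / (K * D))**2) lies in (0, 1] and equals 1 at d = D;
    a larger maneuverability coefficient K widens the high-advantage band.
    This exact form is an assumption, not necessarily the paper's Eq. (8).
    """
    return np.exp(-((d_rb - D) / (K * D)) ** 2)

print(distance_advantage(1500.0))  # 1.0 at the optimal attack distance
print(distance_advantage(6000.0))  # decays as the opponent gets too far
```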

2.3.2. Angle advantage function

Park et al. [29] propose two necessary conditions for an aircraft's air superiority, that is, it has to be located at the rear of the opponent and be heading toward it. This indicates that the angle has a critical impact on the superiority of the combat situation.

As can be seen from Fig. 2, if θ_r = θ_b = 0, the red pursues the blue and the angle advantage of the red is at its maximum. On the contrary, θ_r = θ_b = π means the red is pursued by the blue, and the angle advantage of the red is zero. If θ_r = 0 and θ_b = π, the angle advantages of both sides are equal, namely, the two sides are evenly matched. Based on the above analysis, we define the angle advantage function [18] of the red as follows:

    and the angle advantage function of the blue is defined by

Therefore, the angle advantage takes values between 0 and 1. For the red, F_ra will be 1 when it is on the tail of b, and 0 when b is on the tail of r.
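As a hedged illustration, the sketch below uses the classic Austin-style scoring F_ra = 1 − (θ_r + θ_b)/(2π), which reproduces every boundary case stated above (1 when tailing, 0 when tailed, 0.5 head-on); whether this is exactly the paper's equation cannot be confirmed from the text, so treat it as an assumption.

```python
import numpy as np

def angle_advantage_red(theta_r, theta_b):
    """Assumed angle advantage of the red: 1 - (theta_r + theta_b) / (2*pi).

    Consistent with the stated cases: 1 when theta_r = theta_b = 0 (red tails blue),
    0 when theta_r = theta_b = pi (blue tails red), 0.5 head-on (theta_r = 0,
    theta_b = pi). This is the Austin-style form the paper cites [18].
    """
    return 1.0 - (theta_r + theta_b) / (2.0 * np.pi)

def angle_advantage_blue(theta_r, theta_b):
    # Complementary form, so both sides score 0.5 in the head-on case.
    return (theta_r + theta_b) / (2.0 * np.pi)

print(angle_advantage_red(0.0, 0.0), angle_advantage_red(0.0, np.pi))  # 1.0, 0.5
```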

2.3.3. Speed advantage function

In addition to distance and angle, speed is also a vital factor in the combat situation. A higher speed gives the aircraft more initiative and makes it easier to approach or escape from the opponent. We denote by v_r and v_b the scalar speeds of the red and the blue, respectively, which are given by

We denote by v_r* the ideal speed of the red,

where v_max is the maximum scalar speed of the UAV. For the red, when the distance between the UAVs is much larger than the optimal attack distance D_r, the red should accelerate to shorten the distance to the opponent, namely, the ideal speed tends to v_max. When d_rb is equal to D_r, v_r* is equal to v_b in order to keep the distance advantage. In the other cases, v_r* lies between v_b and v_max, and its actual value is determined by the distance d_rb. The ideal speed of the blue, v_b*, can be obtained in the same way. Meanwhile, the ideal speed is always larger than the speed of the opponent in order to keep the relative superiority.

For the red, the speed advantage function is defined through the ratio of the actual speed to the ideal speed, which is given by

In this equation, |v_r − v_r*| / v_r* depicts the degree of deviation between the actual speed and the ideal speed of the red. A smaller deviation means a larger speed advantage; that is, the UAV can reach the optimal attack distance faster and form favorable attack conditions in a shorter time. Similarly, we can define the speed advantage of the blue in the same way.
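A sketch of the speed factors under stated assumptions: the ideal speed is interpolated linearly between the opponent's speed (at d_rb = D) and v_max (far beyond D), and the advantage is taken as 1 − |v − v*|/v*. Both the interpolation rule and the clipping to non-negative values are our choices, since the text only fixes the boundary behaviour and the deviation index.

```python
def ideal_speed(v_opponent, d_rb, D=1500.0, v_max=400.0):
    """Assumed ideal speed: equals the opponent's speed at d_rb <= D and tends to
    v_max as d_rb >> D. The linear blend with weight min(d/D - 1, 1) is our
    interpolation; the paper only states the two boundary behaviours."""
    if d_rb <= D:
        return v_opponent
    w = min(d_rb / D - 1.0, 1.0)
    return (1.0 - w) * v_opponent + w * v_max

def speed_advantage(v_actual, v_ideal):
    """Assumed speed advantage built from the stated deviation index |v - v*| / v*."""
    return max(0.0, 1.0 - abs(v_actual - v_ideal) / v_ideal)

print(ideal_speed(200.0, 1500.0))     # 200.0: keep pace at the optimal distance
print(speed_advantage(300.0, 400.0))  # 0.75: 25% below the ideal speed
```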

2.3.4. Comprehensive advantage function

The comprehensive advantage should be determined by the above factors. In this paper, we apply the method of convex combination of the factors [6,15,30,31] to describe the comprehensive advantage function.

Combining the distance advantage, angle advantage and speed advantage, we define

$$F_r = \omega_1 F_{rd} + \omega_2 F_{ra} + \omega_3 F_{rv}$$

to evaluate the real-time comprehensive advantage of the red, and the comprehensive advantage function of the blue is

$$F_b = \omega_1 F_{bd} + \omega_2 F_{ba} + \omega_3 F_{bv}$$

where ω_1, ω_2 and ω_3 are weight coefficients representing the importance of the corresponding advantage factors, with 0 ≤ ω_1, ω_2, ω_3 ≤ 1 and ω_1 + ω_2 + ω_3 = 1.
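Putting the three factors together, a minimal sketch of the comprehensive advantage as a convex combination, with the equal weights used later in Section 4.2.1; the example values are illustrative.

```python
def comprehensive_advantage(F_d, F_a, F_v, w=(1/3, 1/3, 1/3)):
    """Convex combination of distance, angle and speed advantages.

    Weights must be non-negative and sum to 1; the simulations in
    Section 4.2.1 use w1 = w2 = w3 = 1/3.
    """
    assert abs(sum(w) - 1.0) < 1e-9 and all(x >= 0 for x in w)
    return w[0] * F_d + w[1] * F_a + w[2] * F_v

# Payoff of the red is the difference of comprehensive advantages (zero-sum):
# U_r = F_r - F_b and U_b = -U_r.
print(comprehensive_advantage(1.0, 0.5, 0.75))  # 0.75
```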

2.4. Mixed strategy Nash equilibrium on continuous strategy space

In this paper, the UAV's strategy is based on the mixed strategy of the game model. Consequently, we introduce some related descriptions of the mixed strategy and the mixed strategy Nash equilibrium in this subsection.

(Strategic Form Game) A strategic form 2-person game Γ is a tuple 〈{r, b}, S = S_r × S_b, {U_i}_{i=r,b}〉, where

(1) r is the red aircraft and b is the blue aircraft;

(2) S_r and S_b are the strategy spaces of r and b, respectively;

(3) U_r: S_r × S_b → R and U_b: S_r × S_b → R are the payoff functions of the red and the blue.

(Zero-Sum Game) A two-person zero-sum game is a tuple 〈{r, b}, S = S_r × S_b, {U_i}_{i=r,b}〉, where S_r and S_b are the strategy sets of the players, and U_i is a function such that U_i: S_r × S_b → R and U_b(s_r, s_b) = −U_r(s_r, s_b) for any strategy pair (s_r, s_b) ∈ S_r × S_b.

(Mixed Strategy on Continuous Strategy Space) On a continuous strategy space, a mixed strategy for r is f and a mixed strategy for b is g, such that f: S_r → [0, 1] is the probability density function of r's strategy and g: S_b → [0, 1] is the probability density function of b's strategy. The probability density functions f(·) and g(·) satisfy the following two conditions:

(1) f(s_r) ≥ 0 and g(s_b) ≥ 0, for all s_r ∈ S_r and s_b ∈ S_b;

(2) ∫_{S_r} f(s_r) ds_r = 1 and ∫_{S_b} g(s_b) ds_b = 1.

(Expected Payoff on Continuous Strategy Space) Let the probability density functions of the red and the blue be f(s_r) and g(s_b), respectively, and let U_r(s_r, s_b) and U_b(s_r, s_b) be the payoffs of r and b under the strategy pair (s_r, s_b), where s_r ∈ S_r and s_b ∈ S_b. Then the expected payoff of the red on the continuous strategy space is given by

$$E_r(f, g) = \int_{S_r}\!\int_{S_b} U_r(s_r, s_b)\, f(s_r)\, g(s_b)\, \mathrm{d}s_b\, \mathrm{d}s_r$$

The expected payoff of the blue on the continuous strategy space is given by

$$E_b(f, g) = \int_{S_r}\!\int_{S_b} U_b(s_r, s_b)\, f(s_r)\, g(s_b)\, \mathrm{d}s_b\, \mathrm{d}s_r$$

If Γ is a zero-sum game, we have E_b(f, g) = −E_r(f, g).

(Mixed Strategy Nash Equilibrium) A mixed strategy Nash equilibrium is a relatively stable state when players have conflicts of interest in the rational situation, and no player would unilaterally deviate from this state. A mixed strategy pair (f*(s_r), g*(s_b)) is a mixed strategy Nash equilibrium of a 2-person game if, for all f(s_r) and g(s_b), it holds that

$$E_r(f^*, g^*) \geq E_r(f, g^*), \qquad E_b(f^*, g^*) \geq E_b(f^*, g)$$

If Γ is a zero-sum game, we can obtain another characterization of the mixed strategy Nash equilibrium from Eqs. (18)-(20). A mixed strategy pair (f*(s_r), g*(s_b)) is a mixed strategy Nash equilibrium if and only if, for all f(s_r) and g(s_b), it holds that

$$E_r(f, g^*) \leq E_r(f^*, g^*) \leq E_r(f^*, g)$$
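To make the expected-payoff definition concrete, the sketch below discretizes two illustrative one-dimensional strategy spaces and approximates the double integral by a Riemann sum; the toy payoff function and grids are our stand-ins, not the paper's.

```python
import numpy as np

# Illustrative 1-D strategy spaces S_r = S_b = [0, 1], discretized on a grid.
s = np.linspace(0.0, 1.0, 101)
ds = s[1] - s[0]

U_r = lambda sr, sb: np.sin(np.pi * sr) - np.sin(np.pi * sb)  # toy zero-sum payoff

# Uniform mixed strategies as probability density functions (integrate to 1).
f = np.ones_like(s)
g = np.ones_like(s)

# E_r(f, g): double integral of U_r * f * g, approximated by a Riemann sum.
SR, SB = np.meshgrid(s, s, indexing="ij")
E_r = np.sum(U_r(SR, SB) * f[:, None] * g[None, :]) * ds * ds
print(E_r)  # ~0 by the antisymmetry of this toy payoff
```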

3. OCPSGs model

In this section, we propose the optimal confrontation position selecting games model to describe the dynamic interaction process and make the optimal position decision for the UAV. In the confrontation process, the pure strategy is the position of the UAV, and the strategy space is composed of the positions that a UAV can reach, which are determined by its maneuverability and the time interval.

3.1. Strategy set and payoff function of the game

In this subsection, we construct the game model of air combat and analyze the strategy set and payoff functions.

(1) Player set: N = {r, b};

(2) Strategy set: S = {S_r, S_b}, where

Ω_r and Ω_b are the accessible spaces of r and b at the next step, respectively, which are determined by the kinematic limitations, the maneuverability and the time interval. Specifically, the strategy of UAV i at time t is its selected position at the next step, s_i^t ∈ Ω_i.

(3) Payoff functions: U_r(s_r, s_b) and U_b(s_r, s_b) are the payoff functions of the red and the blue under the strategy pair (s_r, s_b), respectively, which are defined as the differences of the comprehensive advantages:

$$U_r(s_r, s_b) = F_r - F_b, \qquad U_b(s_r, s_b) = F_b - F_r$$

where s_r ∈ S_r, s_b ∈ S_b. Obviously, U_r(s_r, s_b) + U_b(s_r, s_b) = 0, so the game is zero-sum.

Then we analyze the payoff functions at time t + 1. The distance advantage function of r at time t + 1 is given by

The angle advantage function of r at time t + 1 is given by

The speed advantage function of r at time t + 1 is given by

where

Accordingly, we can get the comprehensive advantage function of r at time t + 1, which is given by

Similarly, the comprehensive advantage function of b can be expressed as

The payoff functions of r and b at time t + 1 are defined by

Considering the time delay constraint in subsection 2.2, the position of the blue at time t + 1 is already determined, so we do not need to consider its strategy probability density function at time t. Under this constraint, the expected payoff of r at time t + 1 according to Eq. (16) can be simplified as

$$E_r^{t+1}(f) = \int_{S_r} U_r\big(s_r, s_b^{t+1}\big)\, f(s_r)\, \mathrm{d}s_r$$

According to Eq. (19), the equilibrium strategy of r at time t is the probability density function maximizing the expected payoff, which is given by

$$f^* = \arg\max_{f}\; \int_{S_r} U_r\big(s_r, s_b^{t+1}\big)\, f(s_r)\, \mathrm{d}s_r$$

So far, we have analyzed the strategy set and payoff functions of this game model. The optimal strategy of the UAV at time t is to select the optimal position at time t + 1, which corresponds to the mixed strategy Nash equilibrium of this game model. In the next subsection, we propose an algorithm to solve the approximate equilibrium.
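Because the blue's time-delayed position is known at decision time, the red's problem reduces to weighting its own accessible positions. A minimal sketch of this reduction on a discretized accessible space follows; the grid, the stand-in payoff and all names are illustrative assumptions.

```python
import numpy as np

# Discretized accessible space of the red at the next step (illustrative grid).
candidates = np.array([[x, y] for x in np.linspace(-50.0, 50.0, 5)
                               for y in np.linspace(-50.0, 50.0, 5)])
P_b_next = np.array([200.0, 0.0])  # blue's (time-delayed, hence known) next position

def U_r(p_r, p_b, D=150.0):
    """Stand-in payoff: reward being near an assumed optimal attack distance D."""
    return -abs(np.linalg.norm(p_b - p_r) - D)

payoffs = np.array([U_r(p, P_b_next) for p in candidates])

# Expected payoff of a mixed strategy p over the candidate positions: E_r = p . U
p_uniform = np.full(len(candidates), 1.0 / len(candidates))
print("uniform mixed strategy:", p_uniform @ payoffs)
print("best single position:  ", payoffs.max(), candidates[np.argmax(payoffs)])
```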

3.2. LAMSNQ algorithm

Inspired by Ref. [33], we design the LAMSNQ algorithm to obtain the mixed strategy Nash equilibrium of the OCPSGs model, and use the equilibrium solution to determine the optimal strategy. The algorithm steps for UAV r at time t are as follows.

Fig. 3. Decision-making framework for UAVs.

Step 1: First, we determine the strategy space of r. According to the kinematic limitations and the time interval, we can calculate the accessible space of r at the next step, denoted by Ω_r, which serves as the strategy space of r.

Step 2: We divide the strategy space Ω_r into M equally sized small spaces, denoted by Ω_ri, i = 1, 2, …, M.

Step 3: After dividing the strategy space, we need to find the optimal probability distribution over the small spaces Ω_ri. According to Eq. (35), the optimal probability distribution is determined by maximizing the expected payoff. Consequently, we can transform this problem into the following optimization problem:

p_ri represents the probability of r reaching the ith small space Ω_ri, and R_r is the expected payoff of r. By solving the programming problem in Eq. (36), we can derive the optimal probability distribution p* = (p*_r1, p*_r2, …, p*_rM).

Step 4: For the continuous strategy space, we fit the optimal probability density function f*(s_r) of the strategy space by linear regression of (ξ_r1, ξ_r2, …, ξ_rM) and (p*_r1, p*_r2, …, p*_rM), where ξ_ri is the center position of the ith small space.

Step 5: From the fitted density function we can obtain the maximum expected payoff E_r on the continuous strategy space. Substituting p* into the objective function of Eq. (36), we have the maximum expected payoff R_r* on the discrete strategy space. We set ε = 10⁻³; if the termination condition |E_r − R_r*| < ε is satisfied, p* is taken as the mixed strategy Nash equilibrium of r, so we can use the discrete probability distribution p* to approximate the probability of a strategy on the continuous space. Otherwise, the strategy space Ω_r needs to be subdivided into 2M equal parts, 3M equal parts, and so on; we then redefine the strategy space and repeat the above steps.

Step 6: If the termination condition is satisfied, we can derive the optimal strategy of r, that is, the position with the maximum probability. The coordinate of the optimal position is the center of the small space with the maximum probability, denoted by ξ_ri*, where i* is the index of the largest component of p*.

Step 7: At each step of the combat process, r updates its position according to the optimal position obtained in Step 6 and determines whether the distance between the two sides is less than or equal to its own optimal attack distance, namely, d_rb ≤ D_r. If satisfied, the confrontation ends, and the winning probabilities of both sides are defined as softmax functions, which are given by

$$W_r = \frac{\mathrm{e}^{F_r}}{\mathrm{e}^{F_r} + \mathrm{e}^{F_b}}, \qquad W_b = \frac{\mathrm{e}^{F_b}}{\mathrm{e}^{F_r} + \mathrm{e}^{F_b}}$$

where F_r and F_b are the terminal comprehensive advantage functions of the red and the blue, respectively, and the side with the higher winning probability wins the air combat. To illustrate the process of the algorithm, the flow chart of the LAMSNQ algorithm is displayed in Fig. 4.
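A quick sketch of the Step 7 winning rule as a two-class softmax over the terminal comprehensive advantages; the example advantage values are arbitrary.

```python
import math

def winning_probabilities(F_r, F_b):
    """Two-class softmax over the terminal comprehensive advantages (Step 7)."""
    e_r, e_b = math.exp(F_r), math.exp(F_b)
    return e_r / (e_r + e_b), e_b / (e_r + e_b)

W_r, W_b = winning_probabilities(0.62, 0.48)
print(W_r, W_b)  # the side with the larger terminal advantage wins
```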

4. Simulation and analysis

4.1. Model reduction

To simplify the analysis, we suppose that the motions of the UAVs along the X-axis, Y-axis and Z-axis are independent of each other. Consequently, the accessible space of the aircraft within a time interval forms a cuboid determined by the upper and lower limits of its motion along the three axes. This cuboid is the strategy space in which the aircraft selects the optimal position, and the position of the UAV at the next step is the optimal position in the cuboid. The strategy spaces of both sides are defined as follows.

Then we analyze the strategy space at time t. The strategy space at time t is the accessible zone of the UAV, which is limited by the maximum direction acceleration and the minimum direction acceleration. Given the position and speed of the UAV at time t, we can describe its accessible space on the X-axis. For i ∈ {r, b}, we have

$$x_i^t + \dot{x}_i^t \Delta t + \frac{1}{2}\,\underline{a}\,\Delta t^2 \;\leq\; x_i^{t+1} \;\leq\; x_i^t + \dot{x}_i^t \Delta t + \frac{1}{2}\,\bar{a}\,\Delta t^2$$

where ā and a̲ are the maximum and minimum direction accelerations, respectively. The accessible ranges on the Y-axis and Z-axis are obtained in the same way.

Under the assumption of motion independence, the direction probability density functions on the strategy space at time t + 1 are denoted by f_x(x), f_y(y) and f_z(z). Therefore, the probability density function of r in Eq. (34) is

$$f(s_r) = f_x(x)\, f_y(y)\, f_z(z)$$

Accordingly, Eq. (34) can be further expressed as

$$E_r^{t+1} = \iiint U_r\big(s_r, s_b^{t+1}\big)\, f_x(x)\, f_y(y)\, f_z(z)\, \mathrm{d}x\, \mathrm{d}y\, \mathrm{d}z$$

Based on the above simplification, we can transform the first five steps of the LAMSNQ algorithm into the following form.

Step 1: According to Eqs. (43) and (44), we can calculate the ranges of motion of r along the coordinate axes, denoted by [x_min, x_max], [y_min, y_max] and [z_min, z_max], respectively. Then we get the cuboid Ω_r = [x_min, x_max] × [y_min, y_max] × [z_min, z_max] as the strategy space of r.

Step 2: We divide the strategy set on each coordinate axis of r into M equal intervals, so that the strategy space is divided into M³ small cuboids. The left endpoints of the mth interval of the X-axis, the nth interval of the Y-axis and the lth interval of the Z-axis are defined by

As M increases, the computational complexity increases exponentially. In order to decrease the computational complexity, we select M points denoted by {ξ_1, ξ_2, …, ξ_M}, where ξ_k is an abbreviation for ξ_{k,k,k}, for all k = 1, 2, …, M. We approximate the probability density function of the strategy space of r by the probability distribution at these points, solved in Step 3.

Step 3: After dividing the strategy space, we need to find the optimal probability distributions along the three coordinate axes at the points {ξ_1, ξ_2, …, ξ_M}. Based on maximizing the expected payoff, we can define the following optimization problem.

P_xk, P_yk and P_zk represent the probabilities of r reaching the kth point on the X-axis, Y-axis and Z-axis, respectively, and R_r is the expected payoff of r. By solving the programming problem in Eq. (48), we can derive the optimal probability distribution P*, where

Step 4: We fit the probability density function of the X-axis, f_x*(x), by linear regression of (ξ_1, ξ_2, …, ξ_M) and (P*_x1, P*_x2, …, P*_xM). Similarly, we can obtain the probability density functions of the Y-axis and Z-axis, namely f_y*(y) and f_z*(z).

Fig. 4. The flow chart of the LAMSNQ algorithm.

Step 5: From the fitted density functions we can obtain the maximum expected payoff E_r on the continuous strategy space. Substituting P* into the objective function of Eq. (48), we have the maximum expected payoff R_r* on the discrete strategy space. We set ε = 10⁻³; if the termination condition |E_r − R_r*| < ε is satisfied, P* is taken as the mixed strategy Nash equilibrium of r, so we can use the discrete probability distribution P* to approximate the probability of a strategy on the continuous space. Otherwise, the strategy set along each coordinate axis needs to be subdivided into 2M equal parts, 3M equal parts, and so on; we then redefine the strategy space and repeat the above steps.

Step 6: If the termination condition is satisfied, we can derive the optimal strategy of r, that is, the position with the maximum probability on each axis. The coordinate of the optimal position is (ξ_x,m*, ξ_y,n*, ξ_z,l*), where m*, n* and l* are the indices of the maximum probabilities P*_xk, P*_yk and P*_zk on the three axes.
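The reduced per-axis procedure can be sketched end to end. Eq. (48)'s exact objective is not reproduced above, so the code below maximizes an expected payoff of the same shape (probabilities times a stand-in payoff over grid points) on a single axis, using a SciPy optimizer as the paper reports doing for Eq. (48). The maximum acceleration value and the payoff function are assumptions; the other parameter values follow Section 4.2.1.

```python
import numpy as np
from scipy.optimize import minimize

# Step 1: accessible range on the X-axis over one decision interval,
# with dt = 0.5 s and a_min = -90 m/s^2 from Section 4.2.1 (a_max assumed symmetric).
x_t, vx_t, dt, a_max, a_min = 5000.0, 200.0, 0.5, 90.0, -90.0  # a_max is an assumption
x_lo = x_t + vx_t * dt + 0.5 * a_min * dt**2
x_hi = x_t + vx_t * dt + 0.5 * a_max * dt**2

# Step 2: divide the axis into M intervals and take their center points xi_k.
M = 10
edges = np.linspace(x_lo, x_hi, M + 1)
xi = 0.5 * (edges[:-1] + edges[1:])

# Stand-in payoff of each candidate x-position against the blue's known next x.
xb_next, D = 6000.0, 1500.0
U = -np.abs(np.abs(xb_next - xi) - D)   # reward being near the attack distance

# Step 3: maximize the expected payoff sum_k p_k * U_k over the probability
# simplex, solved with a SciPy optimizer (the paper names SciPy for Eq. (48)).
res = minimize(lambda p: -(p @ U), x0=np.full(M, 1.0 / M),
               bounds=[(0.0, 1.0)] * M,
               constraints=[{"type": "eq", "fun": lambda p: p.sum() - 1.0}],
               method="SLSQP")
p_star = res.x

# Step 6: optimal position = center point with the maximum probability.
print(xi[np.argmax(p_star)], p_star.round(3))
```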

4.2. Numerical simulation and result analysis

4.2.1. Settings of parameters

To verify the effectiveness of the OCPSGs model, simulations are performed in this section. We use Python for all simulations, and all experiments satisfy the time delay constraint. The winning probabilities of the UAVs are defined in Eqs. (39) and (40). In order to simplify the simulation scene, we simulate the confrontation process in a two-dimensional plane. We use the Python SciPy optimizers to solve the nonlinear programming problem in Eq. (48).

The initial position and speed of r are P_r = [5000, 5000] and v_r = [200, 1]. The initial position and speed of b are P_b = [1, 1] and v_b = [200, 1]. The maximum scalar speed, maximum direction acceleration and minimum direction acceleration for each side are v_max = 400 m/s, ā, and a̲ = −90 m/s², respectively. Meanwhile, the optimal attack distances and maneuverability coefficients of the UAVs are D_r = D_b = 1500 m and K_r = K_b = 2, respectively, and the weight coefficients in Eqs. (14) and (15) are set as ω_1 = ω_2 = ω_3 = 1/3. The simulation time interval is Δt = 0.5 s.

4.2.2. Blue keeps constant-speed flight

In this simulation, b keeps a constant speed while r is equipped with the method designed in this paper. The trajectories of both sides are shown in Fig. 5(a). The change trends of the UAVs' comprehensive advantages are shown in Fig. 5(b).

In the starting stage, b is on the tail of r, so the blue is superior to the red. In the later stage, r turns around to face b, while b flies straight. From Fig. 5(b), we can see that the comprehensive advantage of r consistently dominates that of b after Step 7. Obviously, r wins this game as its comprehensive advantage is larger than that of b, which is as expected. The red still wins under an initial condition in which the angle advantage of the blue is dominant, which shows the effectiveness of our algorithm.

4.2.3. Both sides adopt the method in the paper

In this section, the two UAVs are both equipped with the method proposed in this paper. The related figures are shown as follows.

From Fig. 6 we can see that the two sides are far away from each other in the initial state and both tend to approach each other. In the process of approaching, the tangent line of the trajectory at a point approximates the speed direction of the UAV. Therefore, it can be seen that the two sides gradually adjust their noses to aim at each other during the confrontation. Finally, the confrontation ends when the distance between the UAVs is less than or equal to the optimal attack distance of one party.

As can be seen from Fig. 7(a), the distance between the two sides gradually decreases. At Step 23, the distance becomes less than the optimal attack distance and the confrontation ends. In Fig. 7(b), the distance advantage gradually increases as the two sides approach. Since the initial values of K and D are equal, the distance advantages of both sides are also equal.

At the initial moment, we can calculate that θ_r = θ_b, but the angle advantage of the blue is larger. In Fig. 8(a), as the confrontation continues, the lead angle θ_b gradually increases and remains obtuse, while the lag angle θ_r gradually decreases to an acute angle, indicating that the red keeps adjusting its speed direction to aim at the blue. In Fig. 8(b), the angle advantage of the blue is no longer dominant after Step 13, and the two sides are nearly in a head-to-head position after that, so the angle advantages of both sides gradually tend to 0.5. The above analysis echoes the trajectories of both sides in Fig. 6, that is, the red reverses its direction in the battle, and then the two sides gradually approach.

Fig. 5. Confrontation trajectories and comprehensive advantage trends of the UAVs: (a) Confrontation trajectories of the UAVs; (b) Comprehensive advantage of the UAVs.

Fig. 6. Confrontation trajectories of the red and the blue.

As can be seen from Fig. 9(a), before Step 13 the deviation index of the red, |v_r − v_r*|/v_r*, gradually decreases while remaining larger than |v_b − v_b*|/v_b*. Hence, the speed advantage of the red is smaller than that of the blue in Fig. 9(b). From Step 14 to Step 17, we have |v_b − v_b*|/v_b* > |v_r − v_r*|/v_r*. During the whole confrontation, the speeds of both sides are less than the maximum speed v_max. Combining the formula of the speed advantage with the change of distance between the two sides, we make the following analysis: before Step 13, the red adjusts its angle to turn toward the blue behind it, and the turn inevitably causes a reduction in speed. According to the definition of the speed advantage, the deviation between the red's speed and its ideal speed before Step 13 is larger than that of the blue, so the speed advantage of the red is smaller than that of the blue.

Fig. 10 shows the change trend of the comprehensive advantages of both sides in the confrontation process. Since the distance advantages of both sides are equal, their effect on the comprehensive advantage can be neglected. Besides, the angle advantages of both sides are symmetrical. Accordingly, the main factor affecting the comprehensive advantage is the speed advantage. Through Fig. 9(b) and Fig. 10, we can see that the change trend of the comprehensive advantage is similar to that of the speed advantage. At Step 23, the confrontation termination condition d_rb ≤ D is satisfied; at this time the comprehensive advantage of the red is greater than that of the blue, so the winning probability of the red is higher and the red finally wins.

4.2.4. Decision time interval, simulation time interval and division interval number

In the LAMSNQ algorithm, we divide the strategy set on each coordinate axis into N equal intervals, where N is called the division interval number, to solve the approximate mixed strategy Nash equilibrium. The decision time interval refers to the physical time required to perform one decision calculation in the system. The simulation time interval refers to the time interval we set in the simulation for the UAV to make a decision.

Fig. 7. The change trend of the UAVs' distance and distance advantage: (a) The distance between r and b; (b) The distance advantage of r and b.

Fig. 8. The change trend of the UAVs' angles and angle advantage: (a) The angle trend of r and b; (b) The angle advantage of r and b.

Fig. 9. The change trend of the speed deviation degree and speed advantage: (a) The degree of deviation from the ideal speed; (b) The speed advantage of r and b.

Fig. 10. Changes in the comprehensive advantage of the red and the blue.

Fig. 11. The relationship between the division interval number and the decision time.

Theoretically, we can take the time interval as small as possible to approximate continuity. However, the efficiency of the algorithm itself should be considered in practical applications. From this point of view, we want to find the relationship between the division interval number and the decision time.

In Fig. 11, the horizontal axis represents the division interval number and the vertical axis represents the decision time interval. The red line and the green line represent the simulation time intervals. It can be seen that the decision time interval increases with N. When the simulation time interval is 0.5 s and 0.1 s, the corresponding N is about 23 and 10, respectively.

To ensure that the simulation time interval is longer than the decision time interval, the division interval number should not exceed 10 if the simulation time interval is 0.1 s, and it should not exceed 23 if the simulation time interval is 0.5 s.

4.2.5. Influence of division interval number on the result

Fig. 12. Influence of segmentation fineness on the trajectories in different scenes: (a) The blue keeps a constant speed; (b) Both sides use the method in the paper.

Fig. 12(a) shows the trajectories of the two sides when the division interval number N equals 10 and 20, respectively, with b keeping a constant speed and r adopting the method in this paper. Fig. 12(b) shows the trajectories of the two sides when N is 10 and 20, respectively, with b and r both adopting the method in this paper. The simulation time interval is 0.5 s, and we can see that the trajectories of both sides change very little as the division interval number changes.

To sum up, as long as the division interval number is less than the maximum division number corresponding to the decision time interval, changing the division interval number does not significantly change the final trajectory. Therefore, the LAMSNQ algorithm is stable.

5. Conclusions

In this paper, we introduce the OCPSGs model to study maneuver decision problems between two UAVs on different sides. This model provides a new perspective on the UAV air combat maneuver decision problem from the effect of position selection. Then we propose the LAMSNQ algorithm to solve the approximate mixed strategy Nash equilibrium of the OCPSGs model. The LAMSNQ algorithm provides a method to describe the best probability distribution on the strategy space given the maneuverability of the UAV. In the simulation phase, we simplify the model by assuming that the movements of the UAV in the three directions are independent of each other, so that the strategy space at each step of the UAV is a cuboid. We set two scenarios to verify the feasibility and effectiveness of the LAMSNQ algorithm. Then we find the maximum number of strategy space partitions under the premise that the decision time is shorter than a given simulation time interval. Finally, the trajectories of the UAVs in the two scenarios are drawn under different division interval numbers. The range of trajectory changes on both sides is minor and the results of the game do not change, demonstrating the stability of the algorithm.

Due to the lack of pertinent UAV mobility data and of a specific model describing UAV mobility, the model was simplified in the simulation phase. At the theoretical level, this paper studies the problem of a UAV selecting the optimal position to occupy the superior confrontation situation within a game framework. Future work will investigate the motion characteristics of the UAV and the feasibility of the strategy space so that our method can produce more scientific maneuver strategies in air combat.

    Declaration of competing interest

    The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

    Acknowledgements

The authors would like to acknowledge the National Key R&D Program of China (Grant No. 2021YFA1000402) and the National Natural Science Foundation of China (Grant No. 72071159) for providing funding for this research.
