
    Approximate dynamic programming for Stochastic Resource Allocation Problems

    2020-08-05
    IEEE/CAA Journal of Automatica Sinica, 2020, Issue 4

    Ali Forootani, Raffaele Iervolino, Massimo Tipaldi, and Joshua Neilson

    Abstract—A stochastic resource allocation model, based on the principles of Markov decision processes (MDPs), is proposed in this paper. In particular, a general-purpose framework is developed, which takes into account resource requests for both instant and future needs. The considered framework can handle two types of reservations (i.e., specified and unspecified time interval reservation requests), and can implement an overbooking business strategy to further increase business revenues. The resulting dynamic pricing problems can be regarded as sequential decision-making problems under uncertainty, which are solved by means of stochastic dynamic programming (DP) based algorithms. In this regard, Bellman's backward principle of optimality is exploited in order to provide all the implementation mechanisms for the proposed reservation pricing algorithm. The curse of dimensionality, the inevitable issue of DP both for instant resource requests and future resource reservations, arises as the problem size grows. In particular, an approximate dynamic programming (ADP) technique based on linear function approximations is applied to solve such scalability issues. Several examples are provided to show the effectiveness of the proposed approach.

    I. Introduction

    RESOURCE allocation is defined as the set of problems in which one has to assign resources to customer requests over some finite time horizon. Many important real-world matters can be cast as resource allocation problems, including applications in air traffic flow management [1], energy [2], logistics, transportation, and fulfillment [3]. These problems are notoriously difficult to solve for two reasons. First, they typically exhibit stochasticity, i.e., the requests to be processed may arrive randomly according to some stochastic process which, itself, depends on where resources are allocated. Second, they exhibit extremely large state and action spaces, making solution by traditional methods infeasible [4], [5]. In the fields of operational research and artificial intelligence, primarily, the state space as well as the decisions (or actions) are discrete. In this regard, resource allocation problems with discrete states and decisions are studied at length under the umbrella of Markov decision processes (MDPs) [6]. Systems with uncertainty and nondeterminism can be naturally modelled as MDPs [7], [8]. For instance, emotion recognition in text [9], speaker detection [10], and fault-tolerant routing [11] have been treated as MDPs.

    The optimal policy for MDPs can be computed by applying exact dynamic programming (DP) techniques, thanks to their strength in solving sequential decision-making problems [7]. However, it is well known that such techniques suffer from the curse of dimensionality, which is due to the state and action space explosion of real-world applications [8]. For this reason, efforts have been devoted to finding techniques able to solve this problem in an approximate way [12]. This field has evolved under a variety of names, including approximate dynamic programming (ADP), neuro-dynamic programming, and reinforcement learning [13]–[15].

    In this paper, resource allocation problems are formulated and solved via a general-purpose MDP-based framework, which can be used for different real business contexts. We address both instant (i.e., the customer requires a resource to be allocated immediately) and advance (i.e., the customer books a resource for future use) resource requests. It is considered that the same resource can be sold at different price values at different times, to take advantage of the heterogeneous preferences of customers over time, e.g., for a seat on an airplane or a room in a hotel. Both the formulation and the corresponding resolution of resource allocation problems with instant resource requests were first explored by the authors in [16], where only exact DP approaches were applied. In [17], the authors further extended the approach to incorporate the possibility that, besides an immediate allocation request, a customer can book a resource in advance for future utilisation. Two types of booking procedures were considered, that is to say, booking resources with specified and unspecified time interval options.

    The main differences of this paper with respect to [17] are the following: i) new assumptions and procedures both in the modeling and in the resource reservation approach; ii) the usage of the unweighted sample based least squares (USLS)-ADP algorithm, instead of a temporal difference ADP based approach [18], to tackle the curse of dimensionality; iii) comprehensive and analytical algorithms suitable for computer-based implementation. As for the first aspect, the proposed solution manages overbooking situations, which occur when the number of allocated resources at the current time slot cannot be confirmed, since new resources have already been allocated for the next one (due to the advance resource reservation mechanism), and the overall needs cannot be satisfied by the system's overall capacity. Such overbooking situations occur because the system handles both instant and future resource requests. Managing them entails a significant update in the modelling of resource allocation problems, their dynamics, and the resulting allocation policy.

    The USLS-ADP algorithm was presented for the first time by the authors in [19], where its convergence properties over an infinite time horizon are also discussed. The USLS-ADP algorithm inherits both the contraction mapping and the monotonicity properties of the exact discounted DP algorithms [20]. Thanks to this, it is suited for both finite and infinite time horizons. As a consequence, it can be used for solving resource allocation problems with instant and advance reservation requests. The latter case actually involves time intervals over finite (possibly very large) time horizons.

    As for the third aspect, we provide the implementation mechanisms of the reservation pricing algorithms, starting from the steps defined in [17]. For instance, we show how to exploit Bellman's principle of optimality [7] to assess the allocation of future resources at different prices and their impact on the current expected total revenue. More specifically, a stochastic prediction of the system evolution (up to the time when the new resource is requested) is performed. Then, the set of possible prices is applied and assessed based on how they affect the current expected total revenue. When the most suitable price is chosen, the complete pricing policy is renewed. This approach shows how to bridge the gap between model predictive control (MPC) and DP [13].

    The various parts of the proposed modeling and optimization approach, consisting of DP, the reservation procedure, and ADP, have been implemented in the MATLAB environment for resource allocation problems in a general framework. Moreover, different examples are provided to support and evaluate the effectiveness of the method.

    This paper is organized as follows. Section II shows how to model resource allocation problems via MDPs. Resource allocation problem modeling with specified time interval reservation requests and the related pricing algorithm are provided in Sections III–V. Resource allocation problems with unspecified reservation time intervals are outlined in Section VI. Section VII addresses the usage of the proposed reservation pricing algorithm for resource allocation problems with a large state space. Simulation results are provided in Section VIII. Section IX outlines the scientific literature relevant to this work. Finally, Section X concludes the paper.

    II. Preliminaries and Modelling Resource Allocation Problems as MDPs

    This section shows how to formulate resource allocation problems as a set of constrained parallel discrete-time birth-death processes (BDPs) [21], which are integrated into one Markov decision process (MDP). The configuration of the resulting MDP based framework can be controlled by the price manager, who assigns a price among m possible choices by applying a specific pricing policy.

    Such an approach was first introduced by the authors in [16]. Hereafter, the main aspects of the framework are outlined, along with the notation used in this manuscript. We also provide some preliminaries on how to solve the related decision-making problem via DP based techniques [7], [20]. For more details, the reader can refer to [16].

    A. MDP Notation

    B. Modeling Resource Allocation Problems as MDP

    This subsection addresses the problem of dynamically pricing N ∈ ℕ equivalent resources and allocating them to customers. A set of m hourly prices (or prices per unit of time) is given, and price managers can select one of them in order to maximize the expected total revenue. They can also reject resource requests from customers, if deemed not convenient from a profit standpoint. Price managers can charge different prices for the same resource over time, depending on the resource availability and the expected profit. As shown in [16], it is possible to formulate such a price management system as a set of BDPs. In particular, there is a dedicated BDP for each feasible price ci (with i = 1, ..., m), which allows modelling the unpredictable behavior of customers in requesting and releasing resources. As a result, the system evolves as a set of parallel BDPs. By assigning a specific price at each time slot, the price manager defines which BDP is active for one (possible) birth and one (possible) death, whereas all the others are active only for one (possible) death. In this regard, we assume that:

    • At most one customer can request a resource at each time slot. Moreover, each customer can request it for either immediate or future reservation (the latter is addressed in the next sections).

    • The time slot duration is chosen so that, for each price ci, at most one customer associated to each BDP may leave at each time slot.
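As a rough sketch of these dynamics (the probability values, the active-price schedule, and the simulation itself are placeholders for illustration, not the authors' MATLAB implementation), one time slot of the parallel BDPs can be simulated as follows:

```python
import random

def bdp_step(state, active, lam, mu, N):
    """One time slot of the m parallel BDPs (illustrative sketch).

    state[i] : number of resources currently allocated at price ci
    active   : index of the BDP selected by the price manager, which is
               eligible for one (possible) birth; every BDP is eligible
               for one (possible) death
    lam, mu  : per-BDP birth and death probabilities (placeholders)
    N        : total number of equivalent resources
    """
    nxt = list(state)
    for i in range(len(nxt)):           # one possible death per BDP
        if nxt[i] > 0 and random.random() < mu[i]:
            nxt[i] -= 1
    # one possible birth, only in the active BDP, capped by capacity N
    if sum(nxt) < N and random.random() < lam[active]:
        nxt[active] += 1
    return nxt

random.seed(0)
state = [0, 0, 0]
for j in range(40):                     # the price manager picks a price each slot
    state = bdp_step(state, active=j % 3,
                     lam=[0.5, 0.4, 0.3], mu=[0.2, 0.2, 0.2], N=6)
```

The capacity check before the birth reflects the constraint that the parallel BDPs share the same pool of N resources.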

    III. Modeling Resource Allocation Problems with Specified Time Interval Reservation Requests

    IV. DP Framework for Solving Resource Allocation Problems with Advance Resource Requests

    This section introduces the DP framework used to compute a proper pricing policy for resource allocation problems with advance resource requests. Such a framework, along with its related definitions, is used in the next section, where the reservation pricing algorithm is described.
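The backward recursion underlying such a framework follows the standard finite-horizon pattern. The sketch below is generic: the reward and trans callables are hypothetical stand-ins for the framework's revenue model and transition probabilities, not the paper's specific equations.

```python
def backward_dp(states, actions, T, reward, trans):
    """Finite-horizon backward DP via Bellman's principle of optimality.

    reward(x, a, j) : immediate revenue collected at time slot j
    trans(x, a)     : list of (probability, next_state) pairs
    Returns the value functions V[0..T] and greedy policies pi[0..T-1].
    """
    V = [dict() for _ in range(T + 1)]
    pi = [dict() for _ in range(T)]
    for x in states:
        V[T][x] = 0.0                       # no revenue beyond the horizon
    for j in range(T - 1, -1, -1):          # backward in time
        for x in states:
            q = {a: reward(x, a, j)
                    + sum(p * V[j + 1][y] for p, y in trans(x, a))
                 for a in actions}
            pi[j][x] = max(q, key=q.get)    # revenue-maximizing action
            V[j][x] = q[pi[j][x]]
    return V, pi
```

For instance, on a toy chain where action a always yields revenue a and leaves the state unchanged, the value of any state at time 0 over a horizon T is simply T times the largest price.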

    V. Reservation Pricing Algorithm for Specified Time Interval Reservation Requests

    This section describes the algorithm for calculating the price profile to be proposed by the price manager in case of a reservation request for the future time interval [h1,h2].

    The following assumptions are made:

    • The initial reservation vectors xj for all the time slots j ∈ [0, T−1] are provided. Note that, as time goes by, these vectors can be updated, since customers can request resources in advance to satisfy their future needs.

    • Without any loss of generality, the underlying Ti-MDP is assumed to be computationally tractable, that is to say, ADP and Monte Carlo simulations are not needed.

    In the algorithm, a backwards iteration strategy is proposed. More specifically, the whole time interval is scanned backwards, and for each h ∈ [h1, h2] the best price is calculated. At the same time, the temporary set of reservations is updated, along with the associated optimal value function and policy. Finally, the resulting price profile is proposed to the customer. In case the customer accepts it, the new set of reservations is confirmed, as well as the associated optimal value function and policy.
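The backward scan just described can be sketched schematically as follows; assess and book are hypothetical helpers standing in, respectively, for the DP evaluation of a candidate price's impact on the expected total revenue and for the update of the temporary reservation set.

```python
def propose_price_profile(h1, h2, prices, assess, book):
    """Schematic backward scan over the reservation interval [h1, h2].

    assess(h, c) : hypothetical helper returning the expected total revenue
                   if price c were booked at slot h, given the reservations
                   tentatively fixed so far (in the paper this evaluation
                   comes from the DP value functions)
    book(h, c)   : records c at slot h in the temporary reservation set
    """
    profile = {}
    for h in range(h2, h1 - 1, -1):        # scan the interval backwards
        best = max(prices, key=lambda c: assess(h, c))
        book(h, best)                       # update temporary reservations
        profile[h] = best
    return profile                          # price profile proposed to the customer
```

Only if the customer accepts the returned profile would the temporary reservations, value function, and policy be confirmed.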

    The reservation pricing algorithm consists of the following steps (at the beginning, k = 0):

    VI. Modeling Resource Allocation Problems with Specified and Unspecified Time Interval Reservation Requests

    In the normal course of events, it may happen that customers request resources for future utilization without specifying their release time. As a result, unlike the allocation time h > k, the release time is not fixed in advance: such resources are released according to the system's stochastic dynamics, that is to say, the release event is linked to the death rate μi of the proposed price. This section outlines the modeling of the underlying Ti-MDP with specified and unspecified time interval reservation vectors, as well as the associated pricing algorithm.

    Let s be the set of unspecified time interval reservations. In particular, we define

    VIII. Simulation Results

    The proposed approach has been evaluated over numerical cases to show its effectiveness. Both exact DP and ADP techniques have been used, with the support of purpose-built MATLAB programs. The latter are configurable, meaning that they provide the capability of defining the values of the problem to be solved, e.g., the λi's and μi's for i ∈ {1, ..., m}, the number of resources (N) and of prices (m), and the time horizon (T). For all the examples, we have used the basis functions given by (52), i.e., the number of resources associated to each price ci multiplied by the parameter ri.

    State space explosion can occur even with relatively small values of m and N. Indeed, increments in such parameters cause an exponential growth in the size of the state space [19]. Hence, exact DP becomes impractical even for relatively small problem instances. The simulation examples are divided into two main parts: the first part is dedicated to the results of the DP reservation algorithm, while the second part is devoted to the USLS-ADP algorithm.
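With the linear architecture of (52), the approximate value function is a weighted sum of the per-price allocation counts. The snippet below fits the weights ri by unweighted least squares on sampled (state, target value) pairs; it is an illustrative fit of that architecture only, not the full USLS-ADP algorithm of [19], and the sample values are made up.

```python
import numpy as np

# Linear value-function architecture matching the basis functions of (52):
# V~(x) = sum_i ri * xi, where xi is the number of resources allocated at
# price ci.  The weights ri are fitted by unweighted least squares on
# sampled (state, target value) pairs.
def fit_weights(states, targets):
    X = np.asarray(states, dtype=float)    # one row of basis values per sample
    v = np.asarray(targets, dtype=float)   # sampled value estimates
    r, *_ = np.linalg.lstsq(X, v, rcond=None)
    return r

# Hypothetical samples: states with a single allocated resource, valued at
# that resource's price, recover the prices themselves as weights.
r = fit_weights([(1, 0, 0), (0, 1, 0), (0, 0, 1)], [0.9, 1.0, 1.1])
```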

    Policies computed by DP techniques can be represented by means of lookup tables. In other words, for a given state and time slot j, one can associate the corresponding action calculated by the proposed algorithm. However, such a representation can be impractical even for small state and action spaces. Therefore, more compact representations can be used [18]. In this regard, we apply a statistical index showing the frequency distribution of each action at each time slot over the entire state space. It is worth highlighting that such a statistical index for policies can also be impractical in the case of large action spaces.
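This statistical index can be computed directly from a lookup-table policy; a minimal sketch follows (the action labels in the usage example are hypothetical).

```python
from collections import Counter

def action_frequencies(policy, states):
    """Frequency distribution of each action over the entire state space,
    per time slot, normalised by the state space cardinality.

    policy[j][x] : action prescribed by the lookup-table policy in state x
                   at time slot j
    """
    out = []
    for table in policy:
        counts = Counter(table[x] for x in states)
        out.append({a: n / len(states) for a, n in counts.items()})
    return out

# A one-slot policy over three states: two states get price "c1",
# one gets the rejection control "nu".
freqs = action_frequencies([{0: "c1", 1: "c1", 2: "nu"}], [0, 1, 2])
```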

    In all the proposed examples, we have used the following state-dependent birth and death probabilities:

    We have assumed that the death (birth) probability increases (decreases) with the number of allocated resources.
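The probability expressions referenced above were rendered as an equation that did not survive extraction. A plausible stand-in form (an assumption, not the paper's formula) that respects the stated monotonicity is a linear interpolation in the number n of allocated resources:

```python
# Assumed stand-in for the paper's (unreproduced) probability equations:
# the birth probability decreases, and the death probability increases,
# linearly with the number n of allocated resources (out of N in total).
def birth_prob(lam_max, lam_min, n, N):
    return lam_max - (lam_max - lam_min) * n / N

def death_prob(mu_max, n, N):
    return mu_max * n / N
```

With the Example 2 parameters, such a form would give, e.g., a birth probability ranging from λmax at n = 0 down to λmin at n = N.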

    Example 2: Number of prices m = 3, number of resources N = 6, prices c = [0.9 1.0 1.1], λmax = [0.55 0.5 0.3], λmin = [0.3 0.2 0.1], μmax = [0.5 0.6 0.6].

    The cardinality of the state space is 84, so the exact DP algorithm can be applied. Here, for the sake of simplicity, we enclose the prices in brackets, e.g., c = [c1 c2 c3].

    The following operational scenario has been considered:

    • Specified time interval reservation requests at the consecutive time slots j = 2, 3, 4, with the durations reported in Table I.

    TABLE I Specified Time Interval Reservation Requests for Example 2

    • Unspecified time interval reservation request at the time slot j = 1 for the time slot h = 20.

    The frequency distribution of the different actions (normalised with respect to the state space cardinality and over the finite time horizon T = 40) is shown in Fig. 3. In particular, the above-defined operational scenario (with reservation requests) is compared with the case of no reservation requests, where the off-line DP solution calculated at the beginning can be applied for instant resource needs. To analyze the result, one has to start from the terminal stage and move backwards. Since there are neither unspecified nor specified time interval reservations for the time slots j > 21, based on the principle of optimality, the frequency distributions of the reservation and no-reservation cases are identical.

    Fig. 3. Comparison of the action frequency distributions for the reservation and no-reservation cases (Example 2).

    As shown in the same figure, the frequency distribution of the rejection control ν increases for the intervals with reservations. It is noticed that, for the case of no reservation, the control ψ is never applied; therefore, it is not plotted.

    Additionally, the specified time interval reservation vector set x and the unspecified time interval reservation vector set s are depicted in Fig. 4. As shown in the figure, the price c3 is more likely to be chosen than the others. Moreover, for the case of specified time interval reservations, the algorithm never proposes the price c2 to the customer requests; hence, the associated plot is omitted. In the case of unspecified time interval reservations, the algorithm only proposes the price c3; hence, the other prices are not plotted.

    Fig. 4. Specified and unspecified time interval reservation vectors x and s (Example 2).

    Finally, it is worth highlighting that the specified time interval reservation request [h1, h2] = [14, 21] is rejected by the algorithm.

    Example 3: The same resource allocation problem set-up of Example 2 is considered, but with a more complex operational scenario.

    The following operational scenario has been considered:

    • Specified time interval reservation requests handled at the time slots j = 1, 4, 5, 7, ..., 12, with the durations reported in Table II.

    TABLE II Specified Time Interval Reservation Requests for Example 3

    • Unspecified time interval reservation requests handled at the time slots j = 2, 3, 6 for the time slots h = 12, 22, 25, respectively.

    Thanks to the principle of optimality, the frequency distribution curves of the reservation and no-reservation cases are identical for the time slots j > 25, see Fig. 5. However, for the interval 14 ≤ j ≤ 16 there are differences between such curves: the rejection policy ν is at its maximum point, since the system is fully reserved for the associated time slots.

    Fig. 5. Comparison of the action frequency distributions for the reservation and no-reservation cases (Example 3).

    Furthermore, the vectors x and s are depicted in Fig. 6. Unlike the previous scenario, the price c2 is proposed for some intervals. The total numbers of customers corresponding to the reservation vectors x and s (regardless of the associated prices) are depicted in Fig. 7, which shows that all the resources are allocated for the time interval 14 ≤ j ≤ 16.

    IX. Related Work

    In this section, an essential literature review of the most pertinent articles on MDP modeling and the related solutions for resource allocation problems is provided. A homogeneous continuous-time Markov chain is proposed in [26] to model the patient flows for hospital ward management. The optimization of the matching between the resources and the demands is performed by means of a local search heuristic algorithm. MDPs are employed in [27] for Business Process Management, with the goal of making appropriate decisions to allocate the resources, by trying to minimize the long-term cost and to improve the performance of the business process execution. A heuristic based Reinforcement Learning approach is adopted as the optimization method. In [28], a resource allocation problem for vehicular cloud computing systems is discussed. Since the objective is the maximization of the total long-term expected reward, the optimization is formulated as an infinite horizon MDP with the state space, action space, reward model, and transition probability distribution of the particular case study. An iteration algorithm is utilized to develop the optimal scheme that computes which action has to be taken in a certain state. MDP based modelling and a solution methodology for scheduling patients in a multiclass, multi-resource surgical system are employed in [29]. The proposed model provides a scheduling policy for all surgeries, and minimizes a combination of the lead time between patient request and surgery date, overtime in the operating room, and congestion in the wards. A least squares temporal difference ADP approach is used to deal with the curse of dimensionality. One of the most important operations in the production of growing-finishing pigs is the marketing of pigs for slaughter. In [30], the sequential marketing decisions at the herd level are considered as a high dimensional infinite horizon MDP. A value iteration ADP algorithm is used to estimate the value function for this infinite time horizon problem. The stochastic behavior of the food bank inventory system has been modelled by using an MDP in [31], which has the advantage of indicating the best way to allocate supplies based on the inventory level of the food bank. Such a paper presents a novel transformation of the state space to account for the large distribution quantities observed in practice, and shows that the particular underlying stochastic behavior can be approximated by a normal distribution. Similarly to our approach, both stochastic and deterministic aspects are addressed. In [32], a sequential resource allocation problem is studied with an objective function aimed at an equitable and effective service for the problem of distributing a scarce resource to meet the customers' demands. In this work, through a DP framework, the structure of the optimal allocation policy is characterized for a given sequence of the customers' demands, modelled as continuous probability distributions. By using the identified optimal structure, a heuristic based allocation policy for the instances with discrete demand distributions has been proposed.

    In some other works, resource allocation problems are treated as multi-agent systems. In [33], for instance, the dynamics of the agents are modelled as second order differential equations, while they communicate over weight-balanced and strongly connected digraphs. The optimization problem is formulated as a constrained convex objective function. The effectiveness of the method, however, is evaluated for a small number of agents only. An alternative approach can be found in [34], where the distribution of a common resource between two sources of time-varying demand is carried out to develop time-efficient methods for minimizing delays at severely congested airports. In this work, the problem is formulated as a DP optimization, and the objective is based on the second moments of the stochastic queue lengths. It is shown that, for sufficiently high volumes of demand, the optimal values can be well approximated by quadratic functions of the system state. Again, a heuristic based approach is applied as the ADP method. A comparison between Monte Carlo tree search and rolling horizon optimisation approaches is carried out in [35] for two challenging dynamic resource allocation problems: wildfire management and queuing network control. Even though this work shows interesting results, the reported techniques are suitable just for the specific applications considered. Another example of resource allocation strategies can be found in [36], where the problem of budget allocation of non-profit organizations over geographically distinct areas is tackled. The proposed solution consists in formulating the overall resource allocation problem as a two-stage stochastic mixed integer programming problem. A heuristic-based approach is finally used to simplify the original formulation.

    An interesting variant to the solution of (stochastic) resource allocation problems is represented by MPC, which is especially suited when the dynamics of the system is considered to be variable over time (time-variant processes). In [37], for instance, it is shown how the stochastic resource allocation problem can be addressed by suitably modifying the MDP and the optimal control problem, and by using MPC to allocate resources over time. In particular, a new class of algorithms for the stochastic unreliable resource allocation problem is proposed, for the case when there is a chance that a task will not be completed by the assigned resource. However, a well-defined and accurate prediction model is needed a priori for an effective strategic allocation control. Similarly, in [38], the solution of the stochastic resource allocation problem makes use of MPC integrated with machine learning and a Markov chain model. The theory is based on a three-layer lambda architecture, and is particularly tailored to the case study of a dispatch decision problem from an energy distribution utility.

    As a general remark, it is noted that our paper provides a sufficiently comprehensive modeling framework, which is not limited to a specific application. Moreover, the optimization algorithms for price policy calculation exploit the most advanced ADP techniques to address the scalability issues of real-world applications, instead of resorting to heuristic or example driven methods. To the best of the authors' knowledge, the current literature on this topic does not have these features.

    Admittedly, one of the main difficulties of applying ADP based approaches is the choice of proper basis functions, which is a current area of research (see Section VII-A). Moreover, the system stochastic mechanisms and the related probability distributions are assumed to be known. If this is not feasible, one can resort to computer simulators (generating samples according to such probability distributions) or to Reinforcement Learning based approaches [8], [12].

    X. Conclusions

    Resource reservations in resource allocation problems have been modelled within a general-purpose MDP based framework. Stochastic DP based approaches have been proposed to compute proper pricing policies, and show how Bellman's principle of optimality can play a role in the implementation of the resulting reservation pricing algorithms. However, the resulting framework, which also includes an overbooking business strategy, becomes computationally intractable in case of realistic resource allocation problems. As a consequence, ADP techniques based on linear function approximations have been employed to solve such scalability issues. In particular, the novel USLS-ADP algorithm has been applied. Examples addressing both specified and unspecified time interval reservation requests have been shown, solved, and analyzed to verify the soundness of the proposed approach.

    As for future work, we plan to apply the proposed framework to relevant business applications, such as flight ticket booking, urban parking management, and smart energy management systems. This implies defining the probability distributions of the underlying stochastic processes. In case of their unavailability, it is possible to resort to computer simulators (generating samples according to such probability distributions) or to reinforcement learning based approaches.
