
    Online scheduling of image satellites based on neural networks and deep reinforcement learning

    2019-04-28
    CHINESE JOURNAL OF AERONAUTICS, April 2019 issue

    Haijiao WANG, Zhen YANG*, Wugen ZHOU, Dalin LI

    a University of Chinese Academy of Sciences, Chinese Academy of Sciences, Beijing 100190, China

    b National Space Science Center, Chinese Academy of Sciences, Beijing 100190, China

    KEYWORDS Deep reinforcement learning; Dynamic scheduling; Image satellites; Neural network; Online scheduling

    Abstract In the "Internet Plus" era, space-based information services require effective and fast image satellite scheduling. Most existing studies treat image satellite scheduling as an optimization problem to be solved with search algorithms in a batch-wise manner; no real-time method for satellite scheduling exists. In this paper, with the aim of building a real-time method, satellite scheduling is remodeled as a Dynamic and Stochastic Knapsack Problem (DSKP), and the objective is to maximize the total expected profit. No existing algorithm can solve this novel scheduling problem properly. Inspired by the recent achievements of Deep Reinforcement Learning (DRL) in video games, AlphaGo and dynamic control, a novel DRL-based method is applied to train a neural network to schedule tasks. The numerical results show that the method proposed in this paper can achieve relatively good performance with real-time speed and an immediate response style.

    1. Introduction

    During the "Internet Plus" era, real-time information service is one of the important functions of space-based internet information systems.1 Effective and fast image satellite scheduling is the key to maintaining stability and providing space-based information services.2 However, image satellite scheduling is notoriously difficult as an NP-hard problem. Furthermore, real-time information services require real-time scheduling, which demands both immediate responses and real-time running performance.

    Over the past few decades, scholars in related fields have shown interest in this problem: Potter et al.3 studied the scheduling problem of a linear finite deterministic model using a backtracking method. Focusing on the arrival of new tasks, Liu and Tan4 modeled the problem of multi-satellite dynamic scheduling as a Dynamic Constraint Satisfaction Problem (DCSP) and proposed a search algorithm to solve the model. Lemaître et al.5,6 studied the management of satellite resources and the observation scheduling of agile satellites; they further compared the performance of a greedy algorithm, a dynamic programming algorithm, and a constraint programming algorithm. Li et al.7 proposed a fuzzy neural network to choose the timing of dynamic scheduling and then implemented a hybrid rescheduling policy. Liu8 proposed three descriptive models and turned the dynamic scheduling problem into a random and fuzzy scheduling problem. In recent years, particle swarm optimization,9 the genetic algorithm10-12 and other heuristic search algorithms13-16 have been applied to this domain. In addition, many other scholars have studied satellite management from different perspectives: some try to reschedule only the disturbed area restoratively to improve the response speed,17 and others try to solve the dynamic scheduling problem when an initial scheduling scheme is absent, for example with multi-agent systems18-20 and rolling scheduling.21

    Generally, most existing studies schedule tasks in a batch-wise manner: they treat image satellite scheduling as an optimization problem and apply intelligent optimization algorithms or heuristic algorithms to solve it. Both types of algorithms require time to collect task information and then schedule the collected tasks in a single batch (in a batch-wise manner, one collects tasks first and then schedules them all at once, usually at the end of the scheduling period).

    A few studies can respond to tasks in an immediate style based on heuristic rules or expert knowledge. However, these rules are usually hard to obtain and have a limited range of applications.2

    Deep Reinforcement Learning (DRL) is about learning how to maximize a numerical reward signal.22 Reinforcement learning is an ideal solution for dynamic decision-making problems. At present, a series of deep reinforcement learning algorithms have been proposed, such as DQN (Deep Q-Learning Network),23 DPG (Deterministic Policy Gradient),24 and DDPG (Deep Deterministic Policy Gradient).25 These algorithms have a wide range of applications, such as Go26,27 and resource allocation.28

    In this paper, a novel DRL-based method is proposed to build an image satellite scheduling solution with immediate response capability and real-time speed. First, we build a novel satellite scheduling model based on the Dynamic and Stochastic Knapsack Problem (DSKP); the objective of this model is to maximize the total expected profit instead of the total current profit, which allows us to schedule tasks without first collecting all the tasks and to respond to a task immediately whenever it arrives. Second, we build a network (referred to as the scheduling network in the rest of this paper) and train it to learn how to schedule with the Asynchronous Advantage Actor-Critic (A3C) algorithm, a DRL algorithm. Once well-trained, the scheduling network can schedule tasks in an immediate response style without further training. In addition, because reinforcement learning is an online learning method, the scheduling network can keep improving its performance and adapt to changes in the environment through continuous training.

    The rest of this paper is organized as follows. The description and mathematical statements of our model are discussed in Section 2. In Section 3, the architecture and training method of the scheduling network are shown. In Section 4, a series of simulations are run to test and analyze the proposed method.

    2. Problem description and mathematical statement

    2.1. Classic model of satellite dynamic scheduling

    In classic studies, the dynamic scheduling of an image satellite is described as follows: given a finite time period and an image satellite with limited storage, tasks arrive dynamically; the time windows of the satellite for tasks can be obtained through preprocessing, and tasks can only be observed inside their time windows. If a task is accepted successfully, the profit associated with the accepted task is received. The objective is to maximize the total profit from accepted tasks at the end of the given time period.

    As mentioned in Section 1, the model of image satellite dynamic scheduling is usually built as an optimization problem with a form similar to the following:

    subject to

    where G is the total profit, ω_i is the profit of the i-th task, and D_i usually indicates the decision for the i-th task. Eq. (2) represents the constraints of the satellite, which are usually related to resources such as storage, as well as time constraints. The satellite can only observe one task at a time.
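    Eqs. (1) and (2) did not survive extraction. Consistent with the definitions just given, the classic model presumably takes a 0-1 programming form along these lines (the per-task storage cost stor_i is an assumed symbol, not from the original):

```latex
\max_{D}\; G=\sum_{i=1}^{n}\omega_i D_i \tag{1}
```
```latex
\text{s.t.}\quad \sum_{i=1}^{n}\mathrm{stor}_i\,D_i\le \mathrm{Stor}_{\max},\quad
\text{accepted time windows do not overlap},\quad D_i\in\{0,1\} \tag{2}
```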

    Apparently, models like Eq. (1) require all the tasks involved throughout the time period. Because full task information can only be obtained at the end, an effective and immediate scheduling response would be very difficult.

    2.2. Satellite scheduling model based on DSKP

    Kleywegt and Papastavrou29 studied a Dynamic and Stochastic Knapsack Problem (DSKP), which is a knapsack problem in a dynamic and changing environment. In the DSKP, a pack of limited size is given, items with different values and sizes arrive dynamically, and an immediate decision (accept or reject) must be made for each item. The goal is to obtain as much total value as possible without exceeding the limit of the pack's size.

    Based on the DSKP and the classic model of satellite scheduling, we remodeled the image satellite dynamic scheduling problem. The image satellite scheduling problem with an immediate response requirement is described as follows: given an image satellite with limited storage, tasks arrive dynamically. Each task has an associated profit, which is unknown prior to arrival and becomes known upon arrival. Each arriving task can be either accepted or rejected. If a task is accepted, the profit associated with the task is received. This process stops when the deadline is met or the satellite is exhausted (when it has no remaining storage or not enough free time to accept a task, just like a full pack). The objective is to maximize the total profit of the accepted tasks when the process stops. However, the decision for each task must be made right after its arrival. The satellite can accept a task if and only if the remaining storage is sufficient and the task has no time conflict with previously accepted tasks (see Fig. 1). We also assume that a task cannot be recalled once it is rejected, i.e., once the decision for a task is made, it cannot be changed.
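    The acceptance condition just described (sufficient storage, and no time conflict with previously accepted tasks given a minimum transfer time t_trans between observations) can be sketched as a simple feasibility check. All names here are illustrative, not from the paper:

```python
def can_accept(task_window, task_storage, remaining_storage,
               accepted_windows, t_trans):
    """Feasibility check for an arriving task.

    task_window      -- (start, end) observation window of the arriving task
    task_storage     -- storage the task would consume
    accepted_windows -- list of (start, end) windows already accepted
    t_trans          -- shortest transfer time required between two tasks
    """
    # Storage constraint: the satellite must have enough memory left.
    if task_storage > remaining_storage:
        return False
    start, end = task_window
    # Time constraint: the new window must keep at least t_trans of
    # separation from every previously accepted window.
    for (s, e) in accepted_windows:
        if start < e + t_trans and s < end + t_trans:
            return False
    return True
```

With this check in hand, a scheduling policy only has to decide whether a *feasible* task is worth its storage and time; infeasible tasks are rejected outright.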

    Let Stor_max be the storage limit of the satellite, and let Stor(t) denote the remaining storage of the satellite at time t; let t_free(t) denote the free time intervals for accepting tasks at time t, and let t_trans be the shortest transfer time between tasks. t_free(t) is defined as follows:
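    The definition (presumably Eq. (3)) was lost in extraction. A plausible reconstruction is that the free time is what remains of the horizon after removing the accepted observation windows padded by the transfer time; the window endpoints b_i and e_i are assumed symbols:

```latex
t_{\mathrm{free}}(t)=[t,T]\setminus\bigcup_{i\in \mathrm{accepted}}\,[\,b_i-t_{\mathrm{trans}},\; e_i+t_{\mathrm{trans}}\,] \tag{3}
```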

    When a task arrives in the system, a decision about its acceptance is made based on the scheduling policy. A scheduling policy is a strategy or criterion that determines a task's acceptance based on the state of the system. The state of the system includes the state of the arriving task and the resource state of the satellite.

    Let π be the policy used to make scheduling decisions. D_i^π is the decision made for the i-th task under policy π, and D_i^π is defined as follows:
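    The definition itself (presumably Eq. (4)) is missing from this copy; consistent with the binary accept/reject decision described above, it presumably reads:

```latex
D_i^{\pi}=\begin{cases}1, & \text{the } i\text{-th task is accepted under } \pi\\[2pt] 0, & \text{the } i\text{-th task is rejected under } \pi\end{cases} \tag{4}
```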

    The expected total profit at time t = A_i under policy π can be defined as follows:
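    Eq. (5) did not survive extraction; from the "where" clause that follows it, the expected total profit is presumably the current decision's profit plus the expectation over all later arrivals up to T_end:

```latex
V^{\pi}(A_i)=\mathbb{E}\Big[\,\omega_i D_i^{\pi}+\sum_{j:\;A_i<A_j\le T_{\mathrm{end}}}\omega_j D_j^{\pi}\Big] \tag{5}
```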

    where T_end = min(T, T_stor, T_free); T_stor is the time when the storage is exhausted (Stor(T_stor) ≤ 0), and T_free is the time when no free time is available to accept a task (t_free(T_free) = ∅). The decision D_i^π is made at A_i. ω_j and A_j denote the profit and the arrival time of the j-th task respectively; the j-th task is a task that arrives after the i-th task. Typically, V^π(0) denotes the total expected profit that can be obtained under policy π throughout the whole time period.

    Fig. 1 Time conflict and free time of satellite.

    Let Π denote the set of all policies that we can use. The objective is to find the optimal total expected profit:
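    The optimality equation is missing here; it presumably reads (the numbering is assumed):

```latex
V^{*}=\max_{\pi\in\Pi}V^{\pi}(0),\qquad \pi^{*}=\arg\max_{\pi\in\Pi}V^{\pi}(0) \tag{6}
```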

    Apparently, the optimality of V^π depends on the optimality of π, and thus we must find an optimal policy π*.

    Current scheduling algorithms are designed to build an optimal scheduling solution with search algorithms for given tasks and a given satellite. They can be regarded as scheduling policies that need full information about all tasks; therefore, they cannot adapt to a model with uncertainties.

    Inspired by AlphaGo27 and AlphaGo Zero,30 which use a neural network as a function approximator and pattern matcher to evaluate the game of Go and choose the next move (i.e., a policy of Go), we parameterized the scheduling policy with a neural network, and we expect to find the optimal scheduling policy with a reinforcement learning method.

    A summary of the most important notation is given in Table 1.

    3. DRL-based method for dynamic scheduling

    3.1. Introduction of DRL

    DRL is the process of learning what to do, that is, how to make optimal decisions, in a dynamic environment.22 More importantly, reinforcement learning has a very wide range of uses in sequential decision-making problems, such as Go27,30 and robot control.31 In these problems, a series of decisions must be made to achieve an optimal cumulative reward (or long-term benefit), such as winning a game of Go or maintaining balance for as long as possible.

    The basic idea of reinforcement learning is shown in Fig. 2: the agent makes a decision D_i using the observed state s_i of the environment; then a new state and an immediate reward signal R_i are received. The goal is to obtain a larger cumulative reward. To obtain the optimal cumulative reward, reinforcement learning usually uses an iterative formula similar to the following22:

    Table 1 Summary of notations.

    Fig. 2 A simple illustration of reinforcement learning.
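    Eq. (8) was lost in extraction; given the symbol definitions that follow, it is presumably the standard Bellman optimality recursion (the maximization over the next decision b is folded into V^π(s_{i+1})):

```latex
V^{\pi}(s_i)=\max_{a\in D}\sum_{s_{i+1}\in S}P(s_i,a,s_{i+1})\big[\,R_i+\gamma V^{\pi}(s_{i+1})\,\big] \tag{8}
```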

    where s_i is the current state, s_{i+1} is the next state, and S is the set of all possible states; a and b are the decisions for s_i and s_{i+1} respectively; D is the set of all possible and logical decisions; R_i is the immediate reward, and V^π is the cumulative reward (long-term benefit). γ is the discount factor used to control the influence of the future, and it should be less than 1 if the time horizon is infinite. P(s_i, a, s_{i+1}) is the transition probability between states s_i and s_{i+1} under decision a. In model-free reinforcement learning algorithms, P(s_i, a, s_{i+1}) does not need to be known as prior knowledge.

    From Eq. (8), we can see that a DRL algorithm learns to predict the long-term benefit from the current state through an iterative, bootstrapped procedure. With enough training, the DRL algorithm learns both to predict the future long-term benefit from the current state and to make decisions with a higher future long-term benefit.22

    3.2. Scheduling network

    Eq. (5) and Eq. (8) have similar forms; both indicate how to obtain a larger expected future profit, so DRL is clearly an ideal solution for our model. Therefore, we adopt a DRL algorithm to learn the scheduling policy.

    3.2.1. Architecture

    Fig. 3 Architecture of scheduling network.

    The output of the network has two parts: an estimated value of the total expected profit (denoted as V(s_i; θ_v)) as the Critic, and the decision for the task (denoted as π(s_i; θ_π)) as the Actor (see Fig. 3). θ_v denotes the network parameters of the Critic, and θ_π the network parameters of the Actor.

    If a task is accepted, the time window inside the longest free time interval is chosen as the observation time for the task.

    The network consists of three fully connected layers, each followed by a ReLU activation function.

    The parameters of the network are shown in Table 2.
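    As a concrete sketch of this architecture, the forward pass can be written in plain numpy. The layer width of 64 and the 4-dimensional state are assumptions (Table 2 is not legible in this copy), and the random initialization is purely illustrative:

```python
import numpy as np

def relu(x):
    # element-wise rectified linear activation
    return np.maximum(x, 0.0)

def softmax(x):
    # numerically stable softmax over the action logits
    z = np.exp(x - x.max())
    return z / z.sum()

class SchedulingNet:
    """Forward pass of the actor-critic scheduling network: three fully
    connected layers with ReLU, then an Actor head (accept/reject
    probabilities) and a Critic head (scalar value estimate)."""

    def __init__(self, n_in=4, n_hidden=64, seed=0):
        rng = np.random.default_rng(seed)
        # three fully connected layers, each followed by ReLU
        self.W = [rng.normal(0.0, 0.1, (n_in, n_hidden)),
                  rng.normal(0.0, 0.1, (n_hidden, n_hidden)),
                  rng.normal(0.0, 0.1, (n_hidden, n_hidden))]
        self.b = [np.zeros(n_hidden) for _ in range(3)]
        self.W_pi = rng.normal(0.0, 0.1, (n_hidden, 2))  # Actor head
        self.W_v = rng.normal(0.0, 0.1, (n_hidden, 1))   # Critic head

    def forward(self, state):
        h = np.asarray(state, dtype=float)
        for W, b in zip(self.W, self.b):
            h = relu(h @ W + b)
        pi = softmax(h @ self.W_pi)    # (P(accept), P(reject))
        v = float((h @ self.W_v)[0])   # estimated total expected profit
        return pi, v
```

The actual implementation in the paper is built with TensorFlow33; this sketch only illustrates the shared trunk with two output heads.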

    3.2.2. Training method

    We adopted the Asynchronous Advantage Actor-Critic (A3C) algorithm32 to implement and train our scheduling network because of its stability and high usability. As a model-free DRL algorithm, A3C allows us to train the scheduling network without knowing the task distribution as prior information.

    A training program based on the Satellite Tool Kit (STK) is designed to simulate the dynamic scheduling situation, including tasks, and to provide time windows for these tasks. During training, the training program continuously produces new tasks for the scheduling network to schedule until T_end (see Fig. 4): when a task arrives, the scheduling network makes a decision about it and then receives an instant reward. The instant reward is not used to train the network directly, because it cannot represent the downstream influence of the decision. Instead, the parameters of the networks are updated every N steps, and the advantage value (see Eq. (11)) of each decision, which is the reward value adjusted with respect to future rewards, is used to train the scheduling network.

    The Actor learns to make optimal decisions D_i^π for tasks, while the Critic learns to predict the expected total profit V^π. The Critic improves its prediction ability based on the real cumulative profit at the end of each episode, and the Actor improves its scheduling ability based on the prediction from the Critic. Scheduling one task is referred to as a step. The parameters of the networks are updated after every N steps (denoted as T-Step_max) or upon reaching the T_end of a scheduling episode (see Fig. 4).
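    The N-step bootstrapped advantage described above can be sketched as follows; the exact discounting scheme is an assumption based on the standard A3C formulation, not taken verbatim from the paper:

```python
def n_step_advantages(rewards, values, bootstrap_value, gamma=0.99):
    """Compute the advantage of each decision in an N-step segment.

    rewards         -- instant rewards R_t for the N scheduled tasks
    values          -- the Critic's value estimates V(s_t) for those states
    bootstrap_value -- V(s_{t+N}) from the Critic (0.0 at episode end)
    gamma           -- discount factor

    Working backwards, R accumulates the discounted future return;
    subtracting the Critic's estimate yields the advantage used to
    update the Actor.
    """
    R = bootstrap_value
    advantages = [0.0] * len(rewards)
    for t in reversed(range(len(rewards))):
        R = rewards[t] + gamma * R
        advantages[t] = R - values[t]
    return advantages
```

A positive advantage means the decision led to more profit than the Critic predicted, so the Actor is pushed toward it; a negative advantage pushes the Actor away.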

    The parameters of the networks are updated by an algorithm that can be defined as follows32:

    Table 2 Configuration of networks.

    Fig. 4 N-Steps training process.
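    The update rules (presumably Eqs. (9)-(11)) were lost in extraction; based on the standard A3C algorithm32 and the advantage value described above, they presumably take the form:

```latex
\theta_{\pi}\leftarrow \theta_{\pi}+\alpha\,\nabla_{\theta_{\pi}}\log \pi(D_t\mid s_t;\theta_{\pi})\,A(s_t) \tag{9}
```
```latex
\theta_{v}\leftarrow \theta_{v}-\alpha\,\nabla_{\theta_{v}}A(s_t)^{2} \tag{10}
```
```latex
A(s_t)=\sum_{k=0}^{N-1}\gamma^{k}R_{t+k}+\gamma^{N}V(s_{t+N};\theta_v)-V(s_t;\theta_v) \tag{11}
```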

    where α is the updating rate of the networks' parameters, and ∇_{θ_π} is the gradient operator.

    The reward for the decision is the profit of the task, which is defined as follows:
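    The reward definition (presumably Eq. (12)) is missing here; given that ω_max appears as a normalizer in the following line, a plausible reconstruction is:

```latex
R_i=\begin{cases}\omega_i/\omega_{\max}, & D_i^{\pi}=1\\[2pt] 0, & D_i^{\pi}=0\end{cases} \tag{12}
```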

    where ω_max is the maximum profit of a task.

    4. Presentation of results

    Numerical simulations are run to fully test and analyze the performance of the scheduling network. The scheduling networks are implemented in Python, and the TensorFlow33 deep learning framework is utilized. Because there is no benchmark dataset for image satellite scheduling, we designed a program to generate tasks for these simulations. In addition, STK is used to simulate the satellite and calculate the time windows of the tasks.

    4.1. Performance analysis

    4.1.1. Training

    To support our idea of introducing deep reinforcement learning into image satellite scheduling, we trained the scheduling network and conducted several simulations. The parameters of the satellite and tasks in these simulations are given in Table 3.

    Table 3 Parameters of satellite and tasks during training.

    Table 4 Hyper-parameters of training.

    The hyper-parameters used in the training process follow Ref. 32, and their values are shown in Table 4.

    First, we train our network to schedule tasks over different time periods (see Fig. 5). The total profit, the total accepted task count, and the average profit (the ratio of the total profit to the total accepted task count) are selected as indexes to evaluate the training performance. The results are shown in Fig. 5. Because each episode consists of different tasks, we use the average value to show the changing trend of performance clearly (the red lines in Fig. 5).

    The scheduling network achieves a higher total profit in every simulation through training, which indicates that the proposed method works.

    During scheduling, two factors affect the total profit: accepting more tasks (which usually means resolving time conflicts between tasks) and choosing tasks with higher profit (which usually leads to a higher average profit). In Fig. 5, we notice that in these simulations the scheduling network learned to obtain higher average profits and to accept more tasks.

    From Fig. 5, we can see that the network learns to adapt to the tasks and learns how to schedule, which strongly indicates that the scheduling network can find a stable scheduling policy π* that produces satisfying solutions at real-time speed and in an immediate response style.

    4.1.2. Generalization and test

    In this simulation, we test the well-trained scheduling network. First, we train our scheduling network with the setup of Fig. 6(a); we then apply the well-trained scheduling network in other simulations (see Figs. 6(b) and (c)) without further training.

    The parameters of these simulations are shown in Table 5. The results are shown in Fig. 6.

    Fig. 5 Training curves of performance.

    Fig. 6 Generality test of scheduling network.

    Compared to First Come First Serve (FCFS) (the control group), the scheduling network shows stable performance in both the testing simulations (see Figs. 6(b) and (c)) and the training simulation (see Fig. 6(a)), which demonstrates the stability and generalization ability of the scheduling network.

    4.1.3. Adaptability

    In this section, we trained our scheduling network for 10000 episodes. Within the first 3000 episodes, each episode consists of 45-55 tasks; then we change the arriving task count of each episode to 95-105. In this way, we expect to validate the adaptability of the scheduling network. The simulation parameters are the same as those of Fig. 5(c) except for the task count. The results are shown in Fig. 7.

    Table 5 Parameters of satellites and tasks.

    As shown in Fig. 7, although the total count of accepted tasks stays the same after we increase the total task count, due to the limitation in resources (the storage is almost exhausted with 35 accepted tasks), the total profit continues to grow. This occurs because, with the total count of arriving tasks growing to 95-105, the scheduling network changes its decision policy through training: it learns to accept tasks with higher profit. As evidence, the average profit grows after we increase the total task count. This result indicates that the scheduling network can adapt to a new environment once the environment changes.

    Thus, we can conclude that the scheduling network can be applied to scheduling tasks without further training once it is well-trained (see Section 4.1.2). In addition, the scheduling network can adapt to changes in the environment (see Section 4.1.3).

    4.2. Comparison with other algorithms

    To fully test and analyze the proposed method, we run a series of simulations to compare our method with other algorithms, including FCFS,21 the Direct Dynamic Insert Task Heuristic34 (referred to as DITH in this paper) and the Genetic Algorithm (GA).11 A random-pick method (which randomly accepts or rejects tasks) is used as the control group.
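    For reference, the FCFS baseline is straightforward: accept each task in arrival order whenever it is feasible. A simplified sketch (ignoring time windows, which would be checked as in Fig. 1, and using only the storage constraint) might look like:

```python
def fcfs_total_profit(tasks, storage_limit):
    """First Come First Serve over (profit, storage) pairs in arrival order:
    greedily accept any task that still fits in the remaining storage."""
    total_profit, used = 0.0, 0.0
    for profit, storage in tasks:
        if used + storage <= storage_limit:
            used += storage
            total_profit += profit
    return total_profit
```

Because FCFS never weighs a task's profit against the resources it consumes, a high-profit late arrival can be crowded out by cheap early tasks, which is exactly the behavior the learned policy improves on.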

    We generate 20 different series of tasks to compare our scheduling network with the classic algorithms. The average total profit, the average accepted task count, the average profit of tasks, and the average response time for tasks are taken as indexes. The simulation results are shown in Table 6.

    Compared with FCFS, the scheduling network achieves better performance, and compared with the intelligent optimization algorithms (GA and DITH), the scheduling network shows an improvement in response time.

    More importantly, the scheduling network can schedule tasks in an immediate style, while algorithms like GA and DITH need to collect tasks and then schedule them all at once in a batch-wise manner. Collecting tasks takes time: because of different arrival times, the first task has to wait for the last task in the batch-wise manner, which gives the first task a very slow response.

    Fig. 7 Adaptability test.

    Table 6 Performance comparison (Total Task Count=50, T=3600 s).

    5. Conclusions

    Image satellite scheduling is very important for providing effective and fast space-based information services. A novel satellite scheduling model based on the DSKP was proposed. Unlike the classical satellite scheduling model, the objective of this model is to maximize the total expected profit rather than the total profit, which allows us to schedule tasks in an immediate style. Due to the complexity of the problem, it is very difficult to obtain an optimal solution or heuristic rules. Therefore, the scheduling network based on DRL was proposed. The scheduling network can be trained to schedule tasks in an immediate response style, and no special expert knowledge is needed. If well-trained, the scheduling network can schedule tasks effectively and quickly. Our main contribution is proposing a real-time scheduling method for image satellites by importing DRL and neural networks into satellite scheduling.

    In future work, we plan to import imitation learning into the proposed method to improve learning stability and performance, and we plan to extend our method to multi-satellite scheduling situations.

    Acknowledgements

    This study was co-supported by the Key Programs of the Chinese Academy of Sciences (No. ZDRW-KT-2016-2) and the National High-tech Research and Development Program of China (No. 2015AA7013040).
