
    A Spark Scheduling Strategy for Heterogeneous Cluster

Xuewen Zhang, Zhonghao Li, Gongshen Liu, Jiajun Xu, Tiankai Xie and Jan Pan Nees

Computers, Materials & Continua, 2018, No. 6

Abstract: As a mainstream distributed computing system, Spark is increasingly used to solve problems with more and more complex tasks. However, the native scheduling strategy of Spark assumes that it runs on a homogeneous cluster, which makes it less effective on a heterogeneous one. The aim of this study is to find a more effective strategy for scheduling tasks and to add it to the Spark source code. After investigating Spark's scheduling principles and mechanisms, we propose a stratifying algorithm and a node scheduling algorithm to optimize the native scheduling strategy of Spark. In the new strategy, a static performance level is computed for each node, and dynamic factors such as the length of the running-task queue and the CPU usage of worker nodes are considered comprehensively. A series of comparative experiments on a heterogeneous cluster shows that the new strategy yields shorter running time and lower CPU usage than the original Spark strategy, which verifies that the new scheduling strategy is more effective.

Keywords: Spark, optimized scheduling, stratifying algorithm, performance optimization.

    1 Introduction

The birth of Spark [Zaharia, Chowdhury, Franklin et al. (2010)] made it possible to upgrade distributed computing on computer clusters, with obvious improvements in memory read/write and execution efficiency compared with the earlier Hadoop [Zaharia, Chowdhury, Das et al. (2011a, 2012b)]. However, as a general platform, Spark still has some problems in production, waiting to be optimized.

Regarding performance, when handling underlying task scheduling, Spark does not consider two impact factors hidden in an actual production environment: the complexity of the task itself, and the heterogeneity of the cluster environment. The former leads to single-point bottlenecks, because tasks with high complexity become concentrated locally, so the overall efficiency of the cluster is restricted by single-point blocking and the smooth completion of pipelined operations becomes infeasible. The latter means the cluster cannot fully exploit the potential of all computing nodes; that is, limited by the cluster platform, the advantage of high-performance nodes cannot be brought into play, wasting the efficiency gains offered by the hardware. The first is inefficiency of Spark in the time dimension; the second is limited performance in the hardware dimension. To solve these problems, this paper proposes a new scheduling strategy for heterogeneous clusters in Spark, aiming to improve the efficiency of the whole system.

In this paper, we analyze the original Spark scheduling strategy and summarize existing research on distributed cluster heterogeneity in Section 2. In Section 3, our new strategy is described in detail. Section 4 presents the feasibility and evaluation of the strategy through experimental verification. In Section 5, we conclude and propose several future research directions.

    2 Related works

Because Hadoop technology is more mature and has been used in enterprises longer and more widely, optimization schemes for cluster heterogeneity on distributed computing platforms have mainly focused on Hadoop. Many enterprises seek ways to use big data efficiently and effectively, which requires new optimization algorithms. In 2005, Nightingale et al. [Nightingale, Chen and Flinn (2005)] proposed the Speculative Task Execution Strategy (STES): when free nodes appear in the cluster, they silently execute local copies of other nodes' tasks, thereby enhancing node utilization and avoiding unnecessary CPU vacancy. A snapshot is saved before execution so that the system can recover when the prediction fails. In 2008, Zaharia et al. [Zaharia, Konwinski, Joseph et al. (2008)] proposed an improved STES, which replaced the task progress rate with the expected completion time of the task as the scheduling basis. Zaharia et al. [Zaharia, Borthakur, Sen et al. (2010)] proposed the Delay Scheduling strategy and developed a multi-user FAIR Scheduler, which solves the fairness problem of multi-user resource allocation and the single-queue blocking problem; the FAIR Scheduler implementation in the Spark source code comes from this work. In 2013, Tang et al. [Tang, Zhou, Li et al. (2013)] proposed the MapReduce Task Scheduler for Deadline (MTSD). MTSD first divides all the nodes in the heterogeneous cluster into multiple levels according to computing power, then estimates the duration of tasks on different levels, distinguishes Map tasks from Reduce tasks, and uses the estimated completion time and the task deadline as the basis for scheduling. In 2014, Xu et al. [Xu, Cao and Wang (2016)] proposed Adaptive Task Scheduling on Dynamic Workload Adjustment (ATSDWA), which deploys a monitoring module at each node to collect the node's resource usage and execution time. Each node starts the maximum number of tasks allowed by its hardware configuration and reports back to the Task Scheduler with each heartbeat packet; the maximum number of runnable tasks of each node is then adjusted dynamically according to the real-time monitoring information, thus realizing dynamic adaptation of Hadoop scheduling.

In the Spark field, Yang et al. [Yang, Zheng and Wang (2016)] put forward Heterogeneous Spark Adaptive Task Scheduling (HSATS). Similarly to ATSDWA, a monitoring module is added to the Worker nodes, and an algorithm that dynamically adjusts node weights is added to the TaskScheduler, so that nodes are scheduled according to their real-time weights. In addition, Kaur et al. proposed a fuzzy approach to evaluating employee performance, which provides the idea for evaluating node performance in this paper.

In summary, solutions for heterogeneous cluster scheduling have developed roughly from single queue to multiple queues, from static allocation to dynamic adjustment, and from sequential execution to speculative execution; on this basis, ideas such as scheduling feedback, residual-time estimation, and cluster partitioning are constantly being explored and optimized.

3 A new Spark scheduling engine

Previous research has been more concerned with achieving a balanced distribution of heterogeneous cluster resources, that is, using scheduling strategies to make heterogeneous clusters behave like homogeneous ones at the resource-utilization level. This paper aims to go further and make full use of cluster heterogeneity, in order to improve cluster efficiency.

Storage systems often improve overall efficiency with a multi-level cache, whose characteristics are high single-point efficiency and small capacity. This exploits the heterogeneity of the storage media: the efficiency of the overall system is promoted by introducing high-performance heterogeneous media. Similarly, current distributed clusters still use a single-tier structure, and computing components are in fact similar to storage components in that their performance keeps increasing. As a result, the single-tier cluster structure can be optimized. Therefore, the heterogeneous-cluster optimization idea in this paper is to stratify the cluster in order to optimize overall efficiency. In practical application scenarios, the efficiency of the whole distributed system can be improved by introducing a small number of high-speed components into the cluster as high-performance nodes. The key problems for this strategy are how to layer the cluster and how to distribute tasks of different complexity on demand to nodes at each level.

    3.1 Task identification

After a Job is released, Spark divides it into several Stages, and further divides the Stages into multiple Tasks according to the dependencies between them. The Task is the basic unit of the final scheduling. This part analyzes the feasibility of task identification from a micro perspective.

First of all, when the submitted program calls a Transformation or Action interface in Spark, SparkContext checks the incoming closures to ensure that they can be transferred in serialized form and executed after deserialization. The cleaned closure function is written into the created Resilient Distributed Dataset (RDD) [Zaharia and Matei (2011)] as an initialization parameter. At this point, Spark binds the RDD data and the function together. Because the RDD is the main line of Spark's data stream, neither Stage scheduling nor Task scheduling can affect the binding relationship between the RDD and its closure. When the Executor assigned to the Worker node begins actual work, it parses and calls the execution function bound to the RDD to iterate over the data.

So from the micro perspective, the closure operation can be traced back. As a result, the complexity of the closure can be analyzed and written into the RDD when it is created, as a reference indicator for practical scheduling. C_p (p = 1, 2, …, M) denotes the complexity of each task, and M is the total number of tasks.

The scheduling policy determines the scheduling priority based on the complexity of the tasks.
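To make this concrete, the snippet below sketches, outside the Spark source, how a complexity score could be attached to an RDD as it is created. The `ComplexityTag` wrapper, `mapTagged`, and the caller-supplied `perRecordCost` are hypothetical names for illustration only; the paper itself writes the score into the RDD inside the modified Spark source.

```scala
import scala.reflect.ClassTag
import org.apache.spark.rdd.RDD

// Hypothetical bookkeeping: pair each RDD with a complexity score C_p,
// accumulated as closures are chained onto it (the paper instead writes
// the score into the RDD inside the modified Spark source).
case class ComplexityTag[T](rdd: RDD[T], complexity: Double) {
  // Each transformation adds the caller-estimated cost of its closure, so
  // the scheduler can later read the total complexity of the lineage.
  def mapTagged[U: ClassTag](perRecordCost: Double)(f: T => U): ComplexityTag[U] =
    ComplexityTag(rdd.map(f), complexity + perRecordCost)
}
```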

    3.2 Cluster stratification

As hardware is updated and high-performance hardware is introduced, cluster heterogeneity tends to become obvious. Currently, Spark scheduling is based only on the number of CPU cores; this assumption is not "fair" for actual heterogeneous clusters, because the computing power of individual cores differs significantly, and GPUs and other high-performance hardware perform much better than CPUs. Distributing tasks by the number of cores ignores the heterogeneity of each core itself, so the scheduling granularity is not satisfactory.

Therefore, we propose a scheduling scheme that stratifies the cluster according to core power, which serves as one of the inputs of the final scheduling strategy. The general idea is to run several benchmark jobs on each node, record their durations, and then divide the nodes according to a specified number of layers.

P_i (i = 1, 2, …, N) is the performance index of node i, and N is the number of nodes. The index represents the computing performance of the corresponding node's CPU: the larger the index, the more powerful the CPU.

K is the total number of layers, and L_j is the computing-performance level of node j. In the formula, α is a constant greater than 1, defined as the hierarchical exponent, so that the final stratification result obeys an exponential function of the hierarchical index; that is, there are fewer nodes at higher levels and more nodes at lower levels. This result meets the expectations of stratification.
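Eq. (1) itself did not survive extraction. A form consistent with the surrounding description (normalized benchmark time, hierarchical exponent α > 1, K layers, exponentially fewer nodes at higher levels) would be, as an assumption only:

$$ L_j = \max\Bigl(1,\; K - \bigl\lfloor \log_\alpha (t_j / t_{\min}) \bigr\rfloor \Bigr) $$

where t_j is the benchmark time of node j and t_min is the time of the fastest node.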

Based on the two definitions above, we obtain the stratifying algorithm for nodes in the cluster.

Algorithm 1 Nodes stratifying
Input: workers in the cluster
Output: L
1: Traverse all Workers in the cluster and run the same set of benchmark tasks on each separately;
2: Count the execution time t of each Worker and collect the times into array T;
3: Sort array T in ascending order;
4: Normalize array T by T[0];
5: Assign the first node to level L1;
6: for i = 2, 3, …, T.size - 1 do
7:   compute the layered result of each node according to Eq. (1) and save it in array L;
8: return L
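A compact Scala sketch of Algorithm 1 follows. Because Eq. (1) is missing from the extracted text, the level formula in the code uses the assumed form given above, not necessarily the paper's exact equation; all identifiers are ours.

```scala
// Sketch of Algorithm 1 (node stratifying). Eq. (1) did not survive
// extraction, so the level formula here is the assumed form given above:
// the fastest node gets the top level K, and every factor-of-alpha
// slowdown in benchmark time drops one level.
def stratify(benchmarkTimes: Map[String, Double], // worker id -> benchmark time t
             alpha: Double,                       // hierarchical exponent, > 1
             numLayers: Int                       // K, total number of layers
            ): Map[String, Int] = {
  require(alpha > 1.0 && benchmarkTimes.nonEmpty)
  val tMin = benchmarkTimes.values.min            // steps 3-4: fastest time T[0]
  benchmarkTimes.map { case (worker, t) =>
    val normalized = t / tMin                     // normalize by the fastest time
    val level = numLayers - (math.log(normalized) / math.log(alpha)).toInt
    worker -> math.max(level, 1)                  // clamp to the lowest layer
  }
}
```

For example, with α = 2 and K = 3, a node twice as slow as the fastest lands one level below it.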

According to this algorithm, a set of nodes is obtained at each level, and we can then define a P_i for the corresponding L_j; that is, nodes in the same level share the same performance index. The main purpose of this dividing policy is to eliminate randomness from scheduling. In addition, running various benchmark tasks in Algorithm 1 produces multiple grading results, from which we can create a grading matrix whose rows represent nodes and columns represent tasks. For every node, the majority grade over all benchmark tasks is taken as the final stratifying result.

    3.3 Node detection

Above, we discussed Spark task scheduling based on the number of cores, which is actually a static way of analyzing the system. Cores themselves may be busy or free; furthermore, busy cores may hold different task queues. So it is necessary to monitor the usage and task queue of each CPU dynamically in order to optimize the task scheduling method.

A Spark cluster has a master-slave structure, and Master and Worker communicate via an RPC mechanism, so each Worker node can be extended with a detecting module. To keep Master and Worker connected, there is an information exchange between the two nodes called Heartbeat, so we can take advantage of each heartbeat to synchronize the current resource usage. The Master preserves a local resource-utilization table of the Workers as the basis for the next resource allocation, achieving dynamic assignment of resources.

Specific parameters to be detected are queue length, CPU usage, and node-performance index. The length of the queue on a single node affects the node's completion time: generally, the longer the queue, the longer the expected execution time, so fewer tasks should be scheduled to it. There are dependencies when Spark executes tasks; if the queue is too long and tasks on other nodes depend on a task in that queue, the entire job may block, so it is significant to detect queue length and avoid overly long task queues during scheduling. CPU usage directly reflects whether the core is busy: if CPU occupancy is too high, the node lacks available computing resources, and newly assigned tasks may have to wait for resources to be released, resulting in delays, so scheduling should balance the CPU usage of each node. The node-performance index, as described in Definition 2 above, reflects the capacity of the node; scheduling should distribute tasks reasonably according to each node's capacity. In addition, tasks with high complexity should be allocated to nodes with high capacity, to reduce the load of low-performance nodes and thus improve overall efficiency. Fig. 1 shows the scheduling structure of Spark.

After a Job starts, the Driver creates the corresponding TaskScheduler as the task scheduler of the entire job. When tasks need to be allocated, the Driver requests resources from the Cluster Manager on the Master according to task demand. The Cluster Manager then collects the available Executors' information on the Workers as the assignable resource pool for the task. When a task is assigned to an Executor, the Executor registers with the Driver and keeps the connection alive through Heartbeat messages. Therefore, the Executor can synchronize CPU status to the TaskScheduler with a newly created Heartbeat message, and the TaskScheduler records and saves it as a scheduling metric.

In detail, the message interaction is:

· When an Executor is assigned to the Driver by the Master for a Job, it sends registration information to the Driver, so that the Driver obtains the Executor's basic information;

· During instantiation and initialization, the Executor starts a Heartbeater, as shown in Fig. 2, and the Heartbeater is bound with a HeartbeatTask, i.e. the task performed at each heartbeat;

· At each heartbeat, the Executor creates a Heartbeat message that wraps the data to be transferred to the Driver. The message can be reconstructed to carry customized information;

· The sent message is received and analyzed by the Heartbeat Receiver of the Driver, and the obtained parameters are stored in the Driver for subsequent use. A sketch of such a customized message is given below.
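As an illustration only, the extra fields the strategy needs could be carried in a payload like the following. Spark's real Heartbeat message is internal, version-dependent API, so all names here are hypothetical:

```scala
import scala.collection.concurrent.TrieMap

// Hypothetical payload piggybacked on the Executor -> Driver heartbeat.
// Spark's real Heartbeat message is internal, version-dependent API;
// these names are illustrative only.
case class SchedulingHeartbeat(
  executorId: String,
  runningTasksLen: Int, // current task-queue length on this Executor
  cpuUsage: Double,     // CPU usage in [0.0, 1.0] sampled on the Worker
  perfIndex: Double     // static node-performance index P_i (Section 3.2)
)

// Driver side: keep the latest report per Executor as the scheduling
// metric, refreshed on every heartbeat.
object ExecutorMetrics {
  private val latest = TrieMap.empty[String, SchedulingHeartbeat]
  def record(hb: SchedulingHeartbeat): Unit = { latest.put(hb.executorId, hb); () }
  def snapshot: Map[String, SchedulingHeartbeat] = latest.toMap
}
```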

    Figure 1: Spark cluster scheduling structure

Figure 2: Heartbeat mechanism of the Executor

    3.4 Stratification scheduling

After separately analyzing Spark's features at the task, cluster, and node levels, this section combines the three factors into an integral scheduling strategy.

Firstly, before tasks are executed, node stratification is pre-run to analyze the current cluster environment. The result, the performance level of every node, is written into the Spark system configuration, which is read as a parameter and stored in SparkEnv during initialization. The pre-run can be triggered by the following events:

    · Hardware changes;

    · Stratification data is empty or changes;

· Too much time has passed since the last update.

When creating an RDD, Spark parses the computational complexity of the incoming closure and records it as C_p on the RDD. Through the DAGScheduler [Mo, Yang and Cui (2014)] and TaskScheduler, a Job retains the following parameters before resource allocation:

· C_p for each Task;

· (P_i, runningTasks.len, CPUusage) for each Executor to be scheduled, representing respectively the node performance of the Executor, the length of its task queue, and the CPU usage of the node as recorded in the TaskScheduler.

For the task to be scheduled, the score of each Executor is calculated by a module introduced into the TaskScheduler.

· Evaluation function. For Executor q, define len_q as the length of its task queue, U_q as its CPU usage, and f(len_q, U_q) as an index of its current executing power,

where θ and μ are the corresponding coefficients. The smaller the function value, the more powerful the Executor.

· Complexity conversion function. Define g(C_p) as a function that converts complexity into a threshold on the node-performance index,

where C_max is the peak complexity. The result is the best node-performance index for the task.

Based on the equations and analysis above, we obtain the node scheduling algorithm.

Algorithm 2 Node scheduling
Input: Executor array G, queue-length threshold length, CPU-usage threshold Ut
Output: Selected executor S
1: for each Executor q:
2:   exclude all Executors whose queue length or CPU usage exceeds the thresholds length and Ut;
3: calculate the evaluation function of each remaining Executor according to Eq. (2);
4: sort the array G in descending order of level;
5: get g(Cp) according to Eq. (3);
6: for i = 1, 2, …, G.size do
7:   if there is an Executor with level greater than g(Cp) in queue G, select the one with the smallest evaluation function;
8:   else select the executor S with the smallest evaluation function whose level is closest to g(Cp);
9: if steps 7 and 8 both yield no result, select randomly;
10: return S
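Eqs. (2) and (3) are likewise missing from the extracted text, so the Scala sketch of Algorithm 2 below assumes a weighted linear combination f(len_q, U_q) = θ·len_q + μ·U_q and a linear mapping g(C_p) = (C_p / C_max)·K onto the level range; both forms and all identifiers are our assumptions, not the paper's exact definitions.

```scala
// Sketch of Algorithm 2. f and g below are assumed forms (see lead-in);
// all identifiers are ours, not the paper's.
case class ExecutorInfo(id: String, level: Int, queueLen: Int, cpuUsage: Double)

def schedule(executors: Seq[ExecutorInfo],
             taskComplexity: Double, cMax: Double, numLayers: Int, // C_p, C_max, K
             maxQueueLen: Int, maxCpu: Double,                     // thresholds
             theta: Double, mu: Double): Option[ExecutorInfo] = {
  // Assumed Eq. (2): smaller value = more spare executing power.
  def f(e: ExecutorInfo): Double = theta * e.queueLen + mu * e.cpuUsage
  // Assumed Eq. (3): best node-performance level for this complexity.
  val g = (taskComplexity / cMax) * numLayers

  // Steps 1-2: drop Executors over the queue-length or CPU thresholds.
  val viable = executors.filter(e => e.queueLen <= maxQueueLen && e.cpuUsage <= maxCpu)
  if (viable.isEmpty) {
    // Step 9: fall back to a random pick if every Executor was excluded.
    if (executors.isEmpty) None
    else Some(executors(scala.util.Random.nextInt(executors.length)))
  } else {
    // Step 7: prefer Executors whose level clears the threshold g(C_p)...
    val strongEnough = viable.filter(_.level >= g)
    val pool =
      if (strongEnough.nonEmpty) strongEnough
      else { // Step 8: ...otherwise those whose level is closest to g(C_p).
        val bestGap = viable.map(e => math.abs(e.level - g)).min
        viable.filter(e => math.abs(e.level - g) == bestGap)
      }
    Some(pool.minBy(f)) // smallest evaluation function wins
  }
}
```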

    4 Evaluation

We evaluate the algorithm both by theoretical simulation and on a real cluster. The main indicators are operation time and whether the distribution of tasks over nodes is balanced.

    4.1 Simulation

· Node: parameters are performance index, CPU usage, and task queue, drawn from Gaussian distributions;

· Task: parameters are complexity and CPU demand, drawn from Gaussian distributions; task arrivals obey a Poisson distribution;

· Execution: tasks are allocated to nodes and added to their task queues, while each task's CPU demand is added to the node's usage. Execution time is proportional to the complexity of the task and inversely proportional to the performance index of the node;

· Scheduling: the original Spark strategy is modeled as a random algorithm. A sketch of this simulation setup follows.
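A minimal Scala sketch of this simulation under the distributional assumptions just listed; all parameter values are illustrative, and Poisson arrival times are omitted for brevity:

```scala
import scala.util.Random

// Minimal simulation sketch under the assumptions above; parameter
// values are illustrative, and Poisson arrival times are omitted.
case class SimNode(perf: Double, var cpu: Double, var queue: Double)

object Sim {
  val rng = new Random(42)
  def gaussian(mean: Double, std: Double): Double =
    math.max(0.01, mean + std * rng.nextGaussian()) // truncated at a small positive value

  // Returns the makespan (finish time of the most loaded node).
  def run(numNodes: Int, numTasks: Int, randomPlacement: Boolean): Double = {
    val nodes = Array.fill(numNodes)(
      SimNode(perf = gaussian(1.0, 0.4), cpu = gaussian(0.3, 0.1), queue = 0.0))
    var makespan = 0.0
    for (_ <- 1 to numTasks) {
      val complexity = gaussian(1.0, 0.3)
      val cpuDemand  = gaussian(0.1, 0.03)
      val node =
        if (randomPlacement) nodes(rng.nextInt(numNodes)) // original Spark as random
        else nodes.minBy(n => (n.queue + n.cpu) / n.perf) // favor fast, idle nodes
      node.cpu   += cpuDemand               // CPU demand appended to node usage
      node.queue += complexity / node.perf  // time ~ complexity / performance index
      makespan = math.max(makespan, node.queue)
    }
    makespan
  }
}
```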

With the basic setup above, the simulation is run several times, and the mean running time is taken as the indicator of execution efficiency. Keeping the node configuration constant, we execute ten different groups of tasks with the two algorithms. As a result, the operating times of the new algorithm are always shorter than those of the traditional one, as shown in Fig. 3. When we change only the number of tasks, we obtain the results in Fig. 4, which indicate that operating time grows as the task load increases, while the new strategy always remains more efficient than the random method.

    Figure 3: Time comparison of multiple experiments with the same task number

    Figure 4: Time comparison of multiple experiments with different task number

    4.2 Experiments

To evaluate the algorithm on a hardware platform, we prepared four programs to run on the Spark system with the original method and with our algorithm: SparkWordCount, SparkKmeans, SparkPageRank, and SparkLR (Spark Logistic Regression).

Three machines from Alibaba Cloud Elastic Compute Service (ECS) were deployed to construct a simple heterogeneous cluster. One serves as both Master and Worker, and the other two serve as Workers. The configurations are shown in Table 1.

    Table 1: Configuration of nodes

We executed the four programs above on the micro-cluster to process data at multiple sample scales. In terms of operating efficiency, the running time of tasks is an appropriate criterion. For SparkWordCount, SparkPageRank, and SparkKmeans, we vary the size of the data set. For SparkLR, the number of sample points is 100,000, the dimension is configured to 10, and the independent variable is the number of iterations. The dependent variable of all experiments is operating time. The results shown in Figs. 5-8 indicate that our algorithm improves operation time. In terms of nodes, CPU usage and node balance both need to be considered. For a single worker node, we monitored its CPU usage while executing the same task with the two different strategies. As shown in Fig. 9, most of the time the CPU load is lower with our algorithm than with the traditional Spark method. Running the same task on the same data several times, Fig. 10 compares the average task distribution over the three nodes in the cluster. The indicator is the occupancy rate of the task queue on each node, presented as a percentage per node for each strategy. As calculated, the variance of the new strategy is 6.08, compared with 9.25 for original Spark, so the distribution of our algorithm is clearly more uniform than that of the traditional method.

Figure 5: Word Count time consumption for different job sizes

Figure 6: Spark LR time consumption for different numbers of iterations

Figure 7: Spark PageRank time consumption for different job sizes

Figure 8: Spark KMeans time consumption for different job sizes

    Figure 9: CPU usage monitor

Figure 10: Task distribution in different systems

    5 Conclusions

To advance Spark and distributed computing, it is significant to improve the performance of the Spark system. We propose several new ideas in the field of task scheduling and integrate them into a scheduling strategy for heterogeneous clusters, compared against the random method of the original Spark system. This strategy stratifies all nodes in the cluster according to their computing performance and monitors their usage; tasks with identified complexity are then distributed to suitable nodes for execution. Furthermore, we conducted several simulations and experiments proving that the new method is more uniform and efficient than the traditional one in the Spark system.

The new scheduling strategy takes full advantage of the capacity of the computing nodes, so it effectively optimizes the operational efficiency of Spark in an actual cluster environment. Besides, it also suggests a way to quickly increase cluster performance: a Spark cluster can be reinforced with high-performance components such as a small number of GPUs, and the strategy will then adjust to the new structure of the cluster. Tasks with large computing demand can be identified and assigned to these high-performance components with priority, so that the overall system improves its operational efficiency, regardless of increasing task complexity, by making full use of the newly introduced high-performance components. This has some engineering significance.

Future work will focus on optimizing the specific algorithms in this paper, such as automatic identification of task complexity. Optimization of the parameters is also meaningful; approaches such as machine learning and probabilistic methods are worth considering.

Acknowledgement: This work is supported by the National Natural Science Foundation of China (Grant Nos. 61472248 and 61772337) and the SJTU-Shanghai Songheng Content Analysis Joint Lab.
