
    Run-Time Dynamic Resource Adjustment for Mitigating Skew in MapReduce

2021-04-26

Zhihong Liu, Shuo Zhang, Yaping Liu, Xiangke Wang and Dong Yin

1College of Intelligence and Technology, National University of Defense Technology, Changsha, 410073, China

2Cyberspace Institute of Advanced Technology, Guangzhou University, Guangzhou, 510006, China

ABSTRACT MapReduce is a widely used programming model for large-scale data processing. However, it still suffers from the skew problem, i.e., the case in which load is imbalanced among tasks. This problem can cause a small number of tasks to consume much more time than the others, thereby prolonging the total job completion time. Existing solutions to this problem commonly predict the loads of tasks and then rebalance the load among them. However, solutions of this kind often incur high performance overhead due to the load prediction and rebalancing. Moreover, existing solutions target the partitioning skew for reduce tasks, but cannot mitigate the computational skew for map tasks. Accordingly, in this paper, we present DynamicAdjust, a run-time dynamic resource adjustment technique for mitigating skew. Rather than rebalancing the load among tasks, DynamicAdjust monitors the runtime execution of tasks and dynamically increases resources for those tasks that require more computation. In so doing, DynamicAdjust not only eliminates the overhead incurred by load prediction and rebalancing, but also mitigates both the partitioning skew and the computational skew. Experiments are conducted on a 21-node real cluster using real-world datasets. The results show that DynamicAdjust can mitigate the negative impact of the skew and shorten the job completion time by up to 40.85%.

KEYWORDS MapReduce; task scheduling; resource allocation; data skew; big data

    1 Introduction

Over the past decade, MapReduce has been found to have tremendous practical value for big data processing. Many companies, including Facebook, Amazon, Twitter and Yahoo!, have used MapReduce to process data-intensive jobs on a daily basis. Although several different big data platforms, such as Hadoop, Hive, Spark, and Storm, have recently emerged one after another, these platforms are all essentially based on the fundamental concept of MapReduce. In MapReduce, there are two kinds of tasks, namely map tasks and reduce tasks. Each map task processes a single chunk of the input data and writes the intermediate results to the local disks. Subsequently, each reduce task fetches the intermediate results remotely and outputs the final results into the distributed file system. By breaking down a large data analytic job into a number of smaller tasks and executing these tasks on multiple machines in a distributed manner, MapReduce can dramatically reduce the job completion time.

While MapReduce is highly successful, it remains affected by many issues. One of these, skew, is particularly challenging to eliminate; by "skew," we here refer to the case in which the load is imbalanced among tasks. Skew can make a small number of map or reduce tasks take significantly longer to finish than other tasks of the same kind, thereby increasing the job completion time. Take the PageRank application as an example: it is a link analysis algorithm that counts the number of incoming edges of each vertex in a graph. This application exhibits skew when the input graph includes vertices with far more incoming edges than others, which is very common in real workloads. Kwon et al. [1] have shown the existence of skew in PageRank using a real-world dataset. In more detail, the slowest map task takes more than twice as long as the second-slowest map task, which in turn is five times slower than the average. Since an individual job's running time is bounded by the time taken to complete the slowest task, skew can severely prolong the job completion time.

Many skew mitigation approaches for MapReduce have been proposed. The majority of these adopt a common approach that estimates the distribution of the intermediate key-value pairs and then reassigns these key-value pairs to tasks. However, predicting the key-value distribution introduces a synchronization barrier, as it requires either waiting for all map tasks to be completed [2,3] or adding a sampling procedure before the actual job begins [4-7]. Some other approaches [8,9] speculatively launch replica tasks for lagging tasks with the expectation that the former will be completed faster than the latter. Nevertheless, executing replica tasks for tasks affected by data skew cannot improve the job running time. Unlike the above methods, which reassign key-value pairs before the task execution, Kwon et al. propose a real-time skew mitigation technique, called SkewTune [10], which repartitions the remaining workloads of lagging tasks and creates a new MapReduce job to process the remaining load. Nevertheless, this approach creates significant runtime overhead that extends to 30 s [10]. This overhead should not be underestimated even for short running tasks that only last 100 s, as such tasks are very common in production workloads [11,12].

In light of the limitations of existing approaches, a run-time partitioning skew mitigation technique based on dynamic resource allocation has also been proposed [13,14]. This technique dynamically allocates resources to reduce tasks according to their estimated load: the reduce tasks that take more work to process are allocated more resources. Hence, this approach can decrease the variation in the running times of reduce tasks, thereby mitigating the negative impact of data skew. However, this mitigation solution focuses on the skew for reduce tasks; it cannot handle skew occurring in map tasks. As shown in [1], skew is prevalent in the map stage in practice, which limits the applicability of this approach.

In light of the above, this paper presents DynamicAdjust, a run-time dynamic resource adjustment approach for skew mitigation in MapReduce. Unlike existing solutions, DynamicAdjust monitors the execution of tasks at runtime and adjusts resources for the slow-running tasks while there are still idle resources to draw on. Moreover, DynamicAdjust can mitigate the skew that arises in both the map and reduce stages. The contributions of this work can be summarized as follows:

—We develop a phase-aware remaining task time prediction technique, which considers the progress rate in each phase individually. Since each phase of a task runs at a different speed in real-world workloads, this phase-aware technique can outperform existing prediction techniques that assume the progress rate is constant.

—We propose a resource adjustment algorithm that dynamically increases the size of containers for the slow-running tasks at run-time. In this way, our approach can decrease the variation in task running time, thereby mitigating the skew.

—We evaluate DynamicAdjust through experiments on a 21-node real cluster using real-world datasets. The experimental results show that DynamicAdjust can shorten the job completion time by up to 40.85%.

A preliminary version of this work was previously introduced in a letter [15]. This paper extends our preliminary work in a number of respects. First, the skew detection algorithm and the phase-aware remaining task time prediction are presented here. Second, we provide more details of the skew mitigation strategy. Third, additional experiments using the RelativeFrequency and Grep applications have been conducted to evaluate the effectiveness of DynamicAdjust.

The remainder of this paper is organized as follows. Section 2 describes the background and the motivation of our work. Section 3 presents the architecture of DynamicAdjust, while Section 4 illustrates the design of DynamicAdjust in detail. We provide the experimental evaluations in Section 5. Finally, the existing works related to DynamicAdjust are summarized in Section 6, after which our conclusions are presented in Section 7.

    2 Background and Motivation

In this section, we discuss the two types of skew that arise in MapReduce and outline the resource management mechanism employed in a widely used MapReduce implementation (i.e., Hadoop YARN), which forms the motivation for this study.

    2.1 Skew in MapReduce

Skew is a prevalent issue encountered in many application domains, including database operations [16], scientific computing [1] and search engine algorithms [17]. The two kinds of skew in MapReduce are summarized below:

—Computational Skew: This type of skew can occur in both the map and the reduce stages. In the map stage, each map task processes a sequence of records in the form of key-value pairs. Under ideal circumstances, these records have similar resource requirements (e.g., CPU and memory), so their execution times are roughly the same. In reality, however, there may be some expensive records that require more computation; this can cause variation in the task runtime, even though the size of the data chunk processed by each map task is the same. Similarly, in the reduce stage, some expensive key groups can also skew the running time of the reduce tasks, regardless of the sizes of these key groups.

—Partitioning Skew: This type of skew arises most commonly in the reduce stage. In the original MapReduce systems, a hash function, HashCode(intermediate key) MOD (number of reduce tasks), is used to distribute the intermediate data among reduce tasks [18]. This hash function works well when the keys are uniformly distributed, but it may fail while processing non-uniform data. This failure can be caused by two factors. First, some keys appear with higher frequency than others in the intermediate data, causing the reducers that process these keys to become overloaded. Second, the sizes of the values for some keys are significantly larger than for others; even when the key frequencies are uniform, load imbalance among reduce tasks may still arise.
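The default hash partitioner above can be sketched in a few lines. The hot-key workload below is an illustrative assumption, not a dataset from the paper, but it shows how a single frequent key overloads one reducer no matter how good the hash function is:

```python
# Sketch of MapReduce's default hash partitioning and the first cause of
# partitioning skew: a frequent key sends all its pairs to one reducer.
from collections import Counter

def partition(key: str, num_reducers: int) -> int:
    # HashCode(intermediate key) MOD number of reduce tasks
    return hash(key) % num_reducers

def partition_loads(pairs, num_reducers):
    """Count how many key-value pairs land on each reducer."""
    loads = Counter(partition(k, num_reducers) for k, _ in pairs)
    return [loads.get(r, 0) for r in range(num_reducers)]

# A Zipf-like workload: one hot key dominates the intermediate data.
pairs = [("the", 1)] * 9000 + [(f"rare-{i}", 1) for i in range(1000)]
loads = partition_loads(pairs, num_reducers=4)
# All 9000 "the" pairs hash to the same reducer, so one partition holds
# at least 90% of the data while the others stay nearly empty.
```

The second cause (oversized values for some keys) produces the same imbalance in bytes rather than in pair counts, which a pair-count partitioner cannot see at all.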

    2.2 Resource Management in Hadoop YARN

Hadoop YARN [19] is the state-of-the-art implementation of MapReduce. Its task scheduling functionalities are divided into two components: the ResourceManager, which is in charge of allocating resources to the running applications while satisfying constraints of capacity, fairness, etc., and the ApplicationMaster, which is responsible for negotiating the acquisition of the appropriate resources from the ResourceManager and allocating the resources to the tasks of each job. In addition, YARN uses containers to manage the resources in the cluster. For a given container, the amount of resources (e.g., ⟨2 GB RAM; 1 CPU⟩) can be customized for each task. However, YARN does not account for the fact that resource requirements vary across tasks, nor does it provide dynamic resource adjustment for them; hence, tasks that require more resources (i.e., tasks that take more work to process) may run slower than other tasks because of the resource deficiency. Since the job completion time is determined by the finish time of the slowest task, the static resource scheduling mechanism in native YARN may prolong the job completion time in cases where the load of tasks is unevenly distributed.

Most existing works [2-4,20,21] only target the partitioning skew and neglect the computational skew that can arise in both the map and reduce stages. Moreover, these solutions commonly predict and then redistribute the task load to achieve a better balance, which incurs additional (sometimes heavy) overhead in terms of key distribution sampling and load reassignment. In comparison, DynamicAdjust uses an alternative approach that dynamically increases the resources available to the tasks that require more computation; this not only effectively mitigates both the computational and partitioning skew, but also eliminates the mitigation overhead. In the following, we discuss the design of DynamicAdjust in detail.

    3 System Architecture

Fig. 1 illustrates the architecture of DynamicAdjust. There are five main modules: the SkewDetector, the SkewMitigator, the ResourceRequester, the ContainerAdjuster and the ResourceScheduler. The SkewDetector is responsible for monitoring the progress of each task and detecting the skew at run-time. The SkewMitigator takes the tasks detected to have skew as input, calculates the amounts of resources that need to be adjusted and notifies the ResourceRequester to ask the ResourceManager for resources. The ResourceRequester then communicates with the ResourceManager; specifically, it is in charge of sending the resource adjustment requests to the ResourceManager through heartbeat messages and handling the resource adjustment responses accordingly. The ContainerAdjuster talks to the NodeManager and adjusts the container size by calling the remote function in the NodeManager. Finally, the ResourceScheduler allocates the cluster's resources based on scheduling policies such as fairness and capacity. It should be noted that the current resource scheduling in native YARN cannot dynamically perform resource adjustment for a given task. We have therefore extended the resource scheduling to handle the resource adjustment requests.

    The workflow of DynamicAdjust is as follows:

1. The SkewDetector gathers the task reports from the running tasks at runtime. After assessing the progress, the SkewDetector detects skewed tasks and then sends a notification to the SkewMitigator.

2. Upon receiving a notification that a skewed task has been detected, the SkewMitigator computes the amount of resources required for these tasks, then sends a notification to the ResourceRequester. Subsequently, the resource adjustment request is sent by the ResourceRequester to the ResourceScheduler.

3. Once the resource adjustment request is received by the ResourceScheduler, it schedules free resources to be sent to the ResourceRequester in the associated ApplicationMaster, based on the resource inventory in the cluster.

4. When positive responses are received by the ResourceRequester, it sends a notification to the ContainerAdjuster. The ContainerAdjuster then executes the resource adjustment with the cooperation of the NodeManager.

Figure 1: Architecture of DynamicAdjust

    4 DynamicAdjust Design

In this section, we describe the design of our proposed skew mitigation approach in detail. There are two main challenges in DynamicAdjust. First, in order to mitigate skew at run-time, it is necessary to develop an accurate skew detection technique. Second, after the skew has been detected, the next challenging step is to effectively mitigate its negative impact without incurring noticeable overhead. We provide our proposed solutions for these challenges in the following sections.

    4.1 Skew Detection

It is nontrivial to identify skew before task execution while being agnostic to the characteristics of the data that a task is processing. Even when each task's input data is of identical size, skew (e.g., computational skew) may still arise. Many existing solutions detect skew by gathering information about the key-value pairs or by sampling the load distribution; however, these solutions either have to wait for all map tasks to be completed or must perform sampling before the actual jobs are run. Le et al. [22] have demonstrated that waiting for all map tasks to be completed will significantly increase the job completion time. A run-time skew detection technique is therefore desirable for skew mitigation.


    4.1.1 Identifying Skewed Tasks

We consider tasks with a large workload to be skewed tasks, on which DynamicAdjust will execute the skew mitigation algorithm (further details are provided in Section 4.2). To identify skewed tasks, we need to answer the following two questions: (1) Which tasks should be labeled as skewed tasks? (2) When should skewed tasks be detected, thus activating the skew mitigation?

Notably, one of the most significant characteristics of skewed tasks is that they take more time to complete than other tasks [23]. Hence, similar to the existing work in [23], we determine a task to be a skewed task candidate when the task's estimated running time exceeds the historical average task running time by θ_slow percent. Here, the historical average task running time is the average of the durations of the completed tasks. We introduce a threshold, θ_late, such that we will not initiate the skew detection algorithm until θ_late percent of tasks have been completed. As a result, the task durations of the completed tasks can be used as historical data (note that using historical statistics of the completed tasks to improve scheduling in MapReduce is common [24,25]). Moreover, performing mitigation for skewed tasks that are near completion will be minimally beneficial, even though they have a longer run-time. Therefore, we select one task from the skewed task candidates, i.e., the task with the longest remaining time at the moment of detection. We describe how the remaining and running time of a task is predicted in Section 4.1.2.

The next question is when skewed tasks should be detected. In DynamicAdjust, we request resources for skew mitigation once a task is identified as skewed. As a result, if we begin the skew detection too early, there may still be new tasks waiting to be executed at that time; requesting resources for skewed tasks would then compete with the new tasks for resources, thereby postponing their execution. If we start the skew detection too late, on the other hand, DynamicAdjust may miss the right time to mitigate the skew, meaning that no benefit will be gained. Similar to the existing speculation mechanisms in Hadoop [8], we begin the skew detection when there is no task waiting for resources.

Algorithm 1 outlines the skew detection strategy in detail. Note that map-side skew and reduce-side skew are treated separately, but the strategy in Algorithm 1 is applicable to both cases. As shown in lines 2-4 of Algorithm 1, DynamicAdjust waits until no task is requesting a container and θ_late percent of tasks have been completed before evaluating the tasks for skew mitigation. It then iterates over the running tasks to obtain their estimated running times (lines 5-10). If a task's estimated running time exceeds the average task duration by θ_slow percent, the task is labeled as a skewed task candidate. After that, DynamicAdjust selects the candidate with the longest remaining time as the task most likely to be skewed at this time (lines 11-16). The time complexity of this algorithm is O(n), where n represents the number of map tasks.
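The detection strategy above can be sketched as follows. The task fields, threshold defaults, and function names are illustrative assumptions for this sketch, not code from the paper's implementation; the remaining-time and running-time estimates come from the predictor of Section 4.1.2.

```python
# Sketch of the skew detection strategy (Algorithm 1): wait until no
# task is pending and theta_late percent have finished, flag candidates
# whose estimated running time exceeds the average by theta_slow
# percent, and return the candidate with the longest remaining time.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Task:
    task_id: str
    est_running_time: float    # predicted total duration (s)
    est_remaining_time: float  # predicted time left (s)

def detect_skewed_task(running: List[Task],
                       completed_durations: List[float],
                       pending_tasks: int,
                       total_tasks: int,
                       theta_late: float = 0.25,
                       theta_slow: float = 0.5) -> Optional[Task]:
    # Lines 2-4: wait until no task is requesting a container and
    # theta_late percent of tasks are done (so historical data exists).
    if pending_tasks > 0 or len(completed_durations) < theta_late * total_tasks:
        return None
    avg = sum(completed_durations) / len(completed_durations)
    # Lines 5-10: candidates exceed the historical average by theta_slow.
    candidates = [t for t in running
                  if t.est_running_time > avg * (1 + theta_slow)]
    if not candidates:
        return None
    # Lines 11-16: mitigating a near-finished task gains little, so pick
    # the candidate with the longest remaining time.
    return max(candidates, key=lambda t: t.est_remaining_time)

running = [Task("t1", 200.0, 120.0), Task("t2", 90.0, 10.0),
           Task("t3", 250.0, 5.0)]
completed = [100.0, 110.0, 90.0, 100.0]      # avg = 100 s
picked = detect_skewed_task(running, completed, pending_tasks=0,
                            total_tasks=10)
# t1 and t3 both exceed 150 s; t1 has the most time left, so it is picked.
```

A single pass over the running tasks plus one `max` keeps the cost at O(n), matching the complexity stated above.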

    4.1.2 Phase-Aware Task Remaining Time Prediction

In order to monitor the task progress, Hadoop assigns a fixed percentage of progress to each phase of a task [8,9]: for example, 66.7% for the map phase and 33.3% for the merge phase in a map task, and 33.3% for each of the three phases (i.e., copy, sort, and reduce) in a reduce task. Moreover, in each phase, Hadoop monitors the fraction of data processed and sets the corresponding score in this phase accordingly. Hence, for a reduce task that is halfway through the sort phase, the progress score is 33.3% + 0.5 × 33.3% = 49.95%. The state-of-the-art solution, LATE, simplifies the estimation of the time remaining for tasks by assuming that the progress rate is stable across phases: the remaining task time is calculated as (1 − ProScore)/progressRate, where progressRate = ProScore/T_elapsed and T_elapsed is the time the task has been running [8]. However, the duration of each phase is not necessarily proportional to its progress percentage; as a result, the remaining task time may be incorrectly predicted. Fig. 2 presents the average duration of each phase while running PageRank and InvtIndex. We can observe that the durations of the map phases are 70.62% and 94.31% of the total task durations for InvtIndex and PageRank respectively; this contradicts the assumption made by LATE. As shown in Fig. 2b, the same result can be obtained in the reduce stage. Furthermore, Fig. 3 compares the progress rate of each phase between PageRank and InvtIndex. It can be seen that there are significant differences in the progress rates of the different phases of a task. Hence, LATE's estimation of the time remaining for a task based on an unchanged progress rate is problematic.
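Hadoop's phase-weighted progress score and LATE's constant-rate estimate can be sketched as follows, using the default reduce-task weights from the text (function names are ours):

```python
# Hadoop's phase-weighted progress score and LATE's remaining-time
# estimate, which assumes one constant progress rate for the whole task.
REDUCE_PHASES = [("copy", 1 / 3), ("sort", 1 / 3), ("reduce", 1 / 3)]

def progress_score(phases, current_phase: str, frac_done: float) -> float:
    """Completed phases count fully; the current phase counts its fraction."""
    score = 0.0
    for name, weight in phases:
        if name == current_phase:
            return score + frac_done * weight
        score += weight
    return score

def late_remaining_time(score: float, elapsed: float) -> float:
    # LATE: rate = ProScore / T_elapsed, remaining = (1 - ProScore) / rate.
    rate = score / elapsed
    return (1.0 - score) / rate

# A reduce task halfway through the sort phase at t = 105 s:
score = progress_score(REDUCE_PHASES, "sort", 0.5)   # ~0.5 (49.95% in the text)
remaining = late_remaining_time(score, elapsed=105.0)  # ~105 s
```

With a score of roughly 0.5 after 105 s, LATE predicts another ~105 s, i.e., a 210 s total; the worked example in Section 4.1.2 shows why this overshoots when the later phases are faster.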

There are two potential ways to eliminate this problem: (1) adjust the percentage of the task progress for each phase and make it proportional to the duration of the corresponding phase; (2) use different progress rates in different phases. Both of these approaches treat each phase separately and target the same goal. To keep our algorithm simple, we opt for the latter approach, which only requires modifying the remaining time calculation logic in the ApplicationMaster and continues to use the progress score provided by Hadoop. More specifically, we predict the remaining time of a task as the sum of the time left in the current phase and the time needed for the subsequent phases. Here, we calculate the time remaining in the current phase based on the progress rate measured in this phase, then calculate the remaining time for the following phases based on the average progress rates in the corresponding phases of the completed tasks. Note that we record the historical data and the average progress rate per phase so as to ease the calculation. Accordingly, the phase-aware task remaining time T_remain can be calculated as follows:

T_remain = RemPcent_cur / progressRate_cur + Factor × AvgDurt_fol,
RemPcent_cur = (Pcent_com + Pcent_cur − ProScore) / Pcent_cur,
progressRate_cur = ((ProScore − Pcent_com) / Pcent_cur) / T_cur,
Factor = (T_cur + RemPcent_cur / progressRate_cur) / AvgDurt_cur,

where RemPcent_cur and progressRate_cur respectively denote the remaining fraction still to be executed and the estimated progress rate of the current phase, while Pcent_cur, Pcent_com and Pcent_fol are the percentages of task progress for the current, completed and following phases, respectively (e.g., 33.3 percent for the Copy phase by default). Moreover, ProScore is the task progress score provided by Hadoop, Durt_com is the duration of the completed phases, T_cur = T_elapsed − Durt_com is the time elapsed in the current phase, T_elapsed is the time elapsed for the task in question, and AvgDurt_cur and AvgDurt_fol represent the average durations of the current and following phases, respectively. The scaling factor, Factor, scales up the estimated durations of the following phases according to the ratio between the estimated duration of the current phase and the historical average duration of the current phase.
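The phase-aware prediction can be sketched as below. Variable names mirror the paper's symbols; for simplicity this sketch takes Factor as an explicit argument rather than deriving it from the current phase's historical average:

```python
# Phase-aware remaining-time prediction: time left in the current phase
# at its measured rate, plus the scaled average duration of the
# following phases.
def phase_aware_remaining(pro_score: float,
                          pcent_com: float,
                          pcent_cur: float,
                          t_cur: float,
                          avg_durt_fol: float,
                          factor: float = 1.0) -> float:
    # Fraction of the current phase already finished.
    done_frac = (pro_score - pcent_com) / pcent_cur
    # Progress rate measured inside the current phase only.
    rate_cur = done_frac / t_cur
    # Time left in the current phase plus the following phases.
    rem_cur = (1.0 - done_frac) / rate_cur
    return rem_cur + factor * avg_durt_fol

# The PageRank reduce task of Section 4.1.2: Copy finished at 102 s,
# halfway through Sort at 105 s (ProScore ~ 0.5), AvgDurt_fol = 22 s.
t_remain = phase_aware_remaining(0.5, 1 / 3, 1 / 3,
                                 t_cur=105.0 - 102.0,
                                 avg_durt_fol=22.0, factor=1.0)
# t_remain is ~25 s, so the task is predicted to finish at ~130 s,
# against LATE's 210 s estimate for the same snapshot.
```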

Figure 2: Average duration of each phase: (a) map stage, (b) reduce stage

Accordingly, the task running time T_task can be calculated as T_task = T_elapsed + T_remain.

Figure 3: Progress rate of each phase: (a) map stage, (b) reduce stage

To illustrate how the phase-aware task remaining time prediction works, we take a reduce task of a PageRank job as an example. There are three phases in a reduce task, namely Copy, Sort and Reduce, and these phases run at different rates, as shown in Fig. 2b. Suppose that a reduce task completed the Copy phase at 102 s and is now in the Sort phase with a progress score of 50 at 105 s. According to LATE, the remaining time of this task is 105 s and the estimated task completion time is 210 s. However, the progress rate in the Sort phase is obviously faster than in the Copy phase. By contrast, our approach predicts the remaining time based on the estimated progress rate of the current phase, progressRate_cur, and the average duration of the following phase (i.e., the Reduce phase), AvgDurt_fol. Suppose that we obtain AvgDurt_fol = 22 s from the historical profile and set Factor = 1. We thus obtain T_remain = ((1 − 0.5)/0.5) × (105 − 102) + 22 = 25 s and T_task = 130 s. Fig. 4 illustrates the prediction logic of LATE and DynamicAdjust for this example. From the figure, we can observe that the prediction of DynamicAdjust is closer to the average task duration of 132 s (see Fig. 2b).

Figure 4: An example of remaining time prediction for LATE and DynamicAdjust

    4.2 Skew Mitigation

After the skewed tasks have been detected, we next need to mitigate their impact in order to reduce the job completion time. Unlike existing solutions, which repartition the workload of skewed tasks, we opt to dynamically adjust the resources for these tasks. To do this, it is necessary to determine the impact of resource allocation on task duration. In the following sections, we describe our skew mitigation approach in detail.

    4.2.1 Impact of Resources on Task Running Time

As Hadoop YARN only supports allocation of two types of resources (i.e., CPU and memory), we here concentrate on understanding the impact of CPU and memory allocation on the task completion time.

We perform a series of experiments on our testbed cluster (see Section 5 for more details) by varying the resource allocation for the tasks. More specifically, with respect to the impact of the CPU, we record the task durations while changing the CPU allocation for a task from 1 vCore to 8 vCores and fixing the memory allocation at 1 GB; the memory allocation is fixed in order to isolate the impact of the CPU allocation. Fig. 5 illustrates the impact of the CPU for a task randomly selected from InvtIndex. It can be clearly seen from the figure that the task duration decreases sharply at first as the CPU allocation increases. When the CPU allocation exceeds a certain threshold (i.e., 3 vCores in this experiment), only small changes in the task duration can be observed. Accordingly, non-linear regression is used to model the relationship between CPU allocation and task duration with an inverse proportional equation of the form:

T_task = a / Alloc_cpu + b,

where T_task is the task running time, Alloc_cpu is the CPU allocation of the task, and a and b are the fitted coefficients. The regression result is indicated in Fig. 5 by the green line. We also found that the fitting result is inaccurate when the CPU allocation is large; this is because, as the CPU allocation gradually increases, the resource bottleneck may shift from the CPU to other types of resources (e.g., network bandwidth), meaning that the benefit of additional CPU allocation to task durations decreases. A similar observation is made in [26], in which augmenting the network bandwidth beyond a certain threshold was found to have no impact on job completion time when the job is restricted by disk performance. We therefore use a piecewise inversely proportional model to remedy this regression error. As can be seen in Fig. 5, the red line fits better than the green line.
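Because the model is linear in 1/Alloc_cpu, the fit reduces to ordinary least squares. The sketch below shows this on synthetic durations (the data points are illustrative assumptions, not measurements from the paper):

```python
# Fit the inverse-proportional model T = a / alloc + b by ordinary
# least squares on the transformed variable x = 1 / alloc.
def fit_inverse_model(allocs, durations):
    """Least-squares fit of T = a / alloc + b; returns (a, b)."""
    xs = [1.0 / v for v in allocs]           # regress against 1/alloc
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(durations) / n
    a = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, durations))
         / sum((x - mean_x) ** 2 for x in xs))
    b = mean_y - a * mean_x
    return a, b

# Synthetic task durations following T = 120/vCores + 30 seconds.
vcores = [1, 2, 3, 4, 6, 8]
durations = [120.0 / v + 30.0 for v in vcores]
a, b = fit_inverse_model(vcores, durations)
# On noiseless data the fit recovers a ~ 120 and b ~ 30.
```

The piecewise variant mentioned above would simply fit this model separately on the allocations below and above the bottleneck threshold.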

Similarly, we perform a set of experiments by varying the tasks' memory allocation. The CPU resources allocated to a task are determined by the number of vCores allocated to it. Memory allocation, by contrast, is controlled by two configurations: the logical RAM limit and the maximum JVM heap size limit (for a map task, these correspond to mapreduce.map.memory.mb and mapreduce.map.java.opts respectively in the Hadoop configuration file mapred-default.xml [27]). The former is a unit used to manage the resources logically, while the latter reflects the maximum heap size of the JVM that runs the task. Hence, the maximum JVM heap size limit is the actual controller that determines the maximum usable memory for the task. Consequently, we record the task durations while varying the JVM heap size limit from 200 MB (the default value) to 4000 MB and keeping the CPU allocation at 1 vCore; again, the CPU allocation is fixed in order to isolate the impact of memory allocation. We further use non-linear regression to ascertain the relationship between task duration and memory allocation with an inverse proportional model of the form:

T_task = c / Alloc_mem + d,

where T_task is the task running time, Alloc_mem is the memory allocated to the task, and c and d are the fitted coefficients. Fig. 6 plots the impact of memory allocation for a task randomly selected from InvtIndex. It is clear that the inverse proportional model is also applicable to the relationship between task duration and memory allocation. In more detail, the task duration decreases dramatically at the beginning; after the memory allocation becomes sufficient for the task, minimal improvement can be obtained by continuing to increase the memory allocation. However, with respect to the map task, no change in the task duration can be observed while increasing the memory allocation. This is because each map task processes one split of a data chunk in HDFS, and the size of each data chunk is 128 MB; the minimum JVM heap size (200 MB) is already sufficient for the map task.

Figure 5: Relationship between the task duration and the CPU allocation in InvtIndex: (a) a map task, (b) a reduce task

Figure 6: Relationship between task duration and memory allocation in InvtIndex: (a) a map task, (b) a reduce task

    4.2.2 Skew Mitigation Strategy

Since the native Hadoop scheduler allocates an identical amount of resources to each task of the same type (map or reduce), tasks that require more computation will run longer than other tasks due to the resource bound, which can significantly prolong the job completion time. On the other hand, we have demonstrated in the previous section that the impact of resource allocation on task duration can be modeled with an inverse proportional function. We accordingly adopt a simple strategy that dynamically adjusts resources for the detected skewed tasks so as to accelerate them, thereby mitigating the negative impact of the skew.

More specifically, we increase the CPU allocation of the skewed tasks beyond Alloc_old, where Alloc_old is the original CPU allocation of the task. Moreover, DynamicAdjust does not perform memory adjustment at run-time; this is because YARN uses a JVM-based container to execute a task, and the current technique does not support dynamically changing the JVM heap size of a container. For simplicity, DynamicAdjust sets the JVM heap size for each task to 80% of the logical memory limit when the task is first launched. The rationale behind this is to maximize the usable memory while still respecting the resource bound.

In short, this resource allocation strategy is simple but works well in practice. It roughly predicts the amount of resources required for the skewed tasks to run as fast as the normal tasks, then gives preferential treatment to these tasks by increasing their resource allocation accordingly. Even though the memory allocation is not changed dynamically at run-time, we increase the default setting of the JVM heap size before the tasks are run. We further observed during our experiments that 800 MB is in fact a sufficient JVM heap size for a task (the logical RAM limit is 1 GB and the JVM heap size limit is 400 MB by default; eighty percent of the minimum logical RAM limit is 800 MB).

Admittedly, DynamicAdjust also incurs overhead, specifically the additional resources allocated to the skewed tasks, since DynamicAdjust scales up the CPU allocation for skewed tasks in order to accelerate them. However, the number of skewed tasks is small compared to the total number of tasks; as a result, the amount of additional resources incurred by DynamicAdjust is also small.

    5 Evaluation

Our experiments are conducted on a Hadoop cluster with 21 nodes. Each node has four Genuine Intel 2 GHz processors, 8 GB RAM and an 80 GB high-speed hard drive. We deploy the ResourceManager and NameNode on one node, while the other 20 nodes serve as workers. Each worker is configured with eight virtual cores and 7 GB RAM (leaving 1 GB for background processes). The minimum CPU and memory allocations for each container are 1 vCore and 1 GB respectively. The HDFS block size is set to 128 MB and the replication level is set to 3. Cgroups and output compression are enabled.

    The following MapReduce applications are used in the evaluation:

    —InvtIndex:The inverted index data structure is widely used in the search engine indexing algorithm.This application is implemented in Hadoop and builds the inverted index for a text dataset.We use a 30 GB Wikipedia dataset in the evaluation.

    —RelativeFrequency: This measures the proportion of co-occurrences of word wi in which it appears together with word wj within a specific range, which can be denoted as F(wj|wi). We adopt the implementation provided in [28] and use a 30 GB Wikipedia dataset as input.

    —PageRank: This is a link analysis algorithm that estimates the importance of websites. It assigns a rank to each page (vertex) according to the structure of the hyperlinks (edges) connected to it. We adopt the implementation provided in [29] and use a web crawl as the input dataset.

    —Grep: This application is a benchmark included in the Hadoop distribution; it extracts matching strings from text files and counts how many times they occur. In the evaluation, we grep "computer" from a 30 GB Wikipedia dataset.
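    As an illustration of the RelativeFrequency measure described above (not the MapReduce implementation of [28]), the sketch below computes F(wj|wi) for a toy corpus. The `window` parameter, standing in for the "specific range", is an assumption of this sketch.

```python
from collections import defaultdict

def relative_frequencies(docs, window=2):
    """Count co-occurrences of word pairs within a window, then normalize
    each row so that F(wj | wi) = N(wi, wj) / sum over w of N(wi, w)."""
    counts = defaultdict(lambda: defaultdict(int))
    for doc in docs:
        words = doc.split()
        for i, wi in enumerate(words):
            # Neighbors of wi within `window` positions on either side.
            for wj in words[max(0, i - window):i] + words[i + 1:i + 1 + window]:
                counts[wi][wj] += 1
    freqs = {}
    for wi, row in counts.items():
        total = sum(row.values())  # all co-occurrences involving wi
        freqs[wi] = {wj: n / total for wj, n in row.items()}
    return freqs
```

    In a MapReduce setting, the counting step corresponds to the map stage and the per-word normalization to the reduce stage, which is what makes a heavy-hitter word wi produce a skewed partition.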

    5.1 Accuracy of Skew Detection

    To evaluate the accuracy of our skew detection technique, we run MapReduce jobs with only the skew detection activated and without performing the skew mitigation operation. Accordingly, after the skewed tasks are detected, we label these tasks and let them run normally. At the end, we verify whether they are truly skewed tasks. Consistent with [9], skewed tasks are defined as those whose durations exceed the average by 50%. Moreover, we perform the experiments in the same way with Hadoop-LATE (the implementation of LATE in Hadoop), the state-of-the-art skew detection solution. Fig. 7 presents the comparison of the skew detection precision (the fraction of detected tasks that are truly skewed) between DynamicAdjust and Hadoop-LATE.
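    The ground-truth criterion and the precision metric used in this experiment can be sketched as follows; the function names are illustrative and not part of DynamicAdjust's implementation.

```python
def ground_truth_skewed(durations):
    """A task is truly skewed if its duration exceeds the average by 50%,
    the criterion of [9] used as ground truth in the evaluation."""
    avg = sum(durations) / len(durations)
    return {i for i, d in enumerate(durations) if d > 1.5 * avg}

def precision(detected, truth):
    """Precision = TP / (TP + FP): the fraction of detected tasks
    that are truly skewed."""
    if not detected:
        return 0.0
    tp = len(detected & truth)  # detected tasks that are truly skewed
    return tp / len(detected)
```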

    Figure 7: Comparison of skew detection precision between DynamicAdjust and Hadoop-LATE

    It is clear from the figure that DynamicAdjust achieves significantly higher precision than Hadoop-LATE. More specifically, DynamicAdjust improves the precision by 47.61%, 37.5%, 5.7% and 22.17% for InvtIndex, RelativeFrequency, PageRank and Grep, respectively.

    To facilitate understanding of why DynamicAdjust is superior, the skew detection statistics of InvtIndex are illustrated in detail in Tab. 1. It is clear from the table that in the map stage, all truly skewed tasks can be identified by both methods (i.e., Hadoop-LATE and DynamicAdjust). However, the False Positive counts for Hadoop-LATE and DynamicAdjust are 25 and 5, respectively; that is, Hadoop-LATE misjudges 25 normal tasks as skewed, whereas DynamicAdjust misjudges only five. For the reduce tasks, moreover, none of the skewed tasks can be detected by Hadoop-LATE; by contrast, DynamicAdjust detects four out of the five skewed tasks. Hadoop-LATE has low precision because it assumes that the progress speed of each phase remains unchanged; as demonstrated in Section 4.1, this assumption does not hold.

    Table 1: Skew detection details of InvtIndex (the numbers of map and reduce tasks in the experiments are 235 and 60, respectively. TP: True Positive; TN: True Negative; FP: False Positive; FN: False Negative)

    5.2 Performance of the Skew Mitigation

    To evaluate the performance of the skew mitigation of DynamicAdjust, we compare it against the following methods: (1) native Hadoop YARN; (2) a speculation-based straggler mitigation method (Hadoop-LATE); (3) a repartition-based skew mitigation method (SkewTune); and (4) a resource allocation-based skew mitigation method (DREAMS [13]). Since SkewTune is deployed on Hadoop v1, it is slot-based and provides no resource isolation between slots. To enable a fair comparison, we add resource isolation between slots in Hadoop v1 (0.21.0) and implement SkewTune on it. Moreover, each node is configured with six map slots and two reduce slots when running SkewTune. Fig. 8 compares the job completion time of different MapReduce applications under the different mitigation methods. It can be seen from the figure that DynamicAdjust performs better than the other skew mitigation methods in all cases. More specifically, DynamicAdjust reduces the job completion time by 32.86%, 40.85%, 37.27% and 22.53% for InvtIndex, RelativeFrequency, PageRank and Grep, respectively. Furthermore, no noticeable improvement can be observed in the experiments for Hadoop-LATE and SkewTune, which perform worse than YARN in some cases; for example, SkewTune consumes more time than native YARN in the PageRank application, as shown in Fig. 8c. In addition, DREAMS improves the job completion time for InvtIndex and RelativeFrequency, but yields no improvement for PageRank and Grep, which exhibit skew in the map stage.

    To better explain the superiority of DynamicAdjust over the other skew mitigation methods, we illustrate the detailed execution timelines of the different methods while running PageRank. As shown in Fig. 9a, for native YARN, the durations of some tasks in the map stage are significantly longer than those of others. Since the reduce function can only be performed after all intermediate data have been transferred, these slow-running map tasks delay the start of the reduce phase. It is clear from Fig. 9a that a large amount of time (from 100 s to 200 s) is wasted waiting for the slow-running map tasks. In contrast to the map stage, all tasks in the reduce stage have similar durations.

    Fig. 9b presents the execution timeline of Hadoop-LATE, which replicates the execution of the slow-running tasks by leveraging idle containers. This approach may accelerate some stragglers; nevertheless, since these slow-running tasks process a skewed workload, replicating them elsewhere does not alter the workload they need to process, and the improvement obtained is therefore not significant. As shown in Fig. 9b, the slow-running tasks still cannot be accelerated.

    Figure 8: Job completion time comparison (a) InvtIndex (b) RelativeFrequency (c) PageRank (d) Grep

    In comparison, SkewTune splits up the unprocessed work of a skewed task and launches a new job (called the mitigation job) to process the incomplete work. The skewed task can consequently be accelerated through the use of free cluster resources. As can be seen from Fig. 9c, one of the slow-running map tasks is processed by a mitigation job and finishes earlier than the other skewed tasks. However, SkewTune can only mitigate one task at a time; in other words, if a mitigation job is in progress, no new mitigation jobs can be launched (see Section 3.2 in [10]). As a result, if several skewed tasks arise concurrently, SkewTune may miss the best time to mitigate all of them. Fig. 9c presents one such example. Note that this is a real MapReduce application running on real-world data (the ClueWeb09 web crawl [29]). As Fig. 9c shows, after the mitigation job is completed, no more tasks can be selected as skewed tasks; this is because SkewTune only selects tasks whose remaining time is more than twice the repartitioning overhead w (as reported in [10], w = 30 s). Therefore, SkewTune cannot improve the job completion time in this case.
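    SkewTune's single-mitigation constraint and remaining-time threshold can be sketched as follows; this is an illustrative reading of the rule in Section 3.2 of [10], not SkewTune's actual code.

```python
def pick_mitigation_candidate(remaining_times, w=30.0, mitigation_in_progress=False):
    """Illustrative sketch of SkewTune's task selection rule:
    - at most one mitigation job may run at a time;
    - a task qualifies only if its expected remaining time exceeds
      twice the repartitioning overhead w (w = 30 s as reported in [10])."""
    if mitigation_in_progress:
        return None  # a mitigation job is already running: launch nothing new
    candidates = {t: r for t, r in remaining_times.items() if r > 2 * w}
    if not candidates:
        return None  # no task's remaining work is worth repartitioning
    return max(candidates, key=candidates.get)  # mitigate the slowest task
```

    Under this rule, several concurrent skewed tasks are handled one at a time, which is why skew that arrives in a burst can go unmitigated.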

    Fig. 9d presents the execution timeline of DREAMS. As the figure shows, there is no improvement over native YARN for PageRank. This is because DREAMS offers no help in alleviating skew in the map stage: it adjusts the resources of reduce tasks based on the estimated partition sizes, but in PageRank there is no data skew in the reduce stage. As shown in Fig. 9d, each reduce task has a similar duration, meaning that there is no room left for DREAMS to improve.

    Figure 9: Job execution timeline comparison (a) YARN (b) Hadoop-LATE (c) SkewTune (d) DREAMS (e) DynamicAdjust

    Similar to DREAMS, DynamicAdjust is a resource-aware skew mitigation technique. However, DynamicAdjust mitigates the skew in both the map and reduce stages: it detects skew based on the estimated remaining task time and adjusts the resources of skewed tasks at run-time. It is clear from Fig. 9e that the durations of the skewed map tasks are decreased significantly; as a result, the job completion time is dramatically improved.

    6 Related Work

    Data skew in MapReduce. This problem has recently triggered widespread concern in the area of large-scale data processing. In terms of skew detection, Chen et al. [9] consider a task to be skewed when its predicted duration exceeds the average by 50%.

    In terms of skew mitigation, Gufler et al. proposed a load-balancing algorithm that reassigns partitions based on a cost model. However, this kind of approach necessitates waiting for the completion of all map tasks before the reduce tasks can be launched; as shown in [30], this significantly prolongs the job. A progressive sampler [5] has been proposed to predict the key distribution of the intermediate data. Rather than reassigning the large partitions before launching the tasks, SkewTune [10] repartitions the skewed partitions at run-time; however, this incurs the overhead of repartitioning the data and concatenating the output. In addition to the above solutions, Irandoost et al. [31] propose the traffic cost-aware partitioner (TCAP) to handle reducer-side data skew in MapReduce. This approach attempts to balance the cost of network traffic during shuffling while balancing the reducer load; nevertheless, it only targets data skew in the reduce stage.

    Stragglers in MapReduce. Dean et al. [18] first identified the straggler problem in MapReduce; they back up the execution of the remaining in-progress tasks as the job nears completion. LATE [8] extends this work by speculatively launching backup tasks for slow-running tasks. However, running a replica of a data-skewed task (which has more data to process) may be counter-productive, because duplicating the execution of an overloaded task cannot reduce its workload. Unlike the work of [18], Mantri [23] culls stragglers based on their causes and executes tasks in descending order of their input sizes. Nevertheless, Mantri assumes that the task workloads are known before a stage begins.

    Resource-aware scheduling. As the application of large-scale data processing to the IoT [32-34] has become increasingly common, the desire for fine-grained resource scheduling has become widespread. Hadoop version 1 (MRv1) uses a slot-based resource allocation scheme and neglects the run-time resource consumption of tasks when allocating resources. Many solutions have accordingly been proposed to better utilize cluster resources. For instance, Polo et al. [35] proposed a resource-aware scheduler, named RAS, which uses specific slots for scheduling. However, RAS places reduce tasks before map tasks; unfortunately, even when reduce tasks are allocated slots, they cannot begin until the map tasks have generated the intermediate data, meaning that the resources occupied by waiting reduce tasks are wasted. Hadoop YARN [19] was therefore proposed; it supports customizing the container size by specifying the amount of resources. However, YARN considers the resource consumption of map tasks and reduce tasks to be the same, which is problematic for skewed jobs. MROrchestrator, proposed by Sharma et al. [36], identifies resource deficits based on resource profiles and dynamically adjusts the resource allocation. Nevertheless, monitoring the resource consumption of each task at run-time is difficult to implement, since many background processes on the server dynamically affect resource usage. Compared with MROrchestrator, our solution detects skewed tasks based on the predicted remaining task time, which is simpler and more efficient. Moreover, there are other categories of resource scheduling policies for MapReduce, such as the work in [37-39]. These approaches schedule resources in terms of the number of task slots in an attempt to achieve improved fairness or better resource utilization; however, they do not attempt to address data skew.

    7 Conclusion

    In this paper, we have presented DynamicAdjust, a run-time skew mitigation method. Unlike existing solutions, which estimate and then rebalance the loads of tasks, DynamicAdjust adjusts resources at run-time for tasks that require more computation. This completely eliminates the overhead caused by load estimation and rebalancing. Compared to our previous work, DREAMS, DynamicAdjust makes no assumption regarding the causes of the skew, but instead monitors the progress of tasks; this enables it to mitigate both the partitioning skew and the computational skew that arise in the map or reduce stages. Finally, we conducted experiments based on real MapReduce workloads on a 21-node Hadoop cluster. The results demonstrated that DynamicAdjust increases the skew detection precision by up to 47.61% when compared to a baseline method named Hadoop-LATE. The results also revealed that DynamicAdjust can effectively mitigate the negative impact of skew and reduce the job completion time by up to 40.85% when compared to native YARN. Furthermore, existing solutions such as Hadoop-LATE and SkewTune are found to provide no improvement in our experiments, while DREAMS brings no benefit for the PageRank and Grep applications, which have skew in the map stage.

    Acknowledgement: We thank Aimal Khan, Peixin Chen and Qi Zhang for very useful discussions.

    Funding Statement: This work was funded by the Key Area Research and Development Program of Guangdong Province (2019B010137005) and the National Natural Science Foundation of China (61906209).

    Conflicts of Interest: The authors declare that they have no conflicts of interest to report regarding the present study.
