
    An Improved Memory Cache Management Study Based on Spark

    Computers, Materials & Continua, September 2018

    Suzhen Wang, Yanpiao Zhang, Lu Zhang, Ning Cao and Chaoyi Pang

    Abstract: Spark is a fast, unified analytics engine for big data and machine learning, in which memory is a crucial resource. Resilient Distributed Datasets (RDDs) are parallel data structures that allow users to explicitly persist intermediate results in memory or on disk, and each RDD can be divided into several partitions. During task execution, Spark automatically monitors cache usage on each node, and when an RDD needs to be stored in a cache whose space is insufficient, the system drops old data partitions in a least recently used (LRU) fashion to release space. However, Spark has no mechanism dedicated to selecting RDDs for caching, and LRU takes neither the dependencies among RDDs nor the needs of future stages into consideration. In this paper, we propose an optimization approach for RDD caching and LRU replacement based on the features of partitions, which includes three parts: a prediction mechanism for persistence, a weight model using the entropy method, and an update mechanism for weights and memory based on RDD partition features. Finally, through verification on the Spark platform, the experimental results show that our strategy can effectively reduce execution time and improve memory usage.

    Keywords: Resilient distributed datasets, update mechanism, weight model.

    1 Introduction

    Human society has entered the era of Big Data, which has become one of the most important factors of production. The big data of industry and enterprise, which can reach hundreds of TB or even hundreds of PB at a time, has far exceeded the processing capacity of traditional computing techniques and information systems. Cloud computing enables convenient, on-demand access to a shared pool of configurable computing resources, see Li et al. [Li and Liu (2017); Liu and Xiao (2016); Zhao, Wang, Xu et al. (2015); Thanapal and Nishanthi (2013)]. Spark has been widely adopted for large-scale data analysis, see Apache Spark [Apache Spark (2018)]. One of the most important capabilities of Spark is persisting (or caching) a dataset in memory or on disk across operations, see Lin et al. [Lin, Wang and Wu (2014); Zaharia, Chowdhury, Franklin et al. (2010)]. However, users manually select which RDDs to cache based on experience, which introduces uncertainty and impacts efficiency. At the same time, Spark automatically monitors cache usage on each node and drops old data partitions in an LRU fashion.

    However, in Spark, the LRU algorithm considers only the time feature of the node where a partition is located, not the features of the partition itself.

    Recognizing this problem, scholars have researched various replacement algorithms. By studying FIFO, LRU and other cache algorithms, Swain et al. [Swain and Paikaray (2011)] designed the AWRP algorithm, which calculates a weight for each object, but it assumes that all blocks are of equal size. For the Spark framework, Bian et al. [Bian, Yu, Ying et al. (2017)] propose an adaptive cache management strategy with three components: an automatic selection algorithm, a parallel cache cleanup algorithm and a lowest-weight replacement algorithm; however, it ignores the fact that weights change during execution. The WR algorithm in Duan et al. [Duan, Li, Tang et al. (2016)] calculates the weight of RDDs from the computation cost, usage count and size of the partitions, but it does not account for the change in the remaining usage frequency of a cached partition while the task is running. Jiang et al. [Jiang, Chen, Zhou et al. (2016)] further consider, before persistence, whether persistence is worthwhile by comparing its cost against the computing cost. Zhang et al. [Zhang, Shou, Xu et al. (2017); Chen, Zhang, Shou et al. (2017); Chen and Zhang (2017)] put forward fine-grained RDD check-pointing and kick-out selection strategies according to the DAG diagram, which effectively reduce RDD computing cost and maximize memory utilization. Meng et al. [Meng, Yu, Liu et al. (2017)], taking full account of the distributed storage characteristics of RDD partitions, point out the influence of complete and incomplete RDD partitions on cache memory, but do not discuss the choice of which RDDs to cache.

    By analyzing the research above, in this paper we optimize the selection strategy for caching RDDs and the LRU replacement algorithm in the following aspects:

    (1) We propose a prediction mechanism for caching RDDs. The usage frequency of each RDD is obtained from the DAG diagram, and whether to cache the RDD is then decided according to the costs of persistence versus recomputation.

    (2) We put forward a weight replacement algorithm based on RDD partition features. According to the characteristics of the partitions, we build the weight model using the entropy method.

    (3) We propose an update mechanism for storage memory and partition weights. Whenever the usage frequency of a partition changes, we update the partition weights and the storage memory in real time.

    2 Introduction of Spark cache

    2.1 Resilient distribution datasets

    An RDD is a distributed collection of objects; that is, each one can be divided into multiple partitions. RDDs support four types of operations: creations, transformations, controls and actions. The SparkContext is responsible for the creation of RDDs, transformation operations create a new dataset from an existing one, and control operations mainly persist RDDs. During program execution, the operations that produce an RDD are delayed until an action operation happens, see Ho et al. [Ho, Wu, Liu et al. (2017)].
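The lazy-evaluation behavior described above can be illustrated with a toy model in plain Python (an illustrative sketch, not the real Spark API): transformations only record lineage, and nothing is computed until an action runs.

```python
# Toy model of RDD lazy evaluation: map() only records lineage,
# while collect() (an action) triggers the actual computation.
class ToyRDD:
    def __init__(self, data=None, parent=None, fn=None):
        self.data, self.parent, self.fn = data, parent, fn
        self.computed = 0  # counts how often this RDD was materialized

    def map(self, fn):
        # Transformation: lazy, just returns a new RDD that remembers its parent.
        return ToyRDD(parent=self, fn=fn)

    def collect(self):
        # Action: walks the lineage chain and actually computes the data.
        self.computed += 1
        if self.parent is None:
            return list(self.data)
        return [self.fn(x) for x in self.parent.collect()]

base = ToyRDD(data=[1, 2, 3])
doubled = base.map(lambda x: x * 2)   # nothing computed yet
result = doubled.collect()            # lineage is evaluated here
```

Without caching, every further action would walk the lineage again and recompute `base`, which is exactly the cost that persisting a reused RDD avoids.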

    During task execution, a DAG graph based on the lineage is created by the DAGScheduler, which further divides stages based on the dependencies between RDDs; see Gounaris et al. [Gounaris, Kougka, Tous et al. (2017); Geng (2015)] for more details. Each stage creates a batch of tasks, which are then assigned to the executor processes. After all the tasks in a stage are executed, reused RDDs can be stored in the cache for further use, as illustrated in Fig. 1. Reused data is common in iterative computations such as PageRank and K-means, as mentioned in Zaharia et al. [Zaharia, Chowdhury, Das et al. (2012); Xu, Li, Zhang et al. (2016); Zhang, Shou, Xu et al. (2017); Chen and Zhang (2017); Napoleon and Lakshmi (2010)]. Caching reused RDDs in storage memory can effectively reduce computing cost. During execution, users can specify the caching level and the object to be stored; for example, MEMORY_ONLY and MEMORY_ONLY_2 both store the RDD in memory [Ding and He (2004)].

    Figure 1: The schematic diagram of parallel computing for RDD

    2.2 Memory management mode

    As shown in Fig. 2, the schematic diagram of memory partitioning in Spark 2.0.1 is obtained by analyzing the source code, see Dabokele et al. [Dabokele (2016); Hero1122 (2017)]. Memory is first divided into two main parts: MemoryOverhead (default 384 MB) and ExecutorMemory. ExecutorMemory is further divided into ReservedMemory (default 300 MB) and UsableMemory. If the system memory is less than 1.5 × ReservedMemory, an error is reported. By default, 60% (the proportion can be modified) of the UsableMemory is used for storage and computation: HeapExecutorMemory is used for task computing, and HeapStorageMemory is mainly used to cache intermediate results that need to be reused. Since Spark 1.6, StorageMemory and ExecutorMemory can be dynamically converted to each other, which is called unified memory management: Storage and Execution can borrow each other's memory. Note that when neither space has enough memory, the Storage portion spills data beyond 50% of its share to disk (based on the storage level) until the borrowed memory is returned, because execution (Execution) is considered more important than caching (Storage). The data released there is also chosen by the LRU algorithm.

    Figure 2: The diagram of Spark memory allocation
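The sizing described above can be sketched as a small calculation. This is a minimal sketch under the stated defaults (300 MB reserved memory, a 60% unified fraction, an even storage/execution split); the constant names are illustrative, not Spark configuration keys.

```python
# Assumed defaults, taken from the description of the Spark 2.x memory layout.
RESERVED_MB = 300          # ReservedMemory
UNIFIED_FRACTION = 0.6     # share of UsableMemory for storage + execution
STORAGE_FRACTION = 0.5     # initial split between Storage and Execution

def unified_memory(executor_mb):
    """Return (storage_mb, execution_mb) for a given executor heap size."""
    if executor_mb < 1.5 * RESERVED_MB:
        # Spark reports an error when system memory < 1.5 * ReservedMemory.
        raise ValueError("system memory below 1.5 * reserved memory")
    usable = executor_mb - RESERVED_MB
    unified = usable * UNIFIED_FRACTION
    # Storage and Execution each start with half of the unified region
    # and may borrow from each other at runtime.
    return unified * STORAGE_FRACTION, unified * (1 - STORAGE_FRACTION)

storage, execution = unified_memory(1024)   # e.g. a 1 GB executor heap
```

For a 1024 MB heap this gives roughly 217 MB each of initial storage and execution memory, which makes clear why small executors leave little room for cached partitions.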

    2.3 Cache mechanism in Spark

    All of the calculations in Spark are done in memory, and when an RDD needs to be reused, it is cached based on the user's experience. When storage capacity is insufficient, a replacement algorithm must reclaim memory. The default algorithm is LRU, which is mentioned in He et al. [He, Kosa and Scott (2007)]. The principle of the LRU algorithm is: (1) newly added data is inserted at the head of the linked list; (2) accessed data is moved to the head of the list; (3) when the storage space of the linked list is insufficient, the data at the tail of the list is discarded, as shown in Fig. 3.

    Figure 3: The principle of LRU algorithm
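The three rules above can be sketched with Python's OrderedDict, which behaves much like the linked-hash-map structure Spark uses (a minimal illustration, not Spark's implementation):

```python
from collections import OrderedDict

# Minimal LRU cache mirroring the three rules: new entries go to the head,
# accessed entries move to the head, eviction drops the tail.
class LRUCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.entries = OrderedDict()   # most-recently-used at the end

    def access(self, key, value=None):
        if key in self.entries:
            self.entries.move_to_end(key)        # rule (2): move to head
            return self.entries[key]
        self.entries[key] = value                # rule (1): insert at head
        if len(self.entries) > self.capacity:
            self.entries.popitem(last=False)     # rule (3): evict the tail
        return value

cache = LRUCache(2)
cache.access("rdd0", "p0")
cache.access("rdd1", "p1")
cache.access("rdd0")           # rdd0 becomes most recently used
cache.access("rdd2", "p2")     # evicts rdd1, the least recently used
```

Note that the eviction decision depends only on access recency, never on how often a partition will still be needed — precisely the weakness the following example exposes.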

    Figure 4: The cache replacement schematic of RDD

    In Spark, the LRU replacement algorithm is implemented with a LinkedHashMap, which has the characteristics of a doubly linked list, see Lan [Lan (2013)]. Because it cannot predict the future use of each page, it releases the least recently used page, see Wang [Wang (2014)]. However, in Spark, different RDD partitions in the same storage memory are heterogeneous; that is, they differ in size and usage frequency. In this case, considering only the time factor leads to a lot of unnecessary computation.

    For instance, let each RDD be paired with its usage frequency. Fig. 4 shows the cache replacement schematic of the RDDs. Whenever a cached RDD is used, its usage frequency decreases by one. Following the LRU policy, when RDD0 is used for the second time, it is moved in front of RDD1 even though its frequency drops to zero, which means RDD0 will not be reused in future operations. When RDD3 is to be cached and memory is insufficient, RDD1, with a remaining frequency of 3, is released instead. That partition must then be recomputed the next time it is needed, incurring unnecessary computational cost. From the above we can see that an RDD partition that does not need to be reused may still occupy memory space, while a partition that may be reused next may already have been freed from memory. The LRU algorithm is also used when the ExecutorMemory is insufficient, in which case StorageMemory moves its more valuable partition data to disk or elsewhere. Neither cache replacement nor memory reclamation meets the demands of task computing well, so it is necessary to develop a replacement strategy based on the partition features of RDDs.

    3 Cache replacement model of RDD

    This section proceeds as follows. First, we analyze the factors that influence RDD caching, and then propose three innovations: the prediction mechanism for persistence, the weight model using the entropy method, and the update mechanism for weights and memory based on RDD partition features.

    Note that each job contains several RDDs. Let R = {R1, ..., Ri, ..., Rn} be the set of RDDs, and let RPi = {RPi1, ..., RPij, ...} be the set of partitions of Ri.

    Definition 1 (Task execution speedup). We use the task execution speedup, denoted TEsp, to measure the task performance of the optimized algorithm; the greater the speedup, the better the performance. The formula is:

    TEsp = TLRU / Topt (1)

    where TLRU is the execution time with the LRU algorithm and Topt is the execution time with the optimized algorithm.
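Assuming Formula (1) defines the speedup as the ratio of the two execution times, it can be sketched as:

```python
# Task execution speedup, assumed here to be TE_sp = T_LRU / T_opt:
# values above 1 mean the optimized algorithm finished faster.
def task_execution_speedup(t_lru, t_opt):
    return t_lru / t_opt

# e.g. a job that drops from 120 s under LRU to 100 s under the
# optimized algorithm has a speedup of 1.2
te_sp = task_execution_speedup(120.0, 100.0)
```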

    3.1 Analysis of influence factors

    To support this research, it is necessary to understand the characteristics of RDDs. The characteristic elements are as follows:

    (1) The frequency of utilization

    In order to avoid unnecessary computation, it is necessary to judge the usage frequency of each RDD. When an action occurs, the DAGScheduler creates a DAG based on the lineage of the RDDs. By traversing the DAG diagram, we use (Ri, Ni) to represent the characteristics of an RDD, in which Ni is its total usage frequency during the entire program. An RDD with larger Ni is more worthy of being cached.

    (2) The remainder use frequency of RDD partition

    RDDs are cached in the form of partitions, and the residual frequency of a partition decreases gradually over the course of task execution. Let Nij be the remaining usage frequency of partition j of Ri. Before Ri is cached, we initialize Nij = Ni. When Ri is cached for the first time, Nij is reduced by one, and it continues to decrease whenever Ri is used.

    (3) Computational cost

    When cache memory is insufficient, the LRU algorithm releases the least recently used RDD, taking into account only the time feature of the node where the partition is located. In fact, unnecessary computational overhead arises whenever an evicted partition needs to be reused. Therefore, the computational cost of a partition should be a crucial factor: a partition with higher cost should not be replaced. Here we use Cij, defined in Duan et al. [Duan, Li, Tang et al. (2016)], to express the computational cost of partition j of Ri.

    Let ETij be the finish time of computing partition j of Ri and STij its start time, so that Cij = ETij − STij. The execution time of an RDD is determined by the maximum time over all of its partitions, so the computational cost of an RDD is:

    Ci = max(Ci1, ..., Cij, ...)

    (4) The size of partition

    The partitions that occupy the larger memory space should be preferentially eliminated to release more resources.

    3.2 The prediction mechanism for caching

    The prediction mechanism is divided into two parts:

    (1) When Ni is equal to 2. In this case, the frequency drops to 1 once the RDD is stored in the cache; if it is not cached, a recomputation cost is incurred. So it is necessary to decide whether the RDD is worth caching by comparing the cost of caching it against the cost of recomputing it.

    (2) When Ni > 2, the RDD is suggested to be cached.

    The execution process is as follows:

    Algorithm 1: RDD automatic cache prediction algorithm
    Input: RDD sequence R = {R1, ..., Ri, ..., Rn}; the usage frequency of each RDD: NR; the partitions of each RDD: RP; the size of each RDD partition: SRP; the remaining usage frequency of each RDD partition: NRP; the remaining memory size of the storage node: Scach; C, the set of partitions cached.
    Initialization: NRP = NR
    for i = 1 to n
        if (NRi = 2 and SRPi < Scach and caching Ri is cheaper than recomputing it) then cache Ri
        else if (NRi > 2 and SRPi < Scach) then cache Ri
        end if
        if (SRPi > Scach) then call Algorithm 2
        end if
    end for
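The prediction logic of Algorithm 1 can be sketched in Python as follows. This is a hedged sketch: the data layout is illustrative, and `cost_recompute`/`cost_cache` are assumed callables standing in for the paper's cost comparison in the Ni = 2 case.

```python
# Sketch of Algorithm 1: decide for each RDD whether it is worth caching.
def predict_cache(rdds, s_cache, cost_recompute, cost_cache):
    cached = []
    for rdd in rdds:                    # rdd: dict with 'name', 'freq', 'size'
        if rdd['size'] > s_cache:
            continue                    # no room: replacement (Algorithm 2) would run here
        if rdd['freq'] > 2:
            cached.append(rdd['name'])  # frequently reused: always cache
            s_cache -= rdd['size']
        elif rdd['freq'] == 2 and cost_recompute(rdd) > cost_cache(rdd):
            cached.append(rdd['name'])  # cache only if recomputation costs more
            s_cache -= rdd['size']
    return cached

chosen = predict_cache(
    [{'name': 'R1', 'freq': 3, 'size': 2},
     {'name': 'R2', 'freq': 1, 'size': 1}],
    s_cache=4,
    cost_recompute=lambda r: 10,
    cost_cache=lambda r: 5)
```

Here R1 is cached because its frequency exceeds 2, while R2 is skipped because it is used only once.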

    3.3 Weight replacement model

    Replacement operations are required when the storage space cannot accommodate an RDD that needs to be cached. The Storage section can also use any free memory in the Execution section; when execution requires more memory, the storage portion spills data to disk (based on the storage level) until the borrowed memory is returned, and forced eviction is likewise based on the LRU algorithm. To represent the importance of each index in the analysis, we adopt weights. Therefore, to reduce the cost of recomputation, we put forward a weight calculation model based on partition features, using the entropy method of Zuo et al. [Zuo, Cao and Dong (2013)], to optimize the LRU algorithm. The entropy method determines index weights from the degree of variation of each index value. Entropy is a measure of the degree of disorder in a system, and can be used to measure the effective information in the data and hence to determine weights. The weight of each partition is determined according to the degree of difference in the feature values across partitions. When a feature shows large differences among the evaluated objects, its entropy is smaller, which means the feature provides more effective information, and accordingly its weight should be larger. Conversely, if the differences in a feature are small, its entropy is larger, indicating that the feature provides less information, and its weight should be smaller. When a feature takes exactly the same value for every partition, the entropy reaches its maximum, which means that feature carries no useful information. The process of the weight calculation is as follows:

    (1) Converting the characteristic of partition into matrix form

    Assume there are n partitions in the storage memory, and each partition has m feature attributes. In this paper we set m = 3; the three features are Nij, Cij and SRij. Let Xij be the value of the j-th index of the i-th partition; the matrix is X = (Xij), with n rows and m columns.

    (2) Normalization processing

    Since the measurement units of the features are inconsistent, standardization must be performed before computing; that is, the absolute feature values are converted to relative values to solve the homogeneity problem of different eigenvalues. Nij and Cij are positive-correlation indices, while SRij is a negative-correlation index. The normalization formulas are:

    X'ij = (Xij − min_i Xij) / (max_i Xij − min_i Xij) for positive indices,
    X'ij = (max_i Xij − Xij) / (max_i Xij − min_i Xij) for negative indices.

    For convenience, the normalized dataX'ijis still represented byXij.

    (3) Under the j-th feature, the proportion occupied by the i-th partition:

    Pij = Xij / Σ(i=1..n) Xij

    (4) The entropy of the j-th feature:

    ej = −k Σ(i=1..n) Pij ln(Pij)

    where k > 0, ln is the natural logarithm, ej ≥ 0, and k = 1/ln(n).

    (5) The difference coefficient of the j-th feature:

    gj = 1 − ej

    (6) The weight of each feature:

    wj = gj / Σ(j=1..m) gj

    (7) The weight of each partition:

    vi = Σ(j=1..m) wj Xij
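Steps (1)-(7) can be sketched in pure Python. This is a hedged reconstruction: the feature triple, the indicator directions and k = 1/ln(n) follow the standard entropy method and may differ in detail from the authors' implementation.

```python
import math

# Entropy-method weights over n partitions and m features.
# Each row of X is a partition's features, e.g. [remaining frequency N_ij,
# compute cost C_ij, size S_ij]; positive[j] marks positive indicators.
def entropy_weights(X, positive):
    n, m = len(X), len(X[0])
    cols = list(zip(*X))
    # (2) direction-aware min-max normalisation
    norm = [[0.0] * m for _ in range(n)]
    for j in range(m):
        lo, hi = min(cols[j]), max(cols[j])
        span = (hi - lo) or 1.0
        for i in range(n):
            v = (X[i][j] - lo) / span
            norm[i][j] = v if positive[j] else 1.0 - v
    # (3) proportions and (4) entropy e_j with k = 1 / ln(n)
    k = 1.0 / math.log(n)
    e = []
    for j in range(m):
        col_sum = sum(norm[i][j] for i in range(n)) or 1.0
        p = [norm[i][j] / col_sum for i in range(n)]
        e.append(-k * sum(pi * math.log(pi) for pi in p if pi > 0))
    # (5) difference coefficients and (6) feature weights
    g = [1.0 - ej for ej in e]
    w = [gj / sum(g) for gj in g]
    # (7) partition weight: weighted sum of normalised features
    return [sum(w[j] * norm[i][j] for j in range(m)) for i in range(n)]

# Three partitions: high-freq/high-cost/small, low-freq/low-cost/large, middling.
weights = entropy_weights(
    [[3, 8.0, 2.0], [1, 2.0, 6.0], [2, 5.0, 4.0]],
    positive=[True, True, False])
```

In this example the first partition (most frequently reused, costliest to recompute, smallest) receives the highest weight and so is the last candidate for eviction.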

    Finally, having calculated the weight of each partition, the partition with the lowest weight is considered first for replacement when a replacement happens. The process is as follows:

    Firstly, we compare the weight of the partition to be cached with the lowest weight in the cache:

    (1) If there is a qualified partition in the cache, release that partition; otherwise turn to (2).

    (2) Put the RDD that needs to be cached into the wait-cache area, and wait for a weight update in the storage memory.

    The execution process is as follows:

    Algorithm 2: weight replacement
    Input: partition to be cached: RP, with weight v; the size of RP: SRP; surplus space of storage memory: Scach; C, the set of partitions cached; the sizes of the cached partitions: {CSRP1, ..., CSRPp}
    for i = 1 to p
        if (CRPi.weight < v) then insert CRPi into weightList, ordered by weight from small to large
        end if
    end for
    for i = 1 to weightList.length
        if (SRP > Scach) then release weightList[i] and add its size to Scach
        end if
    end for
    if (SRP ≤ Scach) then cache RP; otherwise put RP into the wait-cache area
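The replacement logic can be sketched as follows. This is an assumption-laden reconstruction: the wait-cache bookkeeping is simplified, and only partitions lighter than the incoming one are ever considered for eviction.

```python
# Sketch of Algorithm 2: release the lowest-weight cached partitions
# until the new partition fits, never evicting anything weightier than it.
def weight_replace(cached, new_part, free_mem):
    # cached: list of dicts {'name', 'size', 'weight'}; new_part likewise.
    victims = sorted((p for p in cached if p['weight'] < new_part['weight']),
                     key=lambda p: p['weight'])      # lightest first
    freed, evicted = free_mem, []
    for p in victims:
        if freed >= new_part['size']:
            break
        evicted.append(p['name'])
        freed += p['size']
    if freed >= new_part['size']:
        return evicted          # enough space reclaimed: new_part is cached
    return None                 # otherwise new_part waits for a weight update

out = weight_replace(
    [{'name': 'P1', 'size': 2, 'weight': 0.2},
     {'name': 'P2', 'size': 3, 'weight': 0.9}],
    new_part={'name': 'P3', 'size': 2, 'weight': 0.5},
    free_mem=0)
```

Here only P1 is evicted: it weighs less than the incoming P3 and freeing it already makes enough room, while the heavier P2 is protected.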

    3.4 The update mechanism for storage memory and weight of partition

    As we can see, in the process of task execution the remaining frequency of each partition constantly decreases as partitions are used, and the weights of the corresponding partitions change accordingly. So we propose the update mechanism for storage memory and partition weights: whenever a partition in the storage area is used, its usage frequency is reduced by one, and all partitions are traversed to update their weights. At the same time, any partition whose remaining frequency reaches zero is released to free more memory. During task execution, the partition weights are updated whenever the remaining usage frequency of any partition in the storage area decreases. The execution process is as follows:

    Algorithm 3: Update mechanism
    Input: the remaining usage frequency of the partitions: NRP; C, the set of partitions cached: C = {CRP1, ..., CRPi, ..., CRPp}
    while an RDD will be used in computing:
        for i = 1 to p
            if (CRPi will be used) then
                NRPi = NRPi − 1
                renew CRPi.weight
            end if
            if (NRPi == 0) then C = C − CRPi
            end if
        end for
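The update mechanism can be sketched as follows (illustrative data layout; `recompute_weight` stands in for the entropy-method weight update of Section 3.3):

```python
# Sketch of Algorithm 3: after a cached partition is used, decrement its
# remaining frequency, drop it when that reaches zero, and refresh weights.
def update_cache(cached, used_name, recompute_weight):
    still_cached = []
    for p in cached:                      # p: dict with 'name', 'freq', 'weight'
        if p['name'] == used_name:
            p['freq'] -= 1                # one fewer future use remains
        if p['freq'] > 0:
            p['weight'] = recompute_weight(p)   # weights refresh on every change
            still_cached.append(p)
        # p['freq'] == 0: the partition is released to free memory
    return still_cached

cache = [{'name': 'P1', 'freq': 1, 'weight': 0.4},
         {'name': 'P2', 'freq': 3, 'weight': 0.7}]
cache = update_cache(cache, 'P1', recompute_weight=lambda p: p['freq'] / 10)
```

After the update, P1 (no remaining uses) has been released, and P2's weight reflects its current frequency.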

    4 Experimental verification

    The environment required in this experiment is as follows:

    (1) Cluster environment: six virtual machines created on two laptop computers and a desktop.

    (2) Cloud environment: Spark 2.0.1 as the computing framework and Hadoop YARN as the resource scheduling module.

    (3) Monitoring environment: nmon and nmon analyser.

    (4) Task algorithms: PageRank and K-means, with three datasets chosen from SNAP [SNAP (2018)] and UCI [UCI (2007)] respectively. The datasets are shown in Tab. 1 and Tab. 2.

    Table 1: The Description of datasets from SNAP

    Table 2: The Description of datasets from UCI

    4.1 The verification for RDD cache prediction mechanism

    This experiment was carried out under the PageRank and K-means tasks respectively, and completed under different sizes of datasets. We mainly compare and analyze the difference in execution time and memory utilization with and without RDD cache selection. As shown in Fig. 5, the results of each experiment are obtained over 5 runs. Figs. 5(a) and 5(b) show the experimental results for the PageRank task, while Figs. 5(c) and 5(d) are for K-means. From the comprehensive analysis of Figs. 5(a) and 5(c): when the dataset is relatively small, the task execution time is short, so the difference is not obvious; as the amount of data increases, the prediction mechanism for caching performs well. From the comprehensive analysis of Figs. 5(b) and 5(d): the memory usage of the task with the prediction mechanism is high, mainly because the cached data occupies storage memory once the cache mechanism is optimized. In summary, the prediction mechanism for caching reduces execution time and improves the rate of memory usage to a certain extent.

    Figure 5: Schematic diagram of task execution time and memory utilization under optimization and unoptimized cache mechanism

    4.2 The verification for weight replacement algorithm

    This part is implemented by modifying the evictBlocksToFreeSpace function in the source file named MemoryStore.

    The WikiTalk and NIPS_1987-2015 datasets are used separately in this verification. To ensure that there are multiple RDDs to be cached, each dataset is divided into several small datasets during data reading, based on an analysis of the implementation code of PageRank and K-means. This process satisfies the condition of having multiple RDDs to cache.

    Another condition is that there are RDDs that need to be cached, so we verify the weight replacement algorithm together with the RDD cache prediction mechanism proposed in this paper. The replacement algorithm is called only when storage memory is insufficient, so this experiment is run under different memory sizes: 1 GB, 2 GB, 3 GB and 4 GB. As shown in Fig. 6, compared with the LRU algorithm, when memory is small our improved algorithm cannot reduce the execution time well, because analyzing the partition features and updating the weights and memory takes considerable time. As memory increases, the weight replacement algorithm performs well. When memory is large, there is enough space to store the cached partitions and the number of partition replacements decreases; as we can see, the execution times of the two algorithms are similar when memory is sufficient.

    As shown in Fig. 7, we use the task execution speedup to measure task performance. According to Formula (1), the figure shows the speedup under PageRank with WikiTalk and under K-means with NIPS_1987-2015. As we can see, the optimized algorithm performs poorly with 1 GB of memory, while with larger memory it performs well. When the memory is large enough, the advantage of the optimization algorithm is no longer obvious.

    Figure 7: Task execution speedup with different tasks

    5 Conclusion

    By analyzing the characteristics of the Spark RDD data model, we propose three contributions: (1) a prediction mechanism for RDD caching based on the usage frequency of RDDs and the costs of recomputation and caching; (2) a weight model based on RDD partition features, built with the entropy method by analyzing the remaining frequency, computational cost and size of RDD partitions; (3) an update mechanism for storage memory and partition weights. In real deployments, the memory of a cluster environment is limited; for multitask execution over big data, this method can effectively reduce task execution time and improve memory utilization.

    Acknowledgement: This paper is partially supported by the Education Technology Research Foundation of the Ministry of Education (No. 2017A01020).
