
    A Parallel Approach to Discords Discovery in Massive Time Series Data

    Computers, Materials & Continua, 2021, No. 2

    Mikhail Zymbler, Alexander Grents, Yana Kraeva and Sachin Kumar

    Department of Computer Science, South Ural State University, Chelyabinsk, 454080, Russia

    Abstract: A discord is a refinement of the concept of an anomalous subsequence of a time series. Being one of the topical issues of time series mining, discords discovery is applied in a wide range of real-world areas (medicine, astronomy, economics, climate modeling, predictive maintenance, energy consumption, etc.). In this article, we propose a novel parallel algorithm for discords discovery on a high-performance cluster with nodes based on many-core accelerators for the case when the time series cannot fit in main memory. We assume that the time series is partitioned across the cluster nodes and achieve parallelization among the cluster nodes as well as within a single node. Within a cluster node, the algorithm employs a set of matrix data structures to store and index the subsequences of the time series and to provide efficient vectorization of computations on the accelerator. Each node processes its own partition in two phases, namely candidate selection and discord refinement, with each phase requiring one linear scan through the partition. The local discords found are then combined into the global candidate set and transmitted to every cluster node. Next, each node refines the global candidate set over its own partition, resulting in a local true discord set. Finally, the global true discord set is constructed as the intersection of the local true discord sets. The experimental evaluation on a real computer cluster with real and synthetic time series shows the high scalability of the proposed algorithm.

    Keywords: Time series; discords discovery; computer cluster; many-core accelerator; vectorization

    1 Introduction

    Currently, the discovery of anomalous subsequences in a very long time series is a topical issue in a wide spectrum of real-world applications, namely medicine, astronomy, economics, climate modeling, predictive maintenance, energy consumption, and others. For such applications, it is hard to deal with multi-terabyte time series, which cannot fit into the main memory.

    Keogh et al. [1] introduced HOTSAX, an anomaly detection algorithm based on the discord concept. A discord of a time series can informally be defined as a subsequence that has the largest distance to its nearest non-self match neighbor subsequence. A discord looks attractive as an anomaly detector because it requires only one intuitive parameter (the subsequence length), as opposed to most anomaly detection algorithms, which typically require many parameters [2]. HOTSAX, however, assumes that the time series resides in main memory.

    Further, Yankov, Keogh et al. proposed a disk-aware algorithm (for brevity, referred to as DADD, Disk-Aware Discord Discovery) based on the range discord concept [3]. For a given range r, DADD finds all discords at a distance of at least r from their nearest neighbor. The algorithm requires two linear scans through the time series on disk. Later, Yankov, Keogh et al. [4] discussed parallelization of DADD based on the MapReduce paradigm. However, in the experimental evaluation, the authors just simulated the above-mentioned parallel implementation on up to eight computers.

    Our research is devoted to parallel and distributed algorithms for time series mining. In previous work [5], we parallelized HOTSAX for many-core accelerators. This article continues our study and contributes as follows. We present a parallel implementation of DADD on a high-performance cluster with nodes based on many-core accelerators. The original algorithm is extended with a set of index structures to provide efficient vectorization of computations on each cluster node. We carried out experiments on a real computer cluster with real and synthetic time series, which showed the high scalability of our approach.

    The rest of the article is organized as follows. In Section 2, we give the formal definitions along with a brief description of DADD. Section 3 provides a brief review of the state of the art. Section 4 presents the proposed methodology. In Section 5, the results of the experimental evaluation of our approach are provided. Finally, Section 6 summarizes the results obtained and suggests directions for further research.

    2 Problem Statement and the Serial Algorithm

    2.1 Notations and Definitions

    Below, we follow [4] to give some formal definitions and the statement of the problem.

    A time series $T$ is a sequence of real-valued elements: $T=(t_1,\ldots,t_m)$, $t_i \in \mathbb{R}$. The length of a time series is denoted by $|T|$.

    A subsequence $T_{i,n}$ of a time series $T$ is its contiguous subset of $n$ elements that starts at position $i$: $T_{i,n}=(t_i,\ldots,t_{i+n-1})$, $1 \le n \ll m$, $1 \le i \le m-n+1$. We denote the set of all subsequences of length $n$ in $T$ by $S_T^n$. Let $N$ denote the number of subsequences in $S_T^n$, i.e., $N=m-n+1$.

    A distance function for any two subsequences is a nonnegative and symmetric function $Dist: \mathbb{R}^n \times \mathbb{R}^n \to \mathbb{R}$.

    Two subsequences $T_{i,n}, T_{j,n} \in S_T^n$ are non-trivial matches [6] with respect to a distance function $Dist$ if $|i-j| \ge n$ (i.e., the subsequences do not overlap). Let us denote a non-trivial match of a subsequence $C \in S_T^n$ by $M_C$.

    A subsequence $D \in S_T^n$ is said to be the most significant discord in $T$ if the distance to its nearest non-trivial match is the largest. That is, $\forall\, C \in S_T^n:\ \min(Dist(D, M_D)) \ge \min(Dist(C, M_C))$.

    A subsequence $D \in S_T^n$ is called the most significant $k$-th discord in $T$ if the distance to its $k$-th nearest non-trivial match is the largest.

    Given a positive parameter $r$, a discord at a distance of at least $r$ from its nearest neighbor is called a range discord, i.e., for a range discord $D$, $\min(Dist(D, M_D)) \ge r$.

    DADD, the original serial disk-aware algorithm [4], addresses the discovery of range discords and provides researchers with a procedure to choose the $r$ parameter. To accelerate the above-mentioned procedure, our parallel algorithm [5] for many-core accelerators can be applied, which discovers discords for the case when the time series fits in main memory.

    When computing the distance between subsequences, DADD demands that the arguments have been previously z-normalized to have a mean of zero and a standard deviation of one. Here, the z-normalization of a subsequence $C=(c_1,\ldots,c_n) \in S_T^n$ is defined as the subsequence $\hat{C}=(\hat{c}_1,\ldots,\hat{c}_n)$ in which $\hat{c}_i = \frac{c_i-\mu}{\sigma}$, where $\mu=\frac{1}{n}\sum_{i=1}^{n}c_i$ and $\sigma=\sqrt{\frac{1}{n}\sum_{i=1}^{n}c_i^2-\mu^2}$.

    In the original DADD algorithm, the Euclidean distance is used as the distance measure, yet the algorithm can be utilized with any distance function, which may not necessarily be a metric [4]. Given two subsequences $X, Y \in S_T^n$, the Euclidean distance between them is calculated as $ED(X,Y)=\sqrt{\sum_{i=1}^{n}(x_i-y_i)^2}$.
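    The two definitions above can be illustrated by the following minimal sketch (ours, not the authors' code); it assumes double-precision values and a subsequence with non-zero standard deviation, and the function names znormalize and euclidean_distance are illustrative.

        // Minimal sketch: z-normalization and Euclidean distance, following the definitions above.
        #include <cmath>
        #include <vector>

        // Z-normalize a subsequence C to zero mean and unit standard deviation
        // (assumes the subsequence is not constant, so sigma > 0).
        std::vector<double> znormalize(const std::vector<double>& c) {
            const std::size_t n = c.size();
            double mean = 0.0, sq = 0.0;
            for (double v : c) { mean += v; sq += v * v; }
            mean /= n;
            const double sigma = std::sqrt(sq / n - mean * mean);
            std::vector<double> out(n);
            for (std::size_t i = 0; i < n; ++i) out[i] = (c[i] - mean) / sigma;
            return out;
        }

        // Euclidean distance between two (already z-normalized) subsequences of equal length.
        double euclidean_distance(const std::vector<double>& x, const std::vector<double>& y) {
            double sum = 0.0;
            for (std::size_t i = 0; i < x.size(); ++i) {
                const double d = x[i] - y[i];
                sum += d * d;
            }
            return std::sqrt(sum);
        }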

    2.2 The Original Algorithm

    The DADD algorithm [4] performs in two phases, namely candidate selection and discord refinement, with each phase requiring one linear scan through the time series on disk. Algorithm 1 depicts the pseudocode of DADD (up to the replacement of the Euclidean distance by an arbitrary distance function). The algorithm takes a time series $T$ and a range $r$ as input and outputs the set of discords C. For a discord $c \in$ C, we denote the distance to its nearest neighbor by $c.dist$.

    In the first phase, the algorithm scans through the time series $T$, and for each subsequence $s \in S_T^n$ it validates the possibility for each candidate $c$ already in the set C to be a discord. If a candidate $c$ fails the validation, then it is removed from this set. In the end, the new $s$ is either added to the candidate set, if it is likely to be a discord, or it is discarded. The correctness of this procedure is proved in Yankov et al. [4].
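    Since Algorithm 1 itself is not reproduced here, the following simplified serial sketch illustrates the selection logic just described. It is our illustration, not the authors' Algorithm 1: candidate positions are kept in a plain vector, and the subsequences are assumed to be already z-normalized.

        // Simplified serial sketch of the candidate selection phase.
        // S holds the z-normalized subsequences row by row; r2 is the squared range r.
        #include <cstddef>
        #include <vector>

        static double ed2(const std::vector<double>& x, const std::vector<double>& y) {
            double sum = 0.0;
            for (std::size_t k = 0; k < x.size(); ++k) { const double d = x[k] - y[k]; sum += d * d; }
            return sum;
        }

        std::vector<std::size_t> select_candidates(const std::vector<std::vector<double>>& S,
                                                   std::size_t n, double r2) {
            std::vector<std::size_t> cand;                     // positions of the current candidates
            for (std::size_t i = 0; i < S.size(); ++i) {
                bool is_cand = true;
                for (std::size_t k = 0; k < cand.size(); ) {
                    const std::size_t j = cand[k];
                    if ((i > j ? i - j : j - i) < n) { ++k; continue; }  // skip trivial matches
                    if (ed2(S[i], S[j]) < r2) {
                        is_cand = false;                       // S[i] has a close neighbor, not a discord
                        cand.erase(cand.begin() + k);          // ...and neither is the failed candidate
                    } else {
                        ++k;
                    }
                }
                if (is_cand) cand.push_back(i);                // keep S[i] as a potential discord
            }
            return cand;
        }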

    In the second phase, the algorithm initially sets the distances of all candidates to their nearest neighbors to infinity. Then, the algorithm scans through the time series $T$, calculating the distance between each subsequence $s \in S_T^n$ and each candidate $c$. Here, when calculating $ED(s,c)$, the EarlyAbandonED procedure stops the summation $\sum_{i=1}^{k}(s_i-c_i)^2$ at the first $k=\ell$, $1 \le \ell \le n$, for which the partial sum already exceeds the square of the current $c.dist$, since the full distance can then neither prune the candidate nor improve its nearest-neighbor distance. If the distance is less than $r$, then the candidate is a false positive and is permanently removed from C. If the above-mentioned distance is less than the current value of $c.dist$ (and still greater than $r$, otherwise the candidate would have been removed), then the current distance to the nearest neighbor is updated.
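    A minimal sketch of such an early-abandoning distance computation (ours, not the authors' code); the abandoning threshold is passed in squared form, so the caller may supply the square of the current $c.dist$ as described above.

        // Early-abandoning squared Euclidean distance: the accumulation stops as soon as the
        // partial sum exceeds the squared threshold, because the caller then already knows
        // that the full distance cannot fall below the threshold.
        #include <cstddef>

        double ed2_early_abandon(const double* s, const double* c, std::size_t n, double threshold2) {
            double sum = 0.0;
            for (std::size_t i = 0; i < n; ++i) {
                const double d = s[i] - c[i];
                sum += d * d;
                if (sum > threshold2) break;   // early abandon
            }
            return sum;
        }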

    3 Related Work

    Introduced by Keogh et al. [1], time series discords are currently considered one of the best techniques for time series anomaly detection [7].

    The original HOTSAX algorithm [1] is based on the SAX (Symbolic Aggregate ApproXimation) transformation [8]. Among the improvements of HOTSAX, we can mention the following algorithms: iSAX [9] and HOT-iSAX [10] (indexable SAX), WAT [11] (Haar wavelets instead of SAX), HashDD [12] (a hash table instead of the prefix trie), HDD-MBR [13] (application of R-trees), and BitClusterDiscord [14] (clustering of the bit representation of subsequences). However, the above-mentioned algorithms are able to discover discords only if the time series fits in main memory, and, to the best of our knowledge, have no parallel implementations.

    Further, Yankov, Keogh et al. [3] overcame the main memory size limitation by proposing a disk-aware discord discovery algorithm (DADD) based on the range discord concept. For a given range $r$, DADD finds all discords at a distance of at least $r$ from their nearest neighbor. The algorithm performs in two phases, namely candidate selection and discord refinement, with each phase requiring one linear scan through the time series on disk.

    There are a couple of noteworthy works devoted to the parallelization of DADD. The DDD (Distributed Discord Discovery) algorithm [15] parallelizes DADD through a Spark cluster [16] and HDFS (Hadoop Distributed File System) [17]. DDD distributes the time series onto the HDFS cluster and handles each partition in the memory of a computing node. As opposed to DADD, DDD computes the distance without taking advantage of an upper bound for early abandoning, which would increase the algorithm's performance.

    The PDD (Parallel Discord Discovery) algorithm [18] also utilizes a Spark cluster but relies on transmitting a subsequence and its non-trivial matches to one or more computing nodes to calculate the distance between them. A bulk of contiguous subsequences is transmitted and processed in batch mode to reduce the message passing overhead. PDD is not scalable, since intensive message passing between the cluster nodes leads to a significant degradation of the algorithm's performance as the number of nodes increases.

    In their further work [4], Yankov, Keogh et al. discussed a parallel version of DADD based on the MapReduce paradigm (hereinafter referred to as MR-DADD); the basic idea is as follows. Let the input time series $T$ be partitioned evenly across $P$ cluster nodes. Each node performs the selection phase on its own partition with the same $r$ parameter and produces a distinct candidate set $C_i$. Then the combined candidate set $C_P$ is constructed as $C_P=\bigcup_{i=1}^{P}C_i$ and transmitted to each cluster node. Next, a node performs the refinement phase on its own partition taking $C_P$ as an input, and produces the refined candidate set $C_i$. The final discords are given by the set $\bigcap_{i=1}^{P}C_i$. In the experimental evaluation, however, the authors just simulated the above-mentioned scheme on up to eight computers, resulting in a near-to-linear speedup.

    Concluding this brief review, we should also mention the matrix profile (MP) concept proposed by Keogh et al. [19]. The MP is a data structure that annotates a time series and can be applied to solve an impressively large list of time series mining problems, including discords discovery, but at a computational cost of $O(m^2)$, where $m$ is the time series length [19,20]. Recent parallel algorithms for MP computation include GPU-STAMP [19] and MP-HPC [21], which are implementations for graphics processors through the CUDA (Compute Unified Device Architecture) technology and for a computer cluster through the MPI (Message Passing Interface) technology, respectively.

    4 Discords Discovery on Computer Cluster with Many-Core Accelerators

    The parallelization employs two-level parallelism, namely across the cluster nodes and among the threads of a single node. We implemented these levels through partitioning of the input time series and the MPI technology, and through the OpenMP technology, respectively. Within a single node, we employ a matrix representation of the data to effectively parallelize computations through OpenMP. Below, we show an approach to the implementation of these ideas.

    4.1 Time Series Representation

    To provide parallelism at the level of the cluster nodes, we partition the time series across the nodes as follows. Let $P$ be the number of nodes in the cluster; then the $k$-th partition ($0 \le k \le P-1$) of the time series is defined as $T_{start,\,len}$, where the boundaries are chosen so that the partitions cover $T$ in contiguous chunks of approximately $\lceil m/P \rceil$ points each, and every partition except the first additionally includes the last $n-1$ points of the previous chunk.

    This means that the head part of every partition except the first overlaps with the tail part of the previous partition in $n-1$ data points. Such a technique prevents the loss of subsequences at the junctions of two neighboring partitions. To simplify the presentation of the algorithm, hereinafter in this section we use the symbol $T$ and the above-mentioned related notions to refer to the partition on the current node rather than to the whole input time series.
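    A sketch of how such overlapping partition boundaries can be computed (our illustration consistent with the description above; the paper's exact boundary formula is not reproduced in the text, so the chunk arithmetic below is an assumption). Zero-based indexing is used, and each chunk is assumed to contain at least $n$ points.

        // Compute (start, len) of the k-th partition of a series of length m split across P nodes,
        // extending every partition except the first backward by n-1 points.
        #include <algorithm>
        #include <cstddef>
        #include <utility>

        std::pair<std::size_t, std::size_t> partition_bounds(std::size_t m, std::size_t n,
                                                             std::size_t P, std::size_t k) {
            const std::size_t chunk = (m + P - 1) / P;             // ceil(m / P)
            const std::size_t begin = k * chunk;                   // start of the k-th chunk
            const std::size_t end   = std::min(begin + chunk, m);  // exclusive end of the chunk
            const std::size_t start = (k == 0) ? 0 : begin - (n - 1);  // overlap with the previous tail
            return {start, end - start};
        }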

    The time series partition is stored as a matrix of aligned subsequences to enable computations over aligned data with as many auto-vectorizable loops as possible. We avoid unaligned memory access, since it can cause inefficient vectorization due to the time overhead of loop peeling [22].

    Let us denote the number of floats stored in the VPU (vector processing unit of the many-core accelerator) by $width_{VPU}$. If the discord length $n$ is not a multiple of $width_{VPU}$, then each subsequence is padded with zeroes, where the number of zeroes is calculated as $pad = width_{VPU} - (n \bmod width_{VPU})$. Thus, the aligned (and previously z-normalized) subsequence $\tilde{T}_{i,n}$ is defined as follows: $\tilde{T}_{i,n} = (\hat{t}_i, \ldots, \hat{t}_{i+n-1}, \underbrace{0, \ldots, 0}_{pad})$.

    The subsequence matrix $S_T^n \in \mathbb{R}^{N \times (n+pad)}$ is defined as the matrix whose $i$-th row is the aligned subsequence $\tilde{T}_{i,n}$.
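    A sketch of building the padded subsequence matrix (our illustration): each row is a z-normalized subsequence followed by pad zeroes, so that the row length is a multiple of the VPU width. A production version would store the rows in one contiguous, aligned buffer rather than nested vectors.

        #include <algorithm>
        #include <cstddef>
        #include <vector>

        std::vector<double> znormalize(const std::vector<double>& c);  // as sketched in Section 2

        std::vector<std::vector<double>> build_subsequence_matrix(const std::vector<double>& T,
                                                                  std::size_t n, std::size_t widthVPU) {
            const std::size_t pad = (n % widthVPU == 0) ? 0 : widthVPU - (n % widthVPU);
            const std::size_t N = T.size() - n + 1;
            std::vector<std::vector<double>> S(N, std::vector<double>(n + pad, 0.0));
            for (std::size_t i = 0; i < N; ++i) {
                std::vector<double> sub(T.begin() + i, T.begin() + i + n);
                const std::vector<double> z = znormalize(sub);
                std::copy(z.begin(), z.end(), S[i].begin());   // trailing pad stays zero
            }
            return S;
        }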

    4.2 Internal Data Layout

    The parallel algorithm employs the data structures depicted in Fig. 1. When defining the structures to store data in the main memory of a cluster node, we suppose that each structure is shared by all the threads the algorithm is running on, and that each thread processes its own data segment independently. Let us denote the number of threads employed by the algorithm on a cluster node by $p$, and let $iam$ ($0 \le iam \le p-1$) denote the index of the current thread.

    Figure 1: Data layout of the algorithm

    The set of discords C is implemented as an object with two basic attributes, namely the candidate index and the candidate body, which store the indices of all potential discord subsequences and their values themselves, respectively.

    Let us denote the ratio of the number of candidates selected at a cluster node to the number of all subsequences of the time series by $\xi$. The exact value of the $\xi$ parameter is a subject of empirical choice. In our experiments, $\xi=0.01$ was enough to store all the candidates. Thus, we denote the number of candidates as $L = \xi \cdot N$ and assume that $L \ll N$.

    The candidate index is organized as a matrix $C.index \in \mathbb{N}^{p \times L}$, which stores the indices of the candidates in the subsequence matrix $S_T^n$ found by each thread, i.e., the $i$-th row keeps the indices of the candidates that have been found by the $i$-th thread. Initially, the candidate index is filled with NULL values.

    To provide fast access to the candidate index during the selection phase, it is implemented as a deque (double-ended queue) with three attributes, namely count, head, and tail. The deque count is an array $C.count \in \mathbb{N}^{p}$, which for each thread keeps the number of non-NULL elements in the respective row of the candidate index matrix. The deque head and tail are arrays $C.head, C.tail \in \mathbb{N}^{p}$, which are second-level indices that for each thread keep the column number in $C.index$ of the most recent NULL value and of the least recent non-NULL value, respectively.

    Let $H$ ($H < L \ll N$) be the number of candidates selected at a cluster node during the algorithm's first phase. Then the candidate body is the matrix $C.cand \in \mathbb{R}^{H \times n}$, which represents the candidate subsequences themselves. The candidate body is accompanied by an array $C.pos \in \mathbb{N}^{H}$, which stores the starting positions of the candidate subsequences in the input time series.
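    The following sketch (ours; field names follow the text) summarizes the per-node data layout described in this subsection. Nested vectors are used for brevity, whereas the actual implementation would use flat aligned arrays, and an out-of-range position value plays the role of NULL.

        #include <cstddef>
        #include <vector>

        struct CandidateSet {
            // Candidate index: one row of up to L candidate positions per thread.
            std::vector<std::vector<std::size_t>> index;   // p x L
            std::vector<std::size_t> count, head, tail;    // deque bookkeeping, one entry per thread
            // Candidate body: the H selected candidate subsequences and their
            // starting positions in the input time series.
            std::vector<std::vector<double>> cand;         // H x n
            std::vector<std::size_t> pos;                  // H
            // Bitmap filled during the refinement phase: bitmap[t][j] stays true while
            // candidate j survives against every subsequence in thread t's segment.
            std::vector<std::vector<char>> bitmap;         // p x H
        };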

    After the selection phase, all the nodes exchange the candidates found to construct the combined candidate set, so at each cluster node the candidate body contains the potential discords from all the nodes. In the second phase, the algorithm refines the combined candidate set by comparing the parameter $r$ with the distances between each element of the candidate body and each element of the subsequence matrix.

    To parallelize this activity, we process the rows of the subsequence matrix in a segment-wise manner and employ an additional attribute of the candidate body, namely the bitmap. The bitmap is organized as a matrix $C.bitmap \in \mathbb{B}^{p \times H}$, which indicates whether an element of the candidate body has been successfully validated against all the elements in a segment of the subsequence matrix. Thus, after the algorithm's second phase, the $i$-th element of the candidate body is successfully validated if all the elements of the $i$-th column of the bitmap are true.
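    A small sketch of this final check (our illustration): candidate $j$ is kept only if no thread has reset its bit, i.e., the column-wise conjunction over the bitmap is true.

        #include <cstddef>
        #include <vector>

        // Candidate j is successfully validated on this node iff every thread's
        // segment left bitmap[t][j] equal to true.
        std::vector<bool> validated_candidates(const std::vector<std::vector<char>>& bitmap) {
            const std::size_t p = bitmap.size();
            const std::size_t H = p ? bitmap[0].size() : 0;
            std::vector<bool> ok(H, true);
            for (std::size_t t = 0; t < p; ++t)
                for (std::size_t j = 0; j < H; ++j)
                    ok[j] = ok[j] && (bitmap[t][j] != 0);
            return ok;
        }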

    4.3 Parallel Implementation of the Algorithm

    In the implementation, we apply the following parallelization scheme at the level of the cluster nodes. Let the input time series $T$ be partitioned evenly across $P$ cluster nodes. Each node performs the selection phase on its own partition with the same threshold parameter $r$ and produces a distinct candidate set $C_i$.

    Next, as opposed to MR-DADD [4], each node refines its own candidate set $C_i$ with respect to the $r$ value. Indeed, a candidate cannot be a true discord if it is pruned in the refinement phase on at least one cluster node. Thus, by this local refinement procedure, we try to reduce each candidate set $C_i$ and, in turn, the combined candidate set $C_P=\bigcup_{i=1}^{P}C_i$. In the experiments, this allowed us to reduce the size of the combined candidate set severalfold.

    Then the combined candidate set $C_P$ is constructed and transmitted to each cluster node. Next, a node refines $C_P$ over its own partition and produces the result $C_i$. Finally, the true discord set is constructed as $\bigcap_{i=1}^{P}C_i$.

    The parallel implementations of the candidate selection and refinement phases are depicted in Algorithm 2 and Algorithm 3, respectively. To speed up the computations at a cluster node, we omit the square root calculation, since this does not change the relative ranking of the candidates (indeed, the ED function is monotonic and concave).

    Algorithm 2: Parallel Candidate Selection (in T, r; out C)
    1:  #pragma omp parallel
    2:    iam ← omp_get_thread_num()
    3:    #pragma omp for
    4:    for i from 1 to N do
    5:      isCand ← TRUE
    6:      for j from 1 to C.tail(iam) do
    7:        if C.index(iam, j) = NULL or |C.index(iam, j) − i| < n then
    8:          continue
    9:        if ED²(SnT(i, ·), SnT(C.index(iam, j), ·)) < r² then
    10:         isCand ← FALSE; C.count(iam) ← C.count(iam) − 1
    11:         C.index(iam, j) ← NULL; C.head(iam) ← j
    12:     if isCand then
    13:       C.count(iam) ← C.count(iam) + 1
    14:       if C.index(iam, C.head(iam)) = NULL then
    15:         C.index(iam, C.head(iam)) ← i
    16:       else
    17:         C.index(iam, C.tail(iam)) ← i; C.tail(iam) ← C.tail(iam) + 1
    18: return C

    In the selection phase, we parallelize the outer loop along the rows of the subsequence matrix, while in the inner loop along the candidates, each thread processes its own segment of the candidate index. By the end of the phase, the candidates found by each thread are placed into the candidate body, and all the cluster nodes exchange the resulting candidate bodies via the MPI_Send and MPI_Recv functions to form the combined candidate set, which serves as an input for the second phase.
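    A sketch of the exchange of candidate bodies among the nodes (our illustration; the text names MPI_Send/MPI_Recv, while an equivalent MPI_Allgatherv collective is used below for brevity). The flattened row-major layout of the candidate body is an assumption; on return, every node holds the combined candidate set in the same flattened form.

        #include <mpi.h>
        #include <vector>

        std::vector<double> exchange_candidates(const std::vector<double>& local, MPI_Comm comm) {
            int P = 0;
            MPI_Comm_size(comm, &P);
            const int myCount = static_cast<int>(local.size());

            // 1. Let every node learn how many values each peer contributes.
            std::vector<int> counts(P), displs(P);
            MPI_Allgather(&myCount, 1, MPI_INT, counts.data(), 1, MPI_INT, comm);
            int total = 0;
            for (int i = 0; i < P; ++i) { displs[i] = total; total += counts[i]; }

            // 2. Gather all candidate bodies so that each node owns the combined set.
            std::vector<double> combined(total);
            MPI_Allgatherv(local.data(), myCount, MPI_DOUBLE,
                           combined.data(), counts.data(), displs.data(), MPI_DOUBLE, comm);
            return combined;
        }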

    In the refinement phase, we also parallelize the outer loop along the rows of the subsequence matrix, and in the inner loop along the candidates, each thread processes its own segments of the candidate body and the bitmap. In this implementation, we do not use the early abandoning technique for the distance calculation, relying on the fact that vectorization of the squared Euclidean distance may give more benefit. By the end of the phase, the column-wise conjunction of the elements of the bitmap matrix yields the set of true discords found by the current cluster node. The intersection of such sets is performed on one of the cluster nodes, to which the rest of the nodes send their resulting sets.

    Algorithm 3: Parallel Discord Refinement (in T, r; in out C)
    1:  C.bitmap ← TRUE^(p × H)
    2:  #pragma omp parallel
    3:    iam ← omp_get_thread_num()
    4:    #pragma omp for
    5:    for i from 1 to N do
    6:      for j from 1 to H do
    7:        C.bitmap(iam, j) ← C.bitmap(iam, j) and (ED²(SnT(i, ·), C.cand(j, ·)) ≥ r²)
    8:  return C
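    A C++/OpenMP sketch of this refinement kernel (our rendering of Algorithm 3, not the authors' code), under the assumption that both the subsequence matrix and the candidate body are stored with the same padded row length; the innermost loop is branch-free so that the compiler can vectorize it, and each thread updates only its own bitmap row.

        #include <cstddef>
        #include <omp.h>
        #include <vector>

        void refine(const std::vector<std::vector<double>>& S,     // N x (n + pad)
                    const std::vector<std::vector<double>>& cand,  // H x (n + pad)
                    std::vector<std::vector<char>>& bitmap,        // p x H, preset to true
                    double r2) {
            const std::size_t N = S.size(), H = cand.size();
            #pragma omp parallel
            {
                const int iam = omp_get_thread_num();
                #pragma omp for
                for (std::size_t i = 0; i < N; ++i) {
                    for (std::size_t j = 0; j < H; ++j) {
                        const std::size_t len = S[i].size();
                        double sum = 0.0;
                        #pragma omp simd reduction(+:sum)
                        for (std::size_t k = 0; k < len; ++k) {
                            const double d = S[i][k] - cand[j][k];
                            sum += d * d;           // squared Euclidean distance, no early abandon
                        }
                        bitmap[iam][j] = bitmap[iam][j] && (sum >= r2);
                    }
                }
            }
        }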

    5 Experiments

    We evaluated the proposed algorithm in experiments conducted on the Tornado SUSU computer cluster [23] with nodes based on Intel MIC accelerators [24]. Each cluster node is equipped with an Intel Xeon Phi SE10X accelerator with a peak performance of 1.076 TFLOPS (60 cores at 1.1 GHz with a 4× hyper-threading factor). In the experiments, we investigated the scalability of our approach and compared it with analogs; the results are given in Sections 5.1 and 5.2, respectively.

    5.1 The Algorithm’s Scalability

    In the first series of experiments, we assessed the algorithm's scaled speedup, which is defined as the speedup obtained when the problem size is increased linearly with the number of nodes added to the computer cluster [25]. Applied to our problem, the scaled speedup is computed from the run time of the algorithm on $P$ nodes over a time series whose length grows proportionally to $P$,

    where $n$ is the discord length, $P$ is the number of cluster nodes, $m$ is a factor of the time series length, $C_{P\cdot m}$ is the set of all the candidate discords selected by the algorithm at its first phase from a time series of length $P\cdot m$, and $t_{P,\,P\cdot m}$ is the algorithm's run time when that time series is processed on $P$ nodes.

    For the evaluation, we took the ECG time series [26] (see Tab. 1 for a summary of the data involved). In the experiments, we discovered discords on up to 128 cluster nodes with the time series factor $m=10^6$ and varied the discord length $n$, while the range parameter $r$ was chosen empirically to provide the algorithm's best performance.

    The results of the experiments are depicted in Fig. 2. As can be seen, our algorithm adapts well to increasing both the time series length and the number of cluster nodes, and demonstrates a linear scaled speedup. As expected, the algorithm shows better scalability for larger values of the discord length, because this provides a higher computational load.

    Table 1: Time series involved in the experiments on the algorithm's scalability

    Figure 2: The scaled speedup of the algorithm

    5.2 Comparison with Analogs

    In the second series of experiments, we compared the performance of our algorithm against the analogs already considered in Section 3, namely DDD [15], MR-DADD [4], GPU-STAMP [19], and MP-HPC [21]. We omit the PDD algorithm [18], since in our previous experiments [5] PDD was substantially behind our parallel in-memory algorithm due to the overhead caused by message passing among the cluster nodes.

    Throughout the experiments, we used synthetic time series generated according to the Random Walk model [27], since such series were employed for the evaluation by the competitors. For comparison purposes, we used the run times reported by the authors of the respective algorithms. To perform the comparison, we ran our algorithm on Tornado SUSU with a reduced number of nodes and cores per node, to make the peak performance of our hardware platform approximately equal to that of the system on which the corresponding competitor was evaluated.

    Tab. 2 summarizes the performance of the proposed algorithm compared with the analogs. We can see that our algorithm outperforms its competitors. As expected, the direct analogs DDD and MR-DADD are inferior to our algorithm, since they do not employ parallelism within a single cluster node. The indirect analogs GPU-STAMP and MP-HPC are also behind our algorithm, since they aim to solve the computationally more complex problem of computing the matrix profile, which can be used for discords discovery among many other time series mining problems.

    Table 2: Comparison of the proposed algorithm with analogs

    6 Conclusion

    In this article, we addressed the problem of discovering anomalous subsequences in a very long time series. Currently, there is a wide spectrum of real-world applications where it is typical to deal with multi-terabyte time series that cannot fit in main memory: medicine, astronomy, economics, climate modeling, predictive maintenance, energy consumption, and others. In this study, we employ the concept of a discord, i.e., a subsequence of the time series that has the largest distance to its nearest non-self match neighbor subsequence.

    We proposed a novel parallel algorithm for discords discovery in very long time series on a modern high-performance cluster with nodes based on many-core accelerators. Our algorithm utilizes the serial disk-aware algorithm by Yankov, Keogh et al. [4] as a basis. We achieve parallelization among the cluster nodes as well as within a single node. At the level of the cluster nodes, we modified the original parallelization scheme, which allowed us to reduce the number of candidate discords to be processed. Within a single cluster node, we proposed a set of matrix data structures to store and index the subsequences of a time series and to provide efficient vectorization of computations on the many-core accelerator.

    The experimental evaluation on a real computer cluster with real and synthetic time series showed the linear scalability of the proposed algorithm, which increases in the case of a high computational load due to a greater discord length. The algorithm's performance was also ahead of the analogs that do not employ both a computer cluster and many-core accelerators.

    In further studies, we plan to elaborate versions of the algorithm for computer clusters with GPU nodes.

    Funding Statement: This work was financially supported by the Russian Foundation for Basic Research (Grant No. 20-07-00140) and by the Ministry of Science and Higher Education of the Russian Federation (Government Order FENU-2020-0022).

    Conflicts of Interest: The authors declare that they have no conflicts of interest to report regarding the present study.
