
Logformer: Cascaded Transformer for System Log Anomaly Detection


Feilu Hang, Wei Guo, Hexiong Chen, Linjiang Xie, Chenghao Zhou and Yao Liu

1 Information Center, Yunnan Power Grid Company Limited, Kunming, 650034, China

2 Network and Data Security Key Laboratory of Sichuan Province, University of Electronic Science and Technology of China, Chengdu, 610054, China

ABSTRACT Modern large-scale enterprise systems produce large volumes of logs that record detailed runtime status and key events at critical points. These logs are valuable for analyzing performance issues and understanding the status of the system. Anomaly detection plays an important role in service management and system maintenance, and guarantees the reliability and security of online systems. Logs are universal semi-structured data, which causes difficulties for traditional manual detection and pattern-matching algorithms. While some deep learning algorithms utilize neural networks to detect anomalies, these approaches over-rely on manually designed features, so the effectiveness of anomaly detection depends on the quality of those features. At the same time, the aforementioned methods ignore the underlying contextual information present in adjacent log entries. We propose a novel model called Logformer with two cascaded transformer-based heads to capture latent contextual information from adjacent log entries, and leverage pre-trained embeddings based on logs to improve the representation of the embedding space. The proposed model achieves comparable results on the HDFS and BGL datasets in terms of precision, recall, and F1-score. Moreover, the consistent rise in F1-score shows that the representation of the embedding space with pre-trained embeddings is closer to the semantic information of the logs.

KEYWORDS Anomaly detection; system logs; semi-structured data; pre-trained embedding; cascaded transformer

    1 Introduction

With the development of the Internet and the ever-increasing number of Internet users, online systems keep evolving and growing in functionality and size. Anomalous events can occur at any time, and high demands are placed on quality of service and security guarantees [1-4]. More services mean more sources of anomalous events, and even a small anomalous event can force the entire system to respond to it. If abnormal events cannot be detected and removed in a timely manner, the stability and availability of the entire system will be affected, and failure to ensure the stability of the system leads to customer distrust and financial losses. Logs record software status and system information at critical points, providing a rich source of data for system monitoring and anomaly detection. In other words, anomaly detection helps to detect anomalous events and safeguard the stable operation of the system.

An online system typically produces large-scale semi-structured logs, so it is impractical to detect and localize anomalies with traditional rule matching and manual screening of logs. As an alternative, many machine learning algorithms have been proposed to extract features from semi-structured logs. PCA [5] is used to extract the principal components of logs to improve classification models. LR [6] and SVM [7] have been proposed to classify logs based on manually designed features. Nevertheless, the aforementioned approaches neglect long-range dependencies in the logs. Later, deep learning algorithms were proposed to automatically learn representations for logs without manually designed features; however, most of them fail to extract the potential contextual information present in adjacent log entries, making them uncompetitive in terms of performance. RNN-based models have a natural advantage in capturing long-range dependencies, and LSTMs [8,9] have been proposed to learn extractive representations for logs. Given that randomly initialized LSTMs are difficult to optimize and lack parallel computational capacity due to their recurrent structure, self-attention models have been proposed to replace LSTMs with efficient parallel structures, and transformer-based models [10] capture long-range dependencies in an efficient parallel fashion. However, these approaches still neglect the potential contextual information present in adjacent log entries.

To address the above issues, we propose a novel model called Logformer, which cascades two transformer-based heads: an encoder head and a decoder head. The encoder head encodes adjacent log entries into vector representations, and the decoder head learns latent context information from adjacent log entries. To preserve the useful features of logs, we propose a log preprocessing method that replaces the regular log parser. In the meantime, we adopt the Glove algorithm to train embeddings for logs, thus making the embedding space more closely related to the log semantics. Experimental results on the HDFS and BGL datasets demonstrate that Logformer outperforms other approaches.

    2 Related Work

As an important part of large-scale systems, logs have attracted great attention for many years, and many researchers have attempted to use log data to understand the runtime status of a system. However, system logs are usually semi-structured data, which is difficult to handle. Early on, keywords were used to locate systematic errors. This method can only detect an explicit single anomalous log entry and cannot detect an anomalous event based on a sequence of operations; in other words, such anomalous events in the system log cannot be detected by manually designed keywords. To address the above issues, matching methods [11,12] have been proposed for anomaly detection. These methods rely heavily on rules defined manually in advance and are unable to detect anomalous events from new sources.

With the development of deep learning, deep learning approaches are being applied in the field of anomaly detection. Most of them consist of three steps [13]: first, logs are converted into templates by a log parser; then, the templates are fed into a neural network to learn vector representations for the logs; finally, traditional machine learning algorithms such as SVM are applied to classify a log entry as normal or abnormal based on the learned vector representation. Because anomalous log entries make up only a small fraction of the data, it is often difficult to extract features from anomalous data, and many researchers use unsupervised approaches [14,15] for anomaly detection.

In terms of semi-structured log entry preprocessing, many methods use parsers; common ones include Drain [16], AEL [17], IPLoM [18], and Spell [19]. Other approaches attempt to obtain semantic vectors directly from embeddings without using a parser. LogNL [8] utilizes the TF-IDF algorithm to obtain template feature representations and then constructs parameter value vectors for logs of different templates. Several works also propose to train embeddings for logs with the Word2Vec and SIF algorithms.

The attention mechanism can better capture the long-term dependencies in the data and improve the model's ability to extract the most relevant input features. In recent years, attention has been successfully applied to image processing, natural language processing, recommendation systems, and other fields. Xu et al. [20] proposed a model based on attention and an AutoEncoder. With attention, the decoder can adaptively select the desired features in the input sequence, which improves the performance of the Encoder-Decoder structure.

Recently, deep learning models [21-26] have achieved remarkable results in the field of anomaly detection. These approaches adopt RNN-based, LSTM-based, or transformer-based models as baselines to extract representations from log entries and predict normal or abnormal labels based on the feature representations. Most of these approaches ignore the underlying contextual information present in adjacent log entries. Different from regular deep learning models, Logformer effectively captures the long-range dependencies among adjacent log entries through a single decoder head and makes full use of the textual information of the logs to train log embeddings.

    3 Methodology

In this section, we describe in detail the log preprocessing method, the pre-trained embedding algorithm, and the model architecture.

    3.1 Overview Pipeline

The model architecture of Logformer is a multi-layer stack of self-attention and feed-forward networks, as shown in Fig. 1. Logformer consists of an encoder head, which encodes log entries into vector representations, and a decoder head, which extracts the latent contextual information present in adjacent log entries. Before log entries are fed into Logformer, the preprocessing method is applied to produce structured log data. Given adjacent log entries X = (x_1, x_2, ..., x_n), the embedding layer maps X to word embeddings E = (E_1, E_2, ..., E_n), where E_i denotes the word embedding for log entry x_i. To exploit the textual information of the logs, pre-trained embeddings trained on the logs with the Glove algorithm are used to initialize the embedding layer. To maintain the order of log entries, position embeddings are added in both the encoder and decoder heads. The encoder head outputs representations H = (H_1, H_2, ..., H_n) for the log entries, where H_i is the representation of log entry x_i, and H is regarded as a new sequence. The decoder head takes H as input, learns the interactive information among different log entries, and outputs a new representation H' for each log entry with abundant context information. Finally, H' is fed into a linear classifier layer to predict whether a log entry is normal or not.

Figure 1: Model architecture: Logformer for log anomaly detection

    3.2 Log Preprocessing

Original logs are semi-structured data, and log parsers are widely used in most deep learning methods [22,23]. Lupton et al. [27] carried out a survey of log parsing methods, including their publication year, number of citations, and performance, as shown in Table 1, where Avg PA is the average PA over all 16 datasets in Loghub, which is described in detail in He et al. [28]. It can be observed from Table 1 that even the three most-cited log parsers fail to achieve an Avg PA of 0.9 on this large-scale open-source benchmark, which means that these parsers still introduce an unacceptable level of error. At the same time, log parsers discard log-level information, resulting in the loss of source log features. As a result, Logformer takes a different approach to log preprocessing than log parsers. The log preprocessing of Logformer consists of two steps. First, each log entry is split by commas, spaces, and other common separators. The log is then converted to lowercase, and characters that carry no useful information for anomaly detection are removed, such as numbers representing variables and tokens that are not part of the template, e.g., 'for', 'ID' and 'of'. Fig. 2 shows the comparison between the original logs and the preprocessed logs.
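A minimal sketch of these two steps is given below; the separator set and the stop-word list are illustrative assumptions rather than the paper's exact configuration:

```python
import re

# Illustrative stop-words; the paper only gives 'for', 'ID' and 'of' as examples.
STOP_WORDS = {"for", "id", "of"}

def preprocess_log_entry(entry: str) -> list[str]:
    """Split a raw log line on common separators, lowercase it, and drop
    numeric variables and uninformative tokens."""
    # Step 1: split on commas, whitespace, and a few other common separators.
    tokens = re.split(r"[,\s=:;()\[\]]+", entry.lower())
    cleaned = []
    for tok in tokens:
        if not tok:
            continue
        if tok.isdigit():       # drop pure numbers (variable values)
            continue
        if tok in STOP_WORDS:   # drop tokens that carry no anomaly signal
            continue
        cleaned.append(tok)     # step 2: keep the lowercased, informative tokens
    return cleaned

# Example:
# preprocess_log_entry("PacketResponder 1 for block blk_38865049064139660 terminating")
# -> ['packetresponder', 'block', 'blk_38865049064139660', 'terminating']
```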

Table 1: Effectiveness of log parsers

Table 1 (continued)

Parser    Year    Citations    BGL PA    Avg PA
LTMatch   2021    18           0.933     0.889
Paddy     2020    1            0.963     0.895

Figure 2: Original logs and preprocessed logs

    3.3 Pre-Trained Embedding

This section covers the WordPiece and Glove algorithms: WordPiece is a subword segmentation algorithm from natural language processing, and Glove is a word embedding algorithm used to pre-train embeddings on a corpus.

    3.3.1 Tokenizer

Although the logs are handled properly in the preprocessing phase, preprocessed logs still contain artificially generated compound words such as 'DataNode$PacketResponder'. Semantic vectors extracted directly from such words are hard to interpret, so further word segmentation of the extracted content is still needed. We choose the WordPiece tokenizer to tokenize the preprocessed logs and output the tokenized corpus. WordPiece starts from a small vocabulary and keeps adding the subwords most likely to occur until the vocabulary has been learned.
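A minimal sketch of training such a tokenizer, assuming the HuggingFace tokenizers library is used (the vocabulary size and sample corpus are illustrative):

```python
from tokenizers import Tokenizer, models, pre_tokenizers, trainers

# A few preprocessed log lines; in practice this would be the whole log corpus.
preprocessed_logs = [
    "datanode$packetresponder received block blk terminating",
    "packetresponder exception writing block to mirror",
]

# Start from a small vocabulary and let WordPiece learn frequent subwords.
tokenizer = Tokenizer(models.WordPiece(unk_token="[UNK]"))
tokenizer.pre_tokenizer = pre_tokenizers.Whitespace()

trainer = trainers.WordPieceTrainer(vocab_size=8000, special_tokens=["[UNK]", "[PAD]"])
tokenizer.train_from_iterator(preprocessed_logs, trainer)

# Artificial compound words are split into learned subword pieces.
print(tokenizer.encode("datanode$packetresponder").tokens)
```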

    3.3.2 Glove

To match the embedding space with the logs, Glove [29] is adopted to train the pre-trained embeddings, instead of randomly initializing the embedding layer. Whereas Word2Vec [30] is trained from words in a local sliding window, Glove is trained on the global co-occurrence statistics, so Glove benefits from global information about the corpus. Let us denote the co-occurrence matrix as X ∈ R^{|V|×|V|}, where X_ij is the number of times word j occurs in the context of word i, and |V| is the vocabulary size. Denoting the embeddings for words i and j as w_i and w̃_j, respectively, X_ij can be approximated as in Eq.(1).

    The training objective function can be formulated as Eq.(2).

where f(X_ij) is a weighting function that can be parameterized as Eq.(3).
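In the standard GloVe formulation [29] that these refer to (the bias terms b_i and b̃_j are part of the standard model and are assumed here), Eqs. (1)-(3) read:

```latex
w_i^{\top}\tilde{w}_j + b_i + \tilde{b}_j \approx \log X_{ij} \tag{1}

J = \sum_{i,j=1}^{|V|} f(X_{ij})\,\bigl(w_i^{\top}\tilde{w}_j + b_i + \tilde{b}_j - \log X_{ij}\bigr)^2 \tag{2}

f(x) =
\begin{cases}
(x / X_{\max})^{\alpha}, & x < X_{\max} \\
1, & \text{otherwise}
\end{cases} \tag{3}
```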

where α and X_max are set to 0.75 and 1.00, respectively. α improves the accuracy of low-frequency words by scaling up their weight, while X_max limits the maximal number of occurrences.

    3.4 Logformer

Most existing deep learning approaches take the embeddings of preprocessed logs as input and predict each log entry independently. A single log entry may appear normal on its own, but when it occurs consecutively in adjacent log entries, it may indicate an anomaly. Therefore, exploiting the long-range dependency information among adjacent log entries is beneficial for anomaly detection. In this work, Logformer is proposed to address the aforementioned issues. Logformer consists of a cascaded transformer architecture with an encoder head and a decoder head, where both heads share the same layer architecture. The encoder head contains 6 stacked layers, each consisting of two sub-layers: a multi-head self-attention layer and a feed-forward network. The decoder head contains only 1 layer, with the same two sub-layers as the encoder head. In the following, we describe the self-attention layer and the feed-forward network in detail.
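A minimal PyTorch-style sketch of this cascade is shown below, assuming the layer counts from the text (6 encoder layers, 1 decoder layer); the token pooling, head count, and classifier are illustrative choices, and the stock layers here use LayerNorm rather than the batch normalization described in Section 3.4.4:

```python
import torch
import torch.nn as nn

class Logformer(nn.Module):
    """Illustrative sketch of the cascaded structure: a 6-layer encoder head
    over the tokens of each log entry, a 1-layer decoder head over the
    sequence of entry vectors, and a linear classifier."""

    def __init__(self, vocab_size: int, d_model: int = 128, n_heads: int = 4):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        layer = nn.TransformerEncoderLayer(
            d_model, n_heads, dim_feedforward=4 * d_model, batch_first=True
        )
        self.encoder_head = nn.TransformerEncoder(layer, num_layers=6)  # per-entry encoding
        self.decoder_head = nn.TransformerEncoder(layer, num_layers=1)  # context across entries
        self.classifier = nn.Linear(d_model, 2)                          # normal / abnormal

    def forward(self, token_ids: torch.Tensor) -> torch.Tensor:
        # token_ids: (num_adjacent_entries, tokens_per_entry)
        e = self.embed(token_ids)                 # (entries, tokens, d_model)
        h = self.encoder_head(e).mean(dim=1)      # one vector H_i per log entry
        h = self.decoder_head(h.unsqueeze(0))     # treat the H_i as a new sequence
        return self.classifier(h.squeeze(0))      # per-entry logits

model = Logformer(vocab_size=8000)
window = torch.randint(0, 8000, (32, 20))  # 32 adjacent log entries, 20 tokens each
print(model(window).shape)                 # torch.Size([32, 2])
```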

    3.4.1 Multi-Head Self-Attention

Logformer uses multi-head self-attention to capture long-range dependencies in adjacent log entries. The heart of the encoder and decoder heads is self-attention, which maps a query and a set of key-value pairs to an output, where the query, keys, values, and output are all vectors. Conventional self-attention contains only one head, while multi-head self-attention contains several heads, as shown in Fig. 3.

Figure 3: Attention mechanism

Denoting the query, key, and value as Q, K, and V ∈ R^{d_model}, respectively, self-attention can be calculated as Eq.(4).
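Written out in the standard transformer form, Eq. (4) is:

```latex
\mathrm{Attention}(Q, K, V) = \mathrm{softmax}\!\left(\frac{QK^{\top}}{\sqrt{d_{\mathrm{model}}}}\right) V \tag{4}
```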

where d_model is the dimension of the value vectors.

Instead of performing a single attention function, Logformer captures richer contexts with multiple individual attention functions. Multi-head self-attention can be regarded as repeating self-attention h times, where h is the number of attention heads, and it can be calculated as Eq.(5).
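In the standard multi-head formulation (the output projection W^O is part of the standard definition and is assumed here), Eq. (5) reads:

```latex
\mathrm{MultiHead}(Q, K, V) = \mathrm{Concat}(\mathrm{head}_1, \ldots, \mathrm{head}_h)\, W^{O}, \qquad
\mathrm{head}_i = \mathrm{Attention}\bigl(QW_i^{Q},\, KW_i^{K},\, VW_i^{V}\bigr) \tag{5}
```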

where W_i^Q, W_i^K, and W_i^V ∈ R^{d_model × d_h}, with d_h = d_model / h.

    3.4.2 Feed-Forward Network

The feed-forward network is used to obtain the information in the channel dimension. It applies an expansion operation to x ∈ R^{d_model × l} and then projects the intermediate output back to the original dimension, as calculated in Eq.(6).
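In the standard transformer form (with an inner expansion dimension d_ff and bias terms assumed), Eq. (6) is:

```latex
\mathrm{FFN}(x) = \max(0,\, xW_1 + b_1)\, W_2 + b_2, \qquad
W_1 \in \mathbb{R}^{d_{\mathrm{model}} \times d_{\mathrm{ff}}},\;
W_2 \in \mathbb{R}^{d_{\mathrm{ff}} \times d_{\mathrm{model}}} \tag{6}
```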

    3.4.3 Positional Encoding

    3.4.4 Batch Normalization

It is well known that normalizing the feature maps makes training faster and more stable. Logformer incorporates Batch Normalization (BN) in the attention blocks in place of the original Layer Normalization (LN) of the transformer. Layer Normalization is commonly used in RNNs, where the sequence length is often not fixed; since LN does not depend on the batch size or sequence length, it performs better in RNNs. Compared to LN, BN shows better robustness and generalization ability. BN also has the advantage of generally being faster at inference than batch-independent normalizers such as LN. BN treats the batch as a whole: the batch dimension is used in the calculation of both the mean and the variance [31]. An important feature of Logformer is that it combines contextual information from multiple adjacent log entries. We want the normalization in Logformer to be sensitive to the batch dimension, so we choose BN instead of LN.
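As a generic illustration (not the authors' exact implementation) of how BN can stand in for LN inside a transformer sub-layer, the feature dimension is moved into the channel position expected by nn.BatchNorm1d:

```python
import torch
import torch.nn as nn

class BatchNormTranspose(nn.Module):
    """Apply BatchNorm1d over the feature dimension of a (batch, seq, d_model)
    tensor, as a drop-in replacement for LayerNorm in a transformer sub-layer."""

    def __init__(self, d_model: int):
        super().__init__()
        self.bn = nn.BatchNorm1d(d_model)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # nn.BatchNorm1d expects (batch, channels, length), so move d_model into
        # the channel position, normalize over the batch, and move it back.
        return self.bn(x.transpose(1, 2)).transpose(1, 2)

# Example: normalize the output of a self-attention sub-layer.
x = torch.randn(32, 10, 128)             # (32 log windows, 10 entries, d_model=128)
print(BatchNormTranspose(128)(x).shape)  # torch.Size([32, 10, 128])
```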

    4 Experiment

    4.1 Dataset

In this paper, we select the open-source datasets HDFS and BGL from LogHub [28] to validate the effectiveness of Logformer, as shown in Table 2. These two log datasets are widely used in the field of anomaly detection. The HDFS dataset is generated by benchmark workloads in a private cloud environment, and labels are produced by manually designed rules that identify abnormal events. The BGL dataset contains logs collected from the BlueGene/L supercomputer system by Lawrence Livermore National Labs (LLNL). BGL is divided into alert and non-alert log entries: the first column of a BGL log entry either contains '-' or not, where '-' marks a non-alert entry and anything else marks an alert.

Table 2: Statistics of the datasets

    4.2 Evaluation Metrics

We select precision, recall, and F1-score as the evaluation metrics. These three metrics are based on the confusion matrix, which has four categories: true positives (TP) are examples correctly labeled as positive; false positives (FP) are negative examples incorrectly labeled as positive; true negatives (TN) are negative examples correctly labeled as negative; and false negatives (FN) are positive examples incorrectly labeled as negative.

Precision is the percentage of correctly predicted positive samples among all predicted positive samples. Recall is the percentage of correctly predicted positive samples among all real positive samples. The F1-score is the harmonic mean of precision and recall.
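Concretely, these metrics are computed as:

```latex
\mathrm{Precision} = \frac{TP}{TP + FP}, \qquad
\mathrm{Recall} = \frac{TP}{TP + FN}, \qquad
F_1 = \frac{2 \cdot \mathrm{Precision} \cdot \mathrm{Recall}}{\mathrm{Precision} + \mathrm{Recall}}
```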

where TP, FP, and FN denote the true positives, false positives, and false negatives, respectively.

    4.3 Implementation Details

We construct Logformer from two cascaded transformer heads. The numbers of transformer layers in the encoder head and the decoder head are 6 and 1, respectively. The hidden dimension is 128 and the number of attention heads in Logformer is 6. Layer normalization is replaced with batch normalization in the transformer layers. We use AdamW to optimize all parameters. The learning rate is set to 5e-4 and the batch size is set to 32.
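A rough sketch of this training setup, reusing the illustrative Logformer class from Section 3.4 (the data pipeline is omitted and the inputs below are random placeholders):

```python
import torch

# Optimizer setup matching the reported hyper-parameters.
model = Logformer(vocab_size=8000)
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-4)
criterion = torch.nn.CrossEntropyLoss()

# One optimization step on a window of 32 adjacent log entries (batch size 32).
window = torch.randint(0, 8000, (32, 20))   # 32 log entries, 20 tokens each
labels = torch.randint(0, 2, (32,))         # 0 = normal, 1 = abnormal
loss = criterion(model(window), labels)
loss.backward()
optimizer.step()
```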

    4.4 Experiment Result

Experiments on the HDFS and BGL datasets are shown in Table 3. We compared Logformer with several existing methods on the two public datasets, including the data-driven method PCA [32], the traditional machine learning methods LR [33] and SVM [33], and the deep learning method LogRobust [34]. Due to its limitations, PCA achieves poor results on both the HDFS and BGL datasets. It can be observed that both conventional machine learning and deep learning methods obtain consistently high performance on the HDFS dataset. The reason is that log entries in HDFS tend to be more structured than in BGL, and abnormal log entries differ markedly from normal ones. On the more complicated BGL dataset, as shown in Table 3, LogRobust and Logformer outperform LR and SVM by a large margin. Notably, Logformer is superior to LogRobust with an increase of 8% in F1-score, which demonstrates that Logformer is more suitable for complicated semi-structured logs.

    Table 3: Experiment results on HDFS and BGL datasets

    4.5 Ablation Study

To validate the hypothesis that pre-trained embeddings make the embedding space match the textual information of logs better, we conduct comparative experiments using pre-trained embeddings and randomly initialized embeddings on the HDFS and BGL datasets. In addition, we validate the importance of extracting the contextual information present in adjacent log entries by adding or removing the encoder and decoder heads. Finally, we discuss the influence of the number of log entries on anomaly detection through experiments.

As shown in Table 4, the pre-trained embedding is superior to the randomly initialized embedding on all metrics for both the HDFS and BGL datasets. Pre-training extracts prior knowledge of the task and improves the performance of Logformer by making good use of the textual information of the logs.

Table 4: Experiment results on the effectiveness of pre-trained embedding

When validating the effectiveness of the encoder and decoder heads, we set up three comparative studies: w/o means predicting directly after averaging the pre-trained embeddings; w encoder means predicting from the output of the encoder head; w encoder-decoder means predicting from the output of the decoder head. It can be observed from Table 5 that the encoder head obtains better results than the pre-trained embeddings alone on both datasets, which demonstrates that the self-attention layers in the encoder head can capture the long-range dependency information present within a log entry. After adding both the encoder and decoder heads, the results are better than with the encoder head alone, which means the decoder head improves the performance of Logformer by extracting potential information from adjacent log entries.

Table 5: Experiment results on the effectiveness of the decoder head

As shown in Table 6, we experiment with different batch sizes. In Logformer, the batch size represents the number of log entries combined as context. As the batch size increases, the detection performance of Logformer also improves slightly, which is in line with intuition. However, a larger batch size means greater resource consumption and longer runtime, so the batch size can be adjusted as needed in actual use.

Table 6: Experiment results on the effectiveness of batch size

The experimental results show that Logformer with the complete cascaded structure achieves the best results on all datasets, which proves that the proposed cascaded structure is effective. The encoder first combines multiple semantic vectors to complete the encoding, and then the contextual associations between adjacent log entries are captured by the decoder. Semantic vectors containing rich context information make a significant contribution to system log anomaly detection.

    5 Conclusion

This paper proposes an anomaly detection method with a cascaded structure that can make full use of the potential context information among adjacent logs. We also propose a log preprocessing method that converts semi-structured logs into structured logs by removing common punctuation and other redundant characters. To make the embedding space match the textual semantics of logs, the WordPiece algorithm is adopted to tokenize the preprocessed logs into subwords, and the Glove algorithm is used to train embeddings on the log corpus. Finally, the cascaded model Logformer is designed to learn a vector representation for each log entry and extract long-range dependencies among adjacent log entries.

Logformer achieves superior performance compared to conventional machine learning algorithms and some deep learning models. From the perspective of the two ablation studies, Logformer outperforms other approaches by efficiently extracting the potential information present in adjacent log entries. However, there are still challenges in real-world scenarios: we do not provide an expert system to realize a feedback mechanism, and such a feedback mechanism often plays a crucial role in an online system. These problems are the direction of our future efforts.

Acknowledgement: The authors wish to express their appreciation to the reviewers for their helpful suggestions, which greatly improved the presentation of this paper.

Funding Statement: This work was supported by the National Natural Science Foundation of China (Nos. 62072074, 62076054, 62027827, 61902054, 62002047), the Frontier Science and Technology Innovation Projects of the National Key R&D Program (No. 2019QY1405), the Sichuan Science and Technology Innovation Platform and Talent Plan (No. 2020TDT00020), and the Sichuan Science and Technology Support Plan (No. 2020YFSY0010).

Conflicts of Interest: The authors declare that they have no conflicts of interest to report regarding the present study.
