
    Incident and Problem Ticket Clustering and Classification Using Deep Learning

ZTE Communications, 2023, No. 4

    FENG Hailin, HAN Jing, HUANG Leijun,SHENG Ziwei, GONG Zican

(1. Zhejiang A&F University, Hangzhou 310007, China; 2. ZTE Corporation, Shenzhen 518057, China; 3. Huazhong University of Science and Technology, Wuhan 430074, China)

Abstract: A holistic analysis of problem and incident tickets in a real production cloud service environment is presented in this paper. By extracting different bags of words, we use principal component analysis (PCA) to examine the clustering characteristics of these tickets. Then, K-means and latent Dirichlet allocation (LDA) are applied to reveal the potential clusters within this cloud environment. The second part of our study uses a pre-trained bidirectional encoder representations from transformers (BERT) model to classify the tickets, with the goal of predicting the optimal dispatching department for a given ticket. Experimental results show that, due to the unique characteristics of ticket descriptions, pre-processing with domain knowledge turns out to be critical in both clustering and classification. Our classification model yields 86% accuracy when predicting the target dispatching department.

    Keywords: problem ticket; ticket clustering; ticket classification

    1 Introduction

For cloud service providers, maintaining an outstanding service level agreement with minimum downtime and incident response time is critical to the business. Providing such high levels of reliability and availability depends heavily on IT operations. However, the emergence of modern computing architectures, such as virtual machines, containers, serverless architectures, and microservices, brings additional challenges to the management of such environments[1-2].

Problem and incident tickets have been a long-standing mechanism for tracking issues reported by customers or alerts raised by monitoring systems. According to the Information Technology Infrastructure Library (ITIL) specification, the incident, problem, and change (IPC) systems fulfill the tracking, analysis, and mitigation of problems[3]. Change requests are nowadays mostly managed differently due to the practice of DevOps, while incident and problem tickets often share the same system and process. An incident or problem ticket usually starts with a short description of the problem that was originally observed. The ticket may then be augmented by the assigned personnel throughout the debugging and resolution process. There are also multiple software platforms and services that help enterprises manage these tickets, including BMC Remedy, IBM SmartCloud Control Desk, SAP Solution Manager, ServiceNow, etc.[4]

However, dispatching an incident or problem ticket is still largely a manual process that depends on human knowledge. Some ticket management systems offer insights such as agent skill level, capacity, and relevance, and there is early work that attempts to dispatch tickets based on an agent's speed inferred from historical data[5]. Our observation is that dispatching to individual agents is a secondary issue; finding the matching department for a specific issue is the primary one, especially when a prompt resolution is the desired outcome. It is not uncommon for a ticket to go through multiple departments before it lands on the right one. For example, a service-unavailable problem might be caused by security settings, networking, hosting services, applications, or even databases, and the problem may be resolved by one of these departments or by several of them. Therefore, to resolve an issue efficiently, it is essential to find the most likely department, especially at the beginning when the problem is first reported. The specific technical challenge of classifying a ticket at this early stage is that the only available feature is the problem description.

    2 Related Work

IT operation has been a critical issue since the earliest computer systems. With the prevalence of online services, IT operations play a central role in minimizing system downtime and maintaining premium service level agreements. Especially in today's highly distributed, multi-layered cloud environments, it is nontrivial to effectively find the matching department to resolve an issue.

Artificial intelligence has been applied to IT operations, especially in anomaly detection[11-12], problem troubleshooting[13-14], and security[15-16]. A few works have attempted to improve the efficiency of ticket dispatching. BOTEZATU et al.[5] tried to find the most cost-effective agent for ticket resolution, rather than a matching group or department. SHAO et al.[17] focused on the transfer information in ticket resolution and formulated a model based on prior resolution steps. AGARWAL et al.[18] used a support vector machine and a discriminative term to predict the matching department. While we also use ticket descriptions and other attributes to find the best department, our solution is quite different from these previous works.

In terms of ticket analysis, there are only a few works on alert or ticket clustering. LIN et al.[19] used graph theory and similarity measures such as the Jaccard index as the clustering mechanism. MANI et al.[20] proposed a technique combining latent semantic indexing and a hierarchical n-gram algorithm. AGARWAL et al.[21] used a mixture of data mining, machine learning, and natural language parsing techniques to extract and analyze unstructured texts in IT tickets. JAN et al.[22] proposed a framework for text analysis in an IT service environment. We examine the clustering characteristics to discover the content of the ticket descriptions specific to the system under investigation. Our approach is generic to all systems with minor adjustments of synonyms and user dictionaries. As for the clustering itself, we believe our dataset is also unique, as it comes from a recent container-based cloud environment that is more complicated than prior systems.

    3 Design of Clustering System

We apply different topic modeling algorithms to cluster the tickets based on their descriptions and compare their performance using the sum of squared errors (SSE) and silhouette scores. The clustering results indicate the number of major topics in the ticket description corpus. Since this is an unsupervised learning process, it saves considerable data annotation effort. For ticket classification, word embedding models have shown much better performance; therefore, we only adopt a supervised approach using a pre-trained BERT model[6] that is fine-tuned with domain-specific labeled data.

Fig. 1 illustrates the overall steps of our ticket description clustering. First, data preprocessing is performed by extracting texts, merging synonyms, removing stop words, etc. After tokenizing the texts, we construct four types of bags of words (BoW): binary BoW, term frequency (TF) BoW, term frequency-inverse document frequency (TF-IDF)[7] BoW, and expert-weighted BoW. For each BoW, we apply principal component analysis (PCA) to check for clustering possibility and use K-means to cluster the topics. We also perform latent Dirichlet allocation (LDA)[8] for topic extraction and modeling. Finally, we show some sample topics in the clusters.

    ▲Figure 1. Data analysis flow
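A minimal sketch of the four BoW variants is given below; the library choice (scikit-learn) and the expert-weight table are our own illustrative assumptions rather than the exact implementation used in this work.

```python
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer

# Already preprocessed/tokenized symptom texts (toy examples).
docs = [
    "disk full backup failed",
    "network unreachable pod restart",
    "database connection timeout backup",
]

binary_bow = CountVectorizer(binary=True).fit_transform(docs)   # 0/1 term presence
tf_vec = CountVectorizer()
tf_bow = tf_vec.fit_transform(docs)                             # raw term frequencies
tfidf_bow = TfidfVectorizer().fit_transform(docs)               # TF-IDF weights

# Expert-weighted BoW: scale term frequencies by domain-knowledge weights
# (the weights below are purely illustrative).
expert_weight = {"disk": 2.0, "network": 2.0, "database": 2.0}
vocab = tf_vec.get_feature_names_out()
weights = np.array([expert_weight.get(term, 1.0) for term in vocab])
expert_bow = tf_bow.toarray() * weights
```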

    4 Experiments

    We use two datasets from an enterprise-scale cloud provider, comprising 468 infrastructure-level and 787 Platform as a Service (PaaS)-level incident tickets, respectively. Since both datasets have similar data formats, we use the same analysis methods, which are mainly unsupervised machine learning approaches such as K-means and LDA. Our goal is to learn and make use of the inherent homogeneity of the complicated ticket descriptions by analyzing them.

For model training, we use the number, title or subject, and description from the datasets, where the title or subject is a summary of the incident and the description is a detailed text describing the problem. Some of the description texts are in semi-structured form. For example, more than half of the infrastructure-level ticket descriptions consist of explicit attributes such as symptoms, progress, network topology, conclusions, steps, and remarks. We focus on the symptom attribute rather than the entire text body, since prediction needs to be performed when the ticket has only a symptom description. Parts of the corpus such as file names, URL links, and system logs are filtered out as part of preprocessing.

    4.1 Data Preprocessing

We extract the text of the symptom attribute from the ticket description. If the description does not contain an explicit "symptom" attribute, the whole text is used. For the symptom text, we use regular expressions to filter unwanted data such as attached picture file names, dates, times, and URLs, and we also delete as many system logs as possible. We also perform spell checking using a dictionary.
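A minimal sketch of this regex-based filtering is shown below; the patterns are illustrative stand-ins, not the exact rules we use.

```python
import re

def clean_symptom(text: str) -> str:
    """Remove URLs, dates, times, and attachment file names from symptom text."""
    text = re.sub(r"https?://\S+", " ", text)                            # URL links
    text = re.sub(r"\d{4}[-/]\d{1,2}[-/]\d{1,2}", " ", text)             # dates
    text = re.sub(r"\d{1,2}:\d{2}(:\d{2})?", " ", text)                  # times
    text = re.sub(r"\S+\.(png|jpg|jpeg|gif|log|txt|zip)\b", " ", text,
                  flags=re.IGNORECASE)                                   # attached files
    return re.sub(r"\s+", " ", text).strip()

print(clean_symptom("VM unreachable since 2023/05/01 10:23, see http://wiki/x screenshot.png"))
```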

Our next step is to convert the symptom texts into individual word tokens. Since most of the incident descriptions are a mixture of Chinese and English, we use a different tokenization tool for each language: "Jieba" for Chinese and "spaCy" for English. We also remove stop words from the output tokens and merge synonyms; e.g., "db" and "database" have the same meaning, so both are uniformly replaced by "database". The stop word lists are from Baidu[9] and GitHub[10]; we merge both and extend them with some ticket-specific stop words for the experiments.
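A minimal sketch of the mixed Chinese/English tokenization, assuming Jieba and a bare spaCy English tokenizer; the stop-word and synonym tables shown are tiny placeholders for the merged Baidu/GitHub lists.

```python
import re
import jieba
import spacy

nlp_en = spacy.blank("en")                          # tokenizer only, no model download
stop_words = {"the", "a", "is", "请", "问题"}         # placeholder for the merged lists
synonyms = {"db": "database", "k8s": "kubernetes"}   # placeholder synonym table

def tokenize(text):
    tokens = []
    # Split the text into Chinese runs and non-Chinese runs.
    for chunk in re.split(r"([\u4e00-\u9fff]+)", text):
        if re.match(r"[\u4e00-\u9fff]", chunk):
            tokens.extend(jieba.lcut(chunk))
        else:
            tokens.extend(t.text.lower() for t in nlp_en(chunk) if not t.is_space)
    tokens = [synonyms.get(t, t) for t in tokens]
    return [t for t in tokens if t.strip() and t not in stop_words]

print(tokenize("db连接超时, the pod restarts"))
```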

Since the titles contain interfering text and markup similar to the symptoms, they are preprocessed in the same way. The process described above ultimately generates lists of the most frequently used tokens for both the titles and the symptoms. Sampled high-frequency Chinese and English words from the results are shown in Table 1.

    4.2 Clustering Using BoW Models

First, we study the clustering characteristics of the incident tickets using the BoW model. From the tokens extracted during preprocessing, we choose the top high-frequency words for the title and the symptom respectively. We combine the tokens from the title and the symptom based on a predefined weight so that each ticket is transformed into a word frequency vector; accordingly, the dataset is represented by a word frequency matrix.
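A minimal sketch of the weighted combination of title and symptom token counts into a word frequency matrix; the 0.3/0.7 weights and the shared scikit-learn vocabulary are illustrative assumptions, not the values used in the paper.

```python
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer

title_docs = ["backup job failed", "pod restart loop"]
symptom_docs = ["nightly backup failed because node storage is full",
                "pod keeps restarting after a network timeout"]

vec = CountVectorizer().fit(title_docs + symptom_docs)   # shared vocabulary
title_m = vec.transform(title_docs).toarray()
symptom_m = vec.transform(symptom_docs).toarray()

w_title, w_symptom = 0.3, 0.7                            # predefined weights (illustrative)
freq_matrix = w_title * title_m + w_symptom * symptom_m  # one row per ticket
```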

    ▼Table 1. Sample of high frequency words

We apply principal component analysis (PCA) to the normalized word frequency matrix of the dataset, aiming to select an appropriate number of components using the cumulative explained variance. For example, Fig. 2 shows the PCA results of the symptom word frequency matrix, indicating that if 100 components are selected, more than 90% of the variance can be explained.
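A minimal sketch of selecting the number of components from the cumulative explained variance, assuming scikit-learn and a random stand-in for the normalized word frequency matrix.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
freq_matrix = rng.poisson(1.0, size=(500, 300)).astype(float)  # stand-in frequency matrix

X = StandardScaler().fit_transform(freq_matrix)                # normalization
cumvar = np.cumsum(PCA().fit(X).explained_variance_ratio_)
n_components = int(np.searchsorted(cumvar, 0.90) + 1)          # smallest n explaining >90%
X_reduced = PCA(n_components=n_components).fit_transform(X)
print(n_components, X_reduced.shape)
```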

After the number of principal components is selected, we project the word frequency matrix onto these components and use K-means for clustering analysis. For a given range of cluster numbers (i.e., values of K), we generate SSE and silhouette coefficient curves. As a best practice, the number of clusters is chosen at the inflection point of the downward-trending SSE curve or at the point where the upward-trending silhouette coefficient curve reaches a plateau. The results are shown in Fig. 2. The SSE curve does not show an obvious inflection point, and the absolute value of the silhouette coefficient is too small even though the trend meets the requirement (the silhouette coefficient ranges from -1 to 1, and the closer it is to 1, the more reasonable the clustering). We conclude that PCA on the word frequency matrix may not be a viable approach to determining the best cluster size.
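A minimal sketch of sweeping K and recording the SSE (K-means inertia) and silhouette score, assuming scikit-learn and random stand-in data for the PCA-projected matrix.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 100))                    # stand-in for the PCA-projected matrix

for k in range(2, 21):
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X)
    sse = km.inertia_                              # sum of squared distances to centroids
    sil = silhouette_score(X, km.labels_)          # ranges from -1 to 1
    print(f"k={k:2d}  SSE={sse:10.1f}  silhouette={sil:.3f}")
```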

We also run experiments with other models such as TF-IDF to generate the word frequency matrix, and the results are similar to those with PCA, indicating that the word frequency matrix approach may not be suitable for incident tickets.

4.3 Clustering Using the Latent Dirichlet Allocation (LDA) Model

▲Figure 2. Principal component analysis (PCA) results, K-means SSE, and K-means silhouette curves using the bag of words (BoW) model

In this section, we use LDA to extract dominant topics from the title token list, the symptom token list, and the combined token list. To determine the optimal number of topics, we compare the perplexity and coherence scores obtained with different topic numbers. We select the topic number at the inflection point where the perplexity or coherence curve turns.

Fig. 3 shows the results for the cloud infrastructure ticket data using LDA with 1-30 topics. Based on the characteristics of the curves, we select 14 topics. The top ten keywords and their probabilities for each topic are shown in Table 2.
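A minimal sketch of the topic-number sweep with perplexity and coherence, assuming gensim and toy token lists; the paper does not specify which coherence measure is used, so the u_mass variant is shown here.

```python
from gensim.corpora import Dictionary
from gensim.models import CoherenceModel, LdaModel

texts = [["backup", "failed", "storage"], ["network", "timeout", "pod"],
         ["database", "connection", "timeout"], ["storage", "full", "upgrade"],
         ["pod", "restart", "network"], ["backup", "storage", "disk"]]
dictionary = Dictionary(texts)
corpus = [dictionary.doc2bow(t) for t in texts]

for k in range(2, 7):
    lda = LdaModel(corpus=corpus, id2word=dictionary, num_topics=k, random_state=0)
    perplexity = lda.log_perplexity(corpus)        # per-word log-perplexity bound
    coherence = CoherenceModel(model=lda, corpus=corpus, dictionary=dictionary,
                               coherence="u_mass").get_coherence()
    print(k, round(perplexity, 3), round(coherence, 3))
```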

    ▲Figure 3. LDA perplexity and coherence curves

    ▼Table 2. Topics and keywords in each topic

We evaluate the topic probability distribution of each incident ticket and then cluster the tickets using K-means. Fig. 4 shows the SSE curve and the silhouette coefficient curve respectively. The curves demonstrate more significant turning points than those obtained with the BoW model, indicating that the LDA model is more suitable for incident ticket clustering.
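A minimal sketch of clustering tickets on their per-ticket topic probability vectors, assuming gensim and scikit-learn with toy data.

```python
import numpy as np
from gensim.corpora import Dictionary
from gensim.models import LdaModel
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

texts = [["backup", "failed", "storage"], ["network", "timeout", "pod"],
         ["database", "timeout"], ["storage", "full", "upgrade"],
         ["pod", "restart", "network"], ["backup", "storage", "full"]]
dictionary = Dictionary(texts)
corpus = [dictionary.doc2bow(t) for t in texts]
lda = LdaModel(corpus=corpus, id2word=dictionary, num_topics=3, random_state=0)

# One topic-probability vector per ticket (minimum_probability=0 keeps every topic).
doc_topics = np.array([[p for _, p in lda.get_document_topics(bow, minimum_probability=0.0)]
                       for bow in corpus])
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(doc_topics)
print(km.labels_, silhouette_score(doc_topics, km.labels_))
```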

With 14 topics, the SSE of the LDA-allocated tokens is about 20, while the SSE of BoW is about 215, an order of magnitude higher. Although the SSE score cannot be mapped directly to an exact accuracy, the lower the score, the more compact the clusters. Similarly, the silhouette coefficient for the LDA results with 14 topics is about 0.37, compared with less than 0.10 using the BoW models. Since the score measures how well separated the clusters are, ranging from -1 to 1, a value close to 1 indicates clearly distinguished clusters.

Table 3 shows the titles of the tickets in one cluster. Storage-related problems make up the majority of the tickets, especially during the upgrade and backup stages, followed by networking issues.

    4.4 Incident Ticket Classification and Prediction

    ▲Figure 4. K-means SSE and silhouette curves using latent Dirichlet allocation (LDA) model

    ▼Table 3. Samples of title descriptions in one cluster

Our ticket clustering experiments reveal that incident tickets do have clustering characteristics. In order to take full advantage of prior knowledge, e.g., to assign incoming tickets to the department that has resolved similar ones before, we study the classification and prediction of incident tickets in this section. We use a similar dataset with more fields, including ticket ID, ticket description, resolution, resolution groups, categories, sub-categories, and components. After removing null values, the categories and record numbers are shown in Table 4.

There are 115 sub-categories, 49 of which contain 1 000 records or more. These 49 sub-categories account for 96% of the total tickets, and 30 of them contain 3 000 records or more, accounting for 87% of the total records. As for components, there are 663 in total, among which 88 have more than 1 000 records, accounting for 79% of the total, and 34 have more than 3 000 records, accounting for 54% of the total.

    ▼Table 4. Categories and record numbers

    In order to achieve fine granularity of the classification, we use the combination of sub-categories and components as the label. There are 29 top labels with more than 3 000 records.
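A minimal sketch of building the combined label and keeping only sufficiently frequent labels, assuming pandas; the column names are illustrative, and the threshold is lowered to 2 for the toy data (the paper uses 3 000 records).

```python
import pandas as pd

# Toy frame standing in for the real ticket export.
df = pd.DataFrame({
    "description": ["vpn fails", "vpn drops", "mail slow", "disk full"],
    "sub_category": ["network", "network", "zmail", None],
    "component":    ["vpn", "vpn", "client", "storage"],
})
df = df.dropna(subset=["description", "sub_category", "component"])
df["label"] = (df["component"].str.strip().str.lower() + "." +
               df["sub_category"].str.strip().str.lower())
counts = df["label"].value_counts()
df = df[df["label"].isin(counts[counts >= 2].index)]   # keep labels above the threshold
print(df[["description", "label"]])
```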

We compare multiple classification algorithms, including TF-IDF, LDA, and BERT. As expected, BERT achieves the best precision and recall on the same dataset. Both TF-IDF and LDA with a regression model yield a prediction accuracy of less than 80%. We therefore build our incident classification model on BERT, as shown in Fig. 5.

    1) Architecture of our model

    · The input layer is a text layer with preprocessed incident description text.

    ▲Figure 5. BERT classification network architecture

· The preprocessing layer is a Chinese preprocessing model released by Google to match the BERT model. Every ticket text is transformed into three vectors, input_word_ids, input_mask and input_type_ids, each with 128 dimensions. input_word_ids holds the token IDs, with unused positions padded with 0. The corresponding positions of input_mask are set to 1 for real tokens and 0 for the padding. input_type_ids distinguishes different sentences; in this classification study, we set all of its elements to 0.

· BERT_encoder is a pre-trained BERT model released by Google with 12 layers (bert_zh_L-12_H-768_A-12). Its output consists of pooled_output (one 768-element vector per text), sequence_output (one 768-element vector per word in each text), and encoder_outputs (the outputs of the inner layers). We only use pooled_output in this experiment.

· The dropout layer aims to avoid overfitting. The dropout probability is set to 0.1.

· The classifier layer is a fully connected layer that outputs, for each ticket, the probability of belonging to each classification label. A minimal code sketch of this architecture follows the list.
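A minimal sketch of this architecture with TensorFlow/Keras and TF Hub; the exact hub handles and versions are assumptions inferred from the model name given above, not values stated in the paper.

```python
import tensorflow as tf
import tensorflow_hub as hub
import tensorflow_text  # noqa: F401  (registers ops required by the preprocessor)

NUM_CLASSES = 29
PREPROCESS = "https://tfhub.dev/tensorflow/bert_zh_preprocess/3"    # assumed handle
ENCODER = "https://tfhub.dev/tensorflow/bert_zh_L-12_H-768_A-12/4"  # assumed handle

text_input = tf.keras.layers.Input(shape=(), dtype=tf.string, name="description")
encoder_inputs = hub.KerasLayer(PREPROCESS, name="preprocessing")(text_input)
bert_outputs = hub.KerasLayer(ENCODER, trainable=True, name="BERT_encoder")(encoder_inputs)
x = tf.keras.layers.Dropout(0.1)(bert_outputs["pooled_output"])     # 768-d pooled vector
logits = tf.keras.layers.Dense(NUM_CLASSES, name="classifier")(x)
model = tf.keras.Model(text_input, logits)
```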

    2) Training and testing data preparation

We use the following preprocessing steps to generate the training and testing data (a consolidated code sketch follows the list):

    · Delete all the incident tickets containing null value category information or empty ticket descriptions.

· Convert the classification labels to lowercase and delete redundant blank spaces. This operation is motivated by the original data, where some categories and items are essentially the same and differ only in case, for example, iCenter and Icenter.

    · Delete tickets with ambiguous items and category labels like“other, others, to be classified, and other pending problems”.

    · Merge the item and category labels in the form of component.category such as intelligent maintenance.itsp serve website.

· After the merging operation, delete the labels, along with their incident tickets, whose record count is less than the threshold (set to 3 000 in this experiment).

· Remove HTML formatting and redundant whitespace (including line feeds) from the incident description texts. English content is also converted to lowercase.

    · Shuffle the resulting incident data. 70% of the dataset is utilized as the training set and the remaining 30% is used as the test set.

· Each classification label and its number of relevant incident tickets are given in Table 5 (the 29 classification labels with more than 3 000 records each are retained).
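A consolidated sketch of the description clean-up, shuffling, and 70/30 split; BeautifulSoup and the column names are our own assumptions, not the exact tooling used in this work.

```python
import re
import pandas as pd
from bs4 import BeautifulSoup
from sklearn.model_selection import train_test_split

def clean_description(html: str) -> str:
    text = BeautifulSoup(html, "html.parser").get_text(" ")   # strip HTML formatting
    text = re.sub(r"\s+", " ", text).strip()                  # collapse spaces and line feeds
    return text.lower()                                       # lowercase English content

df = pd.DataFrame({"description": ["<p>VPN  fails\nagain</p>", "<div>Mail is slow</div>"],
                   "label": ["vpn.network", "client.zmail"]})
df["description"] = df["description"].map(clean_description)
train_df, test_df = train_test_split(df, test_size=0.3, shuffle=True, random_state=0)
```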

    As a result, 103 094 incident tickets are identified as training data and 44 183 incident tickets are collected as test data.

For training the model, we adopt sparse categorical cross-entropy as the loss function and sparse categorical accuracy as the accuracy metric, and we optimize the model with AdamW. The experiment sets the initial learning rate to 3e-5 and the number of epochs to 5. The original training data are partitioned into a training set and a validation set at a ratio of 9:1 in this training procedure (i.e., the number of incident tickets used for model training is the number of preprocessed incident tickets × 70% × 90%).
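A minimal sketch of this training setup, continuing the model from the architecture sketch above; tf.keras.optimizers.AdamW (available in recent TensorFlow releases) stands in for whichever AdamW implementation was used, and train_texts/train_labels are hypothetical arrays from the preprocessing step.

```python
import tensorflow as tf

# `model` is the BERT classifier from the architecture sketch above;
# `train_texts` / `train_labels` are hypothetical arrays from preprocessing.
model.compile(
    optimizer=tf.keras.optimizers.AdamW(learning_rate=3e-5),
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    metrics=[tf.keras.metrics.SparseCategoricalAccuracy()],
)
history = model.fit(train_texts, train_labels,
                    validation_split=0.1,   # 9:1 train/validation split
                    epochs=5)
```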

▼Table 5. Top labels (combination of sub-categories and components) and record numbers

    Fig. 6 shows the training loss, training accuracy, validation loss, and validation accuracy of each epoch.

To verify our model after training, we perform classification on the test set. The assessment results are shown in Table 6. The overall precision reaches 86%. The confusion matrix of the prediction results is shown in Table 7. The number in cell (i, j) denotes the number of tickets whose true label is i but which are predicted as j by the model. Therefore, the numbers of correctly classified incident tickets lie on the diagonal, while the numbers off the diagonal show the misclassifications.
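A minimal sketch of computing the overall precision and the confusion matrix from the test-set predictions, assuming scikit-learn; the averaging choice and the toy label arrays are illustrative.

```python
import numpy as np
from sklearn.metrics import confusion_matrix, precision_score

y_true = np.array([0, 1, 2, 2, 1, 0])        # toy stand-ins for the test labels
y_pred = np.array([0, 1, 2, 1, 1, 0])        # and the model predictions
cm = confusion_matrix(y_true, y_pred)        # cell (i, j): true label i predicted as j
precision = precision_score(y_true, y_pred, average="weighted")
print(cm)
print(precision)
```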

    ▲Figure 6. Training loss and accuracy vs validation loss and accuracy

    ▼Table 6. Prediction accuracy on test data

From Table 8, we can see that a majority of the classification errors occur among different items of the same sub-category. For example, the model confuses items within sub-categories such as desktop cloud, PC-side zmail incidents, ifol finance, and uds failure. In addition, 7.4% of the tickets labeled OS issues - installation are wrongly predicted as AIOps - itsp service website, and 7.4% of the tickets labeled individual network issues - restriction are wrongly predicted as network proxy - usage issues.

    ▼Table 7. Confusion matrix of prediction results

    ▼Table 8. Sample labels of classification error

    AIOps: artificial intelligence for IT operations

    5 Conclusions

In this paper, we demonstrate the semantic characteristics of problem and incident tickets. Using ticket data from a real production cloud environment, we compare different text mining techniques: LDA and K-means are applied to reveal the ticket clusters. We then use BERT as the deep learning framework and fine-tune it to build a resolution department matching system. Using the sub-category and component fields of the tickets as labels, our classification model achieves 86% accuracy when predicting the best-matching department to resolve a ticket.
