
    The Impact of Semi-Supervised Learning on the Performance of Intelligent Chatbot System

    2022-08-24 03:31:28  Sudan Prasad Uprety and Seung Ryul Jeong
    Computers, Materials & Continua, 2022, Issue 5

    Sudan Prasad Uprety and Seung Ryul Jeong*

    The Graduate School of Business Information Technology, Kookmin University, Seoul, 02707, Korea

    Abstract: Artificial intelligence based dialog systems are getting attention from both business and academic communities. The key parts of such intelligent chatbot systems are domain classification, intent detection, and named entity recognition. Various supervised, unsupervised, and hybrid approaches are used to detect each field. Such intelligent systems, also called natural language understanding systems, analyze user requests in sequential order: domain classification, then intent and entity recognition based on the semantic rules of the classified domain. This sequential approach propagates downstream errors; i.e., if the domain classification model fails to classify the domain, intent and entity recognition fail as well. Furthermore, training such an intelligent system requires a large number of user-annotated datasets for each domain. This study proposes a single joint predictive deep neural network framework based on long short-term memory, using only a small user-annotated dataset, to address these issues. It investigates the value added by incorporating unlabeled data from user chatting logs into multi-domain spoken language understanding systems. Systematic experimental analysis of the proposed joint framework, along with the semi-supervised multi-domain model, using open-source annotated and unannotated utterances shows robust improvement in the predictive performance of the proposed multi-domain intelligent chatbot over a base joint model and a joint model based on adversarial learning.

    Keywords: Chatbot; dialog system; joint learning; LSTM; natural language understanding; semi-supervised learning

    1 Introduction

    Natural language understanding (NLU) and speech understanding (SU) play a significantly important role in human-computer interaction (HCI) applications. Intelligent NLU systems, including chatbots, robots, voice control interfaces, and virtual assistants, are well-known HCI applications developed to communicate with humans via natural language. HCI is now a global trend and has drawn attention from different communities with the advancement and rapid development of machine learning (ML), deep neural networks (DNNs), and reinforcement learning. ELIZA [1] was the first machine with the ability to exhibit human behavior, understanding human language and communicating with humans using pattern matching to respond to the user. The modeling process of a single-domain conversational system or intelligent chatbot consists of detecting intent and recognizing entities from the user query. Virtual customer assistants, or chatbots, reduce information overload and call center effort, enabling a better customer experience (CX) in HCI applications or on company websites. Some institutions also deploy role-based assistants that can significantly help improve interactions with their customers, business partners, and employees. By reducing the complexity of data and rules, organizations can focus on repetitive and simple interactions where customer needs are well understood and satisfied. Organizations are struggling to manage the growth of such user query data. To address these issues, they have been implementing intelligent chatbots that serve customers 24/7 with or without call center help. Such intelligent systems have three principal parts: domain classification, intent detection, and entity recognition. For a multi-tasking chatbot, the domain classification model first classifies the domain, and then the intent and entity are recognized based on the frames of the classified domain, as shown in Fig. 1. A large amount of user-annotated data is needed to train a multi-domain dialog system. Major intelligent chatbot systems, such as Amazon Alexa, Apple Siri, Google Dialogflow, IBM Watson, Microsoft Cortana, and Samsung Bixby, support multi-domain conversation [2]. A typical multi-tasking or multi-domain chatbot system (as shown in Fig. 1) mainly has domain classification, intent prediction, entity recognition, and response generation or dialog management parts. Most intelligent chatbots process user queries in sequential order: domain classification, intent prediction, and slot prediction. Each step has its own separate ML model, and prediction proceeds in this sequence. A large number of user-annotated example utterances in each domain is essential before training the models. In addition, separate models are generated for the domain, intent, and entity, making it difficult to manage large sets of models. Furthermore, with this approach, an error in the domain prediction step may lead to errors in intent prediction and entity recognition, ultimately reducing the predictive performance of the chatbot. Typical supervised ML algorithms, such as the Bayesian algorithm, support vector machines (SVMs), logistic regression, and neural networks (NNs), can extract the domain and intent from user queries with separate models. However, advanced deep learning (DL) approaches, increased computing power, and the generation of large open-source datasets enable training a single joint model for domain classification, intent prediction, and entity recognition using a single set of utterances [3] containing multiple domain, intent, and slot (entity) labels, reducing the number of trained ML models [4].

    This study reduces human effort for the manual annotation of utterances by incorporating unannotated datasets from various data sources, such as user query logs, into a DNN algorithm, i.e., a single jointly trained long short-term memory (LSTM) based NLU model for a multi-domain intelligent chatbot. The single jointly trained LSTM-based NLU model reduces the number of classification and recognition models used in sequential approaches and mitigates downstream error propagation. LSTM was proposed in 1997 by Hochreiter and Schmidhuber for sequence modeling [5]; it adds an extra memory cell to a recurrent neural network (RNN), achieving better performance in representing and storing historical information. In the standard LSTM network, information transmission is one-way, and each memory cell can use historical information but not future information. Bidirectional LSTM (Bi-LSTM, shown in Fig. 2) was introduced to transmit and store both past and future information in each memory cell.

    The principle of Bi-LSTM is to connect each input to two LSTM layers running in opposite temporal directions: the forward LSTM passes historical information to the next step, and the backward LSTM captures future contextual information. Furthermore, extra unlabeled data [6] increases the information gain for a DL model trained with the LSTM algorithm. A single semi-supervised multi-domain joint model (SEMI-MDJM) based on LSTM outperforms a joint base model and an adversarial multi-domain joint model in each task, i.e., domain classification, intent prediction, and entity recognition.

    Figure 1: General architecture of a multi-domain chatbot

    Figure 2: Bi-LSTM joint model
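    To make the bidirectional encoding concrete, the following is a minimal TensorFlow/Keras sketch of a Bi-LSTM encoder. This is our illustration rather than the paper's released code; the vocabulary size is an assumed placeholder, while the sequence length, embedding size, and hidden units mirror the experimental settings reported in Section 4.

```python
import tensorflow as tf

# Minimal Bi-LSTM encoder sketch: a forward LSTM reads the token sequence
# left-to-right, a backward LSTM reads it right-to-left, and their hidden
# states are concatenated so each position sees past and future context.
inputs = tf.keras.Input(shape=(50,), dtype="int32")            # token ids, length fixed to 50
embedded = tf.keras.layers.Embedding(input_dim=10000,          # vocabulary size (assumed)
                                     output_dim=64)(inputs)    # 64-dim word embeddings
encoded = tf.keras.layers.Bidirectional(
    tf.keras.layers.LSTM(100, return_sequences=True))(embedded)  # 100 units per direction

encoder = tf.keras.Model(inputs, encoded)
encoder.summary()  # output shape (None, 50, 200): forward and backward states concatenated
```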

    The remainder of this work is structured as follows. Section 2 presents related prior work on intelligent dialog systems. Section 3 describes the proposed LSTM-based semi-supervised joint framework. Section 4 presents the experimental results and a detailed analysis of predictive accuracies, with additional analysis of the importance of unannotated data in the context of a general multi-domain chatbot. Finally, Section 5 concludes with a discussion and interesting areas for future study.

    2 Literature Review

    The first idea of an HCI application comes from the Turing test, or "imitation game", created by Alan Turing in 1950. ELIZA was the first conversational system, developed in 1966 based on pattern matching, responding to a user with keywords from the user query [1]. Later, another HCI application called ALICE was developed using the artificial intelligence markup language (AIML) [7] to mitigate the drawbacks of ELIZA. The performance of AIML was further improved [8] by applying a multiple-parameter design pattern to the decomposition rules. With the rapid development and advancement of ML algorithms, the emergence of DL techniques, and natural language processing (NLP), these intelligent chatbot systems are gaining popularity in various fields. Conversational systems help reduce various costs by automating the workflow of a customer or call center, resulting in rapid responses to customer queries [9]. Almansor and Hussain classified conversational systems into non-task-oriented and task-oriented categories [10]. Non-task-oriented systems are generally retrieval-based chatbots, which provide a similar or highly ranked list of information related to the user input. In contrast, task-oriented conversational systems are supervised or unsupervised models performing users' specific tasks based on ML algorithms rather than decomposition rules or keyword filtering. Recently, commercial chatbot systems such as Microsoft Cortana, IBM Watson, Amazon Alexa, Google Dialogflow, Apple Siri, and Bixby have been gaining interest from organizations [11]. These systems are mainly implemented in medical education, health assistance, educational systems, e-learning, e-commerce [12], sports, games, privacy, infrastructure, and other fields [2]. Recently, public administrators have begun implementing chatbot systems for real-time customer services [13]. Autonomous vehicles and smart home systems also embed natural language interaction applications [14]. The implementation of these dialog systems requires technical knowledge of NLP and NLU [15–17]. Recent studies report various new NLP and NLU representations, such as bag-of-concepts and bag-of-narratives [18].

    Although several technical and logical parts are involved in implementing intelligent chatbot systems, NLU is the core part of a chatbot. In an intelligent chatbot, the role of NLU is to parse the user query and learn what the user means. NLU systems contain three main subsystems: domain classifier, intent detector, and entity recognition models [19]. Generally, a multi-domain chatbot has three unsupervised or supervised ML models for recognizing each field. Various supervised and unsupervised learning techniques, including term frequency-inverse document frequency (TF-IDF), bag of words, word2vec, SVMs, the Bayes algorithm, NNs, boosting, maximum entropy, and deep belief networks [20], are widely applied to extract intents and slots in sequential NLU models. These separate pipelined ML models are created using a large number of utterances or examples [3]. Creating and annotating these utterances demands huge human effort. Recently, open research efforts have shared large annotated datasets from diverse domains in multiple languages. In addition, unannotated user-query data can be used and analyzed in the future. Vedula et al. [21] curated and released an annotated dataset of 25 k utterances for developing an intent model. Schuster et al. [22] curated 57 k annotated examples in English, Thai, and Spanish for three different domains, weather, alarm, and reminder, to develop a cross-lingual dialog system. Larson et al. [23] evaluated out-of-scope prediction with a dataset containing 150 intent classes from 10 different domains. Furthermore, these sequential frameworks are at high risk of introducing downstream errors into the intent detection and entity recognition phases. Since each predictive model is trained on sequences of text, contextual information from the previous step has significant importance for both traditional ML algorithms and recent DL approaches. These text data, i.e., utterances or examples, are sequential in nature, for which an LSTM-based DL framework demonstrates state-of-the-art performance [24].

    2.1 Domain Prediction

    Domain prediction is the process of filtering user input into a specific category in a multi-tasking dialog system. Many previous works on domain prediction exist. Hakkani-Tur et al. [25] proposed a semi-supervised domain prediction model using AdaBoost with user click logs from the Bing web search engine. Zheng et al. [26] proposed an out-of-domain detection mechanism to avoid unnecessary responses to user input. Xu et al. [27] proposed a contextual domain classification mechanism to reduce consecutive queries by a user to different domains. Gupta et al. [28] proposed an RNN-based context encoding method to improve the predictive accuracy and computational efficiency of an NLU model using two different domains.

    2.2 Intent Detection

    Intent prediction is the main part of an NLU system. The intent is what a user means or wants to obtain from the system. Although traditional intent predictor models are based on SVMs and ANNs, with the advancement of DL and sequence modeling, RNN and LSTM algorithms have demonstrated state-of-the-art performance in text classification tasks. Liu et al. [29] proposed an attention-based RNN to predict intents and slots. In addition, a hybrid approach that combines LSTM and a convolutional neural network (CNN) shows improved intent prediction performance on the ATIS dataset [30]. Goo et al. [31] proposed a slot-gated Bi-LSTM model with an attention mechanism to predict intent. Systems can make errors on similar words that appear in different contexts; Confusion2vec [32] can reduce such confusion errors when predicting the intent of user input. For multi-task and multi-turn dialog systems, previous domain information can be used as contextual information for new turns to improve the performance of dialog systems [33]. In addition, incorporating previous contextual session information [34] into intent and slot prediction models can improve predictive performance.

    2.3 Entity Extraction or Slot Filling

    Entity extraction, also called named entity recognition (NER), extracts attributes such as location, place, date, and time from user query text. Entity extraction aims to extract entities of interest from user input text. Because important information can appear at any position in the user input, entity extraction is a more challenging process [24]. Early NER systems relied on rules or dictionaries created by humans. Later, supervised learning based on SVMs, decision trees, hidden Markov models, conditional random fields, and dynamic vector representations [35] was used to extract entities from text. Recently, ANN and DL techniques such as LSTM and CNNs [36] have been used to extract entities from user text. Liu and Lane introduced slot filling based on RNN algorithms [29]. Dernoncourt et al. [37] proposed the NeuroNER tool, an ANN-based system for non-expert users of ANNs. Generally, models built on previously developed NER systems, such as the distantly supervised slot-filling system [38] proposed at Stanford and a Twitter-based NER system [39], can improve the performance of entity extraction systems. The main challenges and misconceptions in NER system development were investigated in detail by Ratinov et al. [40] to improve prediction accuracy on the CoNLL dataset. An entity extraction model based on sequence modeling [41] can further improve predictive performance.

    Although these individual training approaches improve the performance of each individual model, there is a lack of contextual sharing between models, and the total number of models increases with the number of domains. The total number of models for a typical traditional dialog system is calculated as in Eq. (1):

    $$M_{total} = 1 + N + N = 2N + 1 \tag{1}$$

    where N represents the total number of domains. The total number of predictive ML models in a typical traditional multi-domain chatbot system is thus the sum of one domain prediction model, N intent models, and N slot models; for example, a chatbot with N = 3 domains requires seven separate models. As the number of domains increases, the number of predictive models also increases. Thus, various joint training approaches that incorporate the high correlation between intent and entity show better performance with a single joint predictive model.

    2.4 Joint Training for Multi-Domain Intelligent Chatbot System

    Joint training based on LSTM in a conversational system involves sharing the cost or loss function among the domain, intent, and entity predictors. There is some prior work on joint modeling for intent detection and entity recognition. Liu et al. [29] proposed a joint model based on an attention Bi-RNN to recognize intents and entities with higher predictive performance. Ma et al. [30] introduced sparse attention patterns to a jointly trained LSTM-based model for intent detection and slot extraction. Bekoulis et al. [42] applied adversarial learning to a joint model on various datasets, such as biomedical, real estate, and news data, achieving state-of-the-art performance for entity and relation extraction. Goo et al. [31] added relational information for joint training between the intent detector and slot extractor models. Zhang et al. [43] applied the hierarchical relationship between slots and intents to a joint model based on capsule neural networks. Recently, transfer learning with pre-trained models such as BERT, evaluated on the DialoGLUE benchmark, has shown state-of-the-art performance for joint models [44]. For multi-task-oriented conversational systems, the predictive domain model is trained separately, which can cause downstream error propagation; i.e., if an intelligent chatbot system fails to classify the domain, the intent predictor and entity extractor no longer work [3].

    There is some prior work on multi-task-oriented joint models based on LSTM with a single cell. Hakkani-Tur et al. [4] introduced an RNN-LSTM framework for a multi-task-oriented chatbot. Kim and Lee used real user chatting logs from Microsoft Cortana and jointly trained a Bi-LSTM model to enhance classification accuracy by mitigating downstream error propagation [3]. We refer readers to the studies of Abdul-Kader et al. [12] and Ahmad et al. [15], which provide comprehensive literature reviews of various ML and rule-based techniques used in chatbot systems and NLU studies.

    2.5 Adversarial Learning

    Adversarial learning regularizes neural networks and improves the classification accuracy of DNN algorithms by combining small noise or perturbations with annotated data, thereby increasing the loss function of a DL model [45]. Many DNN algorithms have recently been used in NLU and SU systems. Miyato et al. [6] observed that DNNs make incorrect decisions when intentional random noise is added to input examples. Furthermore, they proposed a DNN-based object detection algorithm that uses adversarial learning to improve the classification accuracy of an ML model [6].

    Semi-supervised learning with adversarial perturbations shows classification improvement for an intelligent chatbot system with multiple domains [46]. Adversarial learning for DNNs (as shown in Fig. 3) generates small perturbations in the embedding layer along with the input examples, producing variations of the input that the learning model can easily misclassify.

    Figure 3: Adversarial joint model based on Bi-LSTM
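    To make the perturbation idea concrete, the sketch below implements one common realization, FGSM-style adversarial noise on the embedding output. This is an illustrative reading of Fig. 3 rather than the authors' exact formulation; the epsilon value, layer sizes, and class count are assumptions.

```python
import tensorflow as tf

embed = tf.keras.layers.Embedding(10000, 64)                   # vocabulary size assumed
body = tf.keras.Sequential([
    tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(100)),
    tf.keras.layers.Dense(3),                                  # e.g., 3 domain classes
])
loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)

def adversarial_loss(token_ids, labels, epsilon=0.02):
    """Loss on embeddings nudged in the direction that most increases the
    current loss; such perturbed inputs are easy for the model to misclassify."""
    embedded = embed(token_ids)
    with tf.GradientTape() as tape:
        tape.watch(embedded)
        clean_loss = loss_fn(labels, body(embedded))
    grad = tf.stop_gradient(tape.gradient(clean_loss, embedded))
    perturbed = embedded + epsilon * tf.math.l2_normalize(grad, axis=-1)
    return loss_fn(labels, body(perturbed))  # added to the clean loss during training
```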

    2.6 Semi-Supervised Learning for NLU

    Semi-supervised learning is the process of training an ML model with both annotated and unannotated utterances. First, a supervised model is developed with the annotated (labeled) dataset, and this model then predicts labels for the unannotated samples. Afterward, retraining on the originally annotated dataset, along with the machine-annotated dataset, creates a new predictive supervised model. This entire cycle of training, predicting, and retraining using predicted datasets along with originally labeled utterances constitutes the concept of semi-supervised learning, shown in Fig. 4. The semi-supervised technique helps reduce human effort in the manual annotation of utterances and helps create a self-learning model with robust information gain, ultimately improving predictive performance. A semi-supervised learning approach can help annotators annotate new user inputs starting from a small user-annotated dataset.
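    The train-predict-retrain cycle just described can be summarized as a short self-training loop. The sketch below is schematic: train_joint_model and the data structures are hypothetical placeholders illustrating the cycle in Fig. 4, not functions from the paper.

```python
def semi_supervised_train(labeled_data, unlabeled_utterances, rounds=1):
    # 1) Train a supervised model on the small human-annotated dataset.
    model = train_joint_model(labeled_data)  # hypothetical training helper
    for _ in range(rounds):
        # 2) Machine-annotate the unlabeled user chatting logs.
        pseudo_labeled = [(utt, model.predict(utt)) for utt in unlabeled_utterances]
        # 3) Retrain on human-annotated plus machine-annotated utterances.
        model = train_joint_model(labeled_data + pseudo_labeled)
    return model
```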

    There are extensive prior studies on semi-supervised learning approaches for developing SLU and NLU models for intent prediction and entity extraction. Diverse techniques have been used to predict intent for single-domain dialog systems using semi-supervised learning [47]. A semi-supervised joint model for intent prediction and slot filling [45,48] reduces human effort in annotating examples, improving the model's performance with robust information gain. For further investigation, we refer readers to Singh et al. [11], who provide comprehensive literature reviews of data extraction, data processing, various data sources, and reinforcement and ensemble learning methods used in NLU studies.

    Although semi-supervised learning has recently been used in multi-domain dialog systems, this study is, to the best of our knowledge, the first to apply semi-supervised learning to a single joint model based on LSTM. Compared with a prior joint model and an adversarial joint model, our approach trains a single LSTM-based model using small user-annotated examples and unannotated samples from user chatting logs, resulting in higher predictive performance and reduced human effort in creating annotated examples for AI dialog systems.

    Figure 4: Semi-supervised joint model

    3 Semi-Supervised Intelligent Chatbot

    Our proposed SEMI-MDJM (shown in Fig. 4) focuses on self-automating the annotation process with user chatting logs, which can be an important data source for an intelligent chatbot. Each component of SEMI-MDJM is discussed in the following subsections.

    3.1 Data Preprocessing

    User chatting logs are unstructured text data and must be converted into structured examples that a DNN algorithm can use to train the model. Bag-of-words, term-frequency matrices, and vector space [49] methods are widely applied to transform unstructured data into structured datasets. TF-IDF uses term frequency matrices to extract information from text data. Creating these matrices involves various data cleansing and wrangling steps, including tokenization, stemming, and POS tagging, as shown in Fig. 5. Then a word embedding set is created from the preprocessed, cleaned corpus.

    Figure 5: Text preprocessing
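    One plausible realization of this preprocessing pipeline, using the NLTK library (the paper does not name a specific toolkit), is sketched below.

```python
import nltk
from nltk.stem import PorterStemmer

# A possible tokenize -> POS-tag -> stem pipeline for raw chat logs (Fig. 5).
nltk.download("punkt", quiet=True)
nltk.download("averaged_perceptron_tagger", quiet=True)
stemmer = PorterStemmer()

def preprocess(utterance: str):
    tokens = nltk.word_tokenize(utterance.lower())   # tokenization
    tagged = nltk.pos_tag(tokens)                    # POS tagging: [(token, tag), ...]
    stems = [stemmer.stem(tok) for tok in tokens]    # stemming
    return tokens, tagged, stems

print(preprocess("Set an alarm for 7 am tomorrow"))
```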

    Furthermore, the previously developed joint model is used to predict labels for unlabeled user chatting logs, and the machine-annotated utterances are added to the previous training dataset, retraining the model to increase the information gain of the LSTM cell. The utterances are preprocessed and fed into a Bi-LSTM cell to extract past and future information. The single LSTM model then predicts the domain and intent and extracts the entity.

    3.2 Embedding and Bi-LSTM Layer

    The embedding layer feeds the sequential data to an LSTM cell by creating an embedding vector for each word. The word embedding for a word sequence $w_1 \dots w_n \in W$ is given as Eq. (2):

    $$E(w_1 \dots w_n) = (e_{w_1}, \dots, e_{w_n}), \quad e_{w_i} \in \mathbb{R}^{64} \tag{2}$$

    Fig. 2 shows a Bi-LSTM model with forward and backward propagation of information. Due to the bidirectional information propagation, previous and future contextual information can be memorized in each LSTM cell.

    The final training objective of MDJM is to minimize the shared loss among domain, intent, and entity. The total cumulative loss is calculated using Eq. (3):

    $$L(\theta, \theta_d, \theta_i, \theta_t) = l_d + l_i + l_t \tag{3}$$

    The losses $l_d$, $l_i$, and $l_t$ of the output layers are calculated for each annotated utterance. Then, the shared loss among domain, intent, and entity is computed as $l_d + l_i + l_t$ in each gradient step. Finally, the model is optimized using the shared loss over the parameters $\theta$. The algorithm of the proposed semi-supervised multi-domain chatbot system is designed as follows:

    Algorithm 1: Semi-Supervised Multi-Domain Intelligent Chatbot System
    1: Input: Prepare and preprocess the annotated and unannotated datasets
    2: Create the word embedding layer, $e_w \in \mathbb{R}^{64}$ for each $w \in W$
    3: Create Bi-LSTM cells
    4: Create an encoder and a decoder for each utterance
    5: Train the model and calculate the loss functions: use seq2seq for the slot loss; use cross entropy for the intent and domain losses
    6: Calculate the shared loss or cost function $L(\theta, \theta_d, \theta_i, \theta_t) = \sum_{\alpha} L_{\alpha}(\theta)$, $\alpha \in \{\theta, \theta_d, \theta_i, \theta_t\}$
    7: Optimize the model using the Adam optimizer
    8: Predict unlabeled data using the model created in Steps 1 to 7
    9: Add the predicted dataset to the original training dataset
    10: Retrain the model by following Steps 1 to 7
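    The following Keras sketch illustrates Steps 2 to 7 of Algorithm 1: a shared Bi-LSTM encoder with three output heads whose cross-entropy losses are summed, so the optimizer minimizes $l_d + l_i + l_t$. The domain and intent class counts mirror the dataset in Section 4; the slot tag count, the pooling choice, and the per-token slot loss standing in for the seq2seq slot loss are our assumptions.

```python
import tensorflow as tf

SEQ_LEN, VOCAB, EMB, HID = 50, 10000, 64, 100   # VOCAB is an assumed placeholder
N_DOMAIN, N_INTENT, N_SLOT = 3, 12, 12          # 11 entity tags plus "O" (assumption)

tokens = tf.keras.Input(shape=(SEQ_LEN,), dtype="int32")
x = tf.keras.layers.Embedding(VOCAB, EMB)(tokens)                   # Step 2
seq = tf.keras.layers.Bidirectional(
    tf.keras.layers.LSTM(HID, return_sequences=True))(x)            # Step 3
pooled = tf.keras.layers.GlobalMaxPooling1D()(seq)  # utterance-level vector (assumption)

domain = tf.keras.layers.Dense(N_DOMAIN, activation="softmax", name="domain")(pooled)
intent = tf.keras.layers.Dense(N_INTENT, activation="softmax", name="intent")(pooled)
slots = tf.keras.layers.TimeDistributed(
    tf.keras.layers.Dense(N_SLOT, activation="softmax"), name="slots")(seq)

model = tf.keras.Model(tokens, [domain, intent, slots])
# Steps 5-7: Keras sums the per-head losses, giving the shared loss l_d + l_i + l_t.
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.01),
              loss={"domain": "sparse_categorical_crossentropy",
                    "intent": "sparse_categorical_crossentropy",
                    "slots": "sparse_categorical_crossentropy"})
```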

    3.3 Evaluation and Optimization

    3.3.1 Evaluation Criteria

    There are many standard performance metrics and criteria for comparing the predictive performance of classifiers [50]. The most widely used measures in text classification are predictive accuracy (ACC) and the F-score. A detailed description of these criteria can be given using the confusion matrix described in Tab. 1. The classification or predictive accuracy of a predictive model is defined as in Eq. (4):

    $$ACC = \frac{TP + TN}{TP + TN + FP + FN} \tag{4}$$

    In the above equation, TP denotes the true positives predicted by the model over all classes, and TN denotes the true negatives; FP denotes the false positives, and FN the false negatives.

    Table 1: Classification confusion matrix

                         Predicted positive    Predicted negative
    Actual positive      TP                    FN
    Actual negative      FP                    TN

    The precision, or positive predictive value, of a given classification model is calculated as in Eq. (5):

    $$Precision = \frac{TP}{TP + FP} \tag{5}$$

    Recall, also called the sensitivity or true positive rate of a classifier, is calculated using Eq. (6):

    $$Recall = \frac{TP}{TP + FN} \tag{6}$$

    Specificity, also called selectivity or the true negative rate, is calculated as in Eq. (7):

    $$Specificity = \frac{TN}{TN + FP} \tag{7}$$

    Another criterion is the F1-score, the harmonic mean of the precision and recall of an ML model, calculated using Eq. (8):

    $$F_1 = \frac{2 \times Precision \times Recall}{Precision + Recall} \tag{8}$$

    The area under the curve (AUC), shown in Eq. (9), is another popular criterion for measuring the accuracy of an ML algorithm:

    $$AUC = \int_{0}^{1} Sensitivity \; d(1 - Specificity) \tag{9}$$

    In the above equation, the AUC is the area under the receiver operating characteristic curve, which plots the sensitivity against 1 - specificity; the specificity is the percentage of negative instances correctly predicted as negative. In this study, classification accuracy is used for performance comparison.
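    For reference, Eqs. (4) to (8) can be computed directly from the confusion-matrix counts, as in the short function below; the example counts are illustrative.

```python
def classification_metrics(tp: int, tn: int, fp: int, fn: int) -> dict:
    """Evaluation criteria of Eqs. (4)-(8) from confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + tn + fp + fn)           # Eq. (4)
    precision = tp / (tp + fp)                           # Eq. (5)
    recall = tp / (tp + fn)                              # Eq. (6): sensitivity / TPR
    specificity = tn / (tn + fp)                         # Eq. (7): TNR
    f1 = 2 * precision * recall / (precision + recall)   # Eq. (8)
    return {"accuracy": accuracy, "precision": precision, "recall": recall,
            "specificity": specificity, "f1": f1}

print(classification_metrics(tp=80, tn=90, fp=10, fn=20))  # illustrative counts
```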

    3.3.2 Optimization

    Adam, stochastic gradient descent (SGD), and RMSProp are the three most widely used optimizers for ANN and DL models. This study used the Adam optimizer to train our proposed model, the adversarial model, and the joint base model. The Adam optimizer helps control the sparse gradient problems of a model. It extends stochastic gradient descent and is a widely used optimization mechanism for DL applications such as NLU and SU models.

    4 Experiment

    This study used a 43 k user-annotated dataset covering the weather, alarm, and reminder domains (shown in Fig. 6) of a multi-domain intelligent chatbot system [22,51]. The dataset contains three different domains with 12 intent labels and 11 unique entities.

    A sample of the preprocessed user utterances is shown in Fig. 7. Furthermore, a publicly available large unannotated user chatting log dataset [52] of 25 k user utterances from 21 domains was collected, and only about 2.5 k, i.e., 2,510, of the unannotated user queries (from the alarm, reminder, and weather domains) were used for semi-supervised learning.

    The utterances are restructured (as shown in Fig. 7) into annotated sets of user queries, entities, and intent labels, in that order. User queries are enclosed by the BOS and EOS symbols. The dataset is then divided into training, evaluation, and testing sets in a 70:20:10 ratio, as shown in Tab. 2. Annotated and unannotated utterances are then preprocessed using a Python tokenization library. Each input example is fixed to a size of 50 characters, and word embeddings of size 64 are created. Then, an LSTM model built with the TensorFlow library is used to train on and predict user queries.

    Figure 7: Preprocessed sample dataset
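    A minimal sketch of the 70:20:10 split and the fixed input size of 50 described above follows; the shuffling, random seed, and pad id are our assumptions.

```python
import random

def split_dataset(examples, seed=42):
    """70:20:10 train/evaluation/test split (Tab. 2); shuffling is an assumption."""
    random.Random(seed).shuffle(examples)
    n_train, n_eval = int(0.7 * len(examples)), int(0.2 * len(examples))
    return (examples[:n_train],
            examples[n_train:n_train + n_eval],
            examples[n_train + n_eval:])

def pad_to_fixed(token_ids, max_len=50, pad_id=0):
    """Fix every input example to length 50 by truncating or right-padding."""
    return (token_ids + [pad_id] * max_len)[:max_len]
```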

    Table 2: Training, evaluation, and test datasets

    The experiments were conducted using the TensorFlow 1.10.0 library on Python 3.6. The experimental platform runs Windows 10 with an Intel Core CPU at a clock speed of 1.60 GHz and 8 GB of RAM.

    To evaluate SEMI-MDJM, we conducted an experimental analysis and compared it with the prior MDJM and the multi-domain joint model with adversarial learning (MDJM-ADV) [51]. SEMI-MDJM is created by annotating publicly available user chatting logs using MDJM and retraining the proposed model after adding this predicted dataset to the original training sets. The LSTM cell of each model is created with 100 hidden neurons. Each model is then trained for 20 epochs and optimized with the Adam optimizer. The learning rate is set to 0.01, and the batch size of the training dataset is set to 16. MDJM shares the loss function among the domain, intent, and entity predictors, whereas MDJM-ADV further adds an adversarial loss to the original MDJM model. Incorporating user chatting logs into the base MDJM provides information gain for each output layer. Fig. 8 shows the training and test loss for MDJM, MDJM-ADV, and SEMI-MDJM.
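    Under the reported hyperparameters (Adam optimizer, learning rate 0.01, batch size 16, 20 epochs, 100 hidden neurons), the training call might look as follows; model refers to the compiled joint model from the sketch after Algorithm 1, and the random arrays are dummy stand-ins for the preprocessed utterances.

```python
import numpy as np

# Dummy arrays standing in for the preprocessed dataset; `model` is the
# compiled joint model from the Algorithm 1 sketch (Adam, learning rate 0.01).
n = 256
train_x = np.random.randint(0, 10000, size=(n, 50))
train_y = {"domain": np.random.randint(0, 3, size=(n,)),
           "intent": np.random.randint(0, 12, size=(n,)),
           "slots": np.random.randint(0, 12, size=(n, 50))}

model.fit(train_x, train_y, epochs=20, batch_size=16)  # 20 epochs, batch size 16
```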

    Tab. 3 presents the classification accuracy of the previous joint models along with our proposed SEMI-MDJM. SEMI-MDJM outperforms both the joint base model, MDJM, and the adversarial joint model, MDJM-ADV, in terms of classification accuracy for the domain, intent, and entity.

    Figure 8: Test and training loss for each model

    Table 3: Accuracy of each model

    5 Conclusion

    In this study, we proposed a semi-supervised joint model, SEMI-MDJM, for intelligent chatbot systems to extract the domain, intent, and entity of user queries using a single ML model based on LSTM, mitigating the propagation of downstream errors. Such propagation is a limitation of the typical sequential approach, and the single model reduces the effort required to manage a large number of NLU predictive models and to annotate data manually. Experimental results showed a significant improvement in the predictive performance of each task, i.e., domain, intent, and entity prediction, based on semi-supervised learning compared with the joint base model and the joint model with adversarial learning. The proposed SEMI-MDJM reduces the number of trained models to one and adds a self-annotation process, which reduces the human effort necessary to annotate data and to manage multiple intent detectors and entity extractors. In addition, it provides a self-learning approach for the conversational dialog system by continuously incorporating domain-related utterances from user chatting logs into the initially developed MDJM. Furthermore, it reduces the human effort required to annotate a large number of domain, intent, and entity examples. For future study, we encourage testing our proposed SEMI-MDJM model on domains related to education and health, for various languages, and with larger datasets. In addition, incremental prediction and annotation of all unannotated datasets can further improve the proposed model and reduce its overfitting problem.

    Funding Statement: This research was supported by the BK21 FOUR (Fostering Outstanding Universities for Research) program funded by the Ministry of Education (MOE, Korea) and the National Research Foundation of Korea (NRF).

    Conflicts of Interest: The authors declare that they have no conflicts of interest to report regarding the present study.
