
    The Impact of Semi-Supervised Learning on the Performance of Intelligent Chatbot System

    Computers, Materials & Continua, 2022, Issue 5

    Sudan Prasad Uprety and Seung Ryul Jeong*

    The Graduate School of Business Information Technology, Kookmin University, Seoul, 02707, Korea

    Abstract: Artificial intelligence-based dialog systems are getting attention from both business and academic communities. The key parts of such intelligent chatbot systems are domain classification, intent detection, and named entity recognition. Various supervised, unsupervised, and hybrid approaches are used to detect each field. Such intelligent systems, also called natural language understanding systems, analyze user requests in sequential order: domain classification, then intent and entity recognition based on the semantic rules of the classified domain. This sequential approach propagates downstream errors; i.e., if the domain classification model fails to classify the domain, intent and entity recognition fail as well. Furthermore, training such intelligent systems necessitates a large number of user-annotated datasets for each domain. This study proposes a single joint predictive deep neural network framework based on long short-term memory that uses only a small user-annotated dataset to address these issues. It investigates the value added by incorporating unlabeled data from user chatting logs into multi-domain spoken language understanding systems. Systematic experimental analysis of the proposed joint framework, along with the semi-supervised multi-domain model, using open-source annotated and unannotated utterances shows robust improvement in the predictive performance of the proposed multi-domain intelligent chatbot over a base joint model and a joint model based on adversarial learning.

    Keywords: Chatbot; dialog system; joint learning; LSTM; natural language understanding; semi-supervised learning

    1 Introduction

    Natural language understanding (NLU) and speech understanding (SU) play a significantly important role in human-computer interaction (HCI) applications. Intelligent NLU systems, including chatbots, robots, voice control interfaces, and virtual assistants, are well-known HCI applications developed to communicate with humans via natural language. HCI is now a global trend and has drawn attention from different communities with the advancement and rapid development of machine learning (ML), deep neural networks (DNNs), and reinforcement learning. ELIZA [1] was the first machine with the ability to exhibit human behavior, understand human language, and communicate with humans, using pattern matching to respond to the user. The modeling process of a single-domain conversational system or intelligent chatbot consists of detecting intent and recognizing entities from the user query. Virtual customer assistants, or chatbots, reduce information overload and call center effort, enabling a better customer experience (CX) on HCI applications or company websites. Some institutions also deploy role-based assistants that can significantly help improve interactions with their customers, business partners, and employees. By reducing the complexity of data and rules, organizations can focus on repetitive and simple interactions where customer needs are well understood and satisfied. Organizations are struggling to manage the growth of such user query data. To address these issues, they have been implementing intelligent chatbots that provide service to customers 24/7 with or without call center help. Such intelligent systems have three most important parts: domain classification, intent detection, and entity recognition. For a multi-tasking chatbot, the domain classification model first classifies the domain, and then intent and entity are recognized based on the frames of the classified domain, as shown in Fig. 1. A large amount of user-annotated data is needed to train a multi-domain dialog system. Major intelligent chatbot systems, such as Amazon Alexa, Apple Siri, Google Dialogflow, IBM Watson, Microsoft Cortana, and Samsung Bixby, support multi-domain conversation [2]. A typical multi-tasking or multi-domain chatbot system (as shown in Fig. 1) mainly has domain classification, intent prediction, entity recognition, and response generation or dialog management parts. Most intelligent chatbots process user queries in a sequential order: domain classification, intent prediction, slot prediction. Each task has its separate machine learning (ML) model and is predicted in sequential order. A large number of user-annotated examples of utterances in each domain is essential before training the model. In addition, separate models are generated for the domain, intent, and entity, making it difficult to manage large sets of models. Furthermore, with this approach, an error in the domain prediction step may lead to errors in intent prediction and entity recognition, ultimately reducing the predictive performance of the chatbot. Typical supervised ML algorithms such as the Bayesian algorithm, support vector machine (SVM), logistic regression, and neural networks (NNs) can extract the domain and intent from user queries with separate models. However, advanced deep learning (DL) approaches, increased computing power, and the generation of large open-source datasets enable training a single joint model for domain classification, intent prediction, and entity recognition using a single set of utterances [3] containing multiple domain, intent, and slot or entity information, reducing the number of trained ML models [4].

    This study reduces the human effort of manually annotating utterances by incorporating unannotated datasets from various data sources, such as user query logs, into a DNN algorithm, i.e., a single jointly trained long short-term memory (LSTM) based NLU model of a multi-domain intelligent chatbot. The single jointly trained LSTM-based NLU model reduces the number of classification and recognition models used in sequential approaches and attempts to mitigate downstream error propagation. LSTM was proposed in 1997 by Hochreiter and Schmidhuber for sequential modeling [5]; it adds an extra memory cell to a recurrent neural network (RNN), achieving better performance in representing and storing historical information. In the standard LSTM network, information transmission is one-way, and each memory cell can use historical information but not future information. Bidirectional LSTM (Bi-LSTM, shown in Fig. 2) was introduced to transmit and store both past and future information in each memory cell.

    The principle of Bi-LSTM is to connect the same output of each input cell with two opposite timings. The forward LSTM passes historical information to the next step, and the backward-directed LSTM network obtains future contextual information. Furthermore, extra unlabeled data [6] contributes to an increase in information gain for a DL model trained with the LSTM algorithm. A single semi-supervised multi-domain joint model (SEMI-MDJM) based on LSTM outperforms a joint base model and an adversarial multi-domain joint model in each task, i.e., domain classification, intent prediction, and entity recognition.
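
    To make this concrete, the short sketch below (a minimal illustration, assuming TensorFlow 2.x / tf.keras rather than the TensorFlow 1.10 used later in the paper; the vocabulary and layer sizes are placeholders) shows how a Bidirectional wrapper runs a forward and a backward LSTM and concatenates their per-token outputs:

    import tensorflow as tf

    # Illustrative sizes only.
    vocab_size, embed_dim, hidden_units, seq_len = 1000, 64, 100, 50

    inputs = tf.keras.Input(shape=(seq_len,), dtype="int32")
    x = tf.keras.layers.Embedding(vocab_size, embed_dim)(inputs)
    # return_sequences=True keeps one output per token; the Bidirectional
    # wrapper concatenates the forward and backward LSTM outputs, so each
    # time step carries both past and future context.
    x = tf.keras.layers.Bidirectional(
        tf.keras.layers.LSTM(hidden_units, return_sequences=True))(x)

    model = tf.keras.Model(inputs, x)
    print(model.output_shape)  # (None, 50, 200): forward 100 + backward 100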

    Figure 1: General architecture of a multi-domain chatbot

    Figure 2: Bi-LSTM joint model

    The remainder of this work is structured as follows. The next section presents related prior work on intelligent dialog systems. Section 3 describes the proposed LSTM-based semi-supervised joint framework. Section 4 presents the experimental results and a detailed analysis of predictive accuracies, along with additional analyses of the importance of unannotated data in the context of a general chatbot with multiple domains. Finally, Section 5 concludes with a discussion and interesting areas for future study.

    2 Literature Review

    The first idea of an HCI application comes from the Turing test, or "imitation game", created by Alan Turing in 1950. ELIZA was the first conversational system, developed in 1966 based on pattern matching, responding to a user using keywords from the user query [1]. In the 1980s, another HCI application called ALICE was developed using the artificial intelligence markup language (AIML) [7] to mitigate the drawbacks of ELIZA. The performance of AIML was further improved [8] by applying a multiple-parameter design pattern to the decomposition rules. With the rapid development and advancement of ML algorithms, the emergence of DL techniques, and natural language processing (NLP), these intelligent chatbot systems are gaining popularity in various fields. Conversational systems help reduce various costs by automating the workflow of a customer or call center, resulting in rapid responses to customer queries [9]. Almansor and Hussain classified conversational systems into non-task-oriented and task-oriented categories [10]. Non-task-oriented systems are generally retrieval-based chatbots, which provide a similar or highly ranked list of information related to user input. In contrast, task-oriented conversational systems are supervised or unsupervised models performing users' specific tasks based on ML algorithms rather than decomposition rules or keyword filtering. Recently, commercial chatbot systems such as Microsoft Cortana, IBM Watson, Amazon Alexa, Google Dialogflow, Apple Siri, and Bixby have been gaining interest from organizations [11]. These systems are mainly implemented in medical education, health assistance, educational systems, e-learning, e-commerce [12], sports, games, privacy, infrastructure, and other fields [2]. Recently, public administrators have begun implementing chatbot systems for real-time customer services [13]. Autonomous vehicles and smart home systems also embed natural language interaction applications [14]. The implementation of these dialog systems requires technical knowledge about NLP and NLU [15–17]. Recent studies report various new NLP and NLU representations, such as bag-of-concepts and bag-of-narratives [18].

    Although there are several technical and logical parts involved in implementing intelligent chatbot systems, NLU is the core part of a chatbot. In an intelligent chatbot, the role of NLU is to parse the user query and learn what the user means. NLU systems contain three main subsystems: domain classifier, intent detector, and entity recognition models [19]. Generally, a multi-domain chatbot has three unsupervised or supervised ML models for recognizing each field. Different supervised and unsupervised learning techniques, including term frequency and inverse document frequency (TF-IDF), bag of words, word2vec, SVM, the Bayes algorithm, NNs, boosting, maximum entropy, and deep belief networks [20], are widely applied to extract intent and slots in sequential NLU models. These separate pipelined ML models are created using a large number of utterances or examples [3]. Creating and annotating these utterances demands huge human effort. Recently, much open research has shared previously annotated large datasets from diverse domains in multiple languages. In addition, unannotated user-query data can be used and analyzed in the future. Vedula et al. [21] curated and released an annotated dataset of 25k utterances for developing an intent model. Schuster et al. [22] curated 57k annotated examples in English, Thai, and Spanish for three different domains (weather, alarm, and reminder) to develop a cross-lingual dialog system. Larson et al. [23] evaluated out-of-scope prediction with a dataset containing 150 intent classes from 10 different domains. Furthermore, these sequential frameworks are at high risk of introducing downstream errors into the intent detection and entity recognition phases. Since each predictive model is trained on sequences of text, contextual information from the previous step has significant importance for traditional ML algorithms and recent DL approaches. These text data, i.e., utterances or examples, are time-series in nature, and LSTM-based DL frameworks demonstrate state-of-the-art performance on such data [24].

    2.1 Domain Prediction

    Domain prediction is the process of filtering user input to a specific category in a multi-tasking dialog system. Many previous works on domain prediction exist. Hakkani-Tur et al. [25] proposed a semi-supervised domain prediction model using AdaBoost with user click logs from the Bing web search engine. Zheng et al. [26] proposed an out-of-domain detection mechanism to avoid unnecessary responses to user input. Xu et al. [27] proposed a contextual domain classification mechanism to reduce consecutive queries by a user to different domains. Gupta et al. [28] proposed an RNN-based context encoding method to improve the predictive accuracy and computational efficiency of an NLU model using two different domains.

    2.2 Intent Detection

    Intent prediction is the main part of an NLU system. Intent means what a user means or wants to obtain from the system. Although traditional intent predictor models are based on SVM and ANN, with the advancement of DL and sequence modeling, RNN and LSTM algorithms have demonstrated state-of-the-art performance in text classification tasks. Liu et al. [29] proposed an attention-based RNN to predict intent and slots. In addition, a hybrid approach that combines LSTM and a convolutional neural network (CNN) shows performance improvement in intent prediction on the ATIS dataset [30]. Goo et al. [31] proposed a slot-gated Bi-LSTM model with an attention mechanism to predict intent. Systems can make errors for similar words that appear in different contexts. Confusion2vec [32] can reduce such confusion errors and predict the intent of user input. For multi-task and multi-turn dialog systems, previous domain information can be used as contextual information for new turns to improve the performance of dialog systems [33]. In addition, incorporating previous contextual session information [34] into intent and slot prediction models can improve predictive performance.

    2.3 Entity Extraction or Slot Filling

    Entity extraction, also called named entity recognition (NER), extracts attributes such as location, place, date, and time from user query text. Entity extraction aims to extract entities of interest from user input text. As important information in user input can appear at any position, entity extraction is a challenging process [24]. Early NER prediction systems relied on rules or dictionaries created by humans. Later, supervised learning based on SVM, decision trees, hidden Markov chains, conditional random fields, and dynamic vector representations [35] was used to extract entities from text. Recently, ANN and DL techniques such as LSTM and CNNs [36] have been used to extract entities from user text. Liu and Lane introduced slot filling based on RNN algorithms [29]. Dernoncourt et al. [37] proposed the NeuroNER tool based on ANNs for non-expert users of ANNs. Generally, models trained over previously built NER systems, such as a distantly supervised slot-filling system [38] proposed at Stanford and a Twitter-based NER system [39], can improve the performance of entity extraction systems. The main challenges and misconceptions in NER system development were investigated in detail by Ratinov et al. [40] to improve prediction accuracy on the CoNLL dataset. An entity extraction model based on sequence modeling [41] can further improve predictive performance.

    Although these individual training approaches improve the performance of each individual model, there is a lack of contextual sharing between models, and the total number of models increases with the total number of domains. The total number of models for a typical traditional dialog system is calculated as Eq. (1):

    $$N_{\text{models}} = 1 + 2N \tag{1}$$

    where N represents the total number of domains. The total number of predictive ML models in a typical traditional multi-domain chatbot system is the sum of one domain predictive model, N intent models, and N slot models; for example, a chatbot covering N = 3 domains requires 1 + 2(3) = 7 separate models. If the number of domains increases, the number of predictive models also increases. Thus, various joint training approaches that exploit the high correlation between intent and entity show better performance with a single joint predictive model.

    2.4 Joint Training for Multi-Domain Intelligent Chatbot System

    Joint training based on LSTM in a conversational system involves sharing a cost or loss function among the domain, intent, and entity predictors. There are some prior works on joint modeling for intent detection and entity recognition. Liu et al. [29] proposed a joint model based on an attention Bi-RNN to recognize intent and entity with higher predictive performance. Ma et al. [30] introduced sparse attention patterns to a jointly trained model based on LSTM for intent detection and slot extraction. Bekoulis et al. [42] applied adversarial learning to a joint model on various datasets, such as biomedical data, real estate data, and news data, achieving state-of-the-art performance for entity and relation extraction. Goo et al. [31] added relational information for joint training between the intent detector and slot extractor models. Zhang et al. [43] applied the hierarchical relationship between slots and intent to a joint model based on capsule neural networks. Recently, transfer learning, i.e., pre-trained models such as BERT evaluated on the DialoGLUE benchmark, has shown state-of-the-art performance for joint models [44]. For multi-task-oriented conversational systems, a predictive domain model is trained separately, which can bring downstream error propagation; i.e., if an intelligent chatbot system fails to classify the domain, the intent predictor and entity extractor no longer work [3].

    There are some prior works on multi-task-oriented joint models based on LSTM with a single cell. Hakkani-Tur et al. [4] introduced the RNN-LSTM framework for a multi-task-oriented chatbot. Kim and Lee used real user chatting logs from Microsoft Cortana and jointly trained the model with the Bi-LSTM algorithm to enhance classification accuracy by mitigating downstream error propagation [3]. We refer readers to Abdul-Kader et al. [12] and Ahmad et al. [15], which provide comprehensive literature reviews of various ML and rule-based techniques used in chatbot systems and NLU studies.

    2.5 Adversarial Learning

    Adversarial learning regularizes neural networks and improves the classification accuracy of DNN algorithms by combining small noise or perturbations with annotated data, where the perturbation is chosen to increase the loss function of the DL model [45]. Many DNN algorithms have recently been used in NLU and SU systems. Miyato et al. [6] observed that DNNs make incorrect decisions when intentional random noise is added to the input examples. Furthermore, they proposed a DNN-based object detection algorithm that uses adversarial learning to improve the classification accuracy of the ML model [6].

    Semi-supervised learning with adversarial perturbations shows classification improvement for intelligent chatbot systems with multiple domains [46]. Adversarial learning in DNNs (as shown in Fig. 3) generates small perturbations in the embedding layer along with the input examples, giving variations to the input that the learning model can easily misclassify.
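
    One common way to realize such perturbations is a fast-gradient-style step on the embedded inputs. The sketch below is a minimal illustration under that assumption (the helper name `model_tail` and the epsilon value are hypothetical, not taken from the paper):

    import tensorflow as tf

    def adversarial_embeddings(model_tail, embeddings, labels, loss_fn, epsilon=0.02):
        """Add a small worst-case perturbation to an embedded batch.

        model_tail: the network layers after the embedding lookup.
        embeddings: embedded inputs, shape (batch, seq_len, dim).
        """
        with tf.GradientTape() as tape:
            tape.watch(embeddings)
            loss = loss_fn(labels, model_tail(embeddings))
        grad = tape.gradient(loss, embeddings)
        # Step in the normalized gradient direction that increases the loss,
        # producing inputs the model is most likely to misclassify.
        perturbation = epsilon * tf.math.l2_normalize(grad, axis=-1)
        return embeddings + perturbation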

    Figure 3: Adversarial joint model based on Bi-LSTM

    2.6 Semi-Supervised Learning for NLU

    Semi-supervised learning is the process of training an ML model with both annotated and unannotated utterances. First, a supervised model is developed with the annotated or labeled dataset and then used to predict and label the unannotated samples. Afterward, retraining on the originally annotated datasets along with the machine-annotated datasets creates a new predictive supervised model. This entire cycle of training, predicting, and retraining with predicted datasets along with the originally labeled utterances constitutes the concept of semi-supervised learning shown in Fig. 4. The semi-supervised technique helps reduce human effort in the manual annotation of utterances and helps create a self-learning model with robust information gain, ultimately improving predictive performance or accuracy. A semi-supervised learning approach can help annotators annotate new user inputs with a small user-annotated dataset.
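
    The training-predicting-retraining cycle described above can be written as a short self-training loop. The following schematic sketch assumes hypothetical helpers `train` and `predict_labels`; it is an outline of the idea, not the paper's implementation:

    def self_training(labeled, unlabeled, train, predict_labels, rounds=1):
        """Schematic semi-supervised self-training loop.

        labeled:   (utterance, domain, intent, slots) examples.
        unlabeled: raw utterances, e.g., from user chatting logs.
        """
        model = train(labeled)                         # supervised model on annotated data
        for _ in range(rounds):
            pseudo = predict_labels(model, unlabeled)  # machine-annotate the logs
            labeled = labeled + pseudo                 # merge with the original training set
            model = train(labeled)                     # retrain on the enlarged set
        return model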

    There are extensive prior studies on semi-supervised learning approaches for developing SLU and NLU models for intent prediction and entity extraction. Diverse techniques have been used to predict intent for single-domain dialog systems using a semi-supervised learning approach [47]. A semi-supervised joint model for intent prediction and slot filling [45,48] reduces human effort in annotating examples, improving the model's performance with robust information gain. For further investigation, we refer readers to Singh et al. [11], which provides comprehensive literature reviews of data extraction, data processing, various data sources, and reinforcement and ensemble learning methods used in NLU studies.

    Although semi-supervised learning has recently been used in multi-domain dialog systems, this study is, to the best of our knowledge, the first to apply semi-supervised learning to a single joint model based on LSTM. Compared with a prior joint model and an adversarial joint model, our approach trains a single LSTM-based model using a small set of user-annotated examples and unannotated samples from user chatting logs, resulting in higher predictive performance and reduced human effort in creating annotated examples for AI dialog systems.

    Figure 4: Semi-supervised joint model

    3 Semi-Supervised Intelligent Chatbot

    Our proposed SEMI-MDJM (shown in Fig. 4) focuses on self-automating the annotation process with user chatting logs, which can be an important data source for an intelligent chatbot. Each component of SEMI-MDJM is discussed in the following subsections.

    3.1 Data Preprocessing

    User chatting logs are unstructured text data and must be converted into structured examples that a DNN algorithm can use to train the model. Bag of words, term-frequency matrices, and vector space [49] methods are widely applied to transform unstructured data into structured datasets. TF-IDF uses term frequency matrices to extract information from text data. Creating these matrices involves various data cleansing and wrangling steps, including tokenization, stemming, and POS tagging, as shown in Fig. 5. A word embedding set is then created from the preprocessed, cleaned corpus.
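
    As a small illustration of this stage, the sketch below tokenizes a few hypothetical chatting-log lines and builds a TF-IDF term-frequency matrix with scikit-learn (the example sentences are invented; stemming and POS tagging from Fig. 5 would be applied before this step):

    from sklearn.feature_extraction.text import TfidfVectorizer

    # Hypothetical raw chatting-log lines.
    logs = ["set an alarm for 7 am",
            "will it rain tomorrow",
            "remind me to call mom"]

    # Tokenization and term weighting in one step.
    vectorizer = TfidfVectorizer(lowercase=True)
    tfidf_matrix = vectorizer.fit_transform(logs)  # shape: (3, vocabulary size)
    print(sorted(vectorizer.vocabulary_)[:5])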

    Figure 5: Text preprocessing

    Furthermore, the previously developed joint model is used to predict unlabeled user chatting logs, and the machine-annotated utterances are added to the previous training dataset, retraining the model to increase the information gain of the LSTM cells. The utterances are preprocessed and fed into a Bi-LSTM cell to extract past and future information. Then the single LSTM model predicts the domain and intent and extracts the entities.

    3.2 Embedding and Bi-LSTM Layer

    The embedding layer feeds the sequential data to an LSTM cell by creating embedding vectors of words. The word embedding for a word sequence $w_1 \ldots w_n \in W$ is given as Eq. (2):

    $$e_{w_i} \in \mathbb{R}^{64}, \quad i = 1, \ldots, n \tag{2}$$

    where each word $w_i$ is mapped to a trainable 64-dimensional embedding vector $e_{w_i}$, following the embedding size used in Algorithm 1.
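
    In tf.keras terms, this layer is a trainable lookup table; the short sketch below (vocabulary size and token ids are placeholders) maps each token id to a 64-dimensional vector:

    import tensorflow as tf

    # Each token id is mapped to a trainable 64-dimensional vector (e_w in R^64).
    embedding = tf.keras.layers.Embedding(input_dim=5000, output_dim=64)
    token_ids = tf.constant([[12, 7, 431, 0, 0]])  # one padded utterance
    vectors = embedding(token_ids)
    print(vectors.shape)  # (1, 5, 64)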

    Fig. 2 shows a Bi-LSTM model with forward and backward propagation of information. Due to this bidirectional information propagation, previous and future contextual information can be memorized by each LSTM cell.

    The final training objective of MDJM is to minimize the shared loss among the domain, intent, and entity predictors. The total cumulative loss is calculated using Eq. (3):

    $$L(\theta) = l_d + l_i + l_t \tag{3}$$

    The losses $l_d$, $l_i$, and $l_t$ of the three output layers are calculated for each annotated utterance. The shared loss among domain, intent, and entity is then computed as $l_d + l_i + l_t$ in each gradient step. Finally, the model is optimized using the shared loss $L(\theta)$. The algorithm of the proposed semi-supervised multi-domain chatbot system is designed as follows:

    Algorithm 1: Semi-Supervised Multi-Domain Intelligent Chatbot System
    1: Input: Prepare and preprocess the annotated and unannotated datasets
    2: Create the word embedding layer, $e_w \in \mathbb{R}^{64}$ for each $w \in W$
    3: Create Bi-LSTM cells
    4: Create an encoder and decoder for each utterance
    5: Train the model and calculate the loss functions: use seq2seq for the slot loss; use cross entropy for the intent loss and domain loss
    6: Calculate the shared loss or cost function $L(\theta, \theta_d, \theta_i, \theta_t) = \sum_\alpha L_\alpha(\theta)$, $\alpha \in \{\theta, \theta_d, \theta_i, \theta_t\}$
    7: Optimize the model using the Adam optimizer
    8: Predict unlabeled data using the model created in Steps 1 to 7
    9: Add the predicted dataset to the original training dataset
    10: Retrain the model by following Steps 1 to 7
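
    A compact realization of Steps 1-7 might look as follows. This is a minimal sketch assuming TensorFlow 2.x / tf.keras (the paper used TensorFlow 1.10) with placeholder vocabulary and label counts; the domain and intent heads read the final Bi-LSTM states, the slot head reads the per-token outputs, and per-token cross entropy stands in for the paper's seq2seq slot loss, with Keras summing the three losses into one shared loss:

    import tensorflow as tf

    # Illustrative sizes; the experimental dataset has 3 domains, 12 intents,
    # and 11 unique entities (plus an "O" tag for non-entity tokens).
    vocab, n_domains, n_intents, n_slots, seq_len = 5000, 3, 12, 12, 50

    tokens = tf.keras.Input(shape=(seq_len,), dtype="int32")
    emb = tf.keras.layers.Embedding(vocab, 64)(tokens)                 # Step 2
    seq, fwd_h, _, bwd_h, _ = tf.keras.layers.Bidirectional(           # Step 3
        tf.keras.layers.LSTM(100, return_sequences=True, return_state=True))(emb)
    sent = tf.keras.layers.Concatenate()([fwd_h, bwd_h])

    domain = tf.keras.layers.Dense(n_domains, activation="softmax", name="domain")(sent)
    intent = tf.keras.layers.Dense(n_intents, activation="softmax", name="intent")(sent)
    slots = tf.keras.layers.Dense(n_slots, activation="softmax", name="slots")(seq)

    model = tf.keras.Model(tokens, [domain, intent, slots])
    # Steps 5-6: one cross-entropy loss per head, summed into a shared loss.
    model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.01),  # Step 7
                  loss={"domain": "sparse_categorical_crossentropy",
                        "intent": "sparse_categorical_crossentropy",
                        "slots": "sparse_categorical_crossentropy"})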

    3.3 Evaluation and Optimization

    3.3.1 Evaluation Criteria

    There are many standard performance metrics and criteria for comparing the predictive performance of classifiers [50]. The most widely used measures in text classification are predictive accuracy (ACC) and the F-score. A detailed description of these criteria can be given using the confusion matrix described in Tab. 1. The classification or predictive accuracy of a predictive model is defined as in Eq. (4):

    $$\text{ACC} = \frac{TP + TN}{TP + TN + FP + FN} \tag{4}$$

    In the above equation, TP denotes the true positives of the predictive model over all classes, whereas TN denotes the true negatives. FP denotes the false positives, and FN the false negatives of the model.

    Table 1: Classification confusion matrix

    The precision, or positive predictive value, of a given classification model is calculated as in Eq. (5):

    $$\text{Precision} = \frac{TP}{TP + FP} \tag{5}$$

    Recall, also called the sensitivity or true positive rate of a classifier, is calculated using Eq. (6):

    $$\text{Recall} = \frac{TP}{TP + FN} \tag{6}$$

    Specificity, also called selectivity or the true negative rate, is calculated as in Eq. (7):

    $$\text{Specificity} = \frac{TN}{TN + FP} \tag{7}$$

    Another criterion is the F1-score, the harmonic mean of the precision and recall of an ML model, calculated using Eq. (8):

    $$F_1 = \frac{2 \times \text{Precision} \times \text{Recall}}{\text{Precision} + \text{Recall}} \tag{8}$$

    The area under the curve (AUC), shown in Eq. (9), is another popular criterion for measuring the accuracy of an ML algorithm:

    $$\text{AUC} = \int_0^1 \text{TPR} \; d(\text{FPR}) \tag{9}$$

    In the above equation, the underlying ROC curve plots the sensitivity (true positive rate, TPR) against 1 - specificity (false positive rate, FPR), where specificity is the percentage of negative cases correctly predicted as negative. In this study, classification accuracy is used for performance comparison.
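
    The criteria of Eqs. (4)-(8) follow directly from the confusion-matrix counts; a small helper (with made-up counts for the demonstration) shows the arithmetic:

    def classification_metrics(tp, tn, fp, fn):
        """Compute the criteria of Eqs. (4)-(8) from confusion-matrix counts."""
        accuracy = (tp + tn) / (tp + tn + fp + fn)
        precision = tp / (tp + fp)
        recall = tp / (tp + fn)            # sensitivity / true positive rate
        specificity = tn / (tn + fp)       # true negative rate
        f1 = 2 * precision * recall / (precision + recall)
        return accuracy, precision, recall, specificity, f1

    print(classification_metrics(tp=90, tn=80, fp=10, fn=20))
    # (0.85, 0.9, 0.8181..., 0.8888..., 0.8571...)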

    3.3.2 Optimization

    Adam, stochastic gradient descent (SGD), and RMSProp are the three most widely used optimizers for ANN and DL models. This study used the Adam optimizer for training our proposed model, the adversarial model, and the joint base model. The Adam optimizer helps control the sparse-gradient problems of a model. It extends stochastic gradient descent and is a widely used optimization mechanism for DL applications such as NLU and SU models.

    4 Experiment

    This study used a 43k user-annotated dataset containing the weather, alarm, and reminder domains (shown in Fig. 6) of a multi-domain intelligent chatbot system [22,51]. The dataset contains three different domains with 12 intent labels and 11 unique entities.

    A sample of the preprocessed user utterances is shown in Fig. 7. Furthermore, a publicly available large unannotated user chatting log dataset [52] of 25k user utterances from 21 domains was collected, and only 2,510 of the unannotated user queries (alarm, reminder, and weather) were used for semi-supervised learning.

    The utterances are restructured (as shown in Fig. 7) into annotated sets of user queries, entities, and intent labels in that order. User queries are enclosed with BOS and EOS symbols. The dataset is then divided into training, evaluation, and testing sets in a 70:20:10 ratio, as shown in Tab. 2. Annotated and unannotated utterances are then preprocessed using a Python NLU tokenization library. Each input example is fixed to a size of 50 characters, and word embeddings of size 64 are created. An LSTM model from the TensorFlow library is then used to train on and predict user queries.
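
    The sketch below shows one utterance in the Fig. 7 layout and the fixed-length padding step (the label names, example sentence, and vocabulary lookup are illustrative stand-ins, assuming tf.keras preprocessing utilities):

    import tensorflow as tf

    # One annotated utterance: query tokens, slot tags, and an intent label.
    query = "BOS set an alarm for 7 am EOS".split()
    slots = ["O", "O", "O", "O", "O", "B-datetime", "I-datetime", "O"]
    intent = "alarm/set_alarm"  # illustrative label name

    # Pad/truncate each example to the fixed input size of 50.
    MAX_LEN = 50
    token_ids = [hash(w) % 5000 for w in query]  # stand-in for a real vocabulary lookup
    padded = tf.keras.preprocessing.sequence.pad_sequences(
        [token_ids], maxlen=MAX_LEN, padding="post")
    print(padded.shape)  # (1, 50)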

    Figure 7: Preprocessed sample dataset

    Table 2: Training, evaluation, and test datasets

    The experiments were conducted using the TensorFlow 1.10.0 library on Python 3.6. The experimental platform runs Windows 10 with an Intel Core CPU at a clock speed of 1.60 GHz and 8 GB of RAM.

    To evaluate SEMI-MDJM, we conducted an experimental analysis and compared it with the prior MDJM and the multi-domain joint model with adversarial learning (MDJM-ADV) [51]. SEMI-MDJM is created by annotating publicly available user chatting logs using MDJM and retraining the proposed model after adding this predicted dataset to the original training sets. The LSTM cell of each model is created with 100 hidden neurons. Each model is then trained for 20 epochs and optimized with the Adam optimizer. The learning rate is set to 0.01, and the batch size of the training dataset is set to 16. MDJM shares the loss function among the domain, intent, and entity predictors, whereas MDJM-ADV further adds the adversarial loss to the original MDJM model. Incorporating user chatting logs into the base MDJM provides information gain for each output layer. Fig. 8 shows the training and test loss for MDJM, MDJM-ADV, and SEMI-MDJM.
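
    Expressed against the joint-model sketch from Section 3, this training configuration might look as follows (a hedged illustration: `model` and the x/y arrays are assumed to come from the earlier sketches, and the paper uses a fixed evaluation set rather than a split):

    history = model.fit(
        x_train,
        {"domain": y_domain, "intent": y_intent, "slots": y_slots},
        batch_size=16,   # batch size reported in the paper
        epochs=20,       # 20 training epochs
        validation_split=0.2,
    )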

    Tab. 3 presents the classification accuracy of the previous joint models along with our proposed SEMI-MDJM. SEMI-MDJM outperforms both the joint base model, MDJM, and the adversarial joint model, MDJM-ADV, in terms of classification accuracy for the domain, intent, and entity.

    Figure 8: Test and training loss for each model

    Table 3: Accuracy of each model

    5 Conclusion

    In this study, we proposed a semi-supervised joint model, SEMI-MDJM, for intelligent chatbot systems that extracts the domain, intent, and entity of user queries using a single ML model based on LSTM to mitigate the propagation of downstream errors, a limitation of the typical sequential approach, and to reduce the effort required to manage a large number of NLU predictive models and perform manual data annotation. Experimental results showed a significant improvement in the predictive performance of each task, i.e., domain, intent, and entity prediction, based on semi-supervised learning compared to the joint base model and the joint model with adversarial learning. The proposed SEMI-MDJM reduces the number of trained models to one and adds a self-annotation process, which reduces the human effort necessary to annotate data and manage multiple intent detectors and entity extractors. In addition, it provides a self-learning approach for conversational dialog systems by continuously incorporating domain-related utterances from user chatting logs into the initially developed MDJM. Furthermore, it reduces the human effort required to annotate a large number of domain, intent, and entity examples. For future study, we encourage testing our proposed SEMI-MDJM model in domains related to education and health, for various languages, and with large datasets. In addition, incremental prediction and annotation of all unannotated data could further improve the model and reduce its overfitting problem.

    Funding Statement: This research was supported by the BK21 FOUR (Fostering Outstanding Universities for Research) program funded by the Ministry of Education (MOE, Korea) and the National Research Foundation of Korea (NRF).

    Conflicts of Interest: The authors declare that they have no conflicts of interest to report regarding the present study.
