
    A Study of BERT-Based Classification Performance of Text-Based Health Counseling Data

2023-03-12

Yeol Woo Sung, Dae Seung Park and Cheong Ghil Kim

Namseoul University, Cheonan, Chungcheongnam-do, Korea

ABSTRACT The entry into a hyper-connected society has generalized communication through SNS. Therefore, research to analyze the big data accumulated in SNS and extract meaningful information is being conducted in various fields. In particular, with the recent development of Deep Learning, performance is improving rapidly when it is applied to Natural Language Processing, a language understanding technology for obtaining accurate contextual information. When a chatbot system is applied to the healthcare domain for counseling about diseases, the performance of NLP integrated with machine learning becomes important for the accurate classification of medical subjects from text-based health counseling data. In this paper, the performance of Bidirectional Encoder Representations from Transformers (BERT) was compared with that of CNN, RNN, LSTM, and GRU. For this purpose, health counseling data from the Naver Q&A service were crawled as a dataset. KoBERT was used to classify medical subjects according to symptoms, and the accuracy of the classification results was measured. The simulation results show that the KoBERT model achieved high performance, exceeding the other models by more than 5% and by close to 18% relative to the lowest-performing one.

KEYWORDS BERT; NLP; deep learning; healthcare; machine learning

    1 Introduction

With the development of high-speed wireless communication and the spread of various mobile devices, the Internet is overflowing with the opinions and information of individuals, especially via SNS [1,2]. As a result, online data has grown rapidly and is being used in various fields to extract meaningful information from the accumulated unstructured data, with SA (Sentiment Analysis) and chatbot services using NLP (Natural Language Processing) being representative examples. In addition, people analyze product sales, service strategies, and lyrical trends by exploring subjective sentiment information in articles and reviews [3–6]. Shanmuganathan et al. [3] proposed a machine learning methodology to detect the flu (influenza) virus spreading among people, mainly across Asia. Zeng et al. [4] proposed a mixed CNN-BiLSTM-TE (Convolutional Neural Network, Bidirectional Long Short-Term Memory, and Subject Extraction) model to solve the problems of low precision, insufficient feature extraction, and poor contextual ability in existing text sentiment analysis methods. Heo et al. [5] presented an approach for detecting adverse drug reactions from drug reviews to compensate for the limitations of the spontaneous adverse drug reaction reporting system. Salminen et al. [6] introduced a cross-platform online hate classifier.

The restriction of face-to-face contact due to the prolonged COVID-19 pandemic also affects healthcare fields. Under these circumstances, an area attracting attention is non-face-to-face healthcare services such as teleconsultation, telemedicine, and remote monitoring [7,8]. Nasr et al. [7] mentioned the need for innovative models to replace the traditional health care system as the number of individuals with chronic diseases increases significantly, models that further evolve into the smart health care system of the future. They addressed the need to provide more personalized health care services and to rely less on traditional offline health care institutions such as hospitals, nursing homes, and long-term care centers. In particular, Rahaman et al. [8] introduced IoT-based health monitoring systems as the most important healthcare application field, since IoT is an important factor changing the technological infrastructure. Here, it is possible to reduce contact for health promotion and disease prevention, such as counseling and management in the pre-hospital stage and healthcare in situations where face-to-face treatment is difficult [9,10]. Miner et al. [9] introduced several useful aspects of chatbots in the fight against the COVID-19 pandemic, together with challenges in information dissemination. Jovanović et al. [10] characterized the goals of healthcare chatbots in service provisioning and highlighted design aspects that require the community's attention, emphasizing human-AI interaction and transparency in AI automation and decision making. For those services, AI is applied in the form of digital healthcare including chatbots [11–13]. Wu et al. [11] summarized the latest developments related to telemedicine and discussed the obstacles and challenges to its wide adoption, with a focus on the impact of COVID-19. Kandpal et al. [12] introduced context-based chatbots that use Machine Learning and Artificial Intelligence techniques to store and process the training models, which helps the chatbot give better and more appropriate responses when the user asks the bot domain-specific questions. Softić et al. [13] presented a health chatbot application created on the Chatfuel platform, which can identify users' symptoms through a series of queries and guide them in deciding whether to see a doctor. Digital healthcare enables prevention, diagnosis, treatment, and follow-up management anytime, anywhere, and is usually a combination of digital technology, smart technology, and health management technology. Table 1 shows the use of AI in the healthcare industry, divided into clinical and non-clinical areas [14]. Chebrolu et al. [14] introduced how health care organizations can scale up their AI investments by pairing them with a robust security and data governance strategy. In the clinical field, AI is being used in symptom analysis, scientific discovery, and risk management. In the non-clinical field, it is being used in the automation of management tasks, fraud and misuse detection and prevention, and AI-based counseling.

A chatbot is software that communicates freely with humans using NLP, providing appropriate answers to a user's questions or various related information through voice or text conversation, and it generally uses a chat interface on the web or in a messenger [15]. Kim et al. [15] examined existing attempts to utilize Information Technology (IT) in the field of counseling and psychotherapy, as well as recent overseas cases applying Artificial Intelligence (AI) technology in the field of chatbots. Even though chatbots can perform many tasks, their main function is to understand human speech and respond appropriately. The deep learning-based NLP AI engine, which has been developing continuously in recent years [12], is being applied in a way that enhances this function. In the healthcare field, an AI chatbot collects user data through symptom-related conversations in advance, so that it can be used as basic data for the user during face-to-face consultation or treatment with a doctor. In particular, a deep learning-based dialogue engine helps to accurately recognize the patient's conversations with various expressions according to the context.

    Table 1: Applications of AI in healthcare

This paper verifies the performance of deep learning-based NLP algorithms, which is necessary to accurately determine the treatment subject when a user consults about a disease in a chatbot-based counseling healthcare system. Among the various algorithms, the performance of Bidirectional Encoder Representations from Transformers (BERT) was compared with that of CNN, RNN, LSTM, and GRU. For this purpose, we crawled the health counseling data of the Naver Q&A service as a dataset. A Korean BERT model, KoBERT, was used to classify medical subjects according to symptoms, and the accuracy of the classification results was measured.

The rest of this paper is structured as follows. Section 2 reviews the basics of CNN, RNN, LSTM, and GRU; Section 3 gives an overview of BERT; Section 4 introduces the dataset and implementation; Section 5 presents the simulation results. Finally, Section 6 concludes this paper.

    2 Background

There are two approaches to developing a chatbot, depending on the algorithms and techniques adopted: pattern matching and machine learning. This section reviews four machine learning algorithms for dialog modeling. They are all representative deep learning algorithms for NLP with a well-established reputation in time series data analysis and context identification [16].

    2.1 CNN

CNN (Convolutional Neural Network) is a model that extracts various features of data using filters. It is mainly used to find patterns for image recognition. Since there are as many filters as the number of data channels, image features are extracted as the filters move across the data. After applying the filters to the image, pooling is used to resize the result and emphasize the features. Recently, research using CNN has also been conducted in the field of NLP, and it is showing effective results. Fig. 1 shows the architecture of the CNN model [17].

Figure 1: CNN architecture

    2.2 RNN

A Recurrent Neural Network (RNN) classifies text by finding patterns in ordered data. As previous input information accumulates, the current information is expressed. Fig. 2 [18] depicts the structure of an RNN with its own recurrent weight W, which reflects the time series characteristic of predicting future information from present information through past information carried by W while recognizing patterns [19].

Figure 2: RNN architecture

The equations required for processing an RNN are shown in Eqs. (1) and (2).

In the above equations, the input value x_t affects the next output value h_{t+1} while producing the result value h_t. Eq. (2) expresses how the previous data affects the current data. In an RNN, the value h_t, called the state value, represents the present time step, and h_{t-1} represents the previous state value. The model always refers to the value of h_{t-1} to calculate the value of h_t. In other words, y_t is obtained by multiplying the value of h_t by the weight W_{hy}.
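The equations themselves did not survive in this copy of the text; a standard RNN formulation consistent with the description above is the following, where W_{xh}, W_{hh}, and W_{hy} denote the input-to-hidden, hidden-to-hidden, and hidden-to-output weights and b_h, b_y are bias terms:

$$h_t = \tanh\left(W_{xh}\, x_t + W_{hh}\, h_{t-1} + b_h\right) \tag{1}$$

$$y_t = W_{hy}\, h_t + b_y \tag{2}$$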

    2.3 LSTM

LSTM (Long Short-Term Memory) is a structure in which a cell state is added to the RNN. In an RNN, when the time interval between two points is large, the learning ability is greatly reduced. This limitation arises because the back-propagation process does not convey enough information: the weight gradient is lost as it passes through numerous layers. LSTM was introduced to solve this problem. Using LSTM, information can be transmitted effectively without being lost even over long intervals, as long as certain information does not adversely affect the gradient [20].

Fig. 3 shows the structure of the LSTM, which introduces the cell state to deliver previously inputted information. It consists of the Forget Gate, which determines whether past information is reflected in the cell state; the Input Gate, which determines how much of the input information will be reflected; and the Output Gate, which determines how much of the cell state will be reflected in the state value transmitted to the next cell [20].

Figure 3: LSTM architecture

The Forget Gate deletes data deemed unnecessary from the transferred information. Its equation is given in Eq. (3) [20].

The Input Gate determines how much of the current input value is reflected in the cell state. Highly valuable data is strongly reflected in the cell state; otherwise, the reflection is minimized. The equations are given in Eqs. (4)–(6) [20].

The Output Gate determines whether the final cell state value is transferred to the hidden state. Finally, the value h_t is obtained and transferred to the next cell. The equations for the Output Gate are Eqs. (7) and (8) [20].
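Eqs. (3)–(8) were not preserved in this copy; the standard LSTM gate formulation matching the description above is, with σ the sigmoid function, ⊙ element-wise multiplication, and [h_{t-1}, x_t] the concatenation of the previous hidden state and the current input:

$$f_t = \sigma\left(W_f\,[h_{t-1}, x_t] + b_f\right) \tag{3}$$

$$i_t = \sigma\left(W_i\,[h_{t-1}, x_t] + b_i\right) \tag{4}$$

$$\tilde{C}_t = \tanh\left(W_C\,[h_{t-1}, x_t] + b_C\right) \tag{5}$$

$$C_t = f_t \odot C_{t-1} + i_t \odot \tilde{C}_t \tag{6}$$

$$o_t = \sigma\left(W_o\,[h_{t-1}, x_t] + b_o\right) \tag{7}$$

$$h_t = o_t \odot \tanh(C_t) \tag{8}$$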

    2.4 GRU

The GRU (Gated Recurrent Unit) is inspired by LSTM. It is a structure that processes information efficiently by reducing the existing gates to two. Although it uses only a hidden state, it effectively solves the long-term dependency problem [21].

Fig. 4 shows the structure of the GRU. It consists of a Reset Gate and an Update Gate. The Reset Gate determines how the new input is merged with the old memory. The Update Gate determines how much of the previous memory to keep. The equations are given in Eqs. (9)–(12) [21].

z_t receives the previous hidden state and the input and determines how much of the previous hidden state to reflect. r_t receives the previous hidden state and the input and processes them with a sigmoid function. h̃_t represents the current candidate state, and h_t determines how much of the previous hidden state to discard and how much of the current state to reflect, becoming the hidden state of the next step [21].
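Eqs. (9)–(12) were likewise lost in extraction; a standard GRU formulation consistent with the description above (sign conventions for z_t vary across references) is:

$$z_t = \sigma\left(W_z\,[h_{t-1}, x_t]\right) \tag{9}$$

$$r_t = \sigma\left(W_r\,[h_{t-1}, x_t]\right) \tag{10}$$

$$\tilde{h}_t = \tanh\left(W\,[r_t \odot h_{t-1}, x_t]\right) \tag{11}$$

$$h_t = (1 - z_t) \odot h_{t-1} + z_t \odot \tilde{h}_t \tag{12}$$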

Figure 4: GRU architecture

    3 BERT

The BERT (Bidirectional Encoder Representations from Transformers) model [22], shown in Fig. 5, is a deep learning model that encodes the context of the input data in both directions based on the Transformer module [23], unlike the existing language representation models reviewed in Section 2. In this way, the BERT model provides a universal numerical representation of natural language that can be applied to many natural language processing tasks.

As is well known, deep learning models achieved great results in a short time in the computer vision field for object detection and segmentation. This is because models pre-trained on ImageNet extracted general-purpose feature vectors for images. If fine-tuning is performed with data for a given task on top of such a pre-trained model, good results can be obtained with a relatively small amount of data. Since the BERT model learns feature vectors that reflect the surrounding semantic information from a large amount of data, it can be applied to various natural language processing tasks using these feature vectors.

Figure 5: BERT pre-training

There are two pre-training methods. One predicts a masked word by masking some of the words in a sentence; the other predicts whether two given sentences are consecutive. Masked LM randomly masks the input and predicts the masked tokens from the context of the surrounding words. Tokens are randomly selected for masking at a rate of 15% in each sentence. Of these, 80% are converted to the [MASK] token, 10% are changed to a random word, and 10% are left as the original word. The [MASK] token is used only for pre-training, not for fine-tuning [22]. In NSP (Next Sentence Prediction), as in QA (Question Answering) or NLI (Natural Language Inference), it is important to understand the relationship between two sentences. BERT predicts whether two sentences are consecutive or not [22].
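As a concrete illustration of the 15% / 80-10-10 masking rule described above, the following minimal Python sketch applies the same corruption scheme to a token list; the function and variable names are illustrative and not taken from the paper or from any BERT implementation.

```python
import random

def mask_tokens(tokens, vocab, mask_token="[MASK]", mask_rate=0.15):
    """Apply BERT-style Masked LM corruption: select ~15% of tokens; of those,
    80% become [MASK], 10% become a random vocabulary word, 10% stay unchanged."""
    corrupted, labels = list(tokens), [None] * len(tokens)
    for i, tok in enumerate(tokens):
        if random.random() < mask_rate:      # select roughly 15% of positions
            labels[i] = tok                  # the model must predict the original token
            r = random.random()
            if r < 0.8:                      # 80% of selected tokens -> [MASK]
                corrupted[i] = mask_token
            elif r < 0.9:                    # 10% -> a random vocabulary word
                corrupted[i] = random.choice(vocab)
            # remaining 10% -> keep the original token unchanged
    return corrupted, labels

# Hypothetical usage:
# masked, targets = mask_tokens(["I", "have", "a", "headache"], vocab=["pain", "head", "I"])
```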

Pre-training with a large-scale corpus requires a great deal of resources and time, so in practice people use a pre-trained model published by Google and fine-tune it. BERT models pre-trained on Korean datasets, such as SK T-Brain's KoBERT [24], have also emerged.

    4 Dataset and Implementation

In this section, we introduce the proposed research framework to compare the performance of NLP integrated with machine learning for the accurate classification of medical subjects from text-based health consultation data. Performance is measured by comparing classification accuracy. The first step of the proposed model is data collection for training. The next step is data preprocessing. After that, model training is performed. Finally, a validation step is run with the test data, as shown in Fig. 6.

A crawling program dedicated to data collection was implemented as a C# WinForm application on .NET Framework 4.5. The access identifier Selector Path was used to access pages and to identify specific elements for collection and organization. However, since random tag IDs are assigned to medical answers, it is difficult to access them with a simple Selector Path, so full XPath expressions were used instead. In addition, the answers from the collected data were filtered to keep only the answers of experts with a medical license.
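The study's crawler was written in C#; as a language-neutral illustration of reaching answer elements through a full XPath rather than a shorter selector, here is a hedged Python sketch using requests and lxml. The URL, XPath, and function name are placeholders, not the ones used in the study.

```python
import requests
from lxml import html

def fetch_expert_answers(url, answer_xpath):
    """Illustrative only: fetch a Q&A page and extract answer text via a full XPath.
    The real study used a C# WinForm crawler; the URL and XPath here are placeholders."""
    page = html.fromstring(requests.get(url, timeout=10).text)
    answers = [el.text_content().strip() for el in page.xpath(answer_xpath)]
    # In the study, only answers written by licensed medical professionals were kept;
    # that filtering step depends on the page structure and is omitted here.
    return answers

# Hypothetical usage with placeholder values:
# answers = fetch_expert_answers("https://example.com/qna/12345",
#                                "/html/body/div[1]/div[2]/div[3]//p")
```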

Figure 6: Research framework

Text data containing symptom information is needed to run the various algorithms after crawling the contents of the Naver Q&A service and applying minimal pre-processing. The pre-processing steps are described in more detail later. The total number of data samples collected in this way is 15,958. Of these, 20%, split off using the scikit-learn library, is used to evaluate the performance of the learning models mentioned above. As a result, the total number of samples used for training is 12,766, and the number used for testing is 3,192. As for the labeling of the dataset, the counseling subject information in the crawled data was classified into nine categories of medical subjects: Orthopedics, Dermatology, Otorhinolaryngology, Surgery, Neurosurgery, Thoracic Surgery, Colorectal/Anal Surgery, Neurology, and Internal Medicine, which are assigned the labels 0 to 8 in order, respectively. The period of data collection was from December 03, 2021, to January 15, 2022. Table 2 shows sample data with English translations from the crawled datasets.
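A minimal sketch of the 80/20 split described above, using scikit-learn's train_test_split; the toy texts and labels, the random_state, and the stratification are illustrative assumptions, not details reported in the paper.

```python
from sklearn.model_selection import train_test_split

# Toy stand-ins for the 15,958 crawled questions and their subject labels (0-8).
texts = ["I have a headache and feel like vomiting", "How do I remove this thorn?"] * 10
labels = [7, 1] * 10

# 80/20 split as described above (12,766 training / 3,192 test samples in the paper).
# random_state and stratify are illustrative assumptions, not stated in the paper.
train_texts, test_texts, y_train, y_test = train_test_split(
    texts, labels, test_size=0.2, random_state=42, stratify=labels)
```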

    Table 2: Data sample

Table 2 (continued)

Question                                                                      Label
I have a headache and I think I'm going to vomit ~ Why is this?               3
How to get rid of tiny thorns on your hands ~ How do I remove this thorn?     1
Around the anus when sitting ~ Why is this?                                   6
If the liver level is high, endoscopy ~ Or should I go to the hospital?       8
Do I have a migraine? ~ What should I do?                                     7

Fig. 7 shows the amount of data for each medical subject. It shows that, except for the thoracic surgery data, the data are evenly distributed.

Figure 7: Dataset distribution

Pre-processing is carried out in the order shown in Fig. 8. First, special characters and superfluous whitespace characters are removed so that only Korean remains in the crawled data. In addition, stopwords are removed from the resulting sentences. For morphological analysis, the open-source Korean morpheme analyzer Okt (Open Korean Text) [25], developed by Twitter, is used. This analyzer supports four functions: normalization, tokenization, stemming, and phrase extraction. After morphological analysis with Okt, rare words are removed based on their number of occurrences. The threshold is set to two occurrences, i.e., words appearing twice or less are removed. Consequently, 9,869 rare words are removed out of a total of 18,809 words, so the ratio of words that appeared twice or less is about 52.5%.
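A hedged sketch of this pipeline in Python is shown below; it assumes the konlpy wrapper for Okt, and the stopword list, regular expression, and function name are illustrative rather than taken from the paper.

```python
import re
from collections import Counter
from konlpy.tag import Okt   # open-source Korean morphological analyzer

okt = Okt()
stopwords = {"은", "는", "이", "가", "을", "를"}   # illustrative stopword list

def preprocess(sentences, min_count=3):
    """Sketch of the pipeline described above: keep only Korean characters,
    remove stopwords, analyze morphemes with Okt, then drop rare words.
    min_count=3 keeps words appearing at least 3 times, i.e., removes
    words that appeared twice or less, matching the paper's threshold."""
    tokenized = []
    for s in sentences:
        s = re.sub(r"[^가-힣\s]", "", s)          # keep Hangul and spaces only
        tokens = [t for t in okt.morphs(okt.normalize(s), stem=True)
                  if t not in stopwords]
        tokenized.append(tokens)
    counts = Counter(t for tokens in tokenized for t in tokens)
    return [[t for t in tokens if counts[t] >= min_count] for tokens in tokenized]
```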

For BERT, the KoBERT model developed by SK T-Brain was used. To measure accuracy across models, CNN, RNN, LSTM, GRU, and BERT were implemented. For the baseline models, the pre-processed data is embedded through an embedding layer, and the output is finally produced using a dense layer with the softmax activation function.
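The following Keras sketch illustrates the baseline structure just described (embedding layer, a recurrent or convolutional layer, and a dense softmax output over the nine medical subjects); the vocabulary size, embedding dimension, and unit counts are assumptions, since the paper does not report its exact hyperparameters.

```python
import tensorflow as tf

VOCAB_SIZE, EMB_DIM, NUM_CLASSES = 10000, 128, 9   # assumed hyperparameters

def build_recurrent_baseline(recurrent_layer):
    """Embedding -> recurrent layer -> dense softmax over the 9 medical subjects."""
    model = tf.keras.Sequential([
        tf.keras.layers.Embedding(VOCAB_SIZE, EMB_DIM),
        recurrent_layer,
        tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

rnn_model  = build_recurrent_baseline(tf.keras.layers.SimpleRNN(64))
lstm_model = build_recurrent_baseline(tf.keras.layers.LSTM(64))
gru_model  = build_recurrent_baseline(tf.keras.layers.GRU(64))

# CNN variant: a 1-D convolution with global max pooling replaces the recurrent layer.
cnn_model = tf.keras.Sequential([
    tf.keras.layers.Embedding(VOCAB_SIZE, EMB_DIM),
    tf.keras.layers.Conv1D(64, kernel_size=5, activation="relu"),
    tf.keras.layers.GlobalMaxPooling1D(),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
cnn_model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
```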

Fig. 9 shows the structure of the classification model using BERT. The text, pre-processed through the BERT tokenizer, passes through the BERT model and then a dense layer for the final classification. Through this, the label is predicted from the input value.
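A minimal PyTorch sketch of this tokenizer → BERT → dense-layer structure is given below. The paper used SK T-Brain's KoBERT, whose exact loading code is not shown, so the checkpoint name here is a placeholder multilingual BERT loaded through the Hugging Face AutoModel API; the class and variable names are illustrative.

```python
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

# Placeholder checkpoint; the paper uses SK T-Brain's KoBERT, whose loading code
# is not given, so any BERT-style Korean checkpoint could be substituted here.
CHECKPOINT = "bert-base-multilingual-cased"

class BertClassifier(nn.Module):
    """Sketch of Fig. 9: BERT encoder followed by a dense layer over 9 subjects."""
    def __init__(self, num_classes=9):
        super().__init__()
        self.bert = AutoModel.from_pretrained(CHECKPOINT)
        self.classifier = nn.Linear(self.bert.config.hidden_size, num_classes)

    def forward(self, input_ids, attention_mask):
        out = self.bert(input_ids=input_ids, attention_mask=attention_mask)
        cls = out.last_hidden_state[:, 0]      # [CLS] token representation
        return self.classifier(cls)            # logits over the medical subjects

tokenizer = AutoTokenizer.from_pretrained(CHECKPOINT)
model = BertClassifier()
enc = tokenizer(["머리가 아프고 토할 것 같아요"], return_tensors="pt",
                padding=True, truncation=True)
logits = model(enc["input_ids"], enc["attention_mask"])
predicted_label = logits.argmax(dim=-1)        # index 0-8 of the medical subject
```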

The implementation is carried out in Google's Colab [26]. The questions are labeled with medical subjects. To compare model performance, the CNN, RNN, LSTM, GRU, and BERT models are implemented using the TensorFlow and PyTorch libraries.

Figure 8: Data pre-processing stages

Figure 9: Overall structure of the BERT classification model

5 Simulation Results

    This study was conducted in Colab, a machine learning environment provided by Google, to classify Korean health counseling data.The specifications of the Colab environment are shown in Table 3.

    Table 3: Colab specifications

This paper verifies the performance of deep learning-based NLP algorithms by comparing their accuracy in classifying the treatment subject from text-based health counseling data. In general, the results of a classification model come out as two values, positive and negative [18]. The four values of the confusion matrix were used to check whether the results were correctly classified: True Positive (TP), False Positive (FP), True Negative (TN), and False Negative (FN). The evaluation metrics are Precision, Recall, F1-score, and Accuracy, which are often used as evaluation indicators for machine learning classification models, and the equations for each indicator are as follows [27]:
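The equations did not survive in this copy; the standard definitions in terms of the confusion-matrix counts, including the F1-score reported in the results, are:

$$\text{Precision} = \frac{TP}{TP + FP}$$

$$\text{Recall} = \frac{TP}{TP + FN}$$

$$\text{Accuracy} = \frac{TP + TN}{TP + TN + FP + FN}$$

$$F1 = 2 \times \frac{\text{Precision} \times \text{Recall}}{\text{Precision} + \text{Recall}}$$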

Precision refers to the proportion of samples predicted as positive that are actually positive. Recall refers to the proportion of actually positive samples that are correctly identified as positive. Finally, accuracy refers to the percentage of all data that is correctly classified. Multiple evaluation indicators are used because no single indicator can serve as an absolute measure. In this paper, the model was evaluated using the above four evaluation indicators.

The differences in the results across the algorithms were clear.

Table 4 and Fig. 10 show the results for each model. The Recall of each algorithm was 73.7% for CNN, 53.3% for RNN, 66.4% for LSTM, 70.5% for GRU, and about 75.8% for BERT. The Precision results were 73.7% for CNN, 51.9% for RNN, 69.8% for LSTM, 69.1% for GRU, and 74.6% for BERT. The F1-score results were 73.6% for CNN, 52.4% for RNN, 69.8% for LSTM, 69.1% for GRU, and 75.1% for BERT. Finally, the Accuracy results were 74.2% for CNN, 58.0% for RNN, 71.7% for LSTM, 70.1% for GRU, and 76.3% for BERT. BERT not only shows the best accuracy but also excellent performance on all the other metrics. In terms of accuracy, the models rank in the order BERT, CNN, LSTM, GRU, and RNN, and BERT was about 2% higher than the second-best model, CNN.

    Table 4: Simulation results

Figure 10: Comparison results

    6 Conclusions and Future Work

In this paper, a medical subject classification model based on symptom data for a healthcare chatbot was implemented, and the performance of each algorithm was analyzed. The dataset was collected by crawling the data of the Naver Q&A service. Comparing the performance of CNN, RNN, LSTM, GRU, and BERT, the BERT model showed the best performance on all four evaluation indicators.

It is expected that this system will allow appropriate medical subjects to be assigned according to the user's symptoms. In this study, nine medical subjects were classified, but performance verification is required after securing additional data by expanding the range of medical subjects. In addition, the performance comparison was made with a single BERT model; given that various domestic Korean BERT models and other natural language processing algorithms keep emerging, further work is needed to improve this model, including dataset processing and expansion, additional algorithm performance analysis, and performance verification of other Korean BERT models. Furthermore, further studies on the medical chatbot system and the intervention module of a multi-chatbot system based on this model are planned.

Funding Statement: This work was supported by the National Research Foundation of Korea Grant funded by the Korean Government (NRF-2021R1I1A4A01049755) and by the MSIT (Ministry of Science and ICT), Korea, under the ITRC (Information Technology Research Center) Support Program (IITP-2020-0-01846) supervised by the IITP (Institute of Information and Communications Technology Planning and Evaluation).

Conflicts of Interest: The authors declare that they have no conflicts of interest to report regarding the present study.
