
    Integrating Deep Learning and Machine Translation for Understanding Unrefined Languages

    Computers, Materials & Continua, 2022, Issue 1

    HongGeun Ji, Soyoung Oh, Jina Kim, Seong Choi and Eunil Park,*

    1 Department of Applied Artificial Intelligence, Sungkyunkwan University, Seoul, 03063, Korea

    2 Raon Data, Seoul, 03073, Korea

    3 Department of Computer Science and Engineering, University of Minnesota, Minneapolis, MN, 55455, USA

    Abstract: In the field of natural language processing (NLP), the advancement of neural machine translation has paved the way for cross-lingual research. Yet, most studies in NLP have evaluated the proposed language models on well-refined datasets. We investigate whether a machine translation approach is suitable for multilingual analysis of unrefined datasets, particularly chat messages on Twitch. To address this, we collected a dataset of 7,066,854 and 3,365,569 chat messages from English and Korean streams, respectively. We employed several machine learning classifiers and neural networks with two different types of embedding: word-sequence embedding and the final layer of a pre-trained language model. The results of the employed models indicate that the accuracy difference between English and English-to-Korean was relatively high, ranging from 3% to 12%, while for Korean data (Korean and Korean-to-English) it ranged from 0% to 2%. The results therefore imply that translation from a low-resource language (e.g., Korean) into a high-resource language (e.g., English) yields higher performance than the reverse. Several implications and limitations of the presented results are also discussed. For instance, we suggest that translating content from resource-poor languages is a feasible way to use the tools of resource-rich languages in further analysis.

    Keywords: Twitch; multilingual; machine translation; machine learning

    1 Introduction

    In linguistic and computer science research, one of the most challenging research topics is developing systems for high-quality translation and multilingual processing. Thus, many scholars have attempted to propose state-of-the-art translation services and systems to improve translation results.

    Alongside translation research, natural language processing (NLP) technologies have been improving rapidly. Because of international collaboration in research and development, the majority of NLP research investigates resource-rich languages that are widely used in global society. Hence, NLP research focuses more on English than on other languages [1].

    Because of insufficient research and development in under-resourced languages, several scholars have attempted to apply English NLP technologies to understand and investigate other languages [2-4]. For instance, Patel and colleagues used machine translation for sentiment analysis of movie reviews and then compared the results of the translation approach with native Hindustani NLP [3].

    To employ NLP technologies for low-resource languages, a two-step approach can be used. First, well-constructed translation methodologies are employed to translate content in the low-resource language into a high-resource language. Second, the translated content is represented as vectors by various word-embedding algorithms. Improved translation methodologies can therefore enhance the results of NLP technologies in other languages.

    Within this trend, several studies have attempted to develop state-of-the-art translation techniques. One of the remarkable improvements is Google's neural machine translation system (GNMT) [5]. Compared with the phrase-based production system, GNMT reduced errors by 40% in human evaluation [5]. Using rapidly improving machine translation techniques, Kocich et al. [6] successfully categorized the sentiments in an online social network dataset using an English sentiment library.

    However, most recent studies have been conducted on well-refined content. Unrefined content presents hindrances, for example, when chat messages are processed and explored. Communication in chat messages (known as "netspeak") has unique spelling and grammar characteristics, including the use of acronyms and abbreviations [7]. Moreover, because many me-media channels, which are interactive media platforms for viewers and streamers, have been introduced globally, a huge amount of chat messages and content in various languages is produced. Thus, we aim to investigate whether machine translation is applicable for multilingual analysis of unrefined content. To address this, we collected unrefined chat messages of both English and Korean streamers on Twitch [8], a widely used online streaming service.

    2 Related Work

    Machine learning and deep learning approaches have become mainstream in NLP research, and cross-lingual approaches have also been extensively explored with considerable results. Thanks to these approaches, diverse tasks can be performed not only for resource-rich languages (e.g., English) but also for limited-resource languages (e.g., Spanish and Hindi) [2-4].

    Among these tasks, text categorization using bilingual-corpus datasets has been shown to be a cost-effective methodology with comparable accuracy [9].

    Moreover, with the advancement of neural machine translation (NMT) beyond conventional translation models, several cross-lingual approaches have applied this technique [3,10,11]. Patel and colleagues showed comparable accuracy of sentiment classification by translating low-resource languages into English (as a high-resource language) [3]. Furthermore, the performance of NMT models can be enhanced by focusing on topic-level attention during translation [11].

    Recent cross-lingual approaches have been improved by pre-trained language models based on neural networks [12,13]. Pre-trained word-embedding techniques, such as Skip-Gram [14] and GloVe [15], capture different properties of words. Moreover, for learning contextual meaning and syntactic structure, several state-of-the-art pre-trained language models were introduced, including CoVe [16], ELMo [17], and BERT [18]. The transformer encoder enables these models to handle complex representations of contextual semantics. All of these representative pre-trained language models were trained on refined large text corpora (such as Wikipedia in English, as a commonly used language).

    Owing to these properties, several studies have applied pre-trained language models to large-scale data [19]. However, the majority of prior studies used relatively well-refined datasets (e.g., Wikipedia, social networking sites, microblogs, or user reviews) [20]. Because pre-trained language models read the whole sequence of words and have shown remarkable improvements in NLP tasks, we examine whether applying advanced pre-trained language models to unrefined content, to learn the entire context of words, can be recommended in the field of machine translation.

    Thus, we investigate whether machine translation approaches are applicable to the classification of unrefined data, compared with evaluation in the original language.

    3 Method

    To validate our approach on unrefined data, we used chat messages from a representative live-streaming platform, Twitch, where there are active interactions and communications between viewers and streamers [21]. We selected a straightforward binary classification task for chat messages: predicting whether a specific viewer on Twitch is a subscriber who pays for live game-streaming services.

    3.1 Data Acquisition and Preprocessing

    We identified the 50 most-followed English and Korean streamers from TwitchMetrics [22]. Specifically, we collected all chat messages from five recent streams of each streamer using an open-source crawler, Twitch-Chat-Downloader [23]. The dataset included 7,066,854 and 3,365,569 chat messages from English and Korean streams, respectively.

    Fig. 1 shows the whole data preprocessing procedure. During preprocessing, we first excluded chat messages with URLs, user tags annotated with @, and emoticons. In addition, we eliminated the notifications indicating who subscribed to the streamers. We did not apply stemming or lemmatization, to prevent information loss in short messages. We also removed chat messages shorter than five words, which cannot convey the states of the viewers. Subsequently, we used the Google Translation API to translate English chat messages to Korean and vice versa. Chat messages that were not translated properly were removed. Finally, we used 1,321,445 English (EN) and English-to-Korean (EN2KO) and 109,419 Korean (KO) and Korean-to-English (KO2EN) chat messages. Moreover, to classify whether a specific viewer is a subscriber, we identified the subscription badges of viewers, which were displayed in messages.
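    The filtering steps above can be sketched as follows. This is a minimal illustrative sketch, not the authors' published code; the regular expressions and the helper name `keep_message` are our own assumptions about how such filters might look.

```python
import re

# Hypothetical patterns for the filters described in the text; the paper does
# not publish its preprocessing code, so these are illustrative only.
URL_RE = re.compile(r"https?://\S+")
TAG_RE = re.compile(r"@\w+")

def keep_message(msg: str) -> bool:
    """Return True if a chat message survives the described filters."""
    if URL_RE.search(msg) or TAG_RE.search(msg):
        return False  # drop messages containing URLs or @user tags
    if len(msg.split()) < 5:
        return False  # drop messages shorter than five words
    return True

messages = [
    "check this out https://example.com",
    "@streamer hello",
    "gg",
    "that play was honestly really impressive today",
]
kept = [m for m in messages if keep_message(m)]
# Only the last message survives all filters.
```

    Emoticon and subscription-notification removal would require Twitch-specific token lists, which are omitted here.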

    3.2 Embedding

    We employed two techniques for embedding: word-sequence and sentence embedding.

    3.2.1 Word-Sequence Embedding

    We employed two tokenization techniques according to the target language. For English (EN and KO2EN), we used the Tokenizer of the Python library Keras [24]. We tokenized the Korean chat messages (KO and EN2KO) using the Open Korea Text tokenizer of the Korean NLP library KoNLPy [25]. After tokenization, we embedded the tokens as 256-dimensional vectors.
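    To illustrate what word-sequence embedding input looks like, the following is a tiny pure-Python stand-in for Keras' Tokenizer (frequency-ranked integer indices, index 0 reserved for padding). The function names `fit_vocab` and `texts_to_sequences` are our own; the actual work used Keras and KoNLPy as stated above, and the resulting index sequences would feed an embedding layer producing 256-dimensional vectors.

```python
from collections import Counter

def fit_vocab(texts):
    """Build a word -> index map, most frequent word first (index 0 = padding)."""
    counts = Counter(w for t in texts for w in t.lower().split())
    return {w: i + 1 for i, (w, _) in enumerate(counts.most_common())}

def texts_to_sequences(texts, vocab):
    """Convert each message into its sequence of integer word indices."""
    return [[vocab[w] for w in t.lower().split() if w in vocab] for t in texts]

texts = ["nice play", "nice stream nice chat"]
vocab = fit_vocab(texts)       # "nice" is most frequent, so it gets index 1
seqs = texts_to_sequences(texts, vocab)
```

    Korean text would first pass through a morphological tokenizer (Open Korea Text) rather than whitespace splitting, since Korean words are agglutinative.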

    Figure 1: Workflow procedures

    3.2.2 Sentence Embedding: BERT

    We used the embedding vector extracted from the last layer of a widely used pre-trained language model, BERT, which reflects the context of the sentences. Among the wide range of BERT model sizes, we applied the BERT-base-uncased model to the English chat messages (EN and KO2EN) [26]. For Korean chat messages (KO and EN2KO), we applied KoBERT [27]. With the BERT models, we used the hidden state of the first token of the input sequence (the [CLS] token) in the last layer of BERT, a 768-dimensional vector, as one of the embedding techniques.
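    The [CLS] extraction step can be illustrated without loading a real model: BERT's last layer returns one hidden vector per input token, and the vector at position 0 (the [CLS] token) is taken as the sentence embedding. The toy values and the hidden size of 4 (instead of 768) below are fabricated purely for illustration; real use would run a transformers/KoBERT forward pass.

```python
def cls_embedding(last_hidden_state):
    """last_hidden_state: list of per-token hidden vectors, [CLS] first.
    The [CLS] vector serves as the whole-sentence representation."""
    return last_hidden_state[0]

# Fake last-layer output for a 3-token input ([CLS], "hello", [SEP]),
# with hidden size shrunk from 768 to 4 to keep the sketch self-contained.
fake_hidden = [
    [0.1, -0.2, 0.3, 0.5],   # [CLS] -> used as the sentence embedding
    [0.7, 0.1, -0.4, 0.2],   # "hello"
    [0.0, 0.3, 0.1, -0.1],   # [SEP]
]
sentence_vec = cls_embedding(fake_hidden)
```

    With BERT-base, `sentence_vec` would be the 768-dimensional feature fed to the downstream classifiers.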

    3.2.3 Classification Models

    We applied both machine learning classifiers and deep neural networks: Logistic Regression, Naïve Bayes, Random Forest, XGBoost, Multilayer Perceptron (MLP), STACKED-LSTM, and CONV-LSTM. The STACKED-LSTM model consists of two long short-term memory (LSTM) layers with 128 recurrent neurons and a fully connected layer. The CONV-LSTM has a one-dimensional convolutional layer with 64 filters, a max-pooling layer, an LSTM layer with 128 recurrent neurons, and a fully connected layer. The output of the fully connected layer is passed through the softmax activation function.

    We divided the collected chat messages into training (80%) and testing (20%) sets. The training sets therefore included 87,535 (KO) and 1,057,156 (EN) chat messages, and the test sets included 21,884 (KO) and 264,289 (EN). We applied the synthetic minority over-sampling technique (SMOTE) for the machine learning classifiers [28]; moreover, we adjusted class weights in the cross-entropy function of the deep neural networks to handle class imbalance (Fig. 2) [29,30].
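    One common way to set such class weights (an assumption on our part; the paper cites [29,30] but does not state its exact formula) is to make each class's weight inversely proportional to its frequency, so the minority class contributes more to the loss:

```python
def class_weights(labels):
    """Inverse-frequency class weights: n / (n_classes * count(class)).
    Illustrative sketch, not the paper's published implementation."""
    n = len(labels)
    classes = sorted(set(labels))
    return {c: n / (len(classes) * labels.count(c)) for c in classes}

# Imbalanced toy labels: 8 non-subscribers (0) vs. 2 subscribers (1).
labels = [0] * 8 + [1] * 2
weights = class_weights(labels)
# -> minority class 1 gets a weight 4x larger than majority class 0
```

    The resulting dictionary can be passed to a weighted cross-entropy loss so that misclassifying a subscriber is penalized more heavily than misclassifying a non-subscriber.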

    Figure 2: Class distribution for English and Korean datasets

    4 Results

    4.1 Classification Models with English Data

    The accuracy of the classifiers using English data (EN and EN2KO) is summarized in Tab. 1. Among classifiers using untranslated English (EN), Random Forest with word-sequence embedding showed the highest performance, with an accuracy of 89.35%. The STACKED-LSTM model with word-sequence embedding showed the highest accuracy (82.03%) among the models with English-to-Korean input data (EN2KO).

    The average accuracy of the models with word-sequence embedding was slightly higher with untranslated data (EN: 78.79%) than with translated data (EN2KO: 73.30%). Similarly, in the case of BERT embedding, the models with untranslated data (EN: 80.17%) outperformed the models with translated data (EN2KO: 78.13%).

    For the Naïve Bayes classifier, performance was better with BERT embedding than with word-sequence embedding, by approximately 25% (EN) and 27% (EN2KO), respectively.

    As shown on the left side of Fig. 3, the accuracy of classifiers with word-sequence embedding of the untranslated data (EN) was higher than with BERT embedding (Random Forest, XGBoost, CONV-LSTM, and STACKED-LSTM).

    4.2 Classification Models with Korean Data

    Tab. 2 presents the accuracy of classifiers using Korean data as input (KO and KO2EN). Random Forest with BERT embedding showed the highest performance for both untranslated and translated data (KO: 86.92%, KO2EN: 86.70%). The average accuracy of classifiers was similar for untranslated and translated input data (KO: 73.74%, KO2EN: 72.11%), in line with the results for BERT embedding (KO: 80.30%, KO2EN: 79.33%).

    Table 1: Classification metrics with English data

    In addition, the accuracy of Naïve Bayes was much higher with BERT embedding (KO: 76.42%, KO2EN: 79.95%) than with word-sequence embedding (KO: 25.32%, KO2EN: 26.31%). The right side of Fig. 3 shows the accuracy of the classifiers trained on Korean data (KO, KO2EN). Overall, the classifiers with relatively high accuracy used different embedding methods.

    5 Discussion

    We aimed to validate whether machine-translated datasets are applicable to NLP tasks. We conducted binary classification with unrefined data (chat messages on the live-streaming platform Twitch) using several machine learning classifiers and neural networks. Moreover, we employed two different types of embedding: word-sequence embedding and the output layer of BERT. We chose both English (resource-rich) and Korean (resource-poor) for the validation and named the datasets as follows: EN, KO, EN2KO, and KO2EN.

    Figure 3: Classification accuracy for English data (EN, EN2KO) and Korean data (KO, KO2EN)

    According to our results, the accuracy difference between EN and EN2KO was relatively high, ranging from 3% to 12%. For Korean data (KO and KO2EN), it ranged from 0% to 2%. The results therefore imply that translation from a low-resource language (e.g., Korean) into a high-resource language (e.g., English) yields higher performance than the reverse.

    Among the classifiers showing high accuracy for English (EN and EN2KO), word-sequence embedding was predominant. Meanwhile, for Korean (KO and KO2EN), there was no clear dominance of either word-sequence or BERT embedding. This suggests that BERT's contextual approach does not effectively impact the analysis of unrefined data.

    For classifiers with low accuracy, Naïve Bayes in the current study, BERT embedding showed much higher accuracy than word-sequence embedding in the multilingual analysis of unrefined content.

    Overall, the evaluation of all classifiers implies that using machine translation from a resource-poor (e.g., Korean) to a resource-rich (e.g., English) language for input data (KO2EN) does not significantly affect performance. This suggests that translating content from resource-poor languages is a feasible way to use the tools of resource-rich languages in further analysis.

    Although we investigated the efficacy of machine translation from a low-resource to a high-resource language, several limitations must be considered. First, our evaluation was limited to English and Korean; future work may investigate whether our approach produces comparable results in other languages. Also, more advanced classifiers could be considered, given the rapid advancement of machine learning. These limitations can be addressed in future work.

    Table 2: Classification metrics with Korean data

    Acknowledgement: Prof. Eunil Park thanks TwitchMetrics and the ICAN Program, an IITP grant funded by the Korea government (MSIT) (No. IITP-2020-0-01816).

    Funding Statement: This work was supported by an Institute of Information & Communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) (No. 2021-0-00358, AI·Big data based Cyber Security Orchestration and Automated Response Technology Development).

    Conflicts of Interest: The authors declare that they have no conflicts of interest to report regarding the present study.
