
    Cross-Language Transfer Learning-based Lhasa-Tibetan Speech Recognition

Computers, Materials & Continua, 2022, No. 10

Zhijie Wang, Yue Zhao*, Licheng Wu, Xiaojun Bi, Zhuoma Dawa and Qiang Ji

1 School of Information Engineering, Minzu University of China, Beijing, 100081, China

2 School of Chinese Ethnic Minority Languages and Literatures, Minzu University of China, Beijing, 100081, China

3 Department of Electrical, Computer, and Systems Engineering, Rensselaer Polytechnic Institute, Troy, NY 12180-3590, USA

Abstract: As one of the Chinese minority languages, Tibetan had not been researched as extensively as Chinese and English until recently. This, together with the relatively small Tibetan corpus, has resulted in unsatisfactory performance of Tibetan speech recognition based on end-to-end models. This paper aims to achieve accurate Tibetan speech recognition using a small amount of Tibetan training data. We demonstrate effective methods for Tibetan end-to-end speech recognition via cross-language transfer learning from three aspects: modeling unit selection, transfer learning method, and source language selection. Experimental results show that the Chinese-Tibetan multi-language learning method using a multi-language character set as the modeling unit yields the best Tibetan Character Error Rate (CER) of 27.3%, a 26.1% reduction compared to the language-specific model. Our method also achieves 2.2% higher accuracy using less data than Tibetan multi-dialect transfer learning under the same model structure and data set.

Keywords: Cross-language transfer learning; low-resource language; modeling unit; Tibetan speech recognition

    1 Introduction

A speech recognition system with fewer than 30 h of training data is usually called a low-resource system. Because it lacks sufficient training data for the target language, a low-resource system performs poorly on recognition accuracy [1]. In recent years, with the development of deep learning, speech recognition technology has matured and become widely used for major languages such as Chinese and English. In contrast, Tibetan speech recognition has been limited by the lack of speech data and linguistic resources. In view of these problems, this paper uses transfer learning to add Chinese and English data to the training data, making up for the shortage of Tibetan data, and adopts an end-to-end model, which avoids the need for linguistic resources such as pronunciation dictionaries and phonetic knowledge.

Recent research shows that transfer learning has received extensive attention and has been applied in many fields with satisfactory results. For instance, Zia et al. [2] used transfer learning to classify citrus plant diseases. Faisal et al. [3] and Reda et al. [4] studied Corona Virus Disease 2019 (Covid-19) diagnosis based on transfer learning. Fu et al. [5] detected malware with a Long Short-Term Memory (LSTM) model through transfer learning. Xu et al. [6] used the method to recognize weeds. Transfer learning has also increasingly been used in speech recognition to improve acoustic models for low-resource languages. Yang [1] transferred the weights of the first five hidden layers of a Deep Neural Network-Hidden Markov Model (DNN-HMM) trained on Chinese speech data to low-resource Uyghur speech recognition, reducing the Word Error Rate (WER) to 18.75%. Li [7] used a Deep Feedforward Sequential Memory Network-Connectionist Temporal Classification (DFSMN-CTC) acoustic model pre-trained on 10,000 h of Chinese data and fine-tuned on Uyghur speech data, decreasing the WER of Uyghur speech recognition to 5.87%.

In Tibetan speech recognition, transfer learning has also been used to improve accuracy. Kang [8] pre-trained a Visual Geometry Group Net (VGGNet) on two Chinese corpora, THCHS-30 and ST-CMDS, to obtain better initialization weights, and then used the Pre-training + Bi-LSTM + CTC model for Amdo-Tibetan speech recognition; the experimental results show that the CER is reduced to 26.6% by transfer learning with CTC decoding. Yan et al. [9] transferred the hidden layers of a Chinese speech recognition model, a Time-Delay Neural Network-Hidden Markov Model (TDNN-HMM) acoustic model with semi-orthogonal factorization, to the Lhasa-Tibetan model; as a result, the Lhasa-Tibetan CER is reduced by 14.74%. Existing works use not only high-resource languages such as Chinese and English as source languages, but also low-resource languages similar to the target language. Yue [10] applied the Lhasa-Tibetan dialect model as the initial model and fine-tuned it with Amdo-Tibetan dialect data; the CER was reduced by 4% compared with the dialect-specific model trained only on Amdo-Tibetan data. The study [11], based on a WaveNet-CTC end-to-end model, investigated multi-dialect and multi-task transfer learning and used Amdo-Tibetan dialect data as training data to improve Lhasa-Tibetan speech recognition, reducing the CER by 2.7%.

Modeling units are essential for transfer learning. Wang et al. [12] used phonemes as modeling units and evaluated different phoneme sets, finding that the set with consonant suffixes and long vowels performs better than the others. For Tibetan-Mandarin bilingual speech recognition, Wang et al. [13] used characters instead of phonemes as the modeling unit of the end-to-end model. This approach offers a direction for building speech recognition models for low-resource languages that lack pronunciation dictionaries.

Inspired by the above works, this paper explores a Lhasa-Tibetan end-to-end speech recognition model based on transfer learning from three aspects: transfer learning method, modeling units, and source language selection. First, multi-language learning is compared with pre-training to build the transfer model. Second, four kinds of modeling units are evaluated: the Latin letter set, the multi-language character set, the Chinese Pinyin and Tibetan letter set, and the Latin and Tibetan letter set. Finally, Chinese, English, and mixed Chinese-English data are used as source languages for transfer learning. By studying combinations of these three factors, we optimize the Lhasa-Tibetan speech recognition model using a small amount of Tibetan training data.

The rest of this paper is organized as follows: Section 2 introduces the data sets and the audio data processing. Section 3 details the technical principles and the processing of the text data used in the experiments. Section 4 explains the experiments and discusses their results. Finally, Section 5 presents our conclusions.

    2 Data

    2.1 Data Sets

The Lhasa-Tibetan dialect data come from an open Tibetan multi-dialect speech data set, TIBMD@MUC [14], which is also used in the works [10,11]. Its text data consist of two parts: 1369 sentences of spoken Tibetan selected from the book "Spoken Tibetan Language", and 8000 sentences of news, electronic novels, and Tibetan poetry collected from the Internet. The speakers are college students. The Lhasa dialect data are divided into 2.7 h of training data and 0.58 h of test data.

The English data come from the LibriSpeech Automatic Speech Recognition (ASR) corpus, a large corpus containing about 1000 h of English speech derived from audio books of the LibriVox project. In this paper we use 34.5 h of it as training data. The Chinese data come from the open-source THCHS-30 corpus, with 31.5 h of training data. The THCHS-30 text is selected from large-volume news, and most of the speakers are college students who speak fluent Mandarin. Data statistics are shown in Tab. 1.

Table 1: Statistics on the three languages

    2.2 Data Processing

All audio files in the three languages are converted into Waveform Audio File (WAV) format with a 16 kHz sampling rate and 16-bit quantization. In addition, 39-dimensional Mel Frequency Cepstral Coefficient (MFCC) features are extracted for each observation frame from the speech data, using a 25 ms window with a 10 ms frame shift.
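As a concrete illustration, the sketch below shows one common way to obtain such 39-dimensional MFCC features, assuming the librosa library and the usual composition of 13 static coefficients plus their delta and delta-delta; the paper does not specify its exact feature pipeline, so this composition is an assumption.

```python
import librosa
import numpy as np

def extract_mfcc_39(wav_path: str) -> np.ndarray:
    # Load audio at the 16 kHz rate used in the paper.
    y, sr = librosa.load(wav_path, sr=16000)
    # 25 ms window (400 samples) with a 10 ms shift (160 samples).
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13,
                                n_fft=400, hop_length=160, win_length=400)
    # Append delta and delta-delta to reach 39 features per frame (assumed).
    delta = librosa.feature.delta(mfcc)
    delta2 = librosa.feature.delta(mfcc, order=2)
    return np.vstack([mfcc, delta, delta2]).T  # shape: (frames, 39)
```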

    3 Method

    3.1 Source Language Selection

There are many language families in the world, such as Sino-Tibetan, Indo-European, and Semitic. Among them, Indo-European and Sino-Tibetan are the most widely used [15]. English belongs to the former, while Chinese and Tibetan belong to the latter.

For a language, pronunciation, grammar, and vocabulary are the three elements of speech; they are indispensable and interdependent [16]. In terms of phonetics, each syllable in the Sino-Tibetan languages has a fixed tone that distinguishes lexical and grammatical meaning, whereas most Indo-European languages have no tones, and only a few have simple tonal systems. Regarding grammar, the Sino-Tibetan languages are analytic: grammatical relations are expressed through word order and function words, and the word order is relatively fixed. The Indo-European languages, by contrast, are inflectional, with many variations of verbs, nouns, and adjectives, and these inflections convey different sentence meanings [17]. As for vocabulary, most words have precise definitions in Sino-Tibetan languages, while in Indo-European languages the same word form may serve as a verb, noun, or adjective.

In summary, Chinese has more similarities with Tibetan than English does. Therefore, Chinese should be used as the source language of transfer learning for the Tibetan speech recognition model. In our experiments, we evaluate Chinese, English, and mixed Chinese-English data as the source language.

    3.2 Modeling Units

The end-to-end model integrates the traditional acoustic and language models into a single model with no lexicon. In our work, we adopt four kinds of modeling units for the cross-language transfer learning end-to-end model: the Latin letter set; the multi-language character set, which includes Tibetan characters, Chinese characters, and English words; the Chinese Pinyin and Tibetan letter set; and the Latin and Tibetan letter set.

For the Latin letter set, Tibetan characters are converted into Latin letter sequences by Wylie transliteration, and Chinese characters are transcribed as Pinyin without tones, so the texts of Chinese, English, and Lhasa-Tibetan sentences are uniformly expressed in Latin letters. In the English text, uppercase letters are converted to their lowercase forms, some punctuation marks are deleted, and there is some additional processing, e.g., "I'm" is changed to "I am" and "H?M" is changed to "h m". In the Lhasa-Tibetan text, the mark "'" is replaced with "f", "+" with "q", and "." with "v"; these substitute Latin letters never appear in the Wylie transliteration of Lhasa-Tibetan text, so no ambiguity is introduced.
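The following sketch illustrates these normalization rules in Python. The substitution targets ("f", "q", "v") and the lowercasing come from the description above; the function names and the exact punctuation-stripping regex are assumptions for this illustration.

```python
import re

# Marks in Wylie text mapped to Latin letters unused by Wylie transliteration.
WYLIE_SUBS = {"'": "f", "+": "q", ".": "v"}

def normalize_english(text: str) -> str:
    # Lowercase, expand the contraction, then drop remaining punctuation.
    text = text.lower().replace("i'm", "i am")
    return re.sub(r"[^a-z ]+", " ", text)

def normalize_wylie(text: str) -> str:
    # Replace the special Wylie marks with their substitute letters.
    for mark, letter in WYLIE_SUBS.items():
        text = text.replace(mark, letter)
    return text
```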

The multi-language character set consists of the Chinese characters, English words, and Tibetan characters from all experimental texts. For the Chinese Pinyin and Tibetan letter set, Chinese text is transcribed as Pinyin without tones, and Tibetan characters are rewritten horizontally in Tibetan letters from left to right, following the work [13].

For the Latin and Tibetan letter set, Chinese characters and English words are expressed in Latin letters, and Tibetan characters are transcribed in Tibetan letters. A Tibetan text example is shown in Fig. 1.

Figure 1: (a) Tibetan original text. (b) Tibetan letter text. (c) Tibetan Wylie transliteration text. (d) Tibetan Wylie transliteration text after processing

    3.3 Transfer Learning Methods

Transfer learning exploits the knowledge shared between a source domain and a target domain. It can therefore assist learning in the target domain and avoid the time consumption or poor performance of learning from scratch [18]. According to the availability of labels in the source and target data, transfer learning is categorized into fine-tuning, multi-task learning, domain-adversarial training, and zero-shot learning.

When the amount of target data is small and the amount of source data is large, the fine-tuning method is usually used to train the model: a model is pre-trained on the source data, and the target data are then fed into the model to fine-tune its parameters.
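A minimal PyTorch-style sketch of this two-stage procedure follows; `WaveNetCTC`, `num_tibetan_labels`, `tibetan_loader`, and the checkpoint path are hypothetical names, and the hyperparameters mirror the setup described in Section 4.1.

```python
import torch
import torch.nn.functional as F

# Fine-tuning stage (sketch): restore weights pre-trained on the source
# language, then continue training on the small Tibetan set with CTC loss.
model = WaveNetCTC(num_labels=num_tibetan_labels)       # assumed model class
model.load_state_dict(torch.load("pretrained_chinese.pt"),
                      strict=False)                     # output layer may differ
optimizer = torch.optim.Adam(model.parameters(), lr=2e-4)

for epoch in range(50):
    for feats, labels, feat_lens, label_lens in tibetan_loader:  # assumed loader
        log_probs = model(feats)            # (time, batch, classes) for CTC
        loss = F.ctc_loss(log_probs, labels, feat_lens, label_lens)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```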

The fine-tuning method only attends to the recognition performance on the target language, not on the source data. Multi-task learning is different: it completes multiple tasks in one model simultaneously and aims to improve the accuracy of each task. Multi-task learning is often applied to multi-language speech recognition. Dong et al. [19] found that the hidden feature layers of a multilingual speech recognition model contain features common to human languages. Huang et al. [20] used European languages to improve Chinese speech recognition; through multi-task learning, the recognition performance with 50 h of Chinese data matched that of training on 100 h alone.

Domain-adversarial training is mainly used when the target data are similar to the source data, such as a speech recognition task whose target data are noisy while the source data are clean. Zero-shot learning is mainly used for two domains with quite different data; for example, the source pictures contain cats and dogs, while the target picture is a monkey.

In speech recognition, transfer learning is generally applied to the acoustic model because all languages share similarities in acoustic features. In this paper, we adopt an end-to-end model for transfer learning speech recognition. The end-to-end model integrates the traditional acoustic and language models into a single model, so it needs neither a pronunciation dictionary nor a separate language model, which makes it especially suitable for languages lacking linguistic knowledge and data. We compare the fine-tuning method with the multi-language learning method for the end-to-end model to evaluate which transfers acoustic and linguistic knowledge better.

The fine-tuning method first pre-trains the end-to-end model with a large amount of source language data and then retrains the pre-trained model with Tibetan data, as shown in Fig. 2. The multi-language learning method trains an end-to-end model on the joint Chinese, English, and Tibetan data, as shown in Fig. 3; a data-pooling sketch is given after the figures below. We adopt WaveNet-CTC as the end-to-end model in both transfer learning methods, as shown in Fig. 4.

Figure 2: Tibetan speech recognition model based on fine-tuning transfer learning

Figure 3: Tibetan speech recognition model based on multi-language learning

Figure 4: WaveNet end-to-end speech recognition model
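To make the multi-language learning setup concrete, the sketch below pools the three training sets so that every mini-batch can mix languages, with labels drawn from the shared modeling unit set. The dataset objects and the padding collate function are assumed to exist and are named hypothetically.

```python
from torch.utils.data import ConcatDataset, DataLoader

# Joint training set for multi-language learning (sketch): Chinese, English,
# and Tibetan utterances are pooled and shuffled together, so each mini-batch
# may contain several languages sharing one output label set.
joint_set = ConcatDataset([chinese_set, english_set, tibetan_set])
joint_loader = DataLoader(joint_set, batch_size=10, shuffle=True,
                          collate_fn=pad_batch)  # assumed padding collate
```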

    4 Experiments

    4.1 Experimental Setting

In this paper, the WaveNet network has 15 layers, consisting of 3 stacks of dilated residual blocks with 5 layers per stack. The dilation rates from the first to the fifth layer of each stack are 1, 2, 4, 8, and 16. The filter size of the causal dilated convolutions is 7. The gating layers and the residual connections each have 128 hidden units, and the learning rate is 2 × 10⁻⁴. The model has about 44 million parameters; because the modeling units differ, the exact number of parameters varies slightly across experiments. All models are trained for 50 epochs with the adaptive moment estimation (Adam) optimizer and a batch size of 10 on a Linux system with two Nvidia RTX 2070 Super GPUs. All experiments use this configuration to ensure fair comparison of results.
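The sketch below builds the dilated stack with these hyperparameters (3 stacks of 5 causal dilated layers, dilations 1-16, kernel size 7, 128 channels). It is a simplified illustration: WaveNet's gated activation units, residual paths, and skip connections are omitted for brevity.

```python
import torch
import torch.nn as nn

class CausalDilatedConv1d(nn.Module):
    """Dilated 1-D convolution padded only on the left, i.e., causal."""
    def __init__(self, channels: int, kernel_size: int, dilation: int):
        super().__init__()
        self.left_pad = (kernel_size - 1) * dilation
        self.conv = nn.Conv1d(channels, channels, kernel_size,
                              dilation=dilation)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Pad the time axis on the left so outputs never see future frames.
        return self.conv(nn.functional.pad(x, (self.left_pad, 0)))

# 3 stacks x 5 layers = 15 layers, dilations doubling from 1 to 16 per stack.
layers = []
for _ in range(3):
    for d in (1, 2, 4, 8, 16):
        layers += [CausalDilatedConv1d(128, 7, d), nn.ReLU()]
stack = nn.Sequential(*layers)
```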

In the experimental evaluation, the fine-tuning model uses Chinese, English, or mixed Chinese-English data as pre-training data, with Tibetan data as the fine-tuning data. The multi-language learning model uses Chinese-Tibetan, English-Tibetan, or Chinese-English-Tibetan mixed data as training data. In addition, there are two baseline models for comparison. One is trained only on the 2.7 h of Lhasa-Tibetan speech data used in our experiments. The other, from the work [11], is a multi-dialect learning model trained on Tibetan multi-dialect data including 4.4 h of Lhasa-Tibetan, with the same WaveNet-CTC structure, hyperparameters, and Tibetan data set as our method.

    4.2 Experimental Evaluation Criteria

In this paper, we use edit distance to calculate the recognition error rate. The error rate (ER) is expressed as Eq. (1):

$ER = \dfrac{dic(L_1, L_2)}{len(L_2)} \times 100\%$   (1)

where $L_1$ is the predicted text, $L_2$ is the original text, $dic(\cdot)$ computes the edit distance between two texts, and $len(\cdot)$ computes the sequence length. The Letter Error Rate (LER) is the letter-level ER of text transcribed in Latin letters, the Syllable Error Rate (SER) is the syllable-level ER of text transcribed in Latin letters, the Tibetan CER is the character-level ER of text transcribed in Tibetan characters, and the Tibetan letter error rate is the letter-level ER of text transcribed in Tibetan letters.
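A small reference implementation of Eq. (1) in Python is shown below (the function name is ours). Passing character sequences gives CER, Latin-letter strings give LER, and syllable lists give SER; only the tokenization changes.

```python
def error_rate(predicted, reference) -> float:
    """Eq. (1): edit distance between sequences over the reference length."""
    m, n = len(predicted), len(reference)
    dp = list(range(n + 1))                 # row 0 of the edit-distance table
    for i in range(1, m + 1):
        prev, dp[0] = dp[0], i              # carry the diagonal cell forward
        for j in range(1, n + 1):
            cur = dp[j]
            dp[j] = min(dp[j] + 1,          # deletion
                        dp[j - 1] + 1,      # insertion
                        prev + (predicted[i - 1] != reference[j - 1]))  # sub
            prev = cur
    return dp[n] / n
```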

    4.3 Experimental Results and Analysis

The experimental results of the baseline models, the multi-language learning method, and the fine-tuning method are shown in Tabs. 2-4, respectively.

Table 2: The error rates of the baseline models for Lhasa-Tibetan speech recognition

Table 3: The error rates of the multi-language learning model for Lhasa-Tibetan speech recognition


Table 4: The error rates of the fine-tuning model for Lhasa-Tibetan speech recognition

From Tabs. 2 and 4, we find that the fine-tuning transfer learning model outperforms the Lhasa-Tibetan specific model. The model pre-trained on Chinese data achieves the best recognition accuracy in Tab. 4, which is 1.5% higher than the multi-dialect model. Tabs. 2 and 3 also show that the Chinese-Tibetan multi-language learning models outperform the Lhasa-Tibetan specific model, except when the Latin letter set is the modeling unit. With mixed Chinese-English data, the fine-tuning model achieves the best performance on the Latin letter set as the modeling unit: compared with the Lhasa-Tibetan specific model, its LER and SER are reduced by 2.8% and 4%, respectively. However, the multi-language learning model performs better than the fine-tuning model on the other modeling units.

The Chinese-Tibetan multi-language learning model in Tab. 3 achieves the best Tibetan CER among all models. With the multi-language character set, the Tibetan CER drops by 26.1% and 2.2% compared with the Lhasa-Tibetan specific model and the multi-dialect model, respectively. Furthermore, on the Chinese Pinyin and Tibetan letter set and on the Latin and Tibetan letter set, the Tibetan CER of the Chinese-Tibetan multi-language model decreases by 16.7% and 18%, respectively, compared with the Lhasa-Tibetan specific model. The end-to-end multi-language learning model using the Chinese-Tibetan multi-language character set as the modeling unit is better than the other methods: it achieves the lowest CER at 27.3%. Under the same model and experimental environment, its accuracy is 2.2% higher than that of the multi-dialect learning model in the work [11]. Therefore, compared with the work [11], our method achieves higher accuracy with less data.

The above experimental results show that Chinese is more suitable than English as the source language for transferring to Tibetan, because Chinese and Tibetan share more similarities in pronunciation: English belongs to the Indo-European language family, while Chinese and Tibetan both belong to the Sino-Tibetan family, and languages in the same family can share more knowledge in transfer learning. The results also show that the multi-language learning model using joint Chinese-Tibetan training data and the multi-language character set as the modeling unit has the lowest Tibetan character error rate, indicating that it shares both the acoustic features of speech and linguistic knowledge such as grammar and vocabulary between languages. Although Lhasa-Tibetan speech recognition based on the multi-dialect transfer learning method in the work [11] performs well, other Tibetan dialects such as Amdo-Tibetan also lack speech corpora, which are much smaller than the Chinese speech corpus, so they cannot contribute as much to improving Lhasa-Tibetan speech recognition through transfer learning.

    5 Conclusion

Under limited target data, this paper explores Lhasa-Tibetan end-to-end speech recognition methods based on cross-language transfer learning, discussing three aspects: modeling unit selection, transfer learning method, and source language selection. The analysis of the experimental results shows that the end-to-end multi-language learning model using the Chinese-Tibetan bilingual character set as the modeling unit is better than the other methods. It achieves the lowest CER at 27.3% and the largest CER reduction compared to the Lhasa-Tibetan specific model, at 26.1%. Compared with the work [11], it achieves 2.2% higher accuracy using less data. The experiments show that our method learns both shared acoustic features and shared linguistic knowledge between Chinese and Lhasa-Tibetan. Therefore, Chinese is more suitable than English as the source language for building a Tibetan speech recognition model via cross-language transfer learning. Future work will explore Tibetan speech recognition accuracy using phoneme sets and even more finely divided articulatory feature sets as modeling units.

Acknowledgement: We thank Professor Yue Zhao and all the students in the research group for their help.

Funding Statement: This work was supported by three projects: Yue Zhao received Grants Nos. 61976236 and 2020MDJC06, and Xiaojun Bi received Grant No. 20&ZD279.

    Conflicts of Interest:The authors declare that they have no conflicts of interest to report regarding the present study.
