
    Tibetan Multi-Dialect Speech and Dialect Identity Recognition

2019-11-25 10:22:50
Computers, Materials & Continua, 2019, Issue 9

Yue Zhao, Jianjian Yue, Wei Song, Xiaona Xu, Xiali Li, Licheng Wu and Qiang Ji

Abstract: The Tibetan language has very limited resources for conventional automatic speech recognition: for some dialects it lacks sufficient data, sub-word units, lexicons and word inventories. Moreover, speech content recognition and dialect classification have been treated as two independent tasks and modeled separately in most prior works, even though the two tasks are highly correlated. In this paper, we present a multi-task WaveNet model that performs Tibetan multi-dialect speech recognition and dialect identification simultaneously. It avoids building a pronunciation dictionary and performing word segmentation for new dialects, while allowing speech recognition and dialect identification to be trained in a single model. The experimental results show that our method can simultaneously recognize speech content for different Tibetan dialects and identify the dialect with high accuracy using a unified model. Including dialect information in the output targets during training improves multi-dialect speech recognition accuracy, and the low-resource dialects obtain higher speech content recognition rates and dialect classification accuracy with the multi-dialect, multi-task recognition model than with task-specific models.

Keywords: Tibetan multi-dialect speech recognition, dialect identification, multi-task learning, WaveNet model.

    1 Introduction

The Tibetan language is one of the most widely used minority languages in China, and it is also spoken in parts of India, Bhutan and Nepal. Automatic speech recognition technology for Tibetan has drawn more and more attention from researchers. Studies have shown that Tibetan speech recognition has wide demand and immense application prospects in many practical, real-life situations.

During the long-term development of the Tibetan language, different dialects have formed. In China, Tibetan is divided into three major dialects: Ü-Tsang, Kham and Amdo. These three dialects are further divided into several local sub-dialects. Tibetan pronunciation varies greatly across regions; for example, the Ü-Tsang and Kham dialects are tonal, but the Amdo dialect is toneless. However, the written characters are unified across all Tibetan dialects. Since the Lhasa variety of the Ü-Tsang dialect is the Tibetan standard speech, there is much more research on it than on the other dialects in linguistics, speech recognition and corpus construction [Zhang (2016); Yuan, Guo and Dai (2015); Pei (2009); Li and Meng (2012); Wang, Guo and Xie (2017); Cai and Zhao (2008); Cai (2009); Han and Yu (2010)]. Dialect identification has recently gained substantial interest in the field of language identification. It is more challenging than a general language identification task, since the dialects of a language are much more similar in terms of their phoneme sets, word pronunciations, and prosodic traits [Shon, Ali and Glass (2018)]. Traditionally, speech content recognition and dialect classification are treated as two independent tasks and modeled separately. The work in Shon et al. [Shon, Ali and Glass (2018)] explored an end-to-end model only for dialect recognition, using both acoustic and linguistic features on Arabic dialect speech data. However, humans processing speech signals always decipher speech content and other meta information together and simultaneously, including language, speaker characteristics, emotions, etc. [Tang, Li and Wang (2016)]. Recent works [Li, Sainath, Sim et al. (2018); Toshniwal, Sainath, Weiss et al. (2018); Watanabe, Hori and Hershey (2017)] discussed how to learn a single end-to-end model for joint speech and language recognition. The work in Li et al. [Li, Sainath, Sim et al. (2018)] adopted the listen, attend and spell (LAS) model for 7 English dialects and showed good performance compared to LAS models trained for single-dialect tasks. Similar work in Toshniwal et al. [Toshniwal, Sainath, Weiss et al. (2018)] on multi-task end-to-end learning for 9 Indian languages obtained the largest improvement by conditioning the encoder on the speech language identity. The work in Watanabe et al. [Watanabe, Hori and Hershey (2017)] was based on a hybrid attention/connectionist temporal classification (CTC) architecture, where the model used deep convolutional neural networks (CNNs) followed by a bidirectional long short-term memory (BLSTM) network in the encoder, and achieved state-of-the-art performance on several ASR benchmarks including English, Japanese, Mandarin Chinese and German. These works suggest that an end-to-end model can help handle the variations between different languages or different tasks by learning and optimizing a single neural network.

An end-to-end model has more advantages for low-resource languages than conventional DNN/HMM systems because it avoids the need for linguistic resources such as dictionaries and phonetic knowledge [Li, Sainath, Sim et al. (2018)]. The work in Sriram et al. [Sriram, Jun, Gaur et al. (2018)] proposed a general, scalable, end-to-end framework based on the generative adversarial network (GAN), which has also been used in many other fields including computer vision [Li, Jiang and Cheslyar (2018)], to enable robust speech recognition without requiring domain expertise or simplifying assumptions. Considering the limited linguistic resources for the Kham and Amdo dialects of Tibetan, our work builds an end-to-end model for Tibetan multi-task recognition. It reduces the effort of language-dependent processing, including the use of a pronunciation dictionary and word segmentation, which are the big barriers when building a conventional ASR system for a new Tibetan dialect. Meanwhile, we explore the capability of the end-to-end model for capturing the variations between several small-data dialects and a big-data dialect.

In this work, we use the WaveNet-CTC model to train multi-task recognition on speech data from three Tibetan dialects. Since WaveNet is a deep generative model with very large receptive fields, it can capture the characteristics of many different speakers with equal fidelity and model long-term dependencies in speech data [Van Den Oord, Dieleman, Zen et al. (2016)]. It has been applied efficiently to multi-speaker speech generation and text-to-speech. Because a generative model can capture the underlying data distribution as well as the mechanisms used to generate data, we believe this ability is crucial for a shared representation across speech data from different dialects of a language. WaveNet can also give the predictive distribution for speech data conditioned on all previous inputs, so we use the dialect information as an additional output label during training in order to perform joint speech and dialect recognition. Experimental results show the advantage of WaveNet-CTC for multi-task Tibetan speech recognition, and that the multi-dialect model can improve speech content recognition accuracy for limited-resource dialects.

    2 Related works

In Tibetan speech recognition, most research concerns the Lhasa variety of the Ü-Tsang dialect. The recent work in Wang et al. [Wang, Guo and Xie (2017)] applies an end-to-end model based on CTC to Lhasa-Ü-Tsang continuous speech recognition, achieving better performance than the state-of-the-art bidirectional long short-term memory network. The work in Huang et al. [Huang and Li (2018)] trained an end-to-end model by applying a recurrent neural network and the CTC algorithm to the acoustic modeling of Lhasa-Ü-Tsang speech recognition, and introduced time-domain convolution operations on the output sequence of the hidden layer to reduce its time-domain expansion, which improves the training and decoding efficiency of the model. The work in Li et al. [Li, Wang, Wang et al. (2018)] introduces tone information into Lhasa-Ü-Tsang continuous speech recognition and designs a set of phonemes with tones, showing that tones play an important role in Lhasa-Ü-Tsang speech recognition.

For the Tibetan-Chinese bilingual speech recognition task, the work in Wang et al. [Wang, Guo, Chen et al. (2017)] addressed the sparsity caused by using characters as the modeling unit by selecting Tibetan characters and Mandarin non-tonal syllables as modeling units and adding noise algorithms. As for speech recognition for the other Tibetan dialects, because the resources of the Kham and Amdo dialects are relatively scarce, the few related studies concern endpoint detection, speech feature extraction, and isolated word recognition [Cai and Zhao (2008); Cai (2009); Han and Yu (2010); Li, Yu, Zheng et al. (2017)].

On the topic of Tibetan dialect identification, to our knowledge, there is almost no relevant research. Therefore, the open corpus provided in this paper can fill this gap for interested researchers.

Regarding multi-task frameworks for speech recognition, many researchers have done related work. The work in Ruder [Ruder (2017)] introduced the motivation, learning methods, working mechanisms and auxiliary-task selection for multi-task learning, providing guidance for applying the multi-task framework to speech recognition. The work in Chen et al. [Chen and Mak (2015)] used a multi-task framework for joint training of multiple low-resource languages, exploring a universal phoneme set as a secondary task to improve each language's model. The work in Siohan et al. [Siohan and Rybach (2015)] proposed two methods, early system fusion and multi-task system fusion, to reduce the computational complexity of running multiple recognizers in parallel to recognize the speech of adults and children. The work in Tang et al. [Tang, Li and Wang (2016)] integrated speaker recognition and speech recognition into a multi-task learning framework using a recurrent structure, attempting to use a unified model to perform both tasks simultaneously. The work in Qian et al. [Qian, Yin, You et al. (2015)] combined two different DNNs (one for feature denoising and one for acoustic modeling) into a complete multi-task framework, in which all parameters can be trained from scratch in a true multi-task mode with two criteria. The work in Thanda et al. [Thanda and Venkatesan (2017)] combined the speaker's lip visual information with the audio input for speech recognition, learning the mapping from an audio-visual fusion feature to frame labels obtained from a GMM/HMM acoustic model, with the secondary task of mapping visual features to frame labels derived from another GMM/HMM model. The work in Krishna et al. [Krishna, Toshniwal and Livescu (2018)] proposed a hierarchical multi-task model that goes a step beyond the standard multi-task framework, and compared its performance on high-resource and low-resource language recognition. The work in Yang et al. [Yang, Audhkhasi, Rosenberg et al. (2018)] jointly learned accent recognizers and multi-task acoustic models to improve acoustic-model performance. These works have one thing in common: the transfer of knowledge between tasks, which is part of the reason why the multi-task framework works. All of them demonstrate the effectiveness of the multi-task mechanism.

It is therefore very worthwhile to build an accurate Tibetan multi-dialect recognition system from the existing Lhasa-Ü-Tsang speech recognition model and a limited amount of data from the other dialects. This not only relieves the burdensome data requirements, but also quickly extends the existing recognition model to other target dialects, accelerating the application of Tibetan speech recognition technology.

3 WaveNet-CTC for the Tibetan multi-task recognition model

3.1 WaveNet

WaveNet, a deep neural network for generating raw audio waveforms, was introduced in Van Den Oord et al. [Van Den Oord, Dieleman, Zen et al. (2016)]. The model is fully probabilistic and autoregressive, with the predictive distribution for each audio sample conditioned on all previous ones, and it yielded state-of-the-art performance for text-to-speech. A single WaveNet can capture the characteristics of many different speakers and model distributions over thousands of random variables. The work in Van Den Oord et al. [Van Den Oord, Dieleman, Zen et al. (2016)] also shows that it can be used as a discriminative model, returning promising results for speech recognition. In our work, we employ it to model the distribution of speech data from different dialects and different speakers.

The WaveNet model is composed of stacked dilated causal convolutional layers. The network models the joint probability of a waveform x = {x_1, ..., x_T} as a product of conditional probabilities, as in Eq. (1):

p(x) = ∏_{t=1}^{T} p(x_t | x_1, ..., x_{t-1})  (1)

The causal convolutions shown in Fig. 1 ensure that the model's prediction at timestep t cannot depend on any of the future timesteps x_{t+1}, ..., x_T. At training time, the conditional predictions for all timesteps can be made in parallel because all timesteps of the ground truth x are known. When generating with the model, the predictions are sequential: after each sample is predicted, it is fed back into the network to predict the next sample. When modeling a long sequence, causal convolutions are faster to train than RNNs, since they have no recurrent connections.
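The causality property described above can be sketched in a few lines. This is an illustrative toy, not the paper's code: it assumes a single channel and filter size 2, with a hypothetical dilation rate `d`, and checks that changing a future input never changes an earlier output.

```python
import numpy as np

def causal_conv1d(x, w, d=1):
    """Causal dilated 1-D convolution: y[t] = w[0]*x[t-d] + w[1]*x[t].

    x: (T,) input signal; w: (2,) filter taps; d: dilation rate.
    Left-padding with zeros keeps the output the same length and causal.
    """
    x_pad = np.concatenate([np.zeros(d), x])
    return w[0] * x_pad[:-d] + w[1] * x

x = np.array([1.0, 2.0, 3.0, 4.0])
w = np.array([0.5, 1.0])
y = causal_conv1d(x, w, d=1)          # [1.0, 2.5, 4.0, 5.5]

# Causality check: perturbing a future input leaves earlier outputs unchanged.
x2 = x.copy()
x2[3] = 99.0
y2 = causal_conv1d(x2, w, d=1)
assert np.allclose(y[:3], y2[:3])
```

At generation time, exactly this property is what forces the sample-by-sample feedback loop the text describes.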

Figure 1: A stack of causal convolutional layers [Van Den Oord, Dieleman, Zen et al. (2016)]

A stack of dilated causal convolutional layers with dilations {1, 2, 4, 8} is shown in Fig. 2. It increases the receptive field more efficiently than ordinary causal convolution layers, since the filter is applied over an area larger than its length by skipping input values with a certain step.

Stacking a few blocks of dilated causal convolutional layers yields a very large receptive field. For example, when 3 blocks of dilated convolutions with dilations {1, 2, 4, 8} are stacked, the dilation pattern repeats as {1, 2, 4, 8, 1, 2, 4, 8, 1, 2, 4, 8}; each {1, 2, 4, 8} block has a receptive field of size 16, and the three stacked blocks together have a receptive field of size 46.
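The receptive-field arithmetic can be made explicit. Assuming filter size 2 (which is what gives each {1, 2, 4, 8} block its receptive field of 16), every layer with dilation d extends the receptive field by d timesteps:

```python
def receptive_field(dilations, filter_size=2):
    """Receptive field of stacked dilated causal convolutions.

    Each layer with dilation d adds d * (filter_size - 1) timesteps
    on top of the single timestep seen with no layers at all.
    """
    return 1 + (filter_size - 1) * sum(dilations)

block = [1, 2, 4, 8]
assert receptive_field(block) == 16       # one {1,2,4,8} block
assert receptive_field(block * 3) == 46   # three stacked blocks
```

The growth is linear in the number of blocks but exponential in the depth of a single block, which is why dilation doubling is so effective.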

Figure 2: A stack of dilated causal convolutional layers [Van Den Oord, Dieleman, Zen et al. (2016)]

WaveNet uses the same gated activation unit as the gated PixelCNN [Oord, Kalchbrenner and Kavukcuoglu (2016)]. Its activation function is given by Eq. (2):

z = tanh(W_{f,i} * x) ⊙ σ(W_{g,i} * x)  (2)

where * denotes a convolution operator, ⊙ denotes an element-wise multiplication operator, σ(·) is the sigmoid function, i is the layer index, f and g denote the filter and the gate, respectively, and W is a learnable weight.
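A numerical sketch of the gated activation, with the two convolutions collapsed into precomputed filter and gate pre-activations (hypothetical values, for illustration only):

```python
import numpy as np

def gated_activation(filter_out, gate_out):
    """z = tanh(filter) ⊙ sigmoid(gate), applied element-wise."""
    return np.tanh(filter_out) * (1.0 / (1.0 + np.exp(-gate_out)))

f = np.array([0.0, 1.0])     # filter pre-activations tanh(W_f * x)
g = np.array([0.0, 100.0])   # gate pre-activations; large value -> gate ≈ 1
z = gated_activation(f, g)

assert abs(z[0]) < 1e-12                 # tanh(0) * sigmoid(0) = 0
assert abs(z[1] - np.tanh(1.0)) < 1e-6   # fully open gate passes tanh(f)
```

The sigmoid gate in [0, 1] decides how much of the tanh-bounded filter output flows through, analogously to LSTM gating.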

WaveNet uses residual and parameterised skip connections to speed up convergence and enable training of much deeper models. More details on WaveNet can be found in [Van Den Oord, Dieleman, Zen et al. (2016)].

    3.2 End-to-end Tibetan multi-task model

We adopt the architecture of Speech-to-Text-WaveNet [Namju (2017)] for Tibetan multi-task speech recognition. It places a single CTC layer on top of WaveNet and trains WaveNet with the CTC loss. The forward-backward algorithm of CTC maps speech to a text sequence. The architecture is shown in Fig. 3.
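The CTC mapping from frame-level outputs to a label sequence can be sketched with the standard collapse rule (an illustrative sketch, not the paper's code; symbols are hypothetical placeholders and '-' stands for the CTC blank):

```python
def ctc_collapse(frame_labels, blank='-'):
    """Collapse a frame-level CTC path: merge repeats, then drop blanks.

    Blanks separate genuine repetitions, so 'a a - a' decodes to 'a a'.
    """
    out, prev = [], None
    for s in frame_labels:
        if s != prev and s != blank:
            out.append(s)
        prev = s
    return out

assert ctc_collapse(['a', 'a', '-', 'a', 'b', 'b', '-']) == ['a', 'a', 'b']
```

During training, the CTC forward-backward algorithm sums over all frame-level paths that collapse to the target syllable sequence, which is what lets WaveNet be trained without frame-level alignments.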

Figure 3: The architecture of WaveNet-CTC [Huang and Li (2018)]

The differences among Tibetan dialects lie mainly in phonetics, and only minimally in vocabulary and grammar. During the Tubo Dynasty, much work was done to standardize written Tibetan, which has kept the written language basically unified. To this day, Tibetan people face no major obstacles in written communication. Even where small differences in vocabulary exist, they tend to become unified, and the rules of grammar have changed only slightly. Tibetan characters are written in Tibetan letters from left to right, but letters are also stacked vertically within a syllable (syllables are separated by the delimiter "་"), making the syllable a two-dimensional planar character, as shown in Fig. 4. A Tibetan sentence is shown in Fig. 5, where the sign "|" marks the end of a sentence. Tibetan letters are not suitable as the output symbols of the end-to-end model, because the output would not be a recognizable sequence of Tibetan characters. So a Tibetan syllable is used as the CTC output unit.

Figure 4: The structure of a Tibetan syllable

Figure 5: A Tibetan sentence (it means "I have eight bucks")

Figure 6: Our end-to-end model for Tibetan multi-task recognition

In this paper, we explore expanding the Tibetan character sequence with dialect symbols as output targets. For example, when including the Yushu-Kham dialect, we add the symbol 'Y' to the label inventory. We evaluate two ways to add the dialect information to the label sequence. One is to add the symbol at the beginning of the target label sequence, like "Y ?? ? ??" (" ??? ??????" means "Thanks" in English). The other is to add the symbol at the end of the label sequence, like "?? ? ?? Y".

Meanwhile, we remove the sign "|" from Tibetan sentences and replace the syllable delimiter with a space. In this work, we do not incorporate a language model. Our end-to-end model for Tibetan multi-task speech and dialect recognition is shown in Fig. 6.
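The target-label preparation just described can be sketched as follows. This is an illustrative sketch with hypothetical ASCII placeholders (s1, s2, ...) standing in for Tibetan syllables, as if the syllable delimiter had already been replaced by spaces:

```python
def make_target(syllables, dialect_id, position='begin'):
    """Build a CTC target: drop the sentence-end sign '|' and attach
    the dialect symbol (e.g. 'Y' for Yushu-Kham) at one end."""
    labels = [s for s in syllables if s != '|']
    if position == 'begin':              # "ID-Model" style
        return [dialect_id] + labels
    return labels + [dialect_id]         # "Model-ID" style

sent = ['s1', 's2', 's3', '|']
assert make_target(sent, 'Y', 'begin') == ['Y', 's1', 's2', 's3']
assert make_target(sent, 'Y', 'end') == ['s1', 's2', 's3', 'Y']
```

The two variants correspond to the ID-Model and Model-ID configurations compared in the experiments.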

    4 Experiments

    4.1 Data

Our experimental data come from the open and free Tibetan multi-lingual speech data set TIBMD@MUC, which can be downloaded from https://pan.baidu.com/s/14CihgqjA4AFFH1QpSTjzZw. The text corpus consists of two parts: 1,396 spoken-language sentences selected from the book "Tibetan Spoken Language" [La (2005)] written by La Bazelen, and 8,000 sentences collected from online news, electronic novels and Tibetan poetry on the internet. The text corpus includes a total of 3,497 Tibetan syllables.

There are 114 speakers, from Lhasa City in Tibet, Yushu City in Qinghai Province, Changdu City in Tibet, and the Ngawa Tibetan and Qiang Autonomous Prefecture. They read the same 1,396 spoken sentences in their respective dialects, while the other 8,000 sentences were read aloud in the Lhasa dialect. The speech files were converted to 16 kHz sampling frequency, 16-bit quantization, and wav format.

Our experimental data for multi-task speech recognition are shown in Tab. 1: 20.73 hours of Lhasa-Ü-Tsang, 2.82 hours of Yushu-Kham, and 2.15 hours of the Amdo pastoral dialect for training, whose corresponding texts contain 3,497 syllables. For testing we collect 0.3 hours of Lhasa-Ü-Tsang, 0.2 hours of Yushu-Kham, and 0.2 hours of the Amdo pastoral dialect, respectively.

39-dimensional MFCC features are extracted for each observation frame from the speech data, using a 25 ms window with a 10 ms frame shift.
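The framing arithmetic implied by these settings can be checked directly. At the 16 kHz sampling rate, a 25 ms window is 400 samples and a 10 ms shift is 160 samples, so a signal of n samples yields 1 + (n - window) // shift full frames:

```python
def num_frames(n_samples, sr=16000, win_ms=25, shift_ms=10):
    """Number of full analysis frames for a signal of n_samples."""
    win = sr * win_ms // 1000        # 400 samples at 16 kHz
    shift = sr * shift_ms // 1000    # 160 samples at 16 kHz
    return max(0, 1 + (n_samples - win) // shift)

assert num_frames(16000) == 98   # one second of audio -> 98 frames
assert num_frames(400) == 1      # exactly one window
```

Each such frame then contributes one 39-dimensional MFCC vector to the WaveNet-CTC input sequence.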

Table 1: Tibetan multi-dialect dataset statistics

    4.2 Model details

For multi-task speech recognition, the CTC output layer contains 3,502 nodes (3,497 syllables + 1 blank + 1 space + 3 dialect ID labels). The WaveNet network consists of 15 layers, grouped into 3 stacks of 5 dilated residual blocks. In every stack, the dilation rate increases by a factor of 2 per layer, starting at rate 1 (no dilation) and reaching the maximum dilation of 16 in the last layer. The filter size of the causal dilated convolutions is 7. The number of hidden units in the gating layers is 128, as is the number of hidden units in the residual connections. The model was trained for 100 epochs with the Adam optimizer and a batch size of 10, with a constant learning rate. The models were trained on one Nvidia GTX 1070 Ti GPU.
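The dilation schedule described above (15 layers as 3 stacks of 5, doubling within each stack from 1 to 16) can be written out explicitly:

```python
# Dilation schedule: 3 stacks x 5 layers, dilation doubling within each stack.
stacks, layers_per_stack = 3, 5
dilations = [2 ** i for _ in range(stacks) for i in range(layers_per_stack)]

assert len(dilations) == 15
assert dilations[:5] == [1, 2, 4, 8, 16]   # one stack
assert max(dilations) == 16                # maximum dilation in the last layer
```

The full list is {1, 2, 4, 8, 16} repeated three times, mirroring the repeated-block pattern from Section 3.1.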

For dialect-specific models for the small-data dialects, we first took the multi-dialect model without dialect ID, i.e., "Model", as the starting point, and retrained the same architecture for each dialect using a small amount of training data. We refer to this type of model as "Model-R" in Tab. 2. These models achieved acceptable recognition rates. We also built dialect-specific models on each dialect's data, shown as "Dialect-specific model" in Tab. 2. The retrained Model-R achieved better speech content recognition performance than the dialect-specific models for the small-data dialects.

Table 2: Syllable error rate (%) of dialect-specific models

For the dialect identification model, we used a two-layer LSTM network (300 hidden units per layer) followed by a softmax layer to classify the dialect identities, with cross entropy as the loss function. The model was trained for 500 epochs with the Adam optimizer and a batch size of 50. The learning rate was held constant at 0.001. The weight parameters of the softmax layer were initialized from a uniform distribution on [0, 1]. We also clip the gradients to [-1, 1] to alleviate gradient explosion.
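The element-wise gradient clipping mentioned above can be sketched as follows (an illustrative stand-in for the framework's built-in clipping, operating on a flat list of gradient values):

```python
def clip_gradients(grads, lo=-1.0, hi=1.0):
    """Clip each gradient element into [lo, hi] before the update step,
    bounding the step size taken by any single parameter."""
    return [min(max(g, lo), hi) for g in grads]

assert clip_gradients([-3.5, 0.2, 2.0]) == [-1.0, 0.2, 1.0]
```

Bounding individual gradient elements this way prevents a single exploding gradient from destabilizing LSTM training.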

    4.3 Results

The experimental results are shown in Tab. 3 and Tab. 4. We refer to the model with the dialect ID at the beginning of the output as "ID-Model", the model with the dialect ID at the end of the output as "Model-ID", and the model without dialect information in the output as "Model", and we compare them with the end-to-end dialect-specific models.

From Tab. 3, we can see that all multi-dialect speech recognition models outperform the dialect-specific models for the low-resource dialects, i.e., the Yushu-Kham dialect and the Amdo pastoral dialect. The WaveNet-CTC model can capture the speech and linguistic features shared among the dialects of a language, and this underlying shared knowledge can transfer from one dialect to the others. For Lhasa-Ü-Tsang, the big-data dialect, all multi-dialect speech recognition models performed worse than the dialect-specific model, which shows that adding the two small-data dialects harms the big-data dialect in multi-dialect speech recognition. In spite of that, the ID-Model, trained with the dialect information at the beginning of the label sequence, has a recognition rate closest to the dialect-specific model for Lhasa.

Table 3: A comparison of syllable error rate (%) between multi-task models and task-specific models

Inserting the dialect symbol into the label sequence performed better than the model without dialect information for multi-dialect speech recognition. This shows that dialect information helps to improve speech content recognition for the multi-task models.

From Tab. 4, we can observe that the multi-task learning models achieve very high accuracy for dialect identification. The ID-Model and Model-ID outperformed the dedicated dialect ID recognition model. This indicates that multi-task speech recognition models can decipher speech content and dialect information together and simultaneously, and perform both tasks well, in the same way that humans process speech signals.

Table 4: Dialect ID recognition accuracy (%) of multi-task models and the task-specific model

Besides, the ID-Model has higher accuracy than Model-ID for both speech recognition and dialect identification. This observation suggests that speech content recognition depends on the accuracy of dialect classification in this multi-dialect, multi-task recognition model.

    5 Conclusion

In this paper, we proposed using the WaveNet-CTC model for Tibetan multi-dialect, multi-task recognition. It provides a simple and effective solution for building models for new Tibetan dialects without dialect-specific linguistic resources. The model is optimized to predict the Tibetan character sequence appended with the dialect symbol as the output target, which effectively forces it to learn shared hidden representations that are suitable for both character prediction and dialect prediction across the dialects of a language. In future work, we will improve speech content recognition accuracy by incorporating a Tibetan language model.

Acknowledgement: This work is supported by the Ministry of Education research in the humanities and social sciences planning fund (15YJAZH120), the National Natural Science Foundation (61602539, 61873291), and the MUC 111 Project.
