
    Deep Scalogram Representations for Acoustic Scene Classification

    2018-08-11 07:48:30
    IEEE/CAA Journal of Automatica Sinica, 2018, Issue 3

    Zhao Ren, Kun Qian, Student Member, IEEE, Zixing Zhang, Member, IEEE, Vedhas Pandit, Alice Baird, Student Member, IEEE, and Björn Schuller, Fellow, IEEE

    Abstract—Spectrogram representations of acoustic scenes have achieved competitive performance for acoustic scene classification. Yet, the spectrogram alone does not take into account a substantial amount of time-frequency information. In this study, we present an approach for exploring the benefits of deep scalogram representations, extracted in segments from an audio stream. The approach presented firstly transforms the segmented acoustic scenes into bump and morse scalograms, as well as spectrograms; secondly, the spectrograms or scalograms are fed into pre-trained convolutional neural networks; thirdly, the features extracted from a subsequent fully connected layer are fed into (bidirectional) gated recurrent neural networks, which are followed by a single highway layer and a softmax layer; finally, the predictions from these three systems are fused by a margin sampling value strategy. We then evaluate the proposed approach using the acoustic scene classification data set of the 2017 IEEE AASP Challenge on Detection and Classification of Acoustic Scenes and Events (DCASE). On the evaluation set, an accuracy of 64.0% from bidirectional gated recurrent neural networks is obtained when fusing the spectrogram and the bump scalogram, which is an improvement on the 61.0% baseline result provided by the DCASE 2017 organisers. This result shows that extracted bump scalograms are capable of improving the classification accuracy when fused with a spectrogram-based system.

    I. INTRODUCTION

    ACOUSTIC scene classification (ASC) aims at the identification of the class (such as ‘train station’ or ‘restaurant’) of a given acoustic environment. ASC can be a challenging task, since the sounds within certain scenes can have similar qualities, and sound events can overlap one another [1]. Its applications are manifold, such as robot hearing or context-aware human-robot interaction [2].

    In recent years, several hand-crafted acoustic features have been investigated for the task of ASC, including frequency, energy, and cepstral features [3]. Despite such long-standing efforts, representations automatically extracted from spectrogram images with deep learning methods [4], [5] have recently been shown to perform better than hand-crafted acoustic features when the number of acoustic scene classes is large [6], [7]. Further, compared with a Fourier transformation for obtaining spectrograms, the wavelet transformation has the ability to incorporate multiple scales, and for this reason can locally reach the optimal time-frequency resolution [8] permitted by the Heisenberg uncertainty principle, which bounds the achievable time and frequency resolution simultaneously. Accordingly, wavelet features have already been applied successfully to many acoustic tasks [9]−[13], although the greater cost of calculating a wavelet transformation is often considered not worthwhile if the gains are not outstanding. In the theory of the wavelet transformation, the scalogram is the time-frequency representation of the signal obtained by the wavelet transformation, where brightness or colour indicates the coefficient values at the corresponding time-frequency locations. Compared to spectrograms, which offer (only) a fixed time and frequency resolution, a scalogram is better suited for the task of ASC due to its more detailed representation of the signal. Hence, a scalogram-based approach is proposed in this work.

    We use convolutional neural networks (CNNs) to extract deep features from spectrograms or scalograms, as CNNs have proven to be effective for visual recognition tasks [14], and ultimately, spectrograms and scalograms are images. Several task-specific CNNs have been designed for ASC, in which spectrograms are fed as input [7], [15], [16]. Unfortunately, those approaches are not robust, and it can also be time-consuming to design CNN structures manually for each dataset. Using CNNs pre-trained on large-scale datasets [17] is a potential way to break this bottleneck. ImageNet (http://www.image-net.org/) is one such large database, promoting a number of CNNs each year, such as ‘AlexNet’ [18] and ‘VGG’ [19]. It seems promising to apply transfer learning [20] by extracting features from these pre-trained neural networks for the ASC task; this is the approach taken in the following.

    Besides handling audio as ‘images’ (the spectrograms and/or scalograms) through pre-trained deep networks, we further aim to respect its nature as a time series. In this respect, sequential learning performs better on time-series problems than static classifiers such as support vector machines (SVMs) [21] or extreme learning machines (ELMs) [17]. Likewise, hidden Markov models (HMMs) [22], recurrent neural networks (RNNs) [23], and, in more recent years, in particular long short-term memory (LSTM) RNNs [24] have proven effective for acoustic tasks [25], [26]. As gated recurrent neural networks (GRNNs) [27], which reduce the computational complexity of LSTM-RNNs, have been shown to perform well in [13], [28], we not only use GRNNs as the classifier rather than LSTM-RNNs, but also extend the classification approach with bidirectional GRNNs (BGRNNs), which are trained forward and then backward within a specific time frame. In this way, we are able to capture ‘forward’ and ‘backward’ temporal contexts, or simply put, the whole sequence of interest. Unless the microphone moves or the context changes, acoustic scenes in the real world usually prevail for longer stretches of time, albeit with potentially highly varying acoustics during such stretches. This allows us to consider static chunk lengths for ASC, while still modelling these chunks as a time series to preserve the order of events, even though we are interested in the ‘larger picture’ of the scene rather than in details of events within that scene. In the data considered in this study, based on the dataset of the 2017 IEEE AASP Challenge on Detection and Classification of Acoustic Scenes and Events (DCASE), the instances have a (pre-)specified duration (10 s per sample [29]).

    In this article, we make three main contributions. First, we propose the use of scalogram images to improve upon the performance of a single spectrogram representation for the ASC task. Second, we extract deep representations from the scalogram images using pre-trained CNNs, which is much faster and more data-efficient than manually designed CNNs. Third, we investigate the performance improvement obtained through the use of (B)GRNNs for classification.

    The remainder of this paper is structured as follows: related work on the ASC task is introduced in Section II; in Section III, we describe the proposed approach, the pipeline of which is shown in Fig. 1; the database description, experimental setup, and results are then presented in Section IV; Section V discusses the results; finally, conclusions are given in Section VI.

    II. RELATED WORK

    In the following, we outline, point by point, work related to the points of interest in this article: using spectrogram-type images as network input for audio analysis, using CNNs in a transfer-learning setting, using wavelets instead of or in addition to spectral information, and finally the usage of memory-enhanced recurrent topologies for optimal treatment of the audio stream as time-series data.

    Extracting spectrograms from audio clips is well established for the ASC task [7], [30]. This explains why the lion’s share of existing work using non-time-signal input to deep network architectures, and particularly CNNs, uses spectrograms or derived forms as input. For example, spectrograms were used to extract features by autoencoders in [31]. Predictions were obtained by CNNs from mel spectrograms in [32], [33]. Feeding analysed images derived from spectrograms into CNNs has also shown success: two image-type features based on a spectrogram, namely a covariance matrix and a secondary frequency analysis, were fed into CNNs for classification in [34].

    Further, extracting features from pre-trained CNNs has been widely used in transfer learning. To name but two examples, a pre-trained ‘VGGFace’ model was applied to extract features from face images, and a pre-trained ‘VGG’ was used to extract features from images in [17]. Further, in [6], deep features of audio waveforms were extracted by a pre-trained ‘AlexNet’ model [18].

    Wavelet features are applied extensively in acoustic signal classification, though historically they were also broadly used in other contexts, such as for electroencephalogram (EEG), electrooculogram (EOG), and electrocardiogram (ECG) signals [35]. Recent examples in the domain of sound analysis include successful application to snore sound classification [10], [11]; besides this, wavelet transform energy and wavelet packet transform energy have also been proven effective for the ASC task [12].

    Various types of sequential learning are frequently applied for the ASC task. For example, in [36], experimental results have shown superiority when employing RNNs for classification. Some special types of RNNs have also been applied for classification in this context. As an example, LSTM-RNNs were combined with CNNs using early fusion in [25]. In [37], GRNNs were utilised as the classifier and achieved a significant improvement over a Gaussian mixture model (GMM).

    To sum the above up, while similar methods mostly use spectrograms or mel spectrograms, little research has examined the performance of scalogram representations extracted by pre-trained CNNs with sequential learning for audio analysis. This work does so, and is introduced next.

    III. PROPOSED METHODOLOGY

    A. Audio-to-Image Pre-Processing

    In this work, we first seek to extract the time-frequency information that is hidden in the acoustic scenes. Hence, the following three types of representations are used in this study, which form the foundation of the subsequent processing.

    1) Spectrogram: The spectrogram, as a time-frequency representation of the audio signal, is generated by a short-time Fourier transform (STFT) [38]. We generate the spectrograms with a Hamming window, computing the power spectral density on a dB power scale. We use Hamming windows of size 40 ms with an overlap of 20 ms.
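The spectrogram extraction described above (Hamming window, 40 ms frames, 20 ms overlap, dB power scale) can be sketched in Python with numpy; the function name and the sampling rate are illustrative, not part of the original setup:

```python
import numpy as np

def spectrogram_db(signal, sr, win_ms=40, hop_ms=20):
    """Log-power spectrogram via a Hamming-windowed STFT
    (40 ms windows with a 20 ms hop, as in the text)."""
    win = int(sr * win_ms / 1000)
    hop = int(sr * hop_ms / 1000)
    window = np.hamming(win)
    frames = [signal[i:i + win] * window
              for i in range(0, len(signal) - win + 1, hop)]
    spec = np.abs(np.fft.rfft(frames, axis=1)) ** 2   # power spectrum
    return 10 * np.log10(spec + 1e-10)                # dB power scale

# e.g. 1 s of audio at 44.1 kHz -> one frame every 20 ms
sr = 44100
S = spectrogram_db(np.random.randn(sr), sr)
print(S.shape)  # (49, 883)
```

The returned matrix is then rendered as an image before being fed to the CNNs.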

    2) ‘Bump’ Scalogram: The bump scalogram is generated by the bump wavelet [39] transformation, which is defined in the frequency domain by

    Ψ(sω) = exp(1 − 1/(1 − (sω − μ)²/σ²)),  (μ − σ)/s ≤ ω ≤ (μ + σ)/s,

    and Ψ(sω) = 0 otherwise, where s stands for the scale, μ and σ are two constant parameters, in which σ affects the frequency and time localisation, and Ψ(sω) is the transformed signal.

    Fig. 1. Framework of the proposed approach. First, spectrograms and scalograms (bump and morse) are generated from segmented audio waveforms. Then, one of these is fed into the pre-trained CNNs, in which further features are extracted at a subsequent fully connected layer fc7. Finally, the predictions (predicted labels and probabilities) are obtained by (B)GRNNs with a highway network layer and a softmax layer, with the deep features as the input.

    Fig. 2. The spectrogram and two types of scalograms extracted from the acoustic scenes. All of the images are extracted from the first audio sequence of DCASE 2017’s ‘a0011020.wav’ with the label ‘residential area’.
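As an illustration, the bump wavelet window described above can be evaluated numerically. This is a minimal sketch; the function name and the default values of μ and σ are illustrative assumptions, not values stated in the text:

```python
import numpy as np

def bump_window(s_omega, mu=5.0, sigma=0.6):
    """Bump wavelet in the frequency domain: non-zero only
    where |s*omega - mu| < sigma (mu, sigma constants)."""
    s_omega = np.asarray(s_omega, dtype=float)
    x = (s_omega - mu) / sigma
    out = np.zeros_like(x)
    inside = np.abs(x) < 1
    out[inside] = np.exp(1.0 - 1.0 / (1.0 - x[inside] ** 2))
    return out

# The window peaks at s*omega = mu (value exp(0) = 1) and
# vanishes smoothly outside [mu - sigma, mu + sigma].
print(bump_window([5.0, 4.5, 6.0]))
```

Its compact frequency support is what gives the bump scalogram its localised time-frequency behaviour.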

    3) ‘Morse’ Scalogram: The morse scalogram [40] is generated by the morse wavelet, defined in the frequency domain by

    Ψ_{P,γ}(ω) = u(ω) α_{P,γ} ω^{P²/γ} exp(−ω^γ),

    where u(ω) is the unit step, P is the time-bandwidth product, γ characterises the symmetry, α_{P,γ} stands for a normalising constant, and Ψ_{P,γ}(ω) denotes the morse wavelet signal.

    The three image representations of one instance are shown in Fig. 2. While the STFT focuses on analysing stationary signals and gives a uniform resolution, the wavelet transformation is good at localising transients in non-stationary signals, since it can provide a detailed time-frequency analysis. In our study, the training model is built on the above three representations, and comparisons of them are provided in the following sections.

    B. Pre-Trained Convolutional Neural Networks

    Via transfer learning, pre-trained CNNs are transferred to our ASC task for extracting the deep spectrum features. For the pre-trained CNNs, we choose ‘AlexNet’ [18], ‘VGG16’, and ‘VGG19’ [19], since they have proven successful in a large number of natural image classification tasks, including the ImageNet Challenge (http://www.image-net.org/challenges/LSVRC/). ‘AlexNet’ consists of five convolutional layers with [96, 256, 384, 384, 256] kernels of size [11, 5, 3, 3, 3], and three max-pooling layers. The ‘VGG’ networks have 13 ([2, 2, 3, 3, 3], ‘VGG16’) or 16 ([2, 2, 4, 4, 4], ‘VGG19’) convolutional layers with [64, 128, 256, 512, 512] kernels and five max-pooling layers. All of the convolutional layers in the ‘VGG’ networks use a common kernel size of three. In these three networks, the convolutional and max-pooling layers are followed by three fully connected layers {fc6, fc7, fc8}, and a softmax layer for the 1000 labelled classes of the ImageNet challenge; fc7 is employed to extract deep features with 4096 attributes. More details on the CNNs are given in Table I. We obtain the pre-trained ‘AlexNet’ network from MATLAB R2017a (https://de.mathworks.com/help/nnet/ref/alexnet.html), and ‘VGG16’ and ‘VGG19’ from MatConvNet [41]. As outlined, we exploit the spectrogram and the two types of scalograms as the input to these three CNNs separately, and extract the deep representations from the activations of the second fully connected layer fc7.

    TABLE I. CONFIGURATIONS OF THE CONVOLUTIONAL NEURAL NETWORKS. ‘ALEXNET’, ‘VGG16’, AND ‘VGG19’ ARE USED TO EXTRACT DEEP FEATURES OF THE SPECTROGRAM, ‘BUMP’, AND ‘MORSE’ SCALOGRAMS. ‘CONV’ STANDS FOR THE CONVOLUTIONAL LAYER

    IV. EXPERIMENTS AND RESULTS

    A. Database

    C. (Bidirectional) Gated Recurrent Neural Networks

    As a special type of RNN, GRNNs contain gated recurrent units (GRUs) [27], each of which features an update gate u, a reset gate r, an activation h, and a candidate activation h̃. For each GRU at time t, the update gate u and reset gate r activations are defined by

    u_t = σ(W_u x_t + U_u h_{t−1}),
    r_t = σ(W_r x_t + U_r h_{t−1}),

    where σ is a logistic sigmoid function, W_u, W_r, U_u, and U_r are the weight matrices, and h_{t−1} stands for the activation at the previous time step. At time t, the activation and candidate activation are defined by

    h_t = (1 − u_t) ⊙ h_{t−1} + u_t ⊙ h̃_t,
    h̃_t = tanh(W x_t + U (r_t ⊙ h_{t−1})),

    where W and U are further weight matrices and ⊙ denotes element-wise multiplication.

    As mentioned, our proposed approach is evaluated on the dataset provided by the DCASE 2017 Challenge [29]. The dataset contains 15 classes, which include ‘beach’, ‘bus’, ‘cafe/restaurant’, ‘car’, ‘city centre’, ‘forest path’, ‘grocery store’, ‘home’, ‘library’, ‘metro station’, ‘office’, ‘park’, ‘residential area’, ‘train’, and ‘tram’. As further mentioned above, the organisers split each recording into several independent 10 s segments to increase the task difficulty and the number of instances. We train our model using cross validation on the officially provided 4-fold development set, and evaluate on the official evaluation set. The development set contains 312 segments of audio recordings for each class, and the evaluation set includes 108 segments of audio recordings for each class. Accuracy is used as the final evaluation metric.

    B. Experimental Setup

    Information flows inside the GRU through gating units, similarly to the LSTM, but without separate memory cells. Furthermore, there are no input, forget, and output gates as in the LSTM structure. Rather, there are a reset and an update gate, with overall fewer parameters in a GRU than in an LSTM unit, so that GRNNs usually converge faster than LSTM-RNNs [27]. GRNNs have been observed to be comparable to, and sometimes even better than, LSTM-RNNs in accuracy, as shown in [42]. To gain more temporal information from the extracted deep feature sequences, bidirectional GRNNs (BGRNNs) are an efficient tool to improve the performance of GRNNs (and, of course, similarly for LSTM-type RNNs), as shown in [43], [44]. Therefore, BGRNNs are used in this study, in which context interdependences of features are learnt in both temporal directions [45]. For classification, a highway network layer and a softmax layer follow the (B)GRNNs, as highway networks are often found to be more efficient than fully connected layers for very deep neural networks [46].
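A single GRU time step following the update/reset-gate formulation above can be sketched in numpy; the weights here are random placeholders, and the dimensions (4096-dimensional fc7 features, 120 GRU nodes, 19 segments per clip) follow the setup described in this paper:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(x_t, h_prev, W_u, U_u, W_r, U_r, W, U):
    """One GRU time step: update gate, reset gate, candidate
    activation, and the gated blend of old and new state."""
    u = sigmoid(W_u @ x_t + U_u @ h_prev)          # update gate
    r = sigmoid(W_r @ x_t + U_r @ h_prev)          # reset gate
    h_cand = np.tanh(W @ x_t + U @ (r * h_prev))   # candidate activation
    return (1 - u) * h_prev + u * h_cand           # new activation

rng = np.random.default_rng(0)
d_in, d_hid = 4096, 120                            # fc7 features -> 120 GRU nodes
Ws = [rng.normal(scale=0.01, size=(d_hid, d)) for d in (d_in, d_hid) * 3]
h = np.zeros(d_hid)
for x_t in rng.normal(size=(19, d_in)):            # 19 deep-feature steps per clip
    h = gru_step(x_t, h, *Ws)
print(h.shape)  # (120,)
```

A bidirectional GRNN simply runs a second, independently parameterised pass over the reversed sequence and concatenates the two final states.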

    D. Decision Fusion Strategy

    It was found in recent work that the margin sampling value (MSV) [47] method, a late-fusion method, is effective for fusing trained models [48]. Hence, based on the predictions from the (B)GRNNs for the multiple types of deep features, MSV is applied to improve the performance. For each model j = 1, ..., n, with n the total number of models, its prediction {L_j, p_j} comprises the predicted label L_j and the probability p_j of the corresponding label. The MSV of model j is defined as the margin between its highest and second-highest class probabilities,

    MSV_j = p_j^{(1)} − p_j^{(2)},

    and the final fused prediction is taken from the model with the largest margin.
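Margin-based late fusion of this kind can be sketched in numpy under the assumption that each model outputs a full class-probability vector; the function name is illustrative:

```python
import numpy as np

def msv_fuse(prob_vectors):
    """Late fusion by margin sampling value: pick the prediction of
    the model whose top-two class probabilities are furthest apart."""
    margins, labels = [], []
    for p in prob_vectors:
        top2 = np.sort(p)[-2:]          # second-highest, highest
        margins.append(top2[1] - top2[0])
        labels.append(int(np.argmax(p)))
    best = int(np.argmax(margins))      # most "confident" model
    return labels[best]

# Three models scoring the same 15-class instance
m1 = np.full(15, 1 / 15)                 # completely undecided
m2 = np.array([0.05] * 13 + [0.30, 0.05])
m3 = np.array([0.02] * 14 + [0.72])      # clearest margin -> wins
print(msv_fuse([m1, m2, m3]))  # 14
```

The fused decision thus follows whichever single system is most decisive on that instance, rather than averaging possibly conflicting posteriors.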

    First, we segment each audio clip into a sequence of 19 audio instances of 1000 ms each with a 50% overlap. Then, two types of representations are extracted: hand-crafted features for comparison, and deep image-based features, which have been described in Section III. The hand-crafted features are as follows:

    Two kinds of low-level descriptors (LLDs) are extracted due to their previous success in ASC [29], [49], including Mel-frequency cepstral coefficients (MFCC) 1−14 and logarithmic Mel-frequency bands (MFB) 1−8. Following the feature sets provided in the INTERSPEECH COMPUTATIONAL PARALINGUISTICS CHALLENGE (COMPARE) [50], in total 100 functionals are applied to each LLD, yielding 14 × 100 = 1400 MFCC features and 8 × 100 = 800 log MFB features. The details of the hand-crafted features and the feature extraction tool openSMILE can be found in [3].
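The segmentation step above (1000 ms windows with 50% overlap) yields exactly 19 instances per 10 s DCASE clip; a minimal sketch, with illustrative names:

```python
import numpy as np

def segment(clip, sr, win_ms=1000, overlap=0.5):
    """Split a clip into fixed-length windows with the given overlap."""
    win = int(sr * win_ms / 1000)
    hop = int(win * (1 - overlap))
    return np.stack([clip[i:i + win]
                     for i in range(0, len(clip) - win + 1, hop)])

sr = 44100
clip = np.zeros(10 * sr)        # a 10 s DCASE segment
parts = segment(clip, sr)
print(parts.shape)  # (19, 44100): 19 instances of 1 s each
```

Each of the 19 windows is then turned into a spectrogram or scalogram, giving the 19-step sequences consumed by the (B)GRNNs.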

    These representations are then fed into the (B)GRNNs with 120 and 160 GRU nodes, respectively, with a ‘tanh’ activation, followed by a single highway network layer with a ‘linear’ activation function, which is able to ease gradient-based training of deep networks, and a softmax layer. We implement this network using TensorFlow (https://github.com/tensorflow/tensorflow) and TFLearn (https://github.com/tflearn) with a fixed learning rate of 0.0002 (optimiser ‘rmsprop’) and a batch size of 65, all set empirically. We evaluate the performance of the model at the kth training epoch, k ∈ {20, 30, ..., 120}. Finally, the MSV decision fusion strategy is applied to combine the (B)GRNN models for the final predictions.
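A highway layer, as used after the (B)GRNNs, blends a transformed output with its own input through a learnt transform gate. A minimal numpy sketch, with random placeholder weights rather than the trained model:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def highway(x, W_h, b_h, W_t, b_t):
    """Highway layer: y = T(x) * H(x) + (1 - T(x)) * x, where the
    transform gate T decides how much of the input to carry through.
    H is kept linear, matching the 'linear' activation in the setup."""
    H = W_h @ x + b_h              # linear transform
    T = sigmoid(W_t @ x + b_t)     # transform gate in (0, 1)
    return T * H + (1 - T) * x

rng = np.random.default_rng(1)
d = 320                            # e.g. a concatenated BGRNN output
x = rng.normal(size=d)
y = highway(x,
            rng.normal(scale=0.05, size=(d, d)), np.zeros(d),
            rng.normal(scale=0.05, size=(d, d)), np.full(d, -2.0))
print(y.shape)  # (320,)
```

The negative gate bias initially favours the carry path, which is the usual reason highway layers ease gradient flow in deep networks.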

    C. Results

    We compute the mean accuracy on the 4-fold partitioned development set for evaluation according to the official protocol. Fig. 3 presents the performance of both GRNNs and BGRNNs on different feature sets when stopping at different training epochs. From this we can see that the accuracies of both GRNNs and BGRNNs on MFCC and log MFB features are lower than the baseline. However, the performance of the deep features extracted by pre-trained CNNs is comparable with the baseline result, especially for the representations extracted by ‘VGG16’ and ‘VGG19’ from spectrograms. This indicates the effectiveness of deep image-based features for this task.

    Fig. 3. The performances of GRNNs and BGRNNs on different features. (a) MFCC (MF) and log MFB (lg) features. (b)−(d) Features from the spectrogram and scalograms (bump and morse) extracted by the three CNNs: (b) AlexNet, (c) VGG16, (d) VGG19.

    Table II presents the accuracy of each model for each type of feature. For the development set, the accuracy of each type of feature is reported as the highest over all epochs. For the evaluation set, we choose the epoch number consistent with the development set. We find that the accuracies after decision fusion improve upon those based on a single spectrogram or scalogram image. In the results, the performances of BGRNNs and GRNNs are comparable on the development set, but the accuracies of the BGRNNs are slightly higher than those of the GRNNs on the evaluation set, presumably because the BGRNNs cover the overall information in both the forward and backward time directions. The best performance of 83.4% on the development set is obtained when extracting features from the spectrogram and the bump scalogram with ‘VGG19’ and classifying with GRNNs at epoch 20. This is an improvement of 8.6% over the baseline of the DCASE 2017 challenge (p < 0.001 by a one-tailed z-test). The best result of 64.0% on the evaluation set is also obtained when extracting features from the spectrogram and the bump scalogram with ‘VGG19’, but classifying with BGRNNs at epoch 20. The performance on the evaluation set is also an improvement upon the 61.0% baseline.

    V. DISCUSSION

    The proposed approach improves on the baseline performance for the ASC task given in the DCASE 2017 Challenge, and performs better than (B)GRNNs based on a hand-crafted feature set. The accuracy of (B)GRNNs on deep learnt features from the spectrogram, bump, and morse scalograms outperforms that on MFCC and log MFB features in Fig. 3. The performance of fused (B)GRNNs on deep learnt features is also considerably better than on hand-crafted features in Table II. Hence, the feature extraction method based on CNNs has proven efficient for the ASC task. We also investigate the performance when combining different spectrogram or scalogram representations. In Table II, the bump scalogram is validated as being capable of improving upon the performance of the spectrogram alone.

    Fig. 4 shows the confusion matrix of the best results on the evaluation set. The model performs well on some classes, such as ‘forest path’, ‘home’, and ‘metro station’. Yet, other classes such as ‘library’ and ‘residential area’ are hard to recognise. We attribute this difficulty to background noise, or to waveforms stemming from acoustically similar environments.

    To investigate the performance of each spectrogram or scalogram on the different classes, a performance comparison of the spectrogram and the bump scalogram for the best result on the evaluation set is shown in Table III. We can see that the spectrogram performs better than the bump scalogram for ‘beach’, ‘grocery store’, ‘office’, and ‘park’. However, the bump scalogram is optimal for the ‘bus’, ‘city centre’, ‘home’, and ‘train’ scenes. After fusion, the precision of some classes is improved, such as ‘cafe/restaurant’, ‘metro station’, ‘residential area’, and ‘tram’. Overall, it appears worth using the scalogram to assist the spectrogram in obtaining more accurate predictions.

    TABLE II. PERFORMANCE COMPARISONS ON THE DEVELOPMENT AND THE EVALUATION SET BY GRNNS AND BGRNNS ON HAND-CRAFTED FEATURES (MFCCS (MF) AND LOG MFBS (LG)) AND FEATURES EXTRACTED BY PRE-TRAINED CNNS FROM THE SPECTROGRAM (S), BUMP SCALOGRAM (B), AND MORSE SCALOGRAM (M)

    TABLE III. PERFORMANCE COMPARISONS ON THE EVALUATION SET FROM BEFORE AND AFTER LATE-FUSION OF BGRNNS ON THE FEATURES EXTRACTED FROM THE SPECTROGRAM (S) AND THE BUMP SCALOGRAM (B)

    Fig. 4. Confusion matrix of the best performance of 64.0% on the evaluation set: late fusion of BGRNNs on the features extracted from the spectrogram and the bump scalogram by ‘VGG16’.

    The winning result for the ASC task of the DCASE 2017 challenge is 87.1% on the development set and 83.3% on the evaluation set [51], using a generative adversarial network (GAN) for training set augmentation. There is a significant difference (p < 0.001 by a one-tailed z-test) between this winning DCASE 2017 contribution and the best result reached by the methods proposed herein, which omit data augmentation, as we focus on a comparison of feature representations. We believe that the GAN component in particular, in combination with the method proposed herein, holds promise to lead to an even higher overall result. Hence, it appears highly promising to re-investigate the proposed method in combination with data augmentation before training in future work.

    VI. CONCLUSIONS

    We have proposed an approach using pre-trained convolutional neural networks (CNNs) and (bidirectional) gated recurrent neural networks ((B)GRNNs) on the spectrogram, bump, and morse scalograms of audio clips for the task of acoustic scene classification (ASC). This approach improves the performance on the 4-fold development set of the 2017 IEEE AASP Challenge on Detection and Classification of Acoustic Scenes and Events (DCASE), achieving an accuracy of 83.4% for the ASC task, compared with the baseline of 74.8% of the DCASE challenge (p < 0.001, one-tailed z-test). On the evaluation set, the performance is improved from the baseline of 61.0% to 64.0%. The highest accuracy on the evaluation set is obtained when combining models from both the spectrogram and the scalogram images; therefore, the scalogram appears helpful for improving the performance reached by spectrogram images for the task of ASC. We focussed on the comparison of feature types in this contribution, rather than trying to reach the overall best results by ‘tweaking on all available screws’, as is usually done by challenge entries. Likewise, we did not, for example, consider data augmentation by generative adversarial networks (GANs) or similar topologies, as the DCASE 2017 winning contribution did. In future studies on the task of ASC, we will thus include further optimisation steps such as the named data augmentation [52], [53]. In particular, we also aim to use evolutionary learning to generate adaptive ‘self-shaping’ CNNs automatically, which avoids having to hand-pick architectures in cumbersome optimisation runs.
