
    An Innovative Approach Utilizing Binary-View Transformer for Speech Recognition Task

Computers, Materials & Continua, 2022, Issue 9

Muhammad Babar Kamal, Arfat Ahmad Khan, Faizan Ahmed Khan, Malik Muhammad Ali Shahid, Chitapong Wechtaisong*, Muhammad Daud Kamal, Muhammad Junaid Ali and Peerapong Uthansakul

1 COMSATS University Islamabad, Islamabad Campus, 45550, Pakistan

2 Suranaree University of Technology, Nakhon Ratchasima, 30000, Thailand

3 COMSATS University Islamabad, Lahore Campus, 54000, Pakistan

4 COMSATS University Islamabad, Vehari Campus, 61100, Pakistan

5 National University of Sciences & Technology, Islamabad, 45550, Pakistan

6 Virtual University of Pakistan, Islamabad Campus, 45550, Pakistan

Abstract: Advances in deep learning have greatly improved the performance of speech recognition systems, and most recent systems are based on the Recurrent Neural Network (RNN). The RNN works well on short sequences but suffers from the vanishing gradient problem on long sequences. Transformer networks have neutralized this issue and have shown state-of-the-art results on sequential and speech-related data. Generally, in speech recognition, the input audio is converted into an image using a Mel-spectrogram to illustrate frequencies and intensities. The image is classified by a machine learning mechanism to generate a classification transcript. However, the audio frequencies in the image have low resolution, causing inaccurate predictions. This paper presents a novel end-to-end binary-view transformer-based architecture for speech recognition to cope with the frequency resolution problem. Firstly, the input audio signal is transformed into a 2D image using a Mel-spectrogram. Secondly, modified universal transformers use multi-head attention to derive contextual information and extract different speech-related features. Moreover, a feed-forward neural network is deployed for classification. The proposed system generates robust results on Google's Speech Commands dataset, with an accuracy of 95.16% and minimal loss. The binary-view transformer reduces the risk of over-fitting by deploying a multi-view mechanism to diversify the input data, while multi-head attention captures multiple contexts from the data's feature map.

Keywords: Convolutional neural network; multi-head attention; multi-view; RNN; self-attention; speech recognition; transformer

    1 Introduction

The recent surge of Artificial Intelligence (AI) in modern technology has resulted in the widespread adoption of Human-Computer Interaction (HCI) applications. Big information technology corporations like Google, Apple, Microsoft, and Amazon are relentlessly working to improve the applicability and dynamics of HCI applications using speech recognition algorithms. Recognition systems matter across vast fields, with stakeholders ranging from entertainment and utility applications to critical life-saving appliances; e.g., YouTube [1] and Facebook [2] use speech recognition systems for captioning. Various robust voice-command applications have been proposed for devices that work in areas without internet service and for critical-mission robots [3,4]. Moreover, robust designs of micro-controller-based devices that operate on speech commands have also been proposed in the literature [5]. Apple Siri, Amazon Alexa, Microsoft Cortana, YouTube captions, and Google Assistant [6] deploy speech recognition systems based on these designs. Google and Microsoft [7] use deep neural network-based algorithms that convert sound to text through speech recognition, process the text, and respond accordingly. Typically, deep learning algorithms process 1D data, as audio is recorded and represented as a 1D waveform [8]. The 1D signal's waveform is represented in the sinusoidal time domain. In [9], the authors studied the 2D representation of an audio signal called the spectrogram, where the frequency spectrum is derived in the time-frequency domain through the Fourier transform.

Speech signals contain rich, prominent features such as emotion and dialect. Studies comparing the 1D audio waveform and the 2D spectrogram conclude that the 1D signal lacks the frequency information vital for better speech recognition [10], and that the 2D spectrogram performs better at extracting features for speech recognition. Since a spectrogram gives equal weight to all frequencies, the recognition system cannot properly differentiate [11] between relevant frequencies and noise. Fusing the mel scale with the spectrogram reduces noise and improves speech recognition performance: the mel scale discards noise and amplifies the desired spectrum of frequencies in the 2D spectrogram. The 2D transformation (mel-spectrogram) of the audio signal lets state-of-the-art image recognition algorithms in Neural Networks (NNs) be applied to speech recognition, improving the precision of the system by imitating human speech perception [12]. NN algorithms [13] process raw input data by correlating hidden patterns to recognize similar clusters in the data and classify it, continuously learning and enhancing the recognition system. Recurrent NNs (RNNs) [14-16], Convolutional NNs (CNNs) [17], and attention are commonly used to develop speech recognition systems. An RNN captures sequential predictions of data using recurrence units to predict the next likely scenario. RNN algorithms and their variants, i.e., Long Short-Term Memory (LSTM) and the Gated Recurrent Unit (GRU), allow the machine to process sequential data models such as speech recognition. LSTM has become popular among recurrent networks due to its success in mitigating the vanishing gradient problem by retaining the long-term dependencies of data.
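
As a concrete illustration of this front end, the following is a minimal sketch of converting a 1D waveform into a 2D mel-spectrogram image; it uses the librosa library, and the file name and spectrogram parameters (FFT size, hop length, number of mel bands) are illustrative assumptions rather than the paper's configuration.

```python
# Minimal sketch: 1D waveform -> 2D mel-spectrogram "image".
# "command.wav" and the parameter values below are illustrative assumptions.
import librosa
import numpy as np

# Load a 1D waveform at 16 kHz, matching the dataset described later.
signal, sr = librosa.load("command.wav", sr=16000)

# Convert to a 2D mel-spectrogram: STFT magnitudes mapped onto the mel scale.
mel = librosa.feature.melspectrogram(y=signal, sr=sr, n_fft=512,
                                     hop_length=128, n_mels=64)

# Log-scale the power values, since perceived loudness is logarithmic.
log_mel = librosa.power_to_db(mel, ref=np.max)
print(log_mel.shape)  # (n_mels, n_frames): the image fed to the network
```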

However, LSTM [18] fails to solve the vanishing gradient problem completely due to the complexity of the additional evaluation of memory cells. RNN models are prone to over-fitting because dropout algorithms are difficult to apply to LSTM probabilistic units. The sequential nature of these models is also inconsistent with parallelized processing [19]. RNN models require more resources and time to train due to the linearized nature of the layers and random weight initialization. Many researchers have used CNNs for audio classification, analyzing the visual imagery of audio by convolving multiple filters over the data to extract features for the neural network. A deep CNN convolves multiple layers of filters over the image to extract distinct features, with a depth depending on the number of layers. Deep networks improve the algorithm's ability to capture unique properties, using multiple convolution layers to retrieve higher-level features. The feature map produced by this process enhances the recognition system's accuracy.

However, these studies observe that the deeper layers of convolution tend to assimilate only general, abstract-level information from the input data [20]. A deep CNN model tends to over-fit when little labeled training data is available. Deep convolutional networks are also prone to gradient vanishing/exploding problems as the network deepens, reducing the precision of the recognition model. Therefore, researchers deploy attention mechanisms with an RNN model to obtain long-term dependencies by contextualizing the feature map. The attention model uses probabilistic learning, giving weight to important features via the soft-max probability function. Moreover, attention-based models reduce the vanishing gradient problem by decreasing the number of features, processing only the important and unique features for the recognition system [21]. In [22], the authors introduce one of the attention mechanism's variations, self-attention, to compute a representation of a sequence by relating its different positions. Self-attention allows input sequences to interact with all neighboring values and find contextual and positional attention within the same sequence. In [23], the authors observe that a multi-view approach with a neural network algorithm increases the efficiency of the architecture. The main objective of this paper is to improve existing speech recognition systems by building a precise method that can be implemented in any speech recognition application with a lightweight footprint.

In [24], the authors use the Fourier transform to convert a waveform signal into an alternative representation characterized by sinusoidal functions; the paper uses Fourier-transform infrared spectroscopy for the analysis of biological material. In [25], the Short-Time Fourier Transform (STFT) is used to extract features from an audio signal by slicing the signal into windows and performing a Fourier transform on each window to obtain meaningful information. Deep Learning (DL) [26] models extract intricate structures in data, and back-propagation algorithms indicate which parameters are used for calculating each layer's representation. DL allows multiple processing stages to learn data representations with many levels of abstraction. In [27], the authors elaborate feature extraction in speech, categorizing speech recognition into three stages: first, the audio signal is divided into small chunks; second, the phonemes are extracted and processed; and last, the speech is categorized at the word level. Music detection is discussed in [28], where the authors use a CNN with a mel kernel to separate music content from speech and noise. The mel scale is useful for focusing on a specific range of frequencies and minimizing the effect of noisy and unrelated parts.
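
The windowed-FFT idea behind the STFT described in [25] can be sketched in a few lines of numpy; the 25 ms window and 10 ms hop used below are common choices, not values taken from the paper.

```python
# Minimal numpy sketch of the STFT idea: slice the signal into overlapping
# windows and apply an FFT to each. Window/hop lengths are assumptions.
import numpy as np

def stft(signal, win_len=400, hop=160):  # 25 ms / 10 ms at 16 kHz
    window = np.hanning(win_len)
    frames = []
    for start in range(0, len(signal) - win_len + 1, hop):
        frame = signal[start:start + win_len] * window
        # rfft keeps only the non-negative frequencies of a real signal
        frames.append(np.abs(np.fft.rfft(frame)))
    # rows: frequency bins, columns: time frames -> a spectrogram
    return np.array(frames).T

spec = stft(np.random.randn(16000))  # one second of audio at 16 kHz
print(spec.shape)                    # (201, 98)
```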

In [29], an attention model is used for audio tagging of Google's Audio Set [30]. The authors investigate the Multiple-Instance Learning (MIL) problem for weakly labeled audio set classification by introducing an attention model for probabilistic learning, where attention is used with a fully connected neural network for multi-label audio classification. Multi-head attention is used in [31], where the authors elaborate its ability to extract information from multiple representation subspaces at various positions, as the heads jointly attend to different interpretations within the data. Multi-head attention is useful for obtaining different contexts within the information, which improves the efficiency of the model.

In this paper, we present a novel end-to-end binary-view transformer-based architecture for speech recognition to cope with the frequency resolution problem. Firstly, the input audio signal is transformed into a 2D image using a Mel-spectrogram. Secondly, a multi-view mechanism is used to enhance the frequency resolution in the image. In addition, modified universal transformers use multi-head attention to derive contextual information and extract different speech-related features, and a feed-forward neural network is deployed for classification. The proposed system is discussed in detail in Section 3. The proposed system generates robust results on Google's Speech Commands dataset, with an accuracy of 95.16% and minimal loss. The binary-view transformer reduces the risk of over-fitting by deploying a multi-view mechanism to diversify the input data, while multi-head attention captures multiple contexts from the data's feature map.

The rest of the paper is organized as follows: Section 2 covers speech perception and recognition using AI, and the proposed system is discussed in Section 3. Section 4 describes the experimental steps and testing. Section 5 presents the experimental results and discussion. Finally, Section 6 concludes the research work.

    2 Speech Perception and Recognition Using AI

Perception is the ability to systematically receive information, identify essential data features, and then interpret that information, while recognition is the system's ability to identify the classification of data. To build an AI system for speech recognition, we need input data in the form of an audio signal. After pre-processing, the audio signal progresses to the speech recognition system, and the system's output is a classification transcript of the audio. A microphone records the audio signal with a bit depth of 16 (the recorded time-domain signal can take 2^16 values). Audio is recorded at 16 kilohertz, giving a Nyquist frequency of 8 kilohertz; this covers the range of distinguishable lower frequencies, which the brain can interpret and differentiate for speech, because most frequency changes happen at lower frequencies. The signal in the time domain is complicated to interpret, as the human ear senses the intensity of frequencies. We therefore use a pre-processing step to convert the signal into the frequency domain using the Fourier transform, transforming the time-domain representation of the signal into a time-frequency representation.

The power spectral densities of the audio signal for different bands of frequencies are shown in Fig. 1, where most frequency changes occur in the lower part of the range below the Nyquist frequency. We create a spectrogram by stacking periodograms adjacent to one another over time. The spectrogram is a colored 2D image representation of the audio.

    Figure 1: Periodogram of audio frequencies and the 2D representation of audio signal using spectrogram

For speech recognition, the human brain amplifies some frequencies while nullifying or reducing background noise, giving more importance to the lower band of frequencies. For example, humans can tell the difference between 40 and 100 hertz, but are unable to differentiate between 10,000 and 12,000 hertz. In computing, this is achieved through the mel scale: by applying a mel filter-bank to the frequencies, we can retrieve the lower frequencies efficiently, as shown in Fig. 2 and sketched in the code after it.

Figure 2: Mel filter-banks and the frequencies from linear to logarithmic
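
A minimal sketch of the mel-scale mapping behind such filter-banks is shown below, using the widely used 2595·log10(1 + f/700) variant (the paper does not state which formula it uses); equally spaced mel points crowd toward the low frequencies when converted back to hertz, mirroring Fig. 2.

```python
# Minimal sketch of the standard mel-scale mapping (one common variant).
import numpy as np

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

# Equally spaced points on the mel axis crowd together at low frequencies
# when mapped back to hertz, mirroring human sensitivity to low bands.
mels = np.linspace(hz_to_mel(0), hz_to_mel(8000), 10)
print(np.round(mel_to_hz(mels)))  # dense below ~1 kHz, sparse above
```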

    2.1 Convolutional Neural Network

In the field of machine learning, CNN is one of the most efficient algorithms for image recognition. Since the inception of CNNs, the field of machine learning has been revolutionized and state-of-the-art results have been produced. In a CNN, different filters are convolved over the image to compute essential features using Eqs. (1) and (2), where a filter B convolves over an image A having k rows and columns. Convolution gives us a large pool of features from the data, which is passed to a NN to help classify the data into different classes. Many variants of CNN have been produced over the years that improve the performance of these models, e.g., Inception Net [32], ResNet [33], and MobileNet [34].
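
Since Eqs. (1) and (2) are not legible in this copy, the following numpy sketch shows the standard operation they describe: a k×k filter B sliding over an image A to produce one feature-map value per position (no padding and stride 1 are assumptions).

```python
# Minimal numpy sketch of 2D convolution: filter B slides over image A.
import numpy as np

def conv2d(A, B):
    k = B.shape[0]                       # k x k filter
    H, W = A.shape
    out = np.zeros((H - k + 1, W - k + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            # elementwise product of the filter with one image patch
            out[i, j] = np.sum(A[i:i + k, j:j + k] * B)
    return out

feature_map = conv2d(np.random.randn(64, 64), np.random.randn(3, 3))
print(feature_map.shape)  # (62, 62)
```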

    2.2 Recurrent Neural Network

RNN algorithms allow the machine to process temporal sequences of variable length [14]. This type of algorithm is useful for processing sequential data through sequential modelling, e.g., signal processing, NLP, and speech recognition. RNN models produce a hidden vector as a function of the former state and the next input, as shown in Eq. (3), where input vectors A are sequentially processed by a recurrence function with parameters w at each time-stamp to produce a new state for the model, i.e., h_t = f_w(h_{t-1}, A_t).
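
A minimal sketch of this recurrence, with a tanh cell chosen purely for illustration:

```python
# Minimal sketch of the recurrence in Eq. (3): each time-stamp combines the
# previous hidden state with the next input through shared parameters w.
import numpy as np

def rnn_forward(inputs, W_h, W_x, b):
    h = np.zeros(W_h.shape[0])
    for x in inputs:                        # strictly sequential: no parallelism
        h = np.tanh(W_h @ h + W_x @ x + b)  # h_t = f_w(h_{t-1}, x_t)
    return h

d_in, d_h = 8, 16
rng = np.random.default_rng(0)
h_final = rnn_forward(rng.normal(size=(10, d_in)),
                      rng.normal(size=(d_h, d_h)) * 0.1,
                      rng.normal(size=(d_h, d_in)) * 0.1,
                      np.zeros(d_h))
print(h_final.shape)  # (16,)
```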

Recurrence models process data in a sequential pattern, which prevents parallelization during training. This sequential nature increases the computation time of the model and limits the length of sequences that can be processed, causing the gradient vanishing/exploding problem.

    2.3 Attention Mechanism

Attention is a deep learning mechanism mainly inspired by the natural perception ability of humans: humans receive raw information from the senses and transmit it to the brain [29]. The brain selects the relevant and useful information while ignoring background noise; this process polishes the data, making it easier to perceive. Attention is a weighted probabilistic vector computed with the soft-max function in a neural network, introduced to improve sequential models (LSTMs, RNNs) by capturing essential features in context vectors, as shown in Eq. (4); the attention weight is elaborated in Eq. (5).

The attention mechanism extracts model dependencies while negating the effect of distance between input and output sequences, which improves model performance. Self-attention [35] is a variation of the attention mechanism that allows the vectors to interact with each other to discover the important features, so that more attention can be given to them. Applying attention to sequential models improves accuracy, and state-of-the-art results have been achieved.
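
A minimal numpy sketch of the weighted context computation described by Eqs. (4) and (5), using dot-product scoring as an illustrative choice:

```python
# Minimal sketch: soft-max attention weights over hidden states (Eq. 5)
# and their weighted sum as the context vector (Eq. 4).
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def attention(query, states):
    scores = states @ query   # affinity of each hidden state with the query
    weights = softmax(scores) # Eq. (5): probabilistic attention weights
    return weights @ states   # Eq. (4): weighted context vector

states = np.random.randn(6, 32)  # six hidden states of dimension 32
context = attention(np.random.randn(32), states)
print(context.shape)             # (32,)
```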

    2.4 Transformer

In the transformer architecture, instead of a sequential representation, the positional information of the input data is embedded in a positional vector together with the input vectors, which permits parallelization. The transformer architecture consists of two parts, i.e., encoder layers and decoder layers. In a transformer, the attention mechanism is used for content-based memory retrieval, where the decoder attends to the encoded content and decides which information to extract based on affinity or position. The input data is processed by the transformer in the form of pixel blocks [36], i.e., each row of the image is embedded, and the positional information of the data is encoded by a positional encoder into the input embedding, which is subsequently passed to the transformer for processing.

The positional information is extracted from the data using a positional encoder E, which is added to the input embedding. The input embedding and positional encoder have the same dimension d, so that both can be summed. The positional encoding is computed alternately with the sine function in Eq. (6) and the cosine function in Eq. (7): Eq. (6) is used for odd values and Eq. (7) for even values, as shown in Eq. (8) for an input sequence of length n. The sine and cosine functions create a unique pattern of values for each position.

where p is the position to encode, and f_i are the frequencies for i ranging up to d/2, as shown in Eq. (9).
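
A minimal numpy sketch of this encoding, assuming the standard transformer frequency constant of 10000 (the paper's exact constant is not shown in this copy):

```python
# Minimal sketch of the sinusoidal positional encoding of Eqs. (6)-(9):
# sine for one half of each dimension pair, cosine for the other.
import numpy as np

def positional_encoding(n, d):          # d assumed even
    E = np.zeros((n, d))
    p = np.arange(n)[:, None]           # positions 0..n-1
    i = np.arange(0, d, 2)[None, :]     # dimension pairs
    f = 1.0 / (10000 ** (i / d))        # frequencies f_i
    E[:, 0::2] = np.sin(p * f)          # Eq. (6)
    E[:, 1::2] = np.cos(p * f)          # Eq. (7)
    return E                            # added to the input embeddings

print(positional_encoding(n=50, d=64).shape)  # (50, 64)
```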

In the transformer encoder layer, the embedded input X = {x1, x2, x3, ..., xn} is fed into three fully connected layers to create three embeddings, namely keys, queries, and values; these embeddings are commonly used in search retrieval systems, where a query is mapped against keys associated with search candidates to present the best-matching values. To compute the attention value of input embedding x1 against x2, as shown in Fig. 3 (transformer self-attention): firstly, the weight matrices Q, K, and V are randomly initialized with the same dimension as the input embedding. The input x1 is matrix-multiplied with Q to produce the query embedding Qe, and the embedding x2 is matrix-multiplied with K to produce the key embedding Ke; the dot product of the resulting matrices gives the weighted score matrix Z. The scores Z are then scaled down, as shown in Eq. (10), for a stable gradient, where dKe is the dimension of the keys embedding.

The softmax function in Eq. (11) is applied to Zs = {z1, z2, z3, ..., zn} to calculate the attention weights, giving probability values between zero and one. As shown in Fig. 3, the input embedding matrix is multiplied with V to produce the value embedding Ve.

Figure 3: Transformer: encoding-layer self-attention, the transformer layer, and the model

The Zs are multiplied with the Ve. This process is repeated for all neighboring inputs, and the resulting Ve are concatenated. The functionality is elaborated in Eq. (12), where Ke^T is the transpose of the keys embedding. Self-attention thus produces weighted embeddings for each input.
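
The whole Q/K/V pipeline of Eqs. (10)-(12) can be sketched compactly in numpy; the random initialization of the weight matrices follows the description above, while the dimensions are illustrative:

```python
# Minimal sketch of Eqs. (10)-(12): scaled scores, soft-max weights, and
# the weighted sum of value embeddings.
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def self_attention(X, Q, K, V):
    Qe, Ke, Ve = X @ Q, X @ K, X @ V       # query/key/value embeddings
    Z = Qe @ Ke.T / np.sqrt(Ke.shape[-1])  # Eq. (10): scaled score matrix
    Zs = softmax(Z)                        # Eq. (11): attention weights
    return Zs @ Ve                         # Eq. (12): weighted embeddings

d = 64
rng = np.random.default_rng(1)
X = rng.normal(size=(10, d))               # ten embedded inputs
out = self_attention(X, *(rng.normal(size=(d, d)) / np.sqrt(d) for _ in range(3)))
print(out.shape)                           # (10, 64)
```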

    3 Proposed System and Architecture

The architecture proposed in this paper is a novel binary-view transformer architecture, an end-to-end model for a speech recognition system, inspired by the human perception of speech and articulated by carefully studying human physiology. The architecture consists of two primary modules, i.e., (i) a pre-processing and feature extraction module and (ii) a classification module.

Three convolution layers are applied for feature extraction on both inputs. The filter size is 3×3, and the numbers of filters are 32, 64, and 1, respectively. After each convolution layer, batch normalization is applied, followed by an activation function. The two inputs are then concatenated to combine the extracted features in the multi-view, i.e., binary-view, model.

Our system incorporates a modified universal transformer [19,22], where multi-head self-attention is used with four heads, capturing four different contexts at the same time. The depth of the transformer is six, i.e., six encoding and six decoding transformer layers are implemented. The transformer is tuned with a 25 percent dropout after each layer, and the high-performance Gaussian Error Linear Unit [37] activation function is used. The Adaptive Computation Time algorithm is then used to allow the neural network to determine the number of computation steps between receiving inputs and computing outputs. The resultant vectors proceed to global average pooling [38], where the mean value of every feature map is computed, and soft-max then determines the class probabilities. Lastly, the feature map is passed to a dense (fully connected) layer of 128 nodes and subsequently to another dense layer with as many nodes as desired classes, where the input is classified into its respective class. It is important to note that such classification is vital considering that internet data traffic is increasing with every passing day [39-42]. The working of the system is shown in Algorithm 1.
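
A minimal Keras sketch of the binary-view convolutional front end described above (two inputs, three 3×3 convolution layers with 32, 64, and 1 filters, batch normalization and an activation after each, then concatenation); the input shape, "same" padding, and ReLU activation are assumptions, and the transformer stack and classifier that follow are omitted here:

```python
# Minimal sketch of the binary-view front end: two spectrogram inputs,
# three conv+batch-norm+activation blocks each, then concatenation.
import tensorflow as tf
from tensorflow.keras import layers

def conv_branch(x):
    for filters in (32, 64, 1):                       # filter counts from the text
        x = layers.Conv2D(filters, (3, 3), padding="same")(x)  # padding assumed
        x = layers.BatchNormalization()(x)
        x = layers.Activation("relu")(x)              # activation choice assumed
    return x

view_a = tf.keras.Input(shape=(64, 126, 1))  # mel bins x frames x channel (assumed)
view_b = tf.keras.Input(shape=(64, 126, 1))
merged = layers.Concatenate()([conv_branch(view_a), conv_branch(view_b)])
model = tf.keras.Model([view_a, view_b], merged)
model.summary()
```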

    4 Training and Experimentation

To train the proposed model, we use Google's Speech Commands dataset [43,44] created by Google Brain, which contains speech audio files in WAV format, with a total of 105,829 command utterance files. The audio files have a length of 1 s and are divided into 35 word classes. The audio was recorded as a 16-bit mono channel, and the command files were collected from 2,618 different speakers covering a range of dialects.
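
For reference, a minimal sketch of obtaining this dataset with torchaudio (one of several ways to load it; the authors' own data pipeline is not shown in the paper):

```python
# Minimal sketch: download and inspect the Speech Commands dataset.
import torchaudio

dataset = torchaudio.datasets.SPEECHCOMMANDS(root=".", download=True)
waveform, sample_rate, label, speaker_id, utterance_number = dataset[0]
print(sample_rate, label, waveform.shape)  # 16000, a word label, [1, 16000]
```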

The tool used for training the architecture is Google's cloud service for machine learning, Google Colab, which provides a Jupyter-notebook environment and a Tesla K80 Graphics Processing Unit (GPU) with 12 GB of GPU memory.

We trained different architectures for speech recognition over the 35 classes, including our binary-view transformer model and the models introduced in [3], i.e., the LSTM and attention-based recurrent convolutional architectures. We also experimented with the well-known ResNet and Inception Net convolutional architectures, modifying our model by replacing the transformer with ResNet (Fig. 4) and Inception Net (Fig. 6) alongside the proposed architecture (Fig. 5). We then computed and compared their results.

Figure 4: Binary-view ResNet

Figure 5: System architecture of the binary-view transformer, which captures multiple feature contexts of each class, making recognition more robust

Figure 6: Binary-view Inception Net

    5 Results and Discussion

Initially, the experiments of paper [3] were replicated, including the LSTM model and the attention-based recurrent convolutional network. The purpose of these experiments is to compare and demonstrate the efficiency and shortcomings of different neural network models for speech recognition. In terms of accuracy, we improved the validation accuracy by gradually decreasing the learning rate over epochs, increasing the number of filters in the convolutions, and introducing batch normalization. The LSTM model [3] reaches an accuracy of 93.9%, and the attention-based recurrent convolutional network [3] reaches 94.2%. The transformer architecture without multiple inputs gives 94.24% accuracy. The binary-view ResNet model, binary-view Inception Net model, binary-view convolutional model, and binary-view transformer model were executed on the dataset, where the validation accuracies were 94.91%, 94.74%, 95.05%, and 95.16%, respectively, as shown in Fig. 7. Moreover, the proposed transformer model produced state-of-the-art results with a minimal number of parameters, i.e., 375,787. Fig. 8 shows the comparison of training and validation accuracies.


In terms of loss, the binary-view transformer model's validation loss is comparatively lower at 0.191; the single-input transformer model produces a loss of 0.227. The binary-view ResNet model, binary-view Inception Net model, binary-view convolutional model, and attention-based recurrent convolutional network losses were 0.194, 0.192, 0.21, and 0.237, respectively, as can be seen in Fig. 9. The decline in loss exhibits better performance of the architecture and a lower chance of the model over-fitting, with the aim of eradicating the gradient vanishing/exploding problem.

Table 1: Results comparison of the proposed approach with existing studies

Model                                            Validation accuracy (%)   Validation loss
LSTM [3]                                         93.9                      -
Attention-based recurrent convolutional [3]      94.2                      0.237
Transformer (single view)                        94.24                     0.227
Binary-view ResNet                               94.91                     0.194
Binary-view Inception Net                        94.74                     0.192
Binary-view convolutional                        95.05                     0.21
Binary-view transformer (proposed)               95.16                     0.191

Figure 7: Validation accuracies of speech recognition models

Figure 8: Training and validation accuracies of implemented models

Figure 9: Validation loss of speech recognition models

    6 Conclusions

This research aimed to improve speech recognition systems; to that end, we analyzed human physiology for speech perception. The binary-view transformer architecture produced state-of-the-art results on Google's Speech Commands dataset [43,44]. Three aspects of recognition models, i.e., validation accuracy, precision, and loss, were considered to determine the efficiency of the binary-view transformer architecture. By introducing a binary-view mechanism, similar data from different sources were processed, and the attention mechanism within the transformer increased efficiency, achieving the best validation accuracy of 95.16%. The proposed model decreased the eventuality of the gradient vanishing/exploding problem by processing long-term dependencies. The confusion matrix showed better precision for the binary-view transformer architecture compared to the other models, since the transformer uses a multi-head attention mechanism that captures more contexts of the same data, which improves model precision and diminishes the probability of over-fitting. The better precision on Google's Speech Commands dataset shows that our model performs well across different dialects, because speech from over 2,000 speakers was precisely recognized. As shown in Tab. 1, our model exhibited a lower loss of 0.191, compared to 0.237, 0.194, 0.192, and 0.21 for the attention-based recurrent convolutional network [3], the binary-view ResNet model, the binary-view Inception Net model, and the binary-view convolutional model, respectively. The binary-view transformer architecture has a lightweight footprint of 375,787 trainable parameters, allowing it to run locally on small systems.

Funding Statement: This research was supported by Suranaree University of Technology, Thailand, Grant Number: BRO7-709-62-12-03.

Conflicts of Interest: The authors declare that they have no conflicts of interest to report regarding the present study.
