
    Behavior recognition based on the fusion of 3D-BN-VGG and LSTM network

2020-11-27 09:17:08
High Technology Letters, 2020, Issue 4

Wu Jin (吴进), Min Yu, Shi Qianwen, Zhang Weihua, Zhao Bo

(School of Electronic Engineering, Xi'an University of Posts and Telecommunications, Xi'an 710121, P.R.China)

    Abstract

    Key words: behavior recognition, deep learning, 3 dimensional batch normalization visual geometry group (3D-BN-VGG), long short-term memory (LSTM) network

    0 Introduction

Video-based behavior recognition is an important application in the field of computer vision. The study of behavior recognition began in 1973, when the Swedish psychologist Johansson[1] proposed the moving light displays (MLD) model to describe human motion. Behavior recognition is widely applied in intelligent security monitoring, unmanned vehicles, industrial automation and other fields[2], as well as in sports training and medicine[3,4].

Deep learning[5] has shown extraordinary performance in computer vision and machine learning in recent years. In 2012, Krizhevsky and Hinton of the University of Toronto won the ImageNet image classification challenge with a deep convolutional neural network (CNN), surpassing the previous best results by a large margin[6]. Furthermore, the long short-term memory (LSTM) network[7] has also achieved good results in the prediction of sequence data.

In addition, deep learning has produced many research results in behavior recognition. Ji et al.[8] proposed 3D convolutional neural networks (3D-CNN) for human behavior recognition in 2013 and verified the algorithm on the UCF-101[9] data set, achieving a precision of 85.2%. Later, Simonyan et al.[10] implemented a two-stream behavior recognition method that achieved 88% accuracy on UCF-101. More recent work[11] builds on the two-stream method by fusing the results of multiple frames of the same video, uses the deeper Inception[12] network as the classification network, and adds the batch normalization (BN) algorithm[13], achieving 69.4% accuracy on HMDB-51[14] and 94% on UCF-101. Besides using CNNs directly, recurrent neural networks (RNN)[15] have been added in subsequent studies, such as the long-term recurrent convolutional network (LRCN) proposed by Donahue et al.[16], which achieved 82.92% accuracy on UCF-101. Gammulle et al.[17] implemented a deep fusion framework combining two-stream networks and LSTM, achieving 94% and 69% accuracy on the UCF-11 and j-HMDB data sets respectively.

It is therefore significant to employ deep learning for behavior recognition. This work takes the optimization of the deep learning network structure as its starting point and designs and implements a behavior recognition method based on the fusion of 3D-BN-VGG and LSTM networks. To improve recognition accuracy, the previous 3D-convolution-based behavior recognition network structure is improved, including the stacking mode of the convolutional layers, the size and number of convolution kernels, and the choice of pooling layers. At the same time, several regularization methods and deep learning techniques that have become widely used in image recognition in recent years are introduced.

    1 Network structure and algorithm design

The output of a CNN is usually a two-dimensional convolutional feature map, while the input of an LSTM network is a one-dimensional feature vector. Therefore, to combine a CNN with an LSTM network, the CNN's output feature map must be vectorized. In Ref.[16], a 2D convolutional neural network (AlexNet) is used as the feature extractor for spatial-domain features. Independent consecutive video frames are fed into the CNN, a feature vector is extracted for each frame, and these vectors are then used as the input of the LSTM network to extract time-domain features.

Firstly, the network structure improves on Ref.[16]. The pre-network is changed to 3D-BN-VGG, which is pre-trained so that training is accelerated when the loss function and gradients are shared with the LSTM network. One advantage of 3D-CNN is that time-domain features are extracted before the feature map is sent to the LSTM network, which reduces redundant information in the final feature vectors. Another advantage is that 3D-CNN can take multiple video sequences as input at once. For a 2D convolutional neural network, the input is four-dimensional: the length, width and number of channels of the image, plus the batch size. In Ref.[16], the batch-size dimension is actually used as the time dimension of the image sequence, so only one video sequence can be processed at a time during training and testing. The 3D network's input is five-dimensional, so it can process multiple video sequences simultaneously, which is faster and more efficient. Moreover, since the batch-size dimension no longer serves as the time dimension, 3D-BN-VGG and the LSTM network produce a single output per sequence with a consistent classification result, so there is no need to average multiple classification results: the softmax function can perform the classification directly.

Secondly, the input of the LSTM network is a one-dimensional vector, so the feature map output by 3D-BN-VGG must be flattened. The feature map of each video frame is flattened into one dimension, but instead of being connected through a fully connected layer, the per-frame vectors are connected to the LSTM network as its time-step dimension. This design has two advantages. First, removing the fully connected layer decreases the number of parameters, which improves the generalization ability of the network and reduces the risk of over-fitting. It also removes the fully connected layer's filtering of features: all time-domain and spatial-domain features are sent to the LSTM network, and feature filtering is handled by the LSTM itself, which reduces the loss of potentially useful features. Second, with 3D-BN-VGG as the pre-network, the size of the time dimension is determined by the length of the input sequence, whereas a fully connected layer fixes the input size. After removing the fully connected layer, the network adapts to the input length and can classify video sequences of any length.
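The per-frame flattening described above can be sketched in NumPy. The shapes here are illustrative assumptions; 2 048 matches the feature-vector length that the paper later reports for the LSTM input.

```python
import numpy as np

# Hypothetical 3D-CNN output for a batch of videos:
# (batch, time, height, width, channels), values are random placeholders.
batch, t, h, w, c = 2, 32, 4, 4, 128
feature_maps = np.random.rand(batch, t, h, w, c).astype(np.float32)

# Flatten each frame's feature map into one vector, keeping the time axis
# as the LSTM time-step dimension (no fully connected layer in between).
lstm_input = feature_maps.reshape(batch, t, h * w * c)

print(lstm_input.shape)  # (2, 32, 2048)
```

Because the reshape touches only the spatial axes, the time axis survives intact and any sequence length can be fed to the LSTM.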

    1.1 3D-BN-VGG network structure

The 3D-BN-VGG network is an improvement on the basic 3D-CNN structure. It is built by stacking multiple 3D-VGG-Blocks[18], as shown in Fig.1.

    Fig.1 3D-VGG-Block

A VGG-Block connects two convolutional layers whose kernels are 3×3 in size. The sliding step of the convolution window is 1, extracting features from the neighborhood above, below, left and right of each pixel. Choosing a 3×3 kernel has two advantages. First, a 3×3 kernel extracts the local features of a small region well: the receptive field of a single 3×3 kernel is the 3×3 neighborhood centered on a pixel, while three stacked 3×3 kernels have a 7×7 receptive field, capturing the features of a larger neighborhood. Second, the parameters of three stacked 3×3 kernels are 3×(9×C), where C is the number of channels (i.e., the number of convolution kernels in a layer), while a single 7×7 kernel covering the same receptive field has 7²×C = 49×C parameters. Replacing large kernels with stacked small kernels therefore increases the network depth while greatly reducing the number of parameters for the same receptive field, which improves the recognition rate.
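The parameter comparison can be checked numerically under the paper's simplified counting (weights per output channel, single input channel, biases ignored); the channel count C = 64 below is only an illustrative assumption.

```python
# Parameter count for three stacked 3x3 layers vs. one 7x7 layer,
# using the paper's simplified per-channel counting.
def stacked_3x3_params(c):
    return 3 * (9 * c)   # three 3x3 layers, 9 weights each, C kernels

def single_7x7_params(c):
    return 49 * c        # one 7x7 layer with the same 7x7 receptive field

c = 64                   # illustrative number of kernels per layer
print(stacked_3x3_params(c), single_7x7_params(c))  # 1728 3136
```

For any C, 27×C < 49×C, so the stacked small kernels are always cheaper while being three layers deep.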

3D-VGG-Block uses 3D convolutional layers to extract features from the input data. The convolution kernel size is 3×3×3, the sliding step is 1, and a padding of 1 is applied so that the convolution can be computed at the image boundary. A 2×2×1 pooling size is used so that the time-domain features are not down-sampled, and ReLU is used as the activation function.

    1.2 Accelerated training of batch normalization algorithm

An important difficulty of deep learning is that training the network is very hard. To speed up training, the BN algorithm is added to the original VGG-Block: a BN layer is inserted after each convolutional layer. The output feature map of the convolutional layer is normalized by the BN layer and then passed to the ReLU activation function for the nonlinear computation.

The core idea of the BN algorithm is to treat the input of each hidden layer like the original image input. Since the parameters of the preceding layer are continuously updated during gradient iteration, the distribution of its output keeps changing. By normalizing each layer's output in the same way the input data is normalized, the performance of the neural network can be improved. The BN algorithm is given by Eq.(1).

x̂i = (xi − μB) / √(σB² + ε)        (1)

where xi is an input activation, μB and σB² are the mean and variance over the mini-batch, and ε is a small constant for numerical stability.

Through Eq.(1), the input of a layer of neurons is normalized to a standard normal distribution with mean 0 and variance 1, achieving whitening. In this way, the input of the nonlinear activation function mostly falls in its near-linear central region, which keeps the gradient from shrinking too quickly and thus accelerates training.

Two learnable parameters γi and βi are also introduced in the BN algorithm. Because they can be learned and adjusted, the activation values can be adaptively rescaled, improving the expressive ability of the network without degrading its performance. The two parameters are used as in Eq.(2).

yi = γi x̂i + βi        (2)
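The two steps, whitening per Eq.(1) and the learnable scale-and-shift per Eq.(2), can be sketched together in NumPy; the batch size and feature width below are arbitrary toy values.

```python
import numpy as np

def batch_norm(x, gamma, beta, eps=1e-5):
    """Whiten per Eq.(1), then scale and shift with gamma and beta per Eq.(2)."""
    mu = x.mean(axis=0)                    # mini-batch mean
    var = x.var(axis=0)                    # mini-batch variance
    x_hat = (x - mu) / np.sqrt(var + eps)  # zero mean, unit variance
    return gamma * x_hat + beta            # learnable rescaling

rng = np.random.default_rng(0)
x = rng.normal(3.0, 5.0, size=(128, 10))   # toy pre-activations, shifted and scaled
y = batch_norm(x, gamma=np.ones(10), beta=np.zeros(10))
print(np.abs(y.mean(axis=0)).max() < 1e-6, np.abs(y.std(axis=0) - 1).max() < 1e-3)
```

With gamma = 1 and beta = 0 the output is simply the whitened activations; during training gamma and beta are updated by gradient descent so the network can undo the normalization where that helps expressiveness.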

    1.3 3D-BN-VGG network layer structure

The 3D-BN-VGG network structure adopts 3D-VGG-Blocks plus the BN algorithm. The network has a total of 24 hidden layers with parameters, including 10 convolutional layers and 2 fully connected layers. The output of the fully connected layers is normalized by BN layers, passed through ReLU activations to obtain nonlinear features, and finally classified by a softmax output layer, which gives the probability of each class for the input video sequence. Dropout layers are added to the network structure to reduce the risk of over-fitting; the Dropout rates at the 3 positions in the network are 0.25, 0.25 and 0.5 respectively. Experiments determined the optimal input video sequence size to be 32×64×64. The input layer of the network uses 5-dimensional tensors: the batch size, the length of the video sequence, the length and width of the video frames, and the number of channels.

(1) Fig.2(a) is the structural diagram of the first 3D-BN-VGG-Block, which includes 2 convolutional layers. The input of the first layer is the original video sequence of size None×32×64×64×3, where None is the batch size, which can be adjusted according to the performance of the experimental equipment during training; here it is set to 32. 64×64 is the resolution of the video frames after resizing, and 3 is the number of channels of the original images. The sliding step of the first convolutional layer is 1 and the padding is 1, so along each spatial dimension the output size is (64 + 2×1 − 3)/1 + 1 = 64 (and likewise 32 in the time dimension), i.e., the feature map keeps the size 32×64×64. The number of feature maps is 32, the same as the number of convolution kernels, so the output tensor is None×32×64×64×32. This tensor enters the BN layer 'batch_normalization_1', which only normalizes the data, so the tensor size is unchanged. The ReLU activation layer 'ReLU_1' likewise only performs a nonlinear transformation, leaving the size unchanged. The second convolutional layer has the same parameters as the first and performs the same operation. The input tensor of the final 3D pooling layer, named 'max_pooling3d_1', is None×32×64×64×32. The pooling size is 2×2×2, which takes the maximum value over each window of the feature maps, so the maximum local nonlinear response is used as the output feature value. Experiments have shown that max pooling works best in convolutional neural networks. Along each dimension the pooling output size is (32 − 2)/2 + 1 = 16 and (64 − 2)/2 + 1 = 32, giving feature maps of size 16×32×32. In addition, to reduce the risk of over-fitting, a Dropout layer with a drop probability of 0.25 is added after 'max_pooling3d_1'.
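The two size formulas used above can be written as small helpers and checked against the first block's dimensions:

```python
def conv_out(size, kernel=3, stride=1, pad=1):
    """Convolution output length along one axis: (size + 2*pad - kernel) // stride + 1."""
    return (size + 2 * pad - kernel) // stride + 1

def pool_out(size, window=2, stride=2):
    """Max-pooling output length along one axis: (size - window) // stride + 1."""
    return (size - window) // stride + 1

# First block: 3x3x3 convolutions with padding 1 keep 32x64x64 unchanged,
# while the 2x2x2 pooling halves every dimension.
print([conv_out(s) for s in (32, 64, 64)])  # [32, 64, 64]
print([pool_out(s) for s in (32, 64, 64)])  # [16, 32, 32]
```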

(2) The structure of the second block is similar to the first; the difference is that both of its convolutional layers have 64 convolution kernels.

(3) Fig.2(b) is the structural diagram of the third block. An additional convolutional layer is included to obtain feature representations at a higher level of abstraction, increasing the receptive field so that the output features of the last block are extracted at a larger scale. The third block therefore has 3 convolutional layers in total; otherwise its structure is similar to the previous two. Meanwhile, to obtain more varied feature representations, the number of convolution kernels is again doubled, to 128. After the 3 convolutional layers, a BN layer and an activation layer, the input tensor of the pooling layer is None×8×16×16×128. After dimension reduction by the 2×2×2 pooling layer, the feature maps are reduced to 4×8×8 while their number remains 128, so the output tensor is None×4×8×8×128.

(4) The fourth block is identical to the third. Its output tensor is None×2×4×4×128.

(5) Fig.2(c) is the structural diagram of the fully connected layers. The feature vector composed of all the features extracted by the preceding convolutional layers is input into the fully connected part, where two fully connected layers reduce its dimension and further refine the features.

    (a) The first structure

    (b) The third structure

    (c) Fully connected layer

    1.4 Implementation of fusion network based on Keras framework

The fusion network is implemented on the basis of the pre-trained 3D-BN-VGG. A new reshape layer and an LSTM layer are added, and the final softmax output layer is unchanged. The reshape layer separates the output feature vector of each video frame into its own dimension, which serves as the time-step dimension of the LSTM. To improve classification accuracy, the LSTM processes sequence features that 3D-BN-VGG cannot extract. The fusion structure is shown in Fig.3, and the newly added layers are shown in Fig.4. The input of the 'flatten_1' layer is the output of 3D-BN-VGG. Through this layer, the output feature vector of each video frame is separated into one dimension, which becomes the time-step length of the LSTM; the data length of each LSTM input is 2 048, the length of the feature vector output by the convolutional network. The output feature length of the LSTM is 1 024. This vector of length 1 024 is sent to the 'dense_1' layer and classified by the softmax function. To avoid losing useful information during down-sampling, the sampling window of the pooling layers in the pre-trained 3D-BN-VGG is changed from 2×2×2 to 2×2×1. In this way, the pooling layers sample only in the 2-dimensional spatial domain, and the sequence features of the time domain are completely preserved. Since the parameters of the convolutional layers are unchanged, this does not affect the time-domain feature extraction of 3D-BN-VGG.
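The effect of the 2×2×1 pooling window, halving only the spatial dimensions while preserving the time axis, can be sketched in NumPy for a single video sequence (shapes are illustrative assumptions, and even spatial sizes are assumed):

```python
import numpy as np

def spatial_max_pool(x):
    """2x2x1 max pooling: halve height and width, keep the time axis intact.
    x: (time, height, width, channels) with even height and width."""
    t, h, w, c = x.shape
    # Group pixels into non-overlapping 2x2 windows, then take the max per window.
    return x.reshape(t, h // 2, 2, w // 2, 2, c).max(axis=(2, 4))

x = np.random.rand(32, 64, 64, 3)   # one video sequence of feature maps
y = spatial_max_pool(x)
print(y.shape)  # (32, 32, 32, 3)
```

All 32 time steps survive the pooling, which is exactly why the sequence features remain available to the LSTM.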

    Fig.3 Fusion network

    Fig.4 Keras framework structure of the fusion network

    By describing each module of the network structure, the entire framework of the fusion network is obtained, as shown in Fig.5.

    Fig.5 The structure of entire framework

    2 Experimental results and analysis

    2.1 Experimental hardware and software environment

The experimental environment used for network training is shown in Table 1. The Keras framework is used to implement the network structure and conduct training and testing; Ubuntu is the experimental system; NVIDIA GeForce GTX1080Ti GPUs serve as the computing devices for the training and testing phases. Since Keras' back-end TensorFlow[18] offers excellent data-parallel acceleration across multiple graphics cards, two GTX1080Ti GPUs are used to speed up training. In addition, a large amount of training data must be read repeatedly during training, so the training data is stored on a solid-state drive to accelerate the process.

    Table 1 Experimental hardware and software environment

    2.2 Data preprocessing

UCF-101 is currently the largest public behavior recognition data set. Its video clips are collected from the YouTube video website and cover 101 categories of human behavior in 13 320 clips. All the videos come from real-life scenes, making it the most challenging behavior recognition data set at present. Each class of UCF-101 contains about 130 video clips on average, with lengths mostly between 5 s and 10 s. Each video contains a complete human behavior, and the quality of the data is good. The resolution of all clips is normalized to 320×240. The HMDB-51 data set includes 51 categories of human behavior and 6 849 video clips. Compared with UCF-101 it contains fewer videos, with clip lengths concentrated between 1 s and 5 s. An overview of the data sets is shown in Fig.6.

Both UCF-101 and HMDB-51 are video-sequence data sets. Using the video sequences directly as input is not feasible, so each video must first be parsed into individual image frames.

Since the videos are long, about 100 frames on average, it is not possible to feed entire video sequences into the network for training. A video-frame selection strategy with a random starting frame is adopted: after a video sequence is selected as training data, its length N is obtained from the number of images. The random number generation function of NumPy[19] is then used to generate a random number R in the range 0 to N−32, which is used as the starting frame, and frames R to R+31 of the video are selected as training data. This generates many different training samples and thus augments the data. Each video sequence can generate N−32 training samples, and since these samples start at different frames, the risk of over-fitting from training on the same data multiple times is reduced. Experiments have shown that this strategy increases accuracy by 2%. The data in HMDB-51 is processed with the same strategy.
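The random-start-frame selection can be sketched as follows; the 100-frame toy "video" and the helper name are illustrative assumptions.

```python
import numpy as np

def sample_clip(frames, clip_len=32, rng=None):
    """Select a random 32-frame window: start frame R in [0, N-32],
    frames R .. R+31 become one training sample."""
    rng = rng or np.random.default_rng()
    n = len(frames)
    r = int(rng.integers(0, n - clip_len + 1))  # high bound is exclusive
    return frames[r:r + clip_len]

video = list(range(100))                        # a toy 100-frame "video"
clip = sample_clip(video, rng=np.random.default_rng(0))
print(len(clip))  # 32
```

Each call can return a different window of the same video, which is the source of the data augmentation described above.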

Fig.6 Overview of the data sets: (a) UCF-101; (b) HMDB-51

    2.3 Fusion network training process

In the experimental stage, the UCF-101 data set is divided into 3 parts: 9 624 clips for training, 1 896 for validation and 1 800 for testing. For HMDB-51, 4 794 clips are used for training, 1 000 for validation and 1 055 for testing. The number of training iterations is set to 40 000 after multiple trials. Since BN is used to accelerate training, the initial learning rate is set to 0.1, with a learning rate decay of 1e-6. Momentum is added to the SGD algorithm for parameter updating and is set to 0.9 during training. The trained 3D-BN-VGG parameters are used to initialize the pre-network in the fusion network training, i.e., transfer learning is used. Fig.7(a) is the accuracy curve and Fig.7(b) is the loss function curve.
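The SGD-with-momentum update used in training (learning rate 0.1, momentum 0.9, as stated above) can be written out in NumPy; the toy weight and gradient values are illustrative.

```python
import numpy as np

def sgd_momentum_step(w, grad, velocity, lr=0.1, momentum=0.9):
    """One SGD-with-momentum update: the velocity accumulates past gradients,
    then the weights move along the velocity."""
    velocity = momentum * velocity - lr * grad
    return w + velocity, velocity

w = np.array([1.0, -2.0])           # toy weights
v = np.zeros_like(w)                # velocity starts at zero
w, v = sgd_momentum_step(w, grad=np.array([0.5, -0.5]), velocity=v)
print(w)  # [ 0.95 -1.95]
```

On the first step the velocity is just −lr·grad; on later steps the momentum term smooths the update direction, which is what stabilizes training at the relatively large initial learning rate.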

    (a) Accuracy curve

    (b) Loss function curve

In Fig.7, when 3D-BN-VGG is trained alone, the network needs 130 000 iterations, while the fusion network needs only 40 000 iterations. The fusion network trained with transfer learning converges faster, especially in the early stage of training: its accuracy reaches 60% after only 6 000 iterations.

    2.4 Test indicators

As shown in Fig.8, the test results of the fusion network are displayed directly for each class. The lowest accuracy among all classes is 'WalkingWithDog' at 69.79%, while 'ApplyLipstick', 'BlowingCandle', 'BoxingSpeedBag', 'CricketBowling', 'FrisbeeCatch' and other classes whose actions are relatively simple reach 100%. The per-class recognition results are analyzed to find the reasons for these differences for further study. Although factors such as the complexity of each action and the similarity between action and background lead to differences in recognition rate, the accuracy differences between classes are small. Table 2 and Table 3 give the overall accuracy on the data sets, the average per-class accuracy, and the variance of the per-class accuracy.

    (a) The results in part I

    (b) The results in part II

    (c) The results in part III

    Table 2 Accuracy statistics of 3D-BN-VGG

    Table 3 Accuracy statistics of fusion networks

According to the statistics of the fusion network's accuracy, the variance of the accuracy across categories is small, so the fusion network's recognition of different types of behavior is relatively stable.

    2.5 Choice of decision strategy

All tests use every 32 frames as one input. For example, a sample video sequence of 128 frames yields 4 inputs and 4 classification results, each counted independently in the accuracy statistics. This has a certain impact on the accuracy: for instance, if one of the 4 classification results is wrong while the other 3 are correct, the accuracy for that sample is only 75%. To avoid inconsistency among multiple test results of the same sample, an improved decision strategy is proposed for the decision stage. Its basic idea is to fuse the multiple test results of the same sample.

When the results are fused, the softmax output probabilities of the multiple inputs of the same sample are combined to obtain a consistent classification result. The experiment directly averages the softmax output vectors of the multiple inputs of the same test sample, as shown in Eq.(3).

p = (1/n) ∑ pi , i = 1, …, n        (3)

where pi is the softmax output vector of the i-th input of the sample and n is the number of inputs.
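The decision fusion of Eq.(3) amounts to one averaging call; the four softmax vectors below are made-up values chosen to show a lone wrong clip being outvoted.

```python
import numpy as np

def fuse_predictions(softmax_outputs):
    """Average the softmax vectors of all 32-frame inputs of one sample (Eq.(3))."""
    return np.mean(softmax_outputs, axis=0)

# Four inputs from one 128-frame video: three favor class 0, one favors class 1.
clips = np.array([[0.7, 0.3], [0.6, 0.4], [0.8, 0.2], [0.3, 0.7]])
fused = fuse_predictions(clips)
print(int(fused.argmax()))  # 0
```

Counted per clip this sample would score 75%, but after fusion the single consistent decision is correct, which is exactly the effect the strategy targets.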

    Table 4 Comparison of accuracy before and after fusion using decision results

As can be seen from Table 4, after using the decision-result fusion strategy, the accuracy of the fusion network increases by 0.5% on UCF-101 and by 0.77% on HMDB-51.

    2.6 Evaluation of test results

Table 5 compares the test accuracy of the implemented behavior recognition algorithm on the specific data sets with excellent research results in the field at home and abroad. The experimental results prove the feasibility of combining 3D-CNN with LSTM to process video sequences, with better results than the best algorithms in Ref.[20] and Ref.[21]. At the same time, compared with the fusion of a 2D CNN and an LSTM network in Ref.[17], the fusion of a 3D convolutional neural network and an LSTM network greatly improves accuracy. This shows that the fusion network is a feasible and effective scheme for human behavior recognition.

    Table 5 Comparison of the accuracy of the algorithm and other excellent research results

    Table 6 shows the comparison between the computational performance of the fusion network implemented and some other excellent research results. It can be found that the algorithm has a great advantage in speed.

    Table 6 Comparison of calculation speed between this algorithm and other excellent research results

    2.7 Analysis of network parameters

There are 25 parameterized layers in 3D-BN-VGG with a total of 23 734 597 parameters, of which 23 722 437 are trainable. Among the trainable parameters, the fully connected layers account for 21 080 165 parameters, or 88.86% of the total. This huge number of fully connected parameters greatly affects the generalization ability of the whole network, so a higher Dropout rate is employed to reduce the risk of over-fitting in the fully connected layers.

Table 7 shows the number of parameters of each layer of the fusion network. The whole fusion network has 15 324 485 parameters, about two-thirds of the 23 734 597 of the 3D-BN-VGG network. The convolutional layers have 2 633 952 parameters, accounting for 17% of the total; the LSTM layer accounts for 82%; and the final output layer only 1%. The main reason for the reduction is the removal of the fully connected layers.

    Table 7 Parameters of each part of the fusion network

    3 Contribution

    Through the analysis of the previous sections, the contributions of this paper are as follows:

Multiple stacked small convolution kernels are used instead of large ones, which deepens the neural network and reduces its parameters.

The batch normalization algorithm is added to the network to improve the training speed.

Dropout layers are added to reduce the risk of over-fitting.

The fully connected layers, which account for 88.86% of the total network parameters, are removed, and the final output of the convolutional layers is connected directly to the reshape layer. The output feature vector of each video frame is separated into one dimension, and the LSTM network processes sequence features that the 3D-CNN cannot extract, improving the classification accuracy. As the analysis of Table 7 shows, the number of network parameters is also reduced to about two-thirds.

The data preprocessing strategy reduces the risk of over-fitting from training repeatedly on the same data. Experiments show that this strategy improves accuracy by 2%.

After using the decision fusion strategy, the accuracy of the fusion network improves by 0.5% on UCF-101 and 0.77% on HMDB-51.

    Through the above improvements, compared with some excellent research results, the main contributions are as follows:

    (1) The recognition rate is improved, as shown in Table 5.

    (2) The calculation speed has been greatly improved, as shown in Table 6.

    4 Conclusion

This work designs a fusion network combining a 3D convolutional network and an LSTM network, extending the 2D algorithm to a 3D convolutional neural network. Compared with the fusion of a 2D CNN and LSTM, the implemented network can extract information from the preceding and following frames of the video sequence in advance, improving the feature extraction ability of the algorithm. Moreover, the network does not need to use the batch-size dimension of the input in place of the video-sequence dimension, which makes implementation and training faster. In the specific implementation, the fusion network does not use fully connected layers to reduce the convolutional network's output, which avoids feature loss, reduces the number of parameters and improves the generalization ability of the network. In the experimental stage, the performance of the fusion network is verified on the specific data sets, and the accuracy is greatly improved compared with the 2D convolutional neural network.

亚洲第一区二区三区不卡| 免费av中文字幕在线| 黄色 视频免费看| www.av在线官网国产| 日韩大片免费观看网站| 日韩 亚洲 欧美在线| 久久久欧美国产精品| 美女主播在线视频| 大片免费播放器 马上看| 国产色婷婷99| 亚洲人成电影观看| 亚洲色图综合在线观看| 极品少妇高潮喷水抽搐| 国产成人精品久久久久久| 久久ye,这里只有精品| 美女视频免费永久观看网站| 99精国产麻豆久久婷婷| 97在线人人人人妻| 永久网站在线| 国产精品欧美亚洲77777| 亚洲av综合色区一区| 精品国产一区二区三区久久久樱花| 熟女少妇亚洲综合色aaa.| 在线免费观看不下载黄p国产| 尾随美女入室| 少妇被粗大猛烈的视频| 美国免费a级毛片| 两个人免费观看高清视频| 观看av在线不卡| 大片免费播放器 马上看| 青春草视频在线免费观看| 街头女战士在线观看网站| 精品国产国语对白av| 日韩欧美一区视频在线观看| 波多野结衣一区麻豆| 国产在视频线精品| 少妇被粗大的猛进出69影院| 免费观看a级毛片全部| 99九九在线精品视频| 80岁老熟妇乱子伦牲交| 69精品国产乱码久久久| 夜夜骑夜夜射夜夜干| 精品少妇黑人巨大在线播放| 男女边摸边吃奶| 欧美少妇被猛烈插入视频| 国产精品久久久久久久久免| 黄频高清免费视频| 成人漫画全彩无遮挡| 乱人伦中国视频| 最新的欧美精品一区二区| 美女福利国产在线| 久久婷婷青草| 制服诱惑二区| 亚洲国产最新在线播放| 国产又爽黄色视频| 亚洲熟女精品中文字幕| 高清欧美精品videossex| 18+在线观看网站| av片东京热男人的天堂| 侵犯人妻中文字幕一二三四区| 国产白丝娇喘喷水9色精品| 又粗又硬又长又爽又黄的视频| 妹子高潮喷水视频| 丝袜美腿诱惑在线| 激情五月婷婷亚洲| 丝瓜视频免费看黄片| 日韩av在线免费看完整版不卡| 91精品伊人久久大香线蕉| 少妇精品久久久久久久| 91国产中文字幕| 欧美 亚洲 国产 日韩一| 亚洲国产精品999| 中国三级夫妇交换| 在线免费观看不下载黄p国产| 午夜福利在线免费观看网站| 欧美日韩亚洲国产一区二区在线观看 | 97人妻天天添夜夜摸| 国产成人一区二区在线| 久久精品久久久久久噜噜老黄| 午夜激情av网站| 日本黄色日本黄色录像| 中文精品一卡2卡3卡4更新| 晚上一个人看的免费电影| 高清黄色对白视频在线免费看| 中国国产av一级| 午夜福利视频在线观看免费| 香蕉国产在线看| 日本欧美国产在线视频| 一本一本久久a久久精品综合妖精 国产伦在线观看视频一区 | 日韩av免费高清视频| 欧美日韩亚洲高清精品| 精品人妻偷拍中文字幕| 日日摸夜夜添夜夜爱| 日韩电影二区| 在线观看免费日韩欧美大片| 亚洲综合精品二区| 亚洲精品一二三| 18禁裸乳无遮挡动漫免费视频| 国产精品一二三区在线看| 肉色欧美久久久久久久蜜桃| 午夜免费鲁丝| 美女午夜性视频免费| 亚洲精品自拍成人| 自拍欧美九色日韩亚洲蝌蚪91| www.精华液| 男女国产视频网站| 一级毛片电影观看| 一边亲一边摸免费视频| 久久精品国产自在天天线| 成人毛片a级毛片在线播放| 精品一区二区免费观看| 黄色毛片三级朝国网站| 国产一级毛片在线| 国产有黄有色有爽视频| 色哟哟·www| 伊人久久大香线蕉亚洲五| av网站免费在线观看视频| 男男h啪啪无遮挡| 午夜免费观看性视频| 国产精品秋霞免费鲁丝片| 久久精品久久久久久久性| 亚洲精品aⅴ在线观看| 观看美女的网站| 久久人人97超碰香蕉20202| 久久精品国产亚洲av天美| 这个男人来自地球电影免费观看 | 亚洲 欧美一区二区三区| av在线老鸭窝| 国产一区二区三区av在线| 99re6热这里在线精品视频| 91精品三级在线观看| 欧美精品国产亚洲| 熟妇人妻不卡中文字幕| 婷婷色麻豆天堂久久| 亚洲国产最新在线播放| 宅男免费午夜| 国产一区亚洲一区在线观看| 亚洲av综合色区一区| 在线 av 中文字幕| 伦理电影大哥的女人| 欧美另类一区| 一级爰片在线观看| 菩萨蛮人人尽说江南好唐韦庄| 婷婷成人精品国产| 1024视频免费在线观看| 我要看黄色一级片免费的| 黄色配什么色好看| 色哟哟·www| 少妇精品久久久久久久| 精品国产乱码久久久久久小说| 丝袜脚勾引网站| 精品国产一区二区三区久久久樱花| av在线播放精品| 1024香蕉在线观看| 成年女人毛片免费观看观看9 | 国产无遮挡羞羞视频在线观看| 亚洲第一区二区三区不卡| 大片免费播放器 马上看| a级片在线免费高清观看视频| av又黄又爽大尺度在线免费看| 99久久人妻综合| 波多野结衣一区麻豆| 三级国产精品片| 
在线看a的网站| 在线观看国产h片| 国产日韩欧美在线精品| 国产极品粉嫩免费观看在线| 亚洲欧美精品综合一区二区三区 | 美女国产视频在线观看| 免费看av在线观看网站| 老鸭窝网址在线观看| 欧美国产精品va在线观看不卡| 免费高清在线观看日韩| 久久青草综合色| 赤兔流量卡办理| 日本猛色少妇xxxxx猛交久久| 国产精品久久久av美女十八| 精品久久久久久电影网| 女人被躁到高潮嗷嗷叫费观| 在线观看免费日韩欧美大片| 最近2019中文字幕mv第一页| 国产熟女欧美一区二区| 国产免费一区二区三区四区乱码| 九草在线视频观看| 国产综合精华液| 两个人免费观看高清视频| 色吧在线观看| 亚洲国产精品国产精品| 欧美日韩精品成人综合77777| 久久久久久免费高清国产稀缺| 日韩av不卡免费在线播放| 欧美 亚洲 国产 日韩一| 人体艺术视频欧美日本| av卡一久久| 看免费成人av毛片| 国产一区二区 视频在线| 1024香蕉在线观看| 一级毛片电影观看| 秋霞伦理黄片| 国产亚洲欧美精品永久| 国产又爽黄色视频| 精品午夜福利在线看| 天天躁狠狠躁夜夜躁狠狠躁| 十分钟在线观看高清视频www| 男女午夜视频在线观看| 99热网站在线观看| 伊人久久大香线蕉亚洲五| 成人国产麻豆网| 飞空精品影院首页| 99香蕉大伊视频| 婷婷成人精品国产| 久久精品亚洲av国产电影网| 麻豆精品久久久久久蜜桃| 只有这里有精品99| 亚洲精品第二区| 麻豆av在线久日| 亚洲五月色婷婷综合| 18在线观看网站| 亚洲五月色婷婷综合| 亚洲av欧美aⅴ国产| 黄色视频在线播放观看不卡| a级毛片在线看网站| 国产黄色视频一区二区在线观看| 满18在线观看网站| 成年人午夜在线观看视频| 成人毛片60女人毛片免费| 国产精品久久久久成人av| 永久网站在线| 欧美 亚洲 国产 日韩一| 国产成人精品久久二区二区91 | 亚洲精品国产一区二区精华液| 午夜免费观看性视频| 亚洲精品国产色婷婷电影| 黄片小视频在线播放| 国产一区二区 视频在线| 成人午夜精彩视频在线观看| 国产成人精品婷婷| 国产xxxxx性猛交| xxx大片免费视频| 精品一区二区三卡| √禁漫天堂资源中文www| 久久国产精品男人的天堂亚洲| 自拍欧美九色日韩亚洲蝌蚪91| av在线老鸭窝| 日本vs欧美在线观看视频| 久久精品国产鲁丝片午夜精品| 人人妻人人澡人人看| 日日啪夜夜爽| 亚洲国产av新网站| 一个人免费看片子| 日韩中文字幕视频在线看片| 国产成人精品一,二区| 中文精品一卡2卡3卡4更新| 精品一品国产午夜福利视频| 男女高潮啪啪啪动态图| 欧美黄色片欧美黄色片| 国产综合精华液| 看免费成人av毛片| 热re99久久国产66热| 久久久久久久久久人人人人人人| 一本久久精品| 亚洲一码二码三码区别大吗| 欧美日韩精品网址| 国产精华一区二区三区| 国产精品综合久久久久久久免费 | 久久中文字幕一级| 中文字幕高清在线视频| 大陆偷拍与自拍| 免费日韩欧美在线观看| 最近最新免费中文字幕在线| 精品国产亚洲在线| 欧美色视频一区免费| 人人妻,人人澡人人爽秒播| 狂野欧美激情性xxxx| 黄色女人牲交| 人人澡人人妻人| av有码第一页| 久久精品亚洲精品国产色婷小说| 高清黄色对白视频在线免费看| 在线观看66精品国产| 欧美日韩视频精品一区| www.自偷自拍.com| 欧美成人免费av一区二区三区| 国产蜜桃级精品一区二区三区| 久久国产精品人妻蜜桃| 欧美激情极品国产一区二区三区| 亚洲午夜理论影院| 亚洲一码二码三码区别大吗| 国产亚洲av高清不卡| 国产欧美日韩一区二区精品| av天堂在线播放| 成人亚洲精品av一区二区 |