
    Multi-Layered Deep Learning Features Fusion for Human Action Recognition

Computers, Materials & Continua, December 2021

Sadia Kiran, Muhammad Attique Khan, Muhammad Younus Javed, Majed Alhaisoni, Usman Tariq, Yunyoung Nam, Robertas Damaševičius and Muhammad Sharif

1Department of Computer Science, HITEC University Taxila, Taxila, Pakistan

2College of Computer Science and Engineering, University of Ha’il, Ha’il, Saudi Arabia

3College of Computer Engineering and Sciences, Prince Sattam Bin Abdulaziz University, Al-Kharj, Saudi Arabia

4Department of Computer Science and Engineering, Soonchunhyang University, Asan, Korea

5Faculty of Applied Mathematics, Silesian University of Technology, Gliwice, Poland

6Department of Computer Science, COMSATS University Islamabad, Wah Campus, Pakistan

Abstract: Human Action Recognition (HAR) has been an active research topic in machine learning for the last few decades. Visual surveillance, robotics, and pedestrian detection are the main applications of action recognition. Computer vision researchers have introduced many HAR techniques, but these still face challenges such as redundant features and the cost of computing. In this article, we propose a new deep learning-based method for HAR. In the proposed method, video frames are initially pre-processed using a global contrast approach and later used to train a deep learning model through domain transfer learning. The pre-trained ResNet-50 model is used as the deep learning model in this work. Features are extracted from two layers: Global Average Pool (GAP) and Fully Connected (FC). The features of both layers are fused by Canonical Correlation Analysis (CCA). Features are then selected using a Shannon entropy-based threshold function. The selected features are finally passed to multiple classifiers for final classification. Experiments are conducted on five publicly available datasets: IXMAS, UCF Sports, YouTube, UT-Interaction, and KTH. The accuracies on these datasets are 89.6%, 99.7%, 100%, 96.7%, and 96.6%, respectively. Comparison with existing techniques has shown that the proposed method provides improved accuracy for HAR. Also, the proposed method is computationally fast in terms of execution time.

    Keywords: Action recognition; transfer learning; features fusion; features selection; classification

    1 Introduction

Human Action Recognition (HAR) has incredible significance in numerous everyday applications, for instance, video surveillance [1], virtual reality, robotics, video analytics, and assistive living [2,3]. The simultaneous movement of several body parts of a human being can be referred to as an action [4,5]. From the computer vision (CV) point of view, action recognition relates observations, such as video data, to action sequences [6]. A sequence of human actions accomplished by at least two actors, in which one actor must be a person or an object, is called an interaction [7]. Understanding human activities from videos has become a demanding task in CV. Automated recognition of an activity being performed by a human in a video sequence is the main capability of an intelligent video system [8].

The main aim of action recognition is to supply useful information related to the subjects’ habits. It also permits a system or robot to make users comfortable when interacting with it. Recognizing and forecasting the occurrence of crimes can be done by interpreting human activities, assisting the police or other agencies in reacting straightaway [9]. Recognizing human actions accurately is extremely difficult due to many problems, e.g., cluttered backgrounds, changing environmental conditions, and viewpoint differences [10].

HAR techniques for video sequences are usually classified into two types: template-based methods and model-based methods. In template-based methods, lower- and middle-level features are emphasized; in model-based methods, high-level features are emphasized [11,12]. In the past few years, a large number of feature extraction methods have been introduced, especially the spatio-temporal interest points (STIP) feature descriptor [13], motion energy image (MEI) and motion history image (MHI) [14], spatio-temporal descriptors based on 3-D gradients [15], speeded-up robust features (SURF) [16], 3D SIFT [17], histograms of oriented 3D gradients (HOG3D) [15], and dense optical trajectories [18], which have achieved fruitful results for HAR on video sequences [19]. Classification of these extracted features is then done using different machine learning (ML) classification methods such as K-Nearest Neighbor (KNN), Support Vector Machine (SVM), decision trees, linear discriminant analysis (LDA), ensemble trees, and neural networks.

Compared to the techniques above, significant performance gains were achieved after the introduction of deep convolutional neural networks (DCNN) in machine learning [20,21]. Several pre-trained deep models are presented in the literature, such as AlexNet, VGG, GoogLeNet, ResNet, and Inception V3. DCNN models can act directly on the raw inputs without any preprocessing [22]. More complex features can be extracted with every additional layer, and the difference in complexity between adjoining layers of the model decreases as the data proceeds to the upper convolutional layers. In recent years, these deep learning (DL) models have been utilized for HAR and show high accuracy [23]. However, when humans perform complex actions that are similar to each other, the performance of these models diminishes.

Therefore, some researchers presented sequential techniques in which fusion is performed to obtain better information about an entire video sequence. Afza et al. [24] presented a parallel fusion approach named length control features (LCF) and achieved improved performance. Xiaog et al. [25] presented a two-stream CNN model for action recognition, focusing on both optical flow-based generated images and temporal actions for better recognition. Elharrous et al. [26] presented a combined model for both action classification and summarization; they initially extract the human silhouette and then extract shape and temporal features. In summary, these methods achieved improved results, but they did not focus on computational time. The major challenges addressed in this work are: i) changes in human appearance and viewpoint due to camera conditions, the camera being static or dynamic; ii) consumption of more time during the training process; and iii) selection of the most relevant features, a key problem for minimizing the error rate of an automated system. In this article, we propose a fusion-based framework along with a reduction scheme for better computational time and improved accuracy. Our major contributions are as follows:

(a) Global frame preprocessing is employed, using mean and standard deviation values in a fitness function.

(b) Transfer learning-based features are extracted from two successive layers of ResNet50, where the number of parameters is the same as in the original model.

(c) A canonical correlation approach is implemented for deep feature fusion of both successive layers.

(d) A threshold function based on Shannon entropy is implemented for the selection of the best features.

(e) Multiple supervised learning algorithms are implemented for classification, and a fair comparison with existing techniques is conducted.

The rest of the manuscript is organized as follows. The proposed methodology, including frame normalization, deep learning features, features fusion, and selection, is presented in Section 2. Results are presented in Section 3, followed by the conclusion in Section 4.

    2 Proposed Methodology

A unified model built on the fusion of features from multiple deep CNN layers has been proposed in this article for HAR. Five core steps are involved in this work: database normalization, individual CNN feature extraction through two successive layers, fusion of the information of both successive layers (AVG-Pool and FC1000), selection of the best features, and finally classification through a supervised learning method. The proposed method is evaluated on five datasets and also compared with existing techniques. The proposed architecture is illustrated in Fig. 1.

Figure 1: Proposed deep learning-based architecture for human action recognition
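To make the overall flow concrete, the following is a minimal Python sketch of the five steps. Every helper name here is a placeholder elaborated in the sketches of the subsections below; none of it is the authors' released code.

```python
# High-level sketch of the proposed pipeline (Sections 2.1-2.5).
# All helper functions are placeholders defined in later sketches.
def har_pipeline(frames, labels):
    frames = [global_contrast_normalize(f) for f in frames]      # 2.1 preprocessing
    gap, fc = extract_resnet50_features(frames)                   # 2.3 deep features
    fused = cca_fuse(gap, fc)                                     # 2.4 CCA fusion
    mask, _ = select_features(fused, labels)                      # 2.5 entropy selection
    return train_and_compare_classifiers(fused[:, mask], labels)  # classification
```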

    2.1 Preprocessing

The main objective of preprocessing is to improve the image statistics. It suppresses unwanted distortions or enhances some image features that are essential for further processing. In this work, we perform normalization of video frames. Normalization is a procedure frequently applied as a major aspect of data preparation for machine learning. The normalization process comprises global contrast normalization (GCN), local normalization, and histogram equalization [27].

GCN: In Global Contrast Normalization (GCN), the mean intensity is subtracted from each pixel value of an image, and the result is divided by the standard deviation. The main objective of GCN is to prevent images from having differing amounts of contrast. Images with very small, but non-zero, contrast contain few details and become problematic for action recognition. GCN can be described as:

$$X'(i,j,k) = \frac{X(i,j,k) - Z}{\sqrt{\frac{1}{mnl}\sum_{i=1}^{m}\sum_{j=1}^{n}\sum_{k=1}^{l}\big(X(i,j,k) - Z\big)^{2}}}$$

where $m$ represents a row, $n$ represents a column, $l$ is the color depth, and the mean intensity of the full image is represented by $Z$. Then the local contrast is improved by employing bottom-hat filtering and log transformation. In the end, the local and global outputs are combined in one matrix to obtain the final enhanced image. These enhanced images are used for training a deep learning model for further processing.
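As a minimal NumPy sketch of this step (only the global part; the bottom-hat and log-transform local stage is omitted, and the epsilon guard against near-zero contrast is an added assumption):

```python
import numpy as np

def global_contrast_normalize(frame, eps=1e-8):
    """Subtract the global mean Z and divide by the global standard
    deviation of a video frame (H x W x 3 array)."""
    x = frame.astype(np.float64)
    z = x.mean()                      # mean intensity Z of the full image
    std = x.std()                     # global contrast of the image
    return (x - z) / max(std, eps)    # eps guards the near-zero-contrast case
```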

    2.2 Convolutional Neural Network

A simple CNN model consists of the following layers: convolution layer, pooling layer, ReLU layer, batch normalization layer, fully connected layer, and output layer. The details of each layer are defined as follows:

The convolutional layer receives a volume of size $M_1 \times G_1 \times D_1$. This layer needs four variables/factors: the number of filters $C$, their spatial extent $E$, the stride $S$, and the amount of zero padding $P$. It generates a volume of size $M_2 \times G_2 \times D_2$, where

$$M_2 = \frac{M_1 - E + 2P}{S} + 1,\qquad G_2 = \frac{G_1 - E + 2P}{S} + 1,\qquad D_2 = C$$

With parameter sharing, this introduces $E \cdot E \cdot D_1$ weights per filter. The next layer is the pooling layer. The pooling layer's task is to reduce the spatial dimensions of the image, minimizing the number of variables and calculations within the network; it thus controls the problem of overfitting among layers. The pooling layer takes a volume of size $M_1 \times G_1 \times D_1$ and needs two variables: the stride $S$ and the spatial extent $E$. It generates a volume of size $M_2 \times G_2 \times D_2$. Mathematically, it is defined as:

$$M_2 = \frac{M_1 - E}{S} + 1,\qquad G_2 = \frac{G_1 - E}{S} + 1,\qquad D_2 = D_1$$
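To make the output-size formulas concrete, here is a small sketch that evaluates them (integer division models the floor that deep learning frameworks apply):

```python
def conv_output_size(m1, g1, c, e, s, p):
    """Output volume (M2, G2, D2) of a convolution layer with C filters
    of spatial extent E, stride S, and zero padding P."""
    return (m1 - e + 2 * p) // s + 1, (g1 - e + 2 * p) // s + 1, c

def pool_output_size(m1, g1, d1, e, s):
    """Output volume of a pooling layer (depth D1 is unchanged)."""
    return (m1 - e) // s + 1, (g1 - e) // s + 1, d1

# Example: ResNet-50's first convolution on a 224x224x3 input uses 64
# filters with E=7, S=2, P=3, giving a 112x112x64 output volume.
print(conv_output_size(224, 224, 64, 7, 2, 3))   # (112, 112, 64)
print(pool_output_size(112, 112, 64, 3, 2))      # (55, 55, 64) per the no-padding formula
```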

The next layer is the ReLU activation layer. ReLU is a kind of activation function; mathematically, it is described as $z = \max(0, y)$. This function converts negative values to zero. A fully connected layer can be described as follows: every component of the output $y^{l+1}$ (or $z$; note that $z$ is an alias for $y^{l+1}$) depends on every component of the input $y^{l}$. This layer is also known as the feature layer. The last layer is softmax, which is used for classification; in this layer, an entropy-max function is employed for the final decision.

    2.3 Deep Learning Features

Deep learning has shown great success in machine learning over the last few years for several video surveillance and medical imaging applications. Video surveillance is a hot research area, but researchers face major issues in the form of imbalanced and large datasets. In this article, we use a pre-trained deep learning model named ResNet-50. Common ResNet models skip over two or three layers comprising ReLU and batch normalization [28]. The incentive for skipping over layers is to attach the output of an earlier layer to a later layer, which helps reduce the vanishing gradient problem. Skipping also helps simplify the network, because fewer layers are effectively used in the initial training stages; as a result, it accelerates learning. As the network learns the feature space, it gradually reinstates the skipped layers. A neural network without the residual portion has to explore a larger feature space, which makes it more vulnerable. ResNet-50 is a convolutional neural network (CNN) originally trained on the ImageNet dataset, which consists of more than a million images from 1000 classes. The depth of this network is 50 layers, and the input size is 224-by-224-by-3. The original number of layers and the selected layers are presented in Tabs. 1 and 2.
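To illustrate the skip connection described here, the following is a small PyTorch sketch of a residual block (a simplified two-convolution variant for illustration, not the exact bottleneck block that ResNet-50 uses):

```python
import torch.nn as nn

class ResidualBlock(nn.Module):
    """The output of an earlier layer is added back to a later layer,
    easing gradient flow through the network."""
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1)
        self.bn2 = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU()

    def forward(self, x):
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return self.relu(out + x)   # skip connection: add the input back
```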

Table 1: Number of layers of the ResNet50 CNN model

We modified this ResNet50 architecture according to the number of classes. For this purpose, we removed the last layer and added a new layer whose size equals the number of action classes. We then train the modified model through transfer learning on 70% of the data, while the remaining 30% is used for testing. In the training process, the number of epochs is 100, the learning rate is 0.0001, and the mini-batch size is 64. The input size is the same as that of the original deep model. We extract features from the last two feature layers, named the Average Pool layer and the Fully Connected layer. The dimensions of the extracted features are N × 2048 and N × 1000, respectively. In the later stage, we fuse the features of both vectors into one vector for further processing.
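Below is a minimal sketch of the two-layer feature extraction, assuming PyTorch/torchvision ≥ 0.13 (the paper does not name its framework). Fine-tuning with the class-specific head (100 epochs, learning rate 0.0001, mini-batch 64) would precede this; the forward hooks tap the two layers whose reported feature dimensions are N × 2048 and N × 1000:

```python
import torch
import torchvision.models as models

# Load an ImageNet-pretrained ResNet-50 and tap the Global Average Pool
# output (N x 2048) and the fully connected output (N x 1000) with hooks.
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
model.eval()

feats = {}
model.avgpool.register_forward_hook(
    lambda mod, inp, out: feats.update(gap=torch.flatten(out, 1)))  # N x 2048
model.fc.register_forward_hook(
    lambda mod, inp, out: feats.update(fc=out))                     # N x 1000

with torch.no_grad():
    model(torch.randn(4, 3, 224, 224))  # a dummy batch of 4 frames
print(feats["gap"].shape, feats["fc"].shape)  # (4, 2048), (4, 1000)
```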

    2.4 Features Fusion Using CCA

In multivariate statistical analysis, CCA has a significance similar to principal component analysis (PCA) and linear discriminant analysis (LDA). It is a most important multi-data processing technique. CCA is conventionally utilized for analyzing the associations between two groups of variables [29]. It tries to find transformations of two groups of random variables such that the transformed variables assume the highest correlation across the two groups of data, while the transformations inside every group of data are uncorrelated. Mathematically, it is formulated as follows:

Table 2: Description of the selected layers of ResNet50 for feature extraction

Given two groups of data $Z_1 \in \mathbb{R}^{m \times p}$ and $Z_2 \in \mathbb{R}^{m \times q}$, CCA finds the linear combinations $E_1 = Z_1 V_1$ and $E_2 = Z_2 V_2$ that maximize the pair-wise correlations across the two groups of data. $E_1, E_2 \in \mathbb{R}^{m \times b}$, with $b \leq \min(\operatorname{rank}(Z_1), \operatorname{rank}(Z_2))$, are identified as the canonical variables, and $V_1 \in \mathbb{R}^{p \times b}$ and $V_2 \in \mathbb{R}^{q \times b}$ are the canonical vectors. Viewed differently, the procedure discovers an initial pair of canonical vectors that maximizes the linear fusion of the two groups of data.

To acquire the initial pair of canonical variables, the objective is stated as:

$$\left(v_1^{*},\, v_2^{*}\right) = \underset{v_1,\, v_2}{\arg\max}\; \operatorname{corr}\!\left(Z_1 v_1,\; Z_2 v_2\right)$$

The remaining $b - 1$ pairs of canonical variables are computed using the same method. By applying further restrictions to the matrix columns, i.e., $\operatorname{cov}(Z_h v_{h,i},\, Z_h v_{h,j}) = \delta_{ij}$ for $i, j = 1, \ldots, b$ and $h = 1, 2$, the canonical variables within every group of data are uncorrelated, and they possess zero mean and unit variance.

$$F_{(N \times K)} = \big[\, Z_1 V_1 \;\; Z_2 V_2 \,\big]$$

where $K$ represents the number of fused features, which is 1876 in this work, and $N$ represents the number of sample frames used for training and testing.
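As a sketch of this fusion step, assuming scikit-learn's CCA implementation and concatenation of the two sets of canonical variables (the paper reports K = 1876 fused features but does not spell out the combination rule, so the component count b is a placeholder):

```python
import numpy as np
from sklearn.cross_decomposition import CCA

def cca_fuse(gap_feats, fc_feats, b=938):
    """gap_feats: N x 2048, fc_feats: N x 1000 deep feature matrices.
    Returns an N x 2b fused matrix of concatenated canonical variables."""
    cca = CCA(n_components=b, max_iter=1000)
    e1, e2 = cca.fit_transform(gap_feats, fc_feats)  # N x b each
    return np.concatenate([e1, e2], axis=1)          # N x 2b (= 1876 if b = 938)

rng = np.random.default_rng(0)
fused = cca_fuse(rng.normal(size=(50, 2048)), rng.normal(size=(50, 1000)), b=8)
print(fused.shape)  # (50, 16)
```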

    2.5 Feature Selection

Feature selection is the process of selecting the best features for accurate classification within less execution time. In this work, a new threshold function is proposed to select the best features based on Shannon entropy. Initially, we compute the entropy value of the fused vector using the following equation:

$$H_E = -\sum_{i=1}^{N} p_i \log_2 p_i$$

where $N$ represents the total number of features, $p_i$ is the probability of each feature, and $i$ represents the index of each feature in the fused vector. Then, we implement a threshold function to select the best features. The selection criterion is the fitness function, which is Fine KNN. We initialize 20 iterations, and after each iteration the selected features are evaluated through the fitness function. In the end, the feature set with the highest accuracy is selected for the final classification. Mathematically, the threshold function is defined as follows:

$$\widetilde{F}(f_i) = \begin{cases} f_i, & f_i \geq H_E \\ \text{discard}, & \text{otherwise} \end{cases}$$

The final selected features are passed to the supervised learning classifiers for final classification.
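The following is a rough sketch of this selection loop, assuming scikit-learn. "Fine KNN" is interpreted as a 1-nearest-neighbor classifier (its usual meaning in MATLAB's Classification Learner), and the way candidate thresholds are swept across the 20 iterations is an assumption, since the paper does not specify it:

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

def shannon_entropy(v):
    """Shannon entropy of one feature column, treating normalized
    magnitudes as probabilities (an assumption)."""
    p = np.abs(v) / (np.abs(v).sum() + 1e-12)
    p = p[p > 0]
    return -(p * np.log2(p)).sum()

def select_features(F, y, iters=20):
    scores = np.array([shannon_entropy(F[:, j]) for j in range(F.shape[1])])
    best_mask, best_acc = np.ones(F.shape[1], bool), 0.0
    for t in range(iters):
        thr = np.quantile(scores, t / iters)       # sweep candidate thresholds
        mask = scores >= thr
        knn = KNeighborsClassifier(n_neighbors=1)  # Fine KNN fitness function
        acc = cross_val_score(knn, F[:, mask], y, cv=3).mean()
        if acc > best_acc:
            best_mask, best_acc = mask, acc
    return best_mask, best_acc
```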

    3 Experimental Results and Discussion

The proposed method is evaluated on four selected datasets: IXMAS, UCF Sports, UT Interaction, and KTH. Each classifier's performance is measured using the following parameters: recall rate, precision rate, accuracy, FNR, and testing time. Also, the performance of Fine KNN is compared with a few other well-known classifiers, such as Linear Discriminant (LDA), Quadratic SVM (QSVM), Cubic SVM (CSVM), Medium Gaussian SVM (MGSVM), and Weighted KNN (WKNN). The results of each classifier are presented below.
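The reported measures can be derived from each classifier's confusion matrix. The sketch below assumes macro-averaging over classes, since the paper reports single recall, precision, and FNR values per dataset without stating the averaging scheme:

```python
import numpy as np

def har_metrics(conf):
    """Macro-averaged recall/precision, overall accuracy, and false
    negative rate (FNR = 1 - recall), from a confusion matrix whose
    rows are true classes and columns are predicted classes."""
    conf = np.asarray(conf, float)
    tp = np.diag(conf)
    recall = tp / conf.sum(axis=1)
    precision = tp / conf.sum(axis=0)
    accuracy = tp.sum() / conf.sum()
    return recall.mean(), precision.mean(), accuracy, 1 - recall.mean()
```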

    3.1 IXMAS Dataset

The recognition accuracy of the proposed method on the IXMAS dataset is presented in Tab. 3. Six different classifiers are utilized, and the best one is selected based on accuracy. From this table, the highest accuracy of 89.6% is achieved by Fine KNN, with a recall rate of 89.58%, a precision rate of 89.75%, an FNR of 10.42%, and a classification computation time of 277.97 s. The next highest accuracy of 87.8% is attained by Cubic SVM. The minimum accuracy of 79.8% is achieved by Weighted KNN, along with the best recognition time of 194.89 s. The best accuracy of FKNN is confirmed by the confusion matrix given in Tab. 4. Besides, the computation time of each classifier is plotted in Fig. 2. As shown in this figure, WKNN executes faster than the other classifiers.

Table 3: Proposed recognition accuracy on the IXMAS dataset

Table 4: Confusion matrix of FKNN for the IXMAS dataset. The action classes are CheckWatch (CW), CrossArm (CA), ScratchHead (SH), TurnAround (TA), Wave (WV), GetUp (GU), Kick (K), PickUp (PU), Point (P), Punch (PN), SitDown (SD), and Walk (W)

Figure 2: Testing computational time for the IXMAS dataset

    3.2 UCF Sports Dataset

The recognition accuracy of the proposed method on the UCF Sports dataset is presented in Tab. 5. Six different classifiers are used, and the best one is selected based on accuracy. From this table, the highest accuracy of 99.7% is achieved by Linear Discriminant, with a recall rate of 99.76%, a precision rate of 99.76%, an FNR of 0.24%, and a classification computation time of 49.143 s. The next highest accuracy of 99.2% is achieved by Quadratic SVM and Cubic SVM. The minimum accuracy of 93.5% is achieved by Weighted KNN, along with the best recognition time of 16.524 s. The best accuracy of LDA is further confirmed by the confusion matrix presented in Tab. 6. Besides, the computation time of each classifier is plotted in Fig. 3; the figure shows that WKNN executes faster than the other classifiers.

Table 5: Proposed recognition accuracy on the UCF Sports dataset

    3.3 UT Interaction Dataset

The recognition accuracy of the proposed method on the UT Interaction dataset is presented in Tab. 7. Six different classifiers are used, and the best one is selected based on accuracy. From this table, the highest accuracy of 96.7% is achieved by Fine KNN, with a recall rate of 97%, a precision rate of 96.66%, and an FNR of 3%. The next highest accuracy of 96.5% is attained by the LDA classifier. The minimum accuracy of 91.2% is achieved by Weighted KNN, along with the best recognition time of 14.604 s. The best accuracy of FKNN is confirmed by the confusion matrix given in Tab. 8. Also, the computation time of each classifier is plotted in Fig. 4; as shown in this figure, WKNN is computationally fast compared to the rest of the classifiers.

Table 6: Confusion matrix of LDA for the UCF Sports dataset. The action classes are Golf-Swing-Back (GSB), Golf-Swing-Front (GSF), Golf-Swing-Side (GSS), Kicking-Front (KF), Kicking-Side (KS), Lifting (LF), Riding-Horse (RH), Run-Side (RS), SkateBoarding-Front (SBF), Swing-Bench (SB), Swing-SideAngle (SSA), Walk-Front (WF), and Diving-Side (DS)

Figure 3: Testing computational time for the UCF Sports dataset

    3.4 KTH Dataset

The recognition accuracy of the proposed method on the KTH dataset is shown in Tab. 9. Six different classifiers are used, and the best one is selected based on accuracy. From this table, the highest accuracy of 96.6% is achieved by FKNN, with a recall rate of 96.5%, a precision rate of 96.5%, an FNR of 3.5%, and a classification computation time of 497.09 s. The next highest accuracy of 96.0% is attained by Quadratic SVM. The minimum accuracy of 91.7% is achieved by the Weighted KNN classifier. The accuracy of Fine KNN is further confirmed by the confusion matrix given in Tab. 10. Also, the computation time of each classifier is plotted in Fig. 5; the figure shows that the WKNN classifier executes faster than the rest of the listed classifiers.

Table 7: Proposed recognition accuracy on the UT Interaction dataset

Table 8: Confusion matrix of the FKNN classifier on the UT Interaction dataset. The action classes are handshaking (HS), hugging (HG), kicking (K), pointing (PT), punching (PN), and pushing (PU)

Figure 4: Recognition computational time for the UT Interaction dataset

Table 9: Proposed recognition accuracy on the KTH dataset

Table 10: Confusion matrix of Fine KNN on the KTH dataset. The action classes are boxing (BX), handclapping (HC), handwaving (HW), jogging (JG), running (R), and walking (W)

Figure 5: Recognition computational time for the KTH dataset

Finally, we discuss the performance of the proposed method in the form of numeric values and plots. The numerical results are given in Tabs. 3–10 and are validated through different performance metrics such as recall rate, precision rate, accuracy, FNR, and time. Based on the results, Fine KNN shows the best accuracy, although the computational time of Weighted KNN is better; the computational time for each dataset is plotted in Figs. 2–5. Finally, we compare the accuracy of the proposed method with some recent techniques in Tab. 11. This table shows that the proposed accuracy is better than that of the existing techniques.

Table 11: Comparison of the proposed method with existing techniques

    4 Conclusion

A new deep learning-based method for the recognition of human actions is presented in this work. The proposed method involves a few important steps. In the first step, pre-processing is applied, and video frames are resized according to the input of the target model. The pre-trained ResNet50 model is used in the next step and is trained using transfer learning. Employing TL, features are extracted from two successive layers and fused using canonical correlation analysis (CCA). The fused feature vector contains some irrelevant information, which is filtered out using the Shannon entropy-based selection approach. Finally, the selected features are classified using supervised learning classifiers, and the best of them is selected based on the accuracy value. A few well-known datasets are used to evaluate the proposed method, and remarkable accuracy is achieved. Based on the accuracy, we conclude that features extracted through deep learning give better results when handling large-scale datasets. It is also noted that merging multilayer features produces better results, although this step affects the efficiency of the system; the selection process therefore provides more accuracy while minimizing the overall time. In future studies, more complex datasets such as HMDB51 and UCF101 will be considered to evaluate the proposed method.

Funding Statement: This research was supported by a Korea Institute for Advancement of Technology (KIAT) grant funded by the Korea Government (MOTIE) (P0012724, The Competency Development Program for Industry Specialist) and the Soonchunhyang University Research Fund.

Conflicts of Interest: The authors declare that they have no conflicts of interest to report regarding the present study.
