
    Human Gait Recognition: A Deep Learning and Best Feature Selection Framework

    2022-11-09 08:13:50
    Computers, Materials & Continua, 2022, Issue 1

    Asif Mehmood, Muhammad Attique Khan, Usman Tariq, Chang-Won Jeong, Yunyoung Nam, Reham R. Mostafa and Amira ElZeiny

    1 Department of Computer Science, COMSATS University Islamabad, Wah Campus, 47080, Pakistan

    2 Department of Computer Science, HITEC University Taxila, Taxila, 47040, Pakistan

    3 College of Computer Engineering and Sciences, Prince Sattam Bin Abdulaziz University, Al-Khraj, Saudi Arabia

    4 Medical Convergence Research Center, Wonkwang University, Iksan, Korea

    5 Department of Computer Science and Engineering, Soonchunhyang University, Asan, Korea

    6 Department of Information Systems, Faculty of Computers and Information Sciences, Mansoura University, Mansoura, 35516, Egypt

    7 Department of Information Systems, Faculty of Computers and Information, Damietta University, Damietta, Egypt

    Abstract: Background—Human Gait Recognition (HGR) is a biometric-based approach widely used for surveillance, and it has been studied by researchers for the past several decades. Several factors affect system performance, such as walking variation due to clothes, a person carrying luggage, and variations in the view angle. Proposed—In this work, a new method is introduced to overcome these problems. A hybrid method is proposed for efficient HGR using deep learning and best feature selection. Four major steps are involved in this work: preprocessing of the video frames, adaptation of the pre-trained CNN model VGG-16 for feature computation, removal of redundant features extracted from the CNN model, and classification. For the reduction of irrelevant features, a Principal Score and Kurtosis based approach named PSbK is proposed. After that, the PSbK features are fused into one matrix. Finally, this fused vector is fed to the One against All Multi Support Vector Machine (OAMSVM) classifier for the final results. Results—The system is evaluated on the CASIA B database over six angles (0°, 18°, 36°, 54°, 72°, and 90°), attaining accuracies of 95.80%, 96.0%, 95.90%, 96.20%, 95.60%, and 95.50%, respectively. Conclusion—The comparison with recent methods shows that the proposed method performs better.

    Keywords: Human gait recognition; deep feature extraction; feature fusion; feature selection

    1 Introduction

    The walking pattern of an individual is referred to as gait [1]. A great deal of research is currently being carried out on gait. The analysis of human gait began in the 1960s [2]. Initially, Human Gait Recognition (HGR) was used for the diagnosis of various diseases, such as spinal stenosis and Parkinson's disease [3], and for walking-pattern distortion due to age; HGR was used to diagnose these diseases at early stages. At present, HGR is used to recognize a person from the walking pattern [4,5]. Plenty of techniques have been used in the past for recognizing an individual, such as fingerprints, iris, retina, face recognition, ear biometrics, footprints, and recognition from the pattern of palm veins [6,7]. HGR is preferable because it requires no cooperation from the individual: a person can be recognized from a distance by his/her walking style. Recent studies show that there are 24 different components of human gait from which features can be extracted and further used for recognition [8,9].

    Massive research has been done on HGR. The method has been used to minimize various types of security risks at embassies, banks, and airports, and for video surveillance and military purposes [10,11]. HGR is easy to use for recognition, but several factors harshly reduce system performance, such as inferior lighting conditions, view-angle changes (see Fig. 1 [12]), different types of clothes [13], and different carrying conditions. A number of methods based on traditional features and CNN features have been introduced [14].

    Figure 1: CASIA B sample frames [15]

    HGR can be separated into two distinct groups. One group is model-based [16] and the second is model-free. The model-based technique uses various attributes such as joint angles. These methods are considered better for angle variations, clothing conditions, and carrying conditions. A high-level model can be obtained through this approach, but its computational cost is high. The model-free approach is based on the human body silhouette. This method is cost-effective but sensitive to different covariates such as angle variations, clothes, shadow, and luggage-carrying conditions [17]. Therefore, the tradeoff between accuracy and computational cost should be considered. HGR is a multi-step process: frame preprocessing, segmentation of the region of interest, feature computation, and finally recognition [18].

    Preprocessing is a key step in Computer Vision (CV), and the computation of good attributes depends on a noiseless image [19]. This step improves image quality by eliminating noise, improving contrast, and removing the background. Good features and segmentation are achieved with the help of preprocessing. Several techniques are utilized for preprocessing of frames, such as watershed, thresholding, and background removal [20].

    After preprocessing, feature computation is a significant step [21] used to compute features from image frames. The main interest is to compute the important attributes and eliminate the rest, since irrelevant features reduce system efficiency. Therefore, the main concern is the extraction of relevant features only. After feature computation, another concern is reducing the dimensionality of these features. Feature reduction improves system efficiency by working on relevant features only and eliminating the redundant ones [22]. Many techniques are implemented in the literature for feature reduction and selection, such as entropy-based selection [23], variance-based reduction [24], and a few more [25].

    Plenty of research has been carried out on HGR recently. Several techniques are applied for recognition, such as (i) HGR based on the human silhouette; (ii) HGR features based on traditional or classical methods; and (iii) methods based on deep learning. Techniques based on the human silhouette are very slow and space-consuming, and an incorrect silhouette sometimes yields incorrect and irrelevant attributes that affect system reliability. Feature computation through classical techniques relies on low-level attributes tied to a specific problem. Therefore, a fully automated system is needed for feature computation that gives high-level descriptors, and many such techniques are offered in the literature. Several problems still affect system reliability, such as various carrying conditions, clothing changes, variations in view angle, insufficient lighting, the speed of a person, and the shadow of the feet. These factors distort the human silhouette and thus lead to inaccurate features. To address these factors, the main contributions in the field of HGR are:

    a) Frames transformation based on HSV and selection of the best channel which gives the maximum features and information.

    b) The computation of deep features by utilizing the VGG-16 pre-trained model with the help of transfer learning.

    c) Selection of high-quality attributes with the help of a hybrid approach based on Principal Score and Kurtosis (PSbK).

    d) Merging the selected attributes and feeding them to the One-against-All Multi SVM (OAMSVM).

    Section 2 presents the related work of this study. The proposed work, which includes fine-tuned deep models, selection of important features, and recognition, is discussed in Section 3. Results and comparison are discussed in Section 4. The conclusion is given in Section 5.

    2 Related Work

    Several techniques have recently been used for HGR to recognize a person from the walking pattern. Castro et al. [26] deployed a method for HGR based on CNN features. In this technique, high-level descriptor learning is done using low-level features. To test their method, they used an HGR dataset called TUM-GAID and, during experimental analysis, reached an accuracy of 88.9%. Alotaibi et al. [27] introduced an HGR system based on CNN attributes. In this method, they tried to minimize the problem of occlusion, which degrades system efficiency. To handle the problem of small data, they carried out data augmentation; fine-tuning on the dataset was also carried out. The CASIA-B database was used to assess system performance. The 90° angle of the CASIA-B dataset was used, achieving accuracies of 98.3%, 83.87%, and 89.12% on the three variations nm, bg, and cl, respectively. Li et al. [28] deployed a new network called DEEPNET in which they tried to minimize the problem arising from view variations. For solving this problem, they adopted the Joint Bayesian approach. Normalization of the gait phase is done using Normalized Auto Correlation (NAC). After normalization, the gait attributes are computed. For assessment of the system, the OULP database is used, and an accuracy of 89.3% is attained. Arshad et al. [4] used a method for HGR to sort out the problem of different variations. For the computation of gait features, two CNN models, VGG26 and AlexNet, are used. The feature vector is computed using entropy and Kurtosis. After computation, both vectors are fused. Fuzzy Entropy Controlled Kurtosis (FEcK) is utilized for the selection of the best features. Experimental analysis achieved recognition rates of 99.8% on AVAMVG, 99.7% on CASIA-A, 93.3% on CASIA-B, and 92.2% on CASIA-C. To overcome the dilemma of angle variation, Deng et al. [29] deployed a new HGR method in which knowledge and deterministic learning are fused. The CASIA-B database is used for experimental analysis, and recognition rates of 88%, 87%, and 86% are attained on the three angles 18°, 36°, and 54°, respectively.

    Mehmood et al. [5] addressed the problem of variation by using a hybrid approach for feature selection. The gait attributes are computed from the image frames using DenseNet-201. Two layers, avg_pool and fc1000, are used for the computation of attributes. The parallel-order method is used to merge these features. For the selection of attributes, an algorithm based on Kurtosis and firefly is used. CASIA-B is used to assess the system's performance. Accuracies of 94.3% on the 18°, 93.8% on the 36°, and 94.7% on the 54° angle are attained, respectively. Rani et al. [30] introduced an ANN-based HGR system to identify a person from the way he walks. Image preprocessing is done by background subtraction. A morphology-based operation is used for tracking the image silhouette. A self-similarity-based technique is used for the assessment of the system. They evaluated the system on the CASIA dataset and observed better performance compared to current techniques.

    Zhang et al. [31] introduced a novel method to minimize the drawbacks of variation in clothes, angles, and carried items. LSTM and CNN are used to compute attributes from RGB image frames. After that, the attributes are fused. The system is assessed using CASIA-B, FVG, and USF, achieving 81.8%, 87.8%, and 99.5%, respectively. To conquer the problem of covariations, Yu et al. [32] instituted a new approach. A CNN is utilized to compute features from the images, and a stacked progressive autoencoder deals with the problem of variations. For feature reduction, PCA is utilized, and the final features are fed to the KNN algorithm. The approach is tested on SZU RGB-D and CASIA-B, and recognition rates of 63.90% with variations and 97% without variations are achieved. Marcin et al. [33] presented a new technique and analyzed how different types of shoes affect the walking style of people. A total of 2700 walking cycles obtained from 81 individuals were analyzed, and an accuracy of 99% was achieved on this dataset. Khan et al. [34] introduced an HGR approach that uses video sequences to compute attributes. In this technique, a codebook is generated; after its generation, Fisher-vector-based encoding is performed. CASIA-A and TUM GAID are used for assessment of the system, and 100% and 97.74% recognition rates were attained, respectively.

    3 Proposed Methodology

    A fully automated HGR system is proposed, based on very deep neural network features. The proposed method consists of four steps: preprocessing of image frames, feature extraction through the CNN model VGG-16, feature selection through a novel combined method, and finally recognition with the help of supervised learning. The complete architecture of the system is illustrated in Fig. 2.

    Figure 2: Proposed system architecture for HGR

    3.1 Frame Preprocessing

    Preprocessing plays an important role in CV and image processing to improve the quality of the given data [35]. Preprocessing includes resizing of images, background removal, noise removal, and changing the color space of the image, such as RGB to gray. In this work, preprocessing is carried out to prepare data for the neural network. Initially, image resizing is carried out. After that, the frame count is balanced across all classes using the minimum class count. Later, the HSV transformation is performed and the best channel is selected. Mathematically, the HSV transformation is specified as:

    The R, G, B values are divided by 255 to change the range from 0–255 to 0–1:
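    The conversion formulas themselves appear to have been lost during extraction; the standard RGB-to-HSV equations, written here with the paper's ω notation, are a reconstruction rather than the original typesetting:

```latex
\omega_v = \max(\omega_r,\omega_g,\omega_b), \qquad
\omega_s =
\begin{cases}
\dfrac{\omega_v-\min(\omega_r,\omega_g,\omega_b)}{\omega_v}, & \omega_v \neq 0\\[4pt]
0, & \omega_v = 0
\end{cases}
```

```latex
\omega_h =
\begin{cases}
60^\circ \cdot \dfrac{\omega_g-\omega_b}{\Delta} \bmod 360^\circ, & \omega_v=\omega_r\\[4pt]
60^\circ \cdot \left(2+\dfrac{\omega_b-\omega_r}{\Delta}\right), & \omega_v=\omega_g\\[4pt]
60^\circ \cdot \left(4+\dfrac{\omega_r-\omega_g}{\Delta}\right), & \omega_v=\omega_b
\end{cases}
\qquad \Delta=\omega_v-\min(\omega_r,\omega_g,\omega_b)
```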

    where ωh, ωs, and ωv symbolize the three channels of the HSV conversion, and ωr, ωg, and ωb denote the red, green, and blue channels of the original image frame. After that, the most informative channel, ωh, is selected, as presented in Fig. 3. This figure shows the first channel of HSV; the best channel is processed further.

    Figure 3: HSV color transformation and channel selection
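    The transformation above can be sketched in NumPy. This is an illustrative re-implementation, not the paper's code; the function name and the 2×2 test frame are our own:

```python
import numpy as np

def rgb_to_hsv(frame):
    """Convert an RGB frame (H, W, 3) with values 0-255 to HSV, channels in [0, 1]."""
    rgb = frame.astype(np.float64) / 255.0          # rescale 0-255 -> 0-1, as in the text
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    v = rgb.max(axis=-1)                            # value = max channel
    mn = rgb.min(axis=-1)
    delta = v - mn
    s = np.where(v > 0, delta / np.where(v > 0, v, 1), 0.0)  # saturation, 0 for black
    h = np.zeros_like(v)                            # hue in [0, 1) (degrees / 360)
    mask = delta > 0
    idx = mask & (v == r)
    h[idx] = ((g - b)[idx] / delta[idx]) % 6
    idx = mask & (v == g) & (v != r)
    h[idx] = (b - r)[idx] / delta[idx] + 2
    idx = mask & (v == b) & (v != r) & (v != g)
    h[idx] = (r - g)[idx] / delta[idx] + 4
    h /= 6.0
    return np.stack([h, s, v], axis=-1)

frame = np.zeros((2, 2, 3), dtype=np.uint8)
frame[0, 0] = (255, 0, 0)                # pure red   -> hue 0
frame[0, 1] = (0, 255, 0)                # pure green -> hue 1/3
best_channel = rgb_to_hsv(frame)[..., 0]  # keep the first (hue) channel, as in the paper
```

The hue channel is then used as the single-channel input for the segmentation step.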

    3.2 Deep Learning Feature Computing

    Feature computation is a very important part of machine learning and pattern recognition [36,37]. The main objective of this step is the extraction of important features from the objects presented in the image frame. After feature computation, the next step is the prediction of the object category [38]. Numerous types of features are available, such as geometry-, shape-, and texture-based features. By utilizing such features, authors try to achieve higher accuracy but fail on huge datasets. Deep learning is becoming important and is used by many researchers because it works efficiently on large as well as small databases. The Convolutional Neural Network (CNN) is a famous type of layer-based network used to extract efficient and relevant features of the object [14]. A CNN model is based on pooling, ReLU, convolution, softmax, and fully connected (FC) layers. Low-level feature extraction is carried out by the convolutional layers, and high-level information is obtained at the FC layers.

    In the proposed work, a pre-trained CNN model named VGG-16 is applied for feature computation. Features are computed via transfer learning from the fc6 and fc8 layers. The features of both layers are combined in one matrix, which proceeds to the next step. Each step involved in the VGG-16 model is described as follows:

    3.2.1 VGG-16

    VGG-16 is a famous CNN model used for efficient feature computation. The input image size for VGG-16 is 224×224×3, so the network accepts RGB images. The architecture of VGG-16 consists of the input layer, 5 max-pooling layers, five segments of convolutional layers containing 13 convolutional layers in total, and 3 Fully Connected (FC) layers. All convolutional layers use filters of size 3×3 with stride 1. The first two convolutional layers use 64 filters each and give an output of 224×224×64. After that, a pooling layer is used, giving an output of 112×112×64. Two more convolutional layers with 128 filters follow, giving an output of 112×112×128; the next pooling layer gives an output of 56×56×128. Then come three convolutional layers with 256 filters, followed by a pooling layer. After that, three convolutional layers with 512 filters are added, followed by a pooling layer, and then three more convolutional layers with 512 filters and a final pooling layer. The resulting 7×7×512 feature map is passed into the FC layers. There are three FC layers in total: the first two have 4096 channels each and the third has 1000. The ReLU activation function is used in all hidden layers. The architecture of VGG-16 is illustrated in Fig. 4.

    Figure 4: VGG-16 architecture
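    The shape arithmetic described above can be traced with a few lines of code. This sketch only computes tensor shapes (same-size 3×3 convolutions, 2×2 pooling); the block layout follows the standard VGG-16 configuration:

```python
# Blocks of VGG-16: (number of 3x3 conv layers, output channels);
# each block ends in a 2x2 max-pool that halves the spatial dimensions.
VGG16_BLOCKS = [(2, 64), (2, 128), (3, 256), (3, 512), (3, 512)]

def vgg16_shapes(h=224, w=224):
    """Return the feature-map shape after each conv block and each pool."""
    shapes = []
    for n_convs, channels in VGG16_BLOCKS:
        shapes.append((h, w, channels))   # 3x3 convs, stride 1, padding 1: size unchanged
        h, w = h // 2, w // 2             # 2x2 max-pool halves height and width
        shapes.append((h, w, channels))
    return shapes

shapes = vgg16_shapes()
# shapes[0] is (224, 224, 64); shapes[-1] is (7, 7, 512), which feeds the FC layers.
```

The 13 convolutional layers (2+2+3+3+3) plus the three FC layers give the 16 weight layers that name the network.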

    In this study, deep feature extraction is carried out using the pre-trained VGG-16 CNN model. The features are computed at two layers, fc6 and fc8, for the best extraction of features. After the activation function, vectors of size N×4096 on fc6 and N×1000 on fc8 are obtained, where N represents the number of images. After extraction, the features of both layers are merged linearly.

    3.2.2 Transfer Learning-Based Feature Extraction

    Feature extraction is performed by transfer learning (TL) [39] on the pre-trained VGG-16. For this purpose, the VGG-16 network was trained on various angles of the HGR database CASIA-B. Activations are taken from the fc6 and fc8 layers of the network, and the features of both layers are merged in parallel order. The input image size at the input layer is 224×224. Feature extraction through TL is therefore performed on fc6 and fc8, producing output feature maps of size N×4096 on fc6 and N×1000 on fc8.

    Let v1 denote the fc6 feature vector of dimension N×4096 and v2 denote the fc8 feature vector of dimension N×1000, respectively. Let v3 denote the fused feature matrix of dimension N×T1, where T1 is the length of the fused matrix. The length of the fused matrix depends on the maximum-dimensional vector, v1 or v2.

    The maximum dimensional vector is first computed before the fusion of both vectors as shown below.

    The maximum-length vector is specified through this expression, and a blank array is created to make both vectors' lengths equal. This is done by a simple subtraction operation to find the difference in the lengths of the vectors. The mean value of the maximum vector T1 is computed and placed in the lower-dimensional vector instead of using zero padding. The maximum index of both vectors is identified by defining a threshold. The threshold is mathematically defined as follows:

    In the above equation,the indexes of both vectors are compared and after that concatenation is performed as follows:
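    The fusion equations are missing from this copy, so the sketch below is one plausible reading of the description: the shorter vector is padded with the mean of the longer one (instead of zeros), the padded vectors are compared index by index, and the larger entry is kept. The function name and the element-wise-maximum rule are assumptions, not the paper's exact definition:

```python
import numpy as np

def fuse_features(v1, v2):
    """Parallel fusion sketch: mean-pad the shorter feature matrix to the
    longer one's width T1, then keep the element-wise maximum."""
    long_v, short_v = (v1, v2) if v1.shape[1] >= v2.shape[1] else (v2, v1)
    t1 = long_v.shape[1]                       # T1: length of the fused matrix
    pad_width = t1 - short_v.shape[1]          # difference found by subtraction
    fill = long_v.mean()                       # mean of the maximum-length vector
    padded = np.hstack([short_v, np.full((short_v.shape[0], pad_width), fill)])
    return np.maximum(long_v, padded)          # compare indexes, keep the larger

rng = np.random.default_rng(0)
v1 = rng.random((4, 4096))    # fc6 features (N x 4096)
v2 = rng.random((4, 1000))    # fc8 features (N x 1000)
v3 = fuse_features(v1, v2)    # fused matrix, N x 4096
```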

    3.3 Feature Selection

    Feature selection is carried out by applying a heuristic approach based on Principal Score and Kurtosis to select only the important features and eliminate the less important ones.

    Kurtosis: After feature extraction through the VGG-16 deep network, a heuristic kurtosis-based approach is applied to the computed feature vector FV. The main goal of this approach is to select the top features and eliminate the rest. The kurtosis is formulated as:
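    The formula itself did not survive extraction; the sketch below uses the standard sample kurtosis E[(x − μ)⁴]/σ⁴ and keeps features whose kurtosis exceeds the mean score. The cut-off is our assumption, since the paper's exact threshold is not given here:

```python
import numpy as np

def kurtosis(x):
    """Sample kurtosis: E[(x - mu)^4] / sigma^4 (a normal distribution gives ~3)."""
    mu, sigma = x.mean(), x.std()
    return np.mean((x - mu) ** 4) / sigma ** 4

def select_by_kurtosis(fv):
    """Keep features (columns of fv) with above-average kurtosis; the
    mean-score threshold is an illustrative assumption."""
    scores = np.array([kurtosis(fv[:, j]) for j in range(fv.shape[1])])
    keep = scores > scores.mean()
    return fv[:, keep], keep

rng = np.random.default_rng(0)
fv = rng.normal(size=(100, 8))
fv[:, 0] = rng.standard_t(df=3, size=100)   # heavy-tailed column -> high kurtosis
selected, mask = select_by_kurtosis(fv)
```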

    Principal Component Analysis: Principal component analysis (PCA) is a statistical technique based on linear transformation. PCA is very useful for pattern recognition and data analysis and is widely used in image processing and computer vision. It is used for data reduction and compression, and also for decorrelation. Numerous algorithms, both neural-network-based and multiple-variation-based, are utilized as PCA on various datasets. PCA can be defined as the transformation of n vectors of length N, arranged as the n-dimensional vector Y=[y1,y2,...,yn]T, into a vector x.

    A simple formula can be generated from this concept, but it is important to remember that a single input vector y has N values. The vector my can be described as the mean values of the input variables, and this can be demonstrated by a relation.

    The matrix B is based on the covariance matrix Cy: its rows are formed from the eigenvectors v of Cy, ordered by descending eigenvalue. The matrix Cy can be evaluated as follows:

    Since the vector y is n-dimensional, the size of the matrix will be n×n, and the variances of y lie on the main diagonal as the elements Cy(i,i).

    For Cy(i,j), the covariance is based on the other points yi, yj and can be determined as follows.

    The rows of B are orthonormal, so the inversion of PCA is possible as follows:

    Due to these properties, PCA can be used in image processing and computer vision. In this process, a feature matrix of dimension N×R2 is obtained and denoted by y. Finally, by using Eq. (6), a final matrix of dimension N×R3 is obtained, which is fed to OAMSVM [40] for final recognition.
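    The PCA steps described above (mean vector, covariance matrix Cy, eigenvector matrix B with descending eigenvalues, and the orthonormal inversion) can be sketched directly in NumPy. Variable names mirror the text's symbols; this is an illustrative implementation, not the paper's code:

```python
import numpy as np

def pca(y, k):
    """Project y onto the top-k principal components and reconstruct.
    Rows of B are the eigenvectors of C_y in descending eigenvalue order."""
    m_y = y.mean(axis=0)                     # mean vector m_y
    centered = y - m_y
    c_y = np.cov(centered, rowvar=False)     # covariance matrix C_y (n x n)
    eigvals, eigvecs = np.linalg.eigh(c_y)   # symmetric matrix -> real eigendecomposition
    order = np.argsort(eigvals)[::-1]        # descending eigenvalue order
    b = eigvecs[:, order[:k]].T              # matrix B: top-k eigenvectors as rows
    x = centered @ b.T                       # reduced features (N x k)
    reconstructed = x @ b + m_y              # inversion relies on B's orthonormal rows
    return x, reconstructed

rng = np.random.default_rng(1)
y = rng.normal(size=(50, 5))
x, y_hat = pca(y, k=5)   # with k = n the orthonormal rows make reconstruction exact
```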

    4 Results and Analysis

    In this section, the proposed system validation is illustrated. The system is tested on the publicly available CASIA-B [41] database, which is used for multi-view HGR. This database covers 124 subjects, of which 93 are male and 31 are female. The database is captured at view increments of 18° and includes three different walking styles: a person with a normal walk (nm), a person with a bag (bg), and a person in a coat (cl). Experiments are carried out on six different angles of CASIA-B: 0°, 18°, 36°, 54°, 72°, and 90°. All three gaits nm, bg, and cl are included in each angle. A few sample frames of the 0°, 18°, 36°, 54°, 72°, and 90° angles are illustrated in Fig. 5. Six classifiers are used for the validation of the model, including Linear SVM (L-SVM), Quadratic SVM (Q-SVM), Cubic SVM (C-SVM), Fine Gaussian SVM (F-SVM), Medium Gaussian SVM (M-SVM), and Cubic Gaussian SVM (CG-SVM), all used within the One-vs-All SVM (OAMSVM) approach. Statistical measures such as accuracy, False Negative Rate (FNR), precision, Area Under the Curve (AUC), and time are used as performance evaluation measures.

    Figure 5: Sample images of CASIA B
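    The evaluation measures listed above can all be derived from a per-angle confusion matrix. A minimal sketch, using macro-averaging over classes and a toy 3-class matrix (not the paper's actual results):

```python
import numpy as np

def metrics_from_confusion(cm):
    """Macro-averaged accuracy, recall, precision, and FNR from a confusion
    matrix (rows = true class, columns = predicted class)."""
    cm = np.asarray(cm, dtype=float)
    tp = np.diag(cm)
    accuracy = tp.sum() / cm.sum()
    recall = np.mean(tp / cm.sum(axis=1))      # per-class recall, averaged
    precision = np.mean(tp / cm.sum(axis=0))   # per-class precision, averaged
    fnr = 1.0 - recall                         # macro false negative rate
    return accuracy, recall, precision, fnr

# Toy 3-class confusion matrix for illustration only:
cm = [[9, 1, 0],
      [0, 8, 2],
      [1, 0, 9]]
acc, rec, prec, fnr = metrics_from_confusion(cm)   # acc = 26/30
```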

    4.1 Implementation Details

    The standard 70:30 ratio is used for the evaluation of the system: 70% of the image frames are utilized for training and 30% for testing. 10-fold cross-validation is used, and the learning rate was 0.0001. Transfer learning is used to compute the training and testing features. After feature computation, these features, along with the labels, are fed into M-SVM (linear method) for final classification. The trained model is then used to predict the test features. All experiments are done using MATLAB 2018b, running on a Core i7 machine with 16 GB of RAM and a 4 GB NVIDIA GeForce 940MX GPU. Moreover, MatConvNet, a toolbox for deep learning, is used to compute deep features.
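    The 70:30 split and the one-against-all decision rule can be sketched as follows. A class-mean template scorer stands in for the per-class SVM here (an assumption for brevity; the paper uses OAMSVM), so the sketch illustrates only the split and the highest-score-wins rule:

```python
import numpy as np

def train_one_vs_all(x, y, n_classes):
    # One scorer per class; a class-mean template is an illustrative
    # stand-in for each binary "class c vs. rest" SVM.
    return np.stack([x[y == c].mean(axis=0) for c in range(n_classes)])

def predict(templates, x):
    # One-against-all decision rule: every class scorer rates the sample;
    # the class with the highest score wins.
    return (x @ templates.T).argmax(axis=1)

rng = np.random.default_rng(2)
n = 90
y = np.repeat(np.arange(3), n // 3)                     # 3 subjects, 30 frames each
x = 5.0 * np.eye(3)[y] + rng.normal(size=(n, 3)) * 0.1  # well-separated toy features

idx = rng.permutation(n)
split = int(0.7 * n)                                    # 70:30 train/test split
train, test = idx[:split], idx[split:]
templates = train_one_vs_all(x[train], y[train], 3)
accuracy = (predict(templates, x[test]) == y[test]).mean()
```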

    4.2 CASIA B Dataset

    CASIA-B is a publicly available database used for human gait recognition. It was captured in an indoor environment. There are several variations in the database, such as various angles, carrying conditions, and clothing conditions. The dataset covers 124 subjects captured from 11 different angles: 0°, 18°, 36°, 54°, 72°, 90°, 108°, 126°, 144°, 162°, and 180°. The database was captured at 352×240 resolution at a rate of 25 fps. Six angles (0°, 18°, 36°, 54°, 72°, and 90°) are utilized in this work. The results are computed separately and discussed below.

    4.2.1 Exp 1: 0° Angle

    The results on the 0° angle are given in this section and illustrated in Tab. 1. The best accuracy is 95.80%, achieved by the L-SVM classifier. The accuracies of the remaining classifiers Q-SVM, C-SVM, F-SVM, M-SVM, and CG-SVM are 95.0%, 94.90%, 86.30%, 92.90%, and 86.20%, respectively. The best results of the other evaluation parameters Recall, Precision, and AUC are 95.67%, 96.0%, and 99.67%, also achieved by L-SVM. The Recall of the remaining classifiers Q-SVM, C-SVM, F-SVM, M-SVM, and CG-SVM is 95.0%, 95.0%, 86.33%, 93.33%, and 86.0%, respectively. The Precision of the remaining classifiers Q-SVM, C-SVM, F-SVM, M-SVM, and CG-SVM is 95.33%, 95.0%, 86.0%, 93.0%, and 87.33%, respectively. The AUC of the remaining classifiers Q-SVM, C-SVM, F-SVM, M-SVM, and CG-SVM is 99.67%, 99.67%, 97.0%, 99.33%, and 97.0%, respectively. The computation time of the system is also calculated for all classifiers. The minimum time is 30.584 s, in the case of C-SVM. The computational times of the remaining classifiers L-SVM, Q-SVM, F-SVM, M-SVM, and CG-SVM are 31.984, 42.692, 142.93, 49.029, and 38.839 s, respectively.

    Table 1: Results on 0° angle using proposed method

    4.2.2 Exp 2: 18° Angle

    The results on the 18° angle are given in this section and illustrated in Tab. 2. The best accuracy is 96.0%, achieved by the L-SVM classifier. The accuracies of the remaining classifiers Q-SVM, C-SVM, F-SVM, M-SVM, and CG-SVM are 95.40%, 94.80%, 84.10%, 93.10%, and 86.50%, respectively. The best results of the other evaluation parameters Recall, Precision, and AUC are 96.0%, 96.0%, and 99.67%, also achieved by L-SVM. The Recall of the remaining classifiers Q-SVM, C-SVM, F-SVM, M-SVM, and CG-SVM is 95.67%, 94.67%, 84.33%, 93.0%, and 86.67%, respectively. The Precision of the remaining classifiers Q-SVM, C-SVM, F-SVM, M-SVM, and CG-SVM is 95.33%, 94.67%, 84.33%, 93.0%, and 87.67%, respectively. The AUC of the remaining classifiers Q-SVM, C-SVM, F-SVM, M-SVM, and CG-SVM is 99.67%, 99.67%, 95.67%, 99.0%, and 97.0%, respectively. The computation time of the system is also calculated for all classifiers. The minimum time is 37.656 s, in the case of L-SVM. The computational times of the remaining classifiers Q-SVM, C-SVM, F-SVM, M-SVM, and CG-SVM are 46.457, 42.518, 165.39, 49.378, and 80.243 s, respectively.

    Table 2: Results on 18° angle using proposed method

    4.2.3 Exp 3: 36° Angle

    The results on the 36° angle are given in this section and illustrated in Tab. 3. The best accuracy is 95.90%, achieved by the L-SVM classifier. The accuracies of the remaining classifiers Q-SVM, C-SVM, F-SVM, M-SVM, and CG-SVM are 95.50%, 94.70%, 82.70%, 91.70%, and 86.70%, respectively. The best results of the other evaluation parameters Recall, Precision, and AUC are 96.67%, 96.0%, and 99.67%, also achieved by L-SVM. The Recall of the remaining classifiers Q-SVM, C-SVM, F-SVM, M-SVM, and CG-SVM is 95.67%, 94.67%, 82.67%, 91.67%, and 84.67%, respectively. The Precision of the remaining classifiers Q-SVM, C-SVM, F-SVM, M-SVM, and CG-SVM is 95.33%, 95.0%, 82.67%, 92.0%, and 86.67%, respectively. The AUC of the remaining classifiers Q-SVM, C-SVM, F-SVM, M-SVM, and CG-SVM is 99.67%, 99.67%, 94.67%, 99.0%, and 96.67%, respectively. The computation time of the system is also calculated for all classifiers. The minimum time is 37.782 s, in the case of L-SVM. The computational times of the remaining classifiers Q-SVM, C-SVM, F-SVM, M-SVM, and CG-SVM are 43.538, 42.54, 154.71, 51.157, and 79.455 s, respectively.

    Table 3: Results on 36° angle

    4.2.4 Exp 4: 54° Angle

    The results on the 54° angle are given in this section and illustrated in Tab. 4. The best accuracy is 96.20%, achieved by the L-SVM classifier. The accuracies of the remaining classifiers Q-SVM, C-SVM, F-SVM, M-SVM, and CG-SVM are 95.90%, 95.50%, 83.10%, 94.60%, and 89.20%, respectively. The best results of the other evaluation parameters Recall, Precision, and AUC are 96.33%, 96.33%, and 100%, also achieved by L-SVM. The Recall of the remaining classifiers Q-SVM, C-SVM, F-SVM, M-SVM, and CG-SVM is 96.0%, 95.33%, 83.33%, 93.67%, and 89.33%, respectively. The Precision of the remaining classifiers Q-SVM, C-SVM, F-SVM, M-SVM, and CG-SVM is 96.0%, 95.33%, 83.67%, 94.33%, and 89.67%, respectively. The AUC of the remaining classifiers Q-SVM, C-SVM, F-SVM, M-SVM, and CG-SVM is 99.67%, 99.67%, 96.0%, 99.67%, and 98.0%, respectively. The computation time of the system is also calculated for all classifiers. The minimum time is 37.50 s, in the case of L-SVM. The computational times of the remaining classifiers Q-SVM, C-SVM, F-SVM, M-SVM, and CG-SVM are 38.571, 43.077, 203.98, 53.16, and 73.748 s, respectively.

    Table 4: Results on 54° angle using proposed method

    4.2.5 Exp 5: 72° Angle

    The results on the 72° angle are given in this section and illustrated in Tab. 5. The best accuracy is 95.60%, achieved by the L-SVM classifier. The accuracies of the remaining classifiers Q-SVM, C-SVM, F-SVM, M-SVM, and CG-SVM are 94.80%, 93.80%, 80.70%, 91.20%, and 84.40%, respectively. The best results of the other evaluation parameters Recall, Precision, and AUC are 95.67%, 95.33%, and 100%, also achieved by L-SVM. The Recall of the remaining classifiers Q-SVM, C-SVM, F-SVM, M-SVM, and CG-SVM is 95.33%, 95.67%, 80.67%, 91.33%, and 84.67%, respectively. The Precision of the remaining classifiers Q-SVM, C-SVM, F-SVM, M-SVM, and CG-SVM is 95.0%, 94.0%, 81.0%, 92.0%, and 86.33%, respectively. The AUC of the remaining classifiers Q-SVM, C-SVM, F-SVM, M-SVM, and CG-SVM is 99.67%, 99.33%, 94.33%, 99.0%, and 97.0%, respectively. The computation time of the system is also calculated for all classifiers. The minimum time is 42.984 s, in the case of L-SVM. The computational times of the remaining classifiers Q-SVM, C-SVM, F-SVM, M-SVM, and CG-SVM are 51.448, 47.442, 175.94, 51.312, and 91.519 s, respectively.

    Table 5: Results on 72° angle

    4.2.6 Exp 6: 90° Angle

    The results on the 90° angle are given in this section and illustrated in Tab. 6. The best accuracy is 95.50%, achieved by the L-SVM classifier. The accuracies of the remaining classifiers Q-SVM, C-SVM, F-SVM, M-SVM, and CG-SVM are 94.90%, 94.0%, 81.30%, 92.0%, and 84.80%, respectively. The best results of the other evaluation parameters Recall, Precision, and AUC are 95.33%, 95.33%, and 100%, also achieved by L-SVM. The Recall of the remaining classifiers Q-SVM, C-SVM, F-SVM, M-SVM, and CG-SVM is 95.0%, 94.33%, 81.33%, 92.0%, and 84.67%, respectively. The Precision of the remaining classifiers Q-SVM, C-SVM, F-SVM, M-SVM, and CG-SVM is 94.67%, 95.0%, 81.67%, 92.33%, and 86.0%, respectively. The AUC of the remaining classifiers Q-SVM, C-SVM, F-SVM, M-SVM, and CG-SVM is 99.67%, 99.0%, 97.67%, 99.0%, and 96.33%, respectively. The computation time of the system is also calculated for all classifiers. The minimum time is 41.906 s, in the case of L-SVM. The computational times of the remaining classifiers Q-SVM, C-SVM, F-SVM, M-SVM, and CG-SVM are 51.279, 46.226, 173.12, 55.82, and 88.502 s, respectively.

    Table 6: Results on 90° angle using proposed method

    4.3 Discussion and Comparison

    A detailed discussion is provided in this section.As demonstrated in Fig.2,the introduced system consists of a few steps: computation of deep features using the pre-trained VGG-16 model,feature selection with the help of PCA and Kurtosis,feature fusion,and classification.After the powerful features are selected and combined,the fused vector is fed to a Multi-SVM classifier using the one-against-all strategy.The proposed system is assessed on six different angles of the CASIA B dataset: 00°,18°,36°,54°,72°,and 90°.The results for each angle are calculated separately and demonstrated in Tabs.1-6,respectively.The computational time for each angle is also reported.An extensive comparison with recent HGR methodologies has been carried out to assess the proposed methodology,as shown in Tab.7.

    Mehmood et al.[5] introduced a hybrid feature selection HGR method based on a deep CNN.They used the CASIA B database for the assessment of their technique and attained recognition rates of 94.3%,93.8%,and 94.7% on the 18°,36°,and 54°angles,respectively.Ben et al.[42] introduced an HGR technique called CBDP to address the problem of view variations;on the CASIA B dataset it achieved accuracies of 81.77% on 00°,78.06% on 18°,78.6% on 36°,80.16% on 54°,79.06% on 72°,and 77.96% on 90°.Anusha et al.[43] advised a novel HGR method based on binary descriptors combined with feature dimensionality reduction;on CASIA B it attained accuracies of 95.20%,94.60%,95.40%,90.40%,and 93.00% on the 00°,18°,36°,54°,and 72°angles,respectively.Arshad et al.[8] presented an HGR methodology based on the binomial distribution and achieved a recognition rate of 87.70% on CASIA B at the 90°angle.Zhang et al.[31] suggested an encoder-based architecture for HGR to address the problem of view variations by utilizing LSTM and CNN-based networks;assessed on the CASIA B database,it attained a recognition rate of 91.00% on 54°.In comparison,the proposed HGR method obtains recognition rates of 95.80%,96.0%,95.90%,96.20%,95.60%,and 95.50% on the 00°,18°,36°,54°,72°,and 90°angles,respectively.The strength of this work is the selection of the best features.Its limitation is the small number of predictors available for the final classification.
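    The selection-and-classification stages described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the random matrix stands in for VGG-16 deep features, and the PCA component count and the "keep features whose kurtosis exceeds the mean" rule are assumptions, since the paper does not publish its exact PSbK thresholds.

```python
import numpy as np
from scipy.stats import kurtosis
from sklearn.decomposition import PCA
from sklearn.multiclass import OneVsRestClassifier
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Hypothetical stand-in for VGG-16 deep features: 120 gait samples x 512 dims
X = rng.normal(size=(120, 512))
y = rng.integers(0, 4, size=120)  # 4 hypothetical subject identities

# Step 1: PCA to obtain principal components (component count is an assumption)
X_pca = PCA(n_components=64, random_state=0).fit_transform(X)

# Step 2: kurtosis-based selection -- keep components whose kurtosis exceeds
# the mean kurtosis (the exact selection rule is an assumption)
k = kurtosis(X_pca, axis=0)
X_sel = X_pca[:, k > k.mean()]

# Step 3: one-against-all multi-class SVM on the selected features
clf = OneVsRestClassifier(SVC(kernel="linear")).fit(X_sel, y)
```

    In practice, the selected features from each stream would be concatenated (fused) into one matrix before the SVM step, as the proposed method does.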

    Table 7: Comparison with recent state-of-the-art techniques

    5 Conclusion

    HGR is a biometric-based approach in which an individual is recognized from the walking pattern.In this work,a new method is introduced for HGR to address various factors such as view variations,clothes variations,and different carrying conditions,e.g.,a person wearing a coat,a person walking normally,and a person carrying a bag.The feature computation has been carried out using the pre-trained VGG-16 network instead of classical feature methods such as color-based,shape-based,and geometric features.A PCA and Kurtosis based method is used for the reduction of the features.Six different angles of CASIA B are utilized to assess the performance of the system,and an average recognition rate of more than 95% is attained,which is better than the recent techniques.From the results demonstrated in this work,it can easily be verified that CNN-based features give better performance in terms of discriminative attributes and accuracy,and that deep features also scale well to large databases.Overall,the introduced approach performs well across the various angles of CASIA B.However,the introduced method is less efficient in the case of small data.In the future,the same approach can be applied to further angles of CASIA B and to other HGR databases.
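    The average recognition rate stated above follows directly from the per-angle results reported in the comparison; a minimal arithmetic check:

```python
# Per-angle recognition rates (%) of the proposed method on CASIA B
acc = [95.80, 96.00, 95.90, 96.20, 95.60, 95.50]
avg = sum(acc) / len(acc)
print(round(avg, 2))  # 95.83
```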

    Funding Statement: This study was supported by the grants of the Korea Health Technology R &D Project through the Korea Health Industry Development Institute (KHIDI),funded by the Ministry of Health &Welfare (HI18C1216) and the Soonchunhyang University Research Fund.

    Conflicts of Interest: The authors declare that they have no conflicts of interest to report regarding the present study.
