
    Attention-Aware Network with Latent Semantic Analysis for Clothing Invariant Gait Recognition

Hefei Ling, Jia Wu, Ping Li and Jialie Shen

    Computers, Materials & Continua, 2019, Issue 9 (2019-11-25)

Abstract: Gait recognition is a complicated task due to the existence of co-factors like carrying conditions, clothing, viewpoints, and surfaces, which change the appearance of gait to a greater or lesser extent. Among those co-factors, clothing variation is the most challenging one in the area. Conventional methods proposed for clothing-invariant gait recognition show that the body parts and the underlying relationships between them are important for gait recognition. Fortunately, the attention mechanism shows dramatic performance in highlighting discriminative regions. Meanwhile, latent semantic analysis is known for its ability to capture latent semantic variables that represent the underlying attributes of, and relationships within, the raw input. Thus, we propose a new CNN-based method which leverages the advantages of latent semantic analysis and the attention mechanism. Based on the discriminative features extracted by the attention and latent semantic analysis modules respectively, a multi-modal fusion method is proposed to fuse those features for its high fault tolerance at the decision level. Experiments on the most challenging clothing-variation dataset, the OU-ISIR Treadmill dataset B, show that our method outperforms other state-of-the-art gait approaches.

Keywords: Gait recognition, latent semantic analysis, attention mechanism, attention-aware neural network, clothing-invariant, feature fusion.

    1 Introduction

In recent years, developing intelligent algorithms for modeling biometric traits has played an increasingly important role in human identification. Most static traits, such as fingerprint and iris, have been used in reality, but they are limited by distance and by the interaction required with subjects [Bouchrika, Carter and Nixon (2016)]. Compared with these biometric features, gait is a coarse motion feature, so gait recognition is robust to low resolution. It can be captured from long-distance scenarios without the cooperation of subjects. At the same time, the number of cameras installed in public places is increasing explosively, which makes gait recognition practical for crime surveillance and prevention.

However, there are still many challenges in applying gait recognition in real life. Robust and discriminative features are important for the task of human identification because of the existence of covariates (e.g., carrying condition, camera viewpoint, clothing, variation of walking speed, walking surface and so on). In most appearance-based gait recognition methods [Wu, Huang, Wang et al. (2016)], the variation of clothing and carrying condition affects the performance drastically. These co-factors pose the same problem for clothing-invariant gait recognition: they change the appearance of subjects greatly. So it has become a hotspot for researchers.

In order to tackle the variation of appearance caused by clothing, a wide range of methods have been proposed in recent years (for a recent review see [Lee, Belkhatir and Sanei (2014)]). Most conventional approaches use hand-crafted features to represent clothing-invariant human gait. For example, Shariful et al. [Shariful, Islam, Akter et al. (2014)] proposed a method called random window subspace (RWSM), which splits the raw input into small window chunks to obtain the gait segmentation and the contribution of each body part for clothing-invariant gait recognition. Guan et al. [Guan, Li and Hu (2012)] proposed a random subspace method (RSM) based on computing a full hypothesis space; the method randomly chooses subspaces for classification. Hossain et al. [Hossain, Makihara, Wang et al. (2010)] proposed a part-based gait identification in light of substantial clothing variations, which exploits the discrimination capability of each part as a matching weight and controls the weights adaptively based on the distribution of distances between the probe and all the galleries. Rokanujjaman et al. [Rokanujjaman, Islam, Hossain et al. (2015)] proposed an effective part-definition approach based on the contribution of each row when merged orderly from bottom to top. It shows that some rows have positive effects and some rows have negative effects on gait recognition. Based on this positive and negative bias, they defined three most effective body parts and two redundant body parts; discarding the two redundant parts and considering only the three effective body parts improves the performance of gait recognition effectively. In fact, the pipeline of most conventional methods for clothing-invariant gait recognition is to first divide the body into components and then learn the weights of the features from the different components. The performance of these methods is unsatisfactory because of the inevitable errors in extracting local features with traditional techniques. Still, they show the importance of local information and the relationships among the parts.

Besides those conventional approaches, deep learning approaches [Yeoh, Aguirre and Tanaka (2017)] automatically learn clothing-invariant gait features directly from raw data. Convolutional neural networks make strong and mostly correct assumptions about the nature of images (namely, stationarity of statistics and locality of pixel dependencies), so they give great performance in object recognition and are applied in many fields; Zhou et al. [Zhou, Liang, Li et al. (2018)], for example, use deep learning for road traffic sign recognition. It is obvious that CNN-based approaches outperform conventional methods in many aspects, since they capture features from the raw input more easily. At the same time, the aforementioned conventional methods show that latent attributes and local features from the limbs are important in clothing-invariant gait recognition. To combine the strengths of CNN-based methods with the insights of conventional methods, a more effective method based on convolutional neural networks is urgently needed.

Attention networks [Zhao, Wu, Feng et al. (2017)] and latent semantic features [Li and Guo (2014)] play important roles in the field of computer vision. An attention network learns to pay more attention to important local parts of images, and latent semantic analysis (LSA) is known for its ability to capture latent semantic features. Many recent studies show more satisfying results than previous classification networks [Krizhevsky, Sutskever and Hinton (2012)] by applying the attention mechanism and LSA. They perform well in a variety of applications such as scene classification [Li and Guo (2014)], natural language processing [Fei, Cai-Hong, Wang et al. (2015)] and so on.

Inspired by the excellent performance of the attention mechanism and latent semantic analysis, we employ latent semantic features to help analyze the contribution of different parts of images and to capture the latent relationships between features and classification results, while an attention-aware network captures more discriminative features that highlight the important regions of subjects. In this paper, we combine the advantages of the attention mechanism and LSA, and design a new CNN-based method to address the problem of clothing-invariant gait recognition.

We summarize the contributions of our work as follows:

Firstly, we propose a specific CNN-based method for clothing-invariant gait recognition. The method automatically learns to combine features extracted from the low-level input with latent semantic features from middle-level features, which yields a good representation for clothing-invariant gait recognition.

Secondly, we evaluate our method on the most challenging clothing-variation dataset, the OU-ISIR Treadmill B dataset, which includes different clothing conditions, and it achieves better performance than other state-of-the-art methods.

The remainder of the paper is organized as follows: related work on the attention mechanism, latent semantic analysis and gait recognition is introduced in Section 2. Section 3 demonstrates how CNNs, latent semantic analysis and attention are combined and how they work. Experimental results are shown in Section 4. Finally, we give a conclusion in Section 5.

    2 Related work

Approaches to gait recognition can be classified into two categories: model-based methods [Shariful, Islam, Akter et al. (2014); Guan, Li and Hu (2012); Shen, Pang, Tao et al. (2010)] and model-free methods [Wu, Huang, Wang et al. (2016)]. Model-based methods are usually conventional methods built from static features of the shape of the human body and from components that reflect the dynamic features of a gait cycle; they focus on modeling the structure of the human body. Model-free methods extract gait features from the raw input without considering the structure of subjects, focusing on the shape of the silhouette rather than fitting it to a chosen model. Our method combines the structure of the human body with a model-free method, so it can remedy the sensitivity of model-free approaches to clothing variation through the attention mechanism and latent semantic analysis.

The attention mechanism [Wang, Jiang, Qian et al. (2017)] is designed to highlight discriminative features for various kinds of tasks, including image classification [Cao, Liu, Yang et al. (2016)], semantic segmentation [Chen, Yi, Jiang et al. (2016)], image question answering [Yang, He, Gao et al. (2016)], image captioning [Mnih, Heess, Graves et al. (2014)] and so on. The attention mechanism is effective in understanding images, since it adaptively focuses on related regions of the images when the deep networks are trained with spatially-related labels, capturing the underlying relations of the labels and providing spatial regularization for the results. To some extent, the attention mechanism is similar to the conventional methods for clothing-invariant gait recognition, but it highlights the salient features automatically. Beyond the attention mechanism, there is another effective way to extract the underlying attributes of subjects: LSA learns latent features for gait recognition, which are important and complement the spatial features from the attention-aware network.

LSA is a topic-model technique from natural language processing for improving information retrieval; it was first introduced by Deerwester et al. in 1988 [Deerwester (1988)] and further improved in 1990 [Deerwester (2010)]. Recently, the idea of latent semantic representation learning has been used in the computer vision community. Zhiwu Lu proposed a novel latent semantic learning method for extracting high-level latent semantics from a large vocabulary of abundant mid-level features [Lu and Peng (2013)] for human action recognition. Bergamo et al. [Bergamo, Torresani and Fitzgibbon (2011)] applied a compact code learning method for object categorization, which uses a set of latent binary indicator variables as the intermediate representation of images. In the fields of image retrieval and object detection, latent semantic learning can also be used to extract high-level features. It is evident that latent semantic analysis extracts latent features not given in advance, and that combining the features from an improved CNN-based model with the attention mechanism and latent semantic analysis can improve the performance of our task: clothing-invariant gait recognition.

    3 Methodology

We propose a convolutional neural network for clothing-invariant gait recognition, which utilizes an attention model for adaptive weighting of different parts and latent semantic analysis for learning latent semantic features. The framework of our latent-attention compositional network (LACN) is illustrated in Fig. 1. The input data of our method is the gait energy image (GEI) [Man and Bhanu (2005)], the average silhouette over one walking cycle; the GEI is the most common input for both traditional methods and CNN-based methods. Samples and corresponding GEIs for different clothing combinations are illustrated in Fig. 2. LACN consists of two main components: one combines the attention mechanism with latent semantic analysis for multi-level feature extraction, and the other is multi-modal fusion, which fuses the features from the different feature-extraction modules.
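As background, the GEI computation can be sketched in a few lines; the toy silhouette cycle and array shapes below are illustrative, not taken from the dataset:

```python
import numpy as np

def gait_energy_image(silhouettes):
    """Compute a Gait Energy Image (GEI): the pixel-wise average of the
    aligned binary silhouettes over one walking cycle.

    silhouettes: array of shape (T, H, W) with values in {0, 1}.
    Returns an (H, W) array with values in [0, 1].
    """
    sils = np.asarray(silhouettes, dtype=np.float64)
    return sils.mean(axis=0)

# Toy cycle of 4 tiny "silhouettes": a pixel present in every frame
# averages to 1.0; one present in half of the frames averages to 0.5.
cycle = np.zeros((4, 3, 3))
cycle[:, 1, 1] = 1          # torso pixel, on in all frames
cycle[:2, 2, 0] = 1         # leg pixel, on in half the frames
gei = gait_energy_image(cycle)
print(gei[1, 1], gei[2, 0])  # 1.0 0.5
```

Pixels belonging to the static body core keep high values, while swinging limbs leave intermediate intensities that encode motion.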

Figure 1: The pipeline of our network. The base network is the same as the CNN-based method [Yeoh, Aguirre and Tanaka (2017)], composed of three convolutional layers with kernel sizes 7×7, 5×5 and 3×3 respectively. After capturing the feature maps, the attention module learns a soft mask and produces new features from the base network. In the latent semantic module, we divide the features from the base network into a fixed number of components, obtain latent variables for the corresponding components, and then calculate their relationship with the final gait labels. Finally, we fuse the features from the two modules using a convolutional layer with kernel size 1×1 to get discriminative and robust features

Figure 2: Samples of images from different kinds of clothing variations of OU-ISIR dataset B and the corresponding GEIs [Makihara, Mannami and Tsuji (2012)]

The attention model attends to a high-level representation of the whole input and is constructed as a two-branch convolutional neural network. Latent semantic analysis is used for extracting middle-level features that are ignored at the high level. Finally, the feature fusion strategy combines the features from the different levels. The details of these components are discussed in the next three subsections (3.1, 3.2 and 3.3).

Motivated by the conventional methods for clothing-invariant gait recognition, dividing the input GEI into small fixed subspaces and extracting latent variables from those subspaces is an effective way to get more discriminative features. As a result, we employ a latent semantic analysis technique, the patch-based latent semantic learning model, to obtain latent semantic features.

In this module, t labeled images are given, where X_i denotes the i-th image and Y_i is its label. We aim to learn a model from X_i to Y_i. The first step is to divide the input GEI into non-overlapping patches; these patches form the low-level features of the input GEI, and the features from the patches are regarded as latent variables Z_ji. To predict the result from those latent variables, we take each Z_ji as a latent high-level visual feature and obtain the gait label by summarizing the high-level visual features inferred from the corresponding patches.

It is obvious that the latent variables are predicted from the input GEI. In theory, they can also represent the discriminative high-level features for the target gait labels. From this assumption, we formulate the two stages of the prediction problem as the following unified optimization over the loss function.

3.1 Latent semantic analysis

Figure 3: The procedure of the latent semantic analysis

where f(·) is the function that predicts the gait labels from the latent variables of the whole image, and W denotes the model parameters of the prediction function. The latent variable Z_ji is computed by the latent feature extractor, formulated as Eq. (2).

The process of extracting latent semantic features and obtaining the final result from those latent variables is demonstrated in Fig. 3; the functions f(·) and g(·) are linear functions, as in Eqs. (3) and (4) respectively.

From those fixed patches, latent variables are calculated for the corresponding patches, improving the performance of the prediction function at the same time.
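A minimal sketch of the patch-based module described above, assuming random stand-ins for the learned linear parameters of g(·) and f(·); the patch size and class count are illustrative (only the 30 latent variables match the number chosen in the conclusion):

```python
import numpy as np

rng = np.random.default_rng(0)

def extract_patches(gei, patch=4):
    """Divide an (H, W) GEI into non-overlapping patch x patch blocks,
    each flattened into a row vector (one row per patch)."""
    H, W = gei.shape
    rows = [gei[r:r + patch, c:c + patch].ravel()
            for r in range(0, H, patch)
            for c in range(0, W, patch)]
    return np.stack(rows)                 # (num_patches, patch*patch)

def latent_scores(gei, V, W):
    """g(.): linearly project each patch to a latent variable Z_ji;
    f(.): summarize the per-patch latent features (here, a sum) and
    linearly map the result to gait-label scores."""
    Z = extract_patches(gei) @ V          # one latent vector per patch
    return Z.sum(axis=0) @ W              # class scores

gei = rng.random((8, 8))                  # toy 8x8 GEI
V = rng.standard_normal((16, 30))         # 30 latent variables per patch
Wc = rng.standard_normal((30, 5))         # 5 hypothetical gait classes
scores = latent_scores(gei, V, Wc)
print(scores.shape)                       # (5,)
```

In training, V and Wc would be optimized jointly under the unified loss of Eq. (1); here they only fix the shapes of the two linear stages.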

3.2 Attention model for adaptive weights of features

Attention maps highlight discriminative regions of different parts of the human body. The attention network simulates selection from feature maps by a soft mask which contains a weight for every dimension of the features. As shown in Fig. 1, we design an attention-aware structure to capture specific regions from the GEI. There are two chunks in the attention model: one learns a soft mask for the feature maps extracted automatically by the other, main chunk of the base network. The soft mask highlights the regions of the corresponding parts and plays an important role in producing robust features.

Feature maps computed by the main chunk from the input GEI are defined as Eq. (7).

where I is the input data (GEI), and the attention branch learns to make the result better than the original features X. The second stage then refines the attention maps A by modifying all previous predictions; θ_att denotes the parameters learned by the attention module. The attention module consists of two layers (the first layer has 512 filters with kernel size 1×1 and the second is a sigmoid layer).

The values of the attention maps range from 0 to 1, representing how important the original features are. The output F of the final result is formulated as,

From the formulation, it is obvious that the attention map works as a discriminative feature selector over the original features X, and the attention maps adaptively capture the salient features. The loss for the attention module is:

where L_att denotes the loss function of the confidence maps from the attention-aware network; it is a cross-entropy loss.

We emphasize that the attention model calculates soft weights for the feature maps of subjects, and it allows the gradient of the loss function to be back-propagated through it. The output A of the attention module is actually a mask for the corresponding feature map F which adaptively highlights the important components of subjects. As Fig. 4 shows, the attention module highlights the limbs and head of subjects, which are discriminative parts for clothing-invariant gait recognition.
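The attention branch (a 1×1 convolution followed by a sigmoid, producing a mask in (0, 1) that re-weights the original features) can be sketched as follows; the channel count is a toy value where the paper uses 512 filters, and the weights are random stand-ins for learned parameters:

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def attention_mask(X, W1):
    """Attention branch: a 1x1 convolution (a per-location linear map
    over the channel axis) followed by a sigmoid, giving a soft mask A
    with every entry strictly between 0 and 1."""
    # 1x1 conv == matrix multiply over channels at every spatial location
    logits = np.einsum('oc,chw->ohw', W1, X)
    return sigmoid(logits)

X = rng.standard_normal((8, 4, 4))       # feature maps from the main chunk
W1 = rng.standard_normal((8, 8)) * 0.1   # 8 filters here; the paper uses 512
A = attention_mask(X, W1)
F = A * X                                # mask re-weights the original features
print(A.shape, F.shape)                  # (8, 4, 4) (8, 4, 4)
```

Because the sigmoid is differentiable, gradients flow through the mask during back-propagation, which is the property emphasized above.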

    3.3 Feature fusion and classification

To fuse the features from the attention mechanism and latent semantic analysis and obtain better performance than either module alone, we join the two kinds of features. Here we introduce how we build the new features and calculate the final result from them. The features from the attention-aware network, f_att, and from latent semantic analysis, f_latent, are multi-modal features. After joining f_att and f_latent by channels we get the combined features f_fin, and we employ a convolutional layer with kernel size 1×1 to obtain higher-level features f_mix from the two kinds of features. After this feature extraction, we use f_mix to calculate the similarity of individual subjects using the Euclidean distance.

where d(P_i, G_i) is the distance between images from the gallery and the probe, and N is the size of the feature vectors. The smaller the value of d, the higher the possibility that the given pair matches; the corresponding subject is the one with the highest similarity in the gallery.
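The fusion and matching steps above can be sketched as follows; all shapes are toy values, and the random weight matrix stands in for the learned 1×1 convolution:

```python
import numpy as np

rng = np.random.default_rng(2)

def fuse(f_att, f_latent, W_mix):
    """Concatenate the two feature maps by channel (f_fin) and apply a
    1x1 convolution to obtain the fused features f_mix."""
    f_fin = np.concatenate([f_att, f_latent], axis=0)   # (C1+C2, H, W)
    return np.einsum('oc,chw->ohw', W_mix, f_fin)

def match(probe, gallery):
    """Return the gallery index with the smallest Euclidean distance d."""
    d = [np.sqrt(np.sum((probe - g) ** 2)) for g in gallery]
    return int(np.argmin(d))

f_att = rng.standard_normal((4, 2, 2))       # attention-branch features
f_latent = rng.standard_normal((4, 2, 2))    # latent-semantic features
W_mix = rng.standard_normal((6, 8))          # 1x1 conv: 8 -> 6 channels
f_mix = fuse(f_att, f_latent, W_mix).ravel()

gallery = [rng.standard_normal(f_mix.shape) for _ in range(3)]
gallery[1] = f_mix + 0.01                    # subject 1 is the true match
print(match(f_mix, gallery))                 # 1
```

The nearest-neighbour rule here is exactly the minimum-d decision described above: the probe is assigned to the gallery subject with the smallest feature distance.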

Figure 4: Example images illustrating that different features have different corresponding attention masks in our network. As can be seen in the figure, the attention chunk highlights the limbs and head of the human body, which are robust to the changes of appearance caused by clothing variation

Figure 5: Samples of the evaluation set. The image on the left (a) is of the normal clothing type and is used as the gallery image; the images on the right (b)-(i) form the probe set with different clothing combinations

    4 Experiments

    4.1 Database description

The proposed method is evaluated on OU-ISIR Treadmill dataset B [Makihara, Mannami and Tsuji (2012)], a large gait dataset for the evaluation of gait methods in the presence of clothing variations. It includes 68 subjects with up to 32 types of clothing combinations. Tab. 1 shows the 15 different types of clothes used in constructing the dataset, and Tab. 2 shows the clothing combinations based on them. In the most common setup, the dataset is split into three parts: a training set, a probe set and a gallery set. The training set contains 446 samples of 20 subjects covering all types of clothing combinations; the gallery set contains 48 sequences of 48 different subjects in the normal clothing type; and the probe set consists of the remaining clothing types of those 48 subjects, excluding the samples in the gallery set, 856 sequences in total. But this kind of setup is not suitable for deep learning approaches: first, the clothing types in the training set do not cover all kinds of clothing combinations; second, 446 sequences are not enough training input for deep learning. To capture discriminative features across the varying clothing types, all 32 kinds of clothing combinations and enough data are necessary for training. So, in our work, the whole dataset is divided into two parts, one used to train the model and the other for evaluation, in a proportion of 80/20 respectively. The subjects in the two subsets do not overlap; the sequences in the normal clothing type from all subjects in the evaluation part are used as the gallery set, and the probe set is composed of the rest of the evaluation data. Samples from the gallery and probe sets are illustrated in Fig. 5.
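The subject-disjoint 80/20 split described above can be sketched as follows; the helper name and seed are our own, and only the subject count (68) comes from the dataset:

```python
import random

def subject_disjoint_split(subject_ids, train_frac=0.8, seed=0):
    """Split subjects (not sequences) 80/20 so that no subject appears
    in both the training and evaluation sets, as the protocol requires."""
    ids = sorted(set(subject_ids))
    random.Random(seed).shuffle(ids)
    cut = int(len(ids) * train_frac)
    return set(ids[:cut]), set(ids[cut:])

train, evaluation = subject_disjoint_split(range(68))  # 68 subjects
print(len(train), len(evaluation), train & evaluation)  # 54 14 set()
```

Splitting by subject rather than by sequence prevents the network from memorizing subject identities across the train/test boundary.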

    Table1:List of clothes used in OU-ISIR treadmill dataset B[Makihara,Mannami and Tsuji(2012)]

    Table2:Different clothing combinations used in the OU-ISIR B dataset[Makihara,Mannami and Tsuji(2012)](Abbreviation:Clothes type ID)

    4.2 Performance evaluation

1) Performance analysis of the clothing variation effect

Figure 6: Performance of our method and state-of-the-art CNN-based methods on the OU-ISIR Treadmill B dataset under the 32 different clothing combinations

To demonstrate the effectiveness of our method, we conduct experiments on the OU-ISIR Treadmill B dataset. The results of the two kinds of features extracted by the two modules, and of the final features, are illustrated in Fig. 6. From the results, we can observe that there are four levels of difficulty among the clothing combinations in OU-ISIR Treadmill B. In experiments 1-4 (Exp. 1-4), the CNN-based method [Yeoh, Aguirre and Tanaka (2017)] is the base network of our proposed method. The attention module and the latent semantic analysis module each perform better than the CNN-based method on most clothing types. What is more, our proposed method, which combines the two modules, outperforms each module on its own and also shows better results than the CNN-based method, especially on clothes type 4 (regular pants and half shirt) and M (baggy pants). This proves that the two levels of features compensate for each other.

2) Comparison with state-of-the-art methods

In this experiment, we evaluate our method on the test set of the dataset and calculate the average accuracy. Tab. 3 summarizes the comparison of our method with some state-of-the-art methods: the hand-crafted methods [Shariful, Islam, Akter et al. (2014); Guan, Li and Hu (2012)] and the CNN-based method [Yeoh, Aguirre and Tanaka (2017)]. It shows that our method achieves better performance than the state-of-the-art methods.

Table 3: Comparison of our method with state-of-the-art methods on OU-ISIR treadmill dataset B

    5 Conclusion

In this paper, we combine latent semantic analysis and the attention mechanism for clothing-invariant gait recognition to obtain robust and discriminative features end-to-end, and fuse them into a higher-level representation that improves the performance of gait recognition. The proposed method not only exploits the advantage of CNN-based methods, which learn high-level features from the raw input data, but also highlights the important regions of subjects. Local information is emphasized by the attention mechanism in our method. At the same time, latent semantic variables play an essential role: more latent variables are not always better, and here we chose 30 variables after comparing gait recognition performance. Our method also outperforms the state-of-the-art methods.

In our future work, we will take additional sequential information into consideration. Although the GEI is the most popular representation for gait, it obviously loses spatial and sequential information to some extent. To make use of sequential information, the raw input can be a cycle of silhouettes or raw images, so a network that extracts sequential information is suitable for clothing-invariant gait recognition. An attention-based long short-term memory network (LSTM) [Greff, Srivastava, Koutnik et al. (2017)] is the next step of our future work.

Acknowledgement: This work was supported in part by the Natural Science Foundation of China under Grant U1536203, in part by the National Key Research and Development Program of China (2016QY01W0200), and in part by the Major Scientific and Technological Project of Hubei Province (2018AAA068).
