
    Prediction of Changed Faces with HSCNN

    Computers, Materials & Continua, 2022, Issue 5

    Jinho Han

    Department of Liberal Studies (Computer), Korean Bible University, Seoul, 01757, Korea

    Abstract: Convolutional Neural Networks (CNN) have been successfully employed in the field of image classification. However, a CNN trained using images from several years ago may be unable to identify how such images have changed over time. Cross-age face recognition is, therefore, a substantial challenge. Several efforts have been made to resolve facial changes over time by combining recurrent neural networks (RNN) with CNN. The structure of an RNN contains contextual information in a hidden state, which transfers the state of the previous step to the next step. This paper proposes a novel model called Hidden State-CNN (HSCNN). It adds to CNN a convolution layer of the hidden state saved as a parameter in the previous step and requires no more computing resources than CNN. Previous CNN-RNN models perform CNN and RNN separately and then merge the results; their systems therefore consume twice the memory resources and CPU time compared with HSCNN, which works in the same way as CNN alone. HSCNN consists of three types of models. All models load the hidden state h_{t-1} from the parameters of the previous step and save h_t as a parameter for the next step. In addition, model-B adds h_{t-1} to x, which is the previous output; the summation of h_{t-1} and x is multiplied by weight W. In model-C the convolution layer has two weights: W1 and W2. In the experiments, HSCNN is trained with faces from the previous step and tested on faces from the next step. That is, HSCNN trained with past facial data is then used to verify future data. It has been found to exhibit 10 percent greater accuracy than traditional CNN on a celeb face database.

    Keywords: CNN-RNN; HSCNN; hidden state; changing faces

    1 Introduction

    Face recognition (FR) systems have been continually developed for personal authentication. These efforts have resulted in FR applications running on mobile phones [1]. Researchers have proposed several ideas for FR systems: eigenfaces [2], independent component analysis [3], linear discriminant analysis [4,5], three-dimensional (3D) methods [6–9], and liveness detection schemes to prevent the misuse of photographic images [10]. Based on the data acquisition methodology, Jafri et al. [11] divided FR techniques into three categories: intensity images, video sequences, and 3D or infra-red techniques. They introduced AI approaches as one of the operating methods for intensity images and reported that these worked efficiently for somewhat complex FR scenarios. Such techniques had not previously been utilized for practical everyday purposes.

    In 2012, AlexNet [12] was proposed and became a turning point in large-scale image recognition. It was the first CNN-based winner, applying one of the deep learning techniques, of the ImageNet Large Scale Visual Recognition Challenge (ILSVRC), taking the 2012 competition with 83.6% accuracy. In ILSVRC 2013, Clarifai was the winner with 88.3% [13,14], whereas in ILSVRC 2014, GoogLeNet was the winner with 93.3% [15]. The latter was an astonishing result because humans trained for annotator comparison exhibited approximately 95% accuracy in ILSVRC [16]. In 2014, using a nine-layer CNN, DeepFace [17] achieved 97.35% accuracy in FR, closely approaching the 97.53% accuracy of humans in recognizing the cropped Labeled Faces in the Wild (LFW) benchmark [18]. Subsequently, DeepID2 [19] achieved 99.15% face verification accuracy by balancing identification and verification features on a ConvNet containing four convolution layers. In 2015, DeepID3 [20] achieved 99.53% accuracy using VGGNet (Visual Geometry Group Net) [21], whereas FaceNet [22] achieved 99.63% using only 128 bytes per face.

    A CNN consists of convolution layers, pooling layers, and fully connected layers. However, a number of problems still need to be addressed. For instance, a CNN trained with past images fails to verify images that have changed over a time sequence. In their in-depth FR survey, Wang et al. [23] described three types of cross-factor FR algorithms as challenges in real-world applications: cross-pose, cross-age, and makeup. Cross-age FR is a substantial challenge with respect to facial aging over time. Several researchers have attempted to resolve this issue. For instance, Liu et al. [24] proposed a CNN-based age estimation system for faces. Bianco et al. [25] and Khiyari et al. [26] applied CNN to learn cross-age information. Li et al. [27] suggested metric learning in a deep CNN. Other studies have suggested combining CNN with recurrent neural networks (RNN) to verify changed images, because RNN can predict data sequences [28]. An RNN contains contextual information in a hidden state, which transfers the state of the previous step to the next step, and has been found to generate sequences in various domains, including text [29], motion capture data [30], and music [31,32].

    This paper proposes a novel model called Hidden State-CNN (HSCNN) and trains this modified CNN with past data to verify future data. HSCNN adds to CNN a convolution layer of the hidden state saved as a parameter. The contributions of the present study are as follows:

    First, the proposed model, HSCNN, exhibits 10 percent greater accuracy than traditional CNN on a celeb face database [33]. Facial images of the future were tested after training on facial images of the past. HSCNN adds the hidden state saved as a parameter in the previous step to the CNN structure. Further details on this process are provided in Section 4.2.

    Second, because HSCNN incorporates the hidden state of RNN into the proposed architecture, it is efficient in its use of computing resources. Other researchers have performed CNN and RNN separately and merged the results in their systems, consuming double the resources and processing time. Further details are presented in Section 2.

    Third, this paper shows that HSCNN can train with only two images of one person per step. Also, the loss value reached 0.4 in just 40 epochs when training with loaded parameters, compared with 250 epochs when training without loaded parameters. HSCNN therefore achieves efficiency because it uses only two images and trains in 40 epochs with loaded parameters. This is explained further in Section 4.1.

    In the remainder of this paper, Section 2 introduces related works, Section 3 outlines the proposed method, Section 4 presents the experimental results, and Section 5 provides the conclusion.

    2 Related Works

    Some neural network models can acquire contextual information in various text environments using recurrent layers. The Convolutional Recurrent Neural Network (CRNN) is used in a scene text recognition system to read scene text in images [34]. It contains both convolutional and LSTM recurrent layers in its network architecture and uses the past state and the current input to predict the subsequent text. The Recurrent Convolutional Neural Network (RCNN) also uses a recurrent structure to classify text from document datasets [35]. Combined CNN and RNN models exploit relations between phrases and word sequences [36] and have been applied in the field of natural language processing (NLP) [37].

    Regarding changed images, methods combining CNN with RNN have been proposed for image classification [38] and, in a medical paper, for blood cell images [39]. These authors merged the features extracted from CNN and RNN to determine the long-term dependency and continuity relationship. A CNN-LSTM algorithm was proposed for stock price prediction based on leading indicators [40]. This algorithm employed a sequence array of historical data as the input image of the CNN and the feature vectors extracted from the CNN as the input vector of the LSTM. However, these methods used CNN and RNN separately and merged the result vectors extracted from the CNN into the RNN. Therefore, their systems consume twice the memory resources and CPU time compared with the proposed system, which works in the same way as CNN alone. Fig. 1 presents an overview of the models developed by Yin et al. [38] and Liang et al. [39].

    Figure 1: Overview of previous CNN-RNN models

    Han introduced incremental learning in CNN [41]. The Incremental-CNN (I-CNN) was tested using the MNIST dataset. HSCNN references I-CNN, which used hidden states (h_{t-1}, h_t) and an added convolution layer. For training, I-CNN used the MNIST database of handwritten digits, comprising 60,000 examples, and changed handwritten digits (CHD), comprising 1,000 images. This paper proposes HSCNN, a new structure combining CNN with RNN. It adds a hidden state of RNN into a convolution layer of CNN. Consequently, HSCNN acts like CNN and performs efficiently for cross-age FR.

    3 Proposed Method: Hidden State CNN

    The following subsections explain the cross-entropy error loss function and the stochastic gradient descent optimizer used in the proposed method. The three types of Hidden State CNN models are then described.

    3.1 Loss Function and Optimizer

    In HSCNN, experiments indicated that cross-entropy error (CEE) was the appropriate loss function (also called the cost function or objective function). The best model has a cross-entropy error of 0; the smaller the CEE, the better the model. The CEE equation is:

    E = -Σ_k t_k log(y_k)

    where t_k is the truth label and y_k is the softmax probability for the k-th class. When calculating the log, a minimal delta value near 0.0 is added to prevent a log(0) error, and the Python code for the CEE includes this delta term.
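    A minimal NumPy sketch consistent with that description, assuming one-hot truth labels and a delta of 1e-7 (the function name and batch handling are illustrative, not the paper's code):

        import numpy as np

        def cross_entropy_error(y, t, delta=1e-7):
            # y: softmax probabilities, shape (batch, classes); t: one-hot truth labels
            # the small delta keeps log() finite when a predicted probability is zero
            y = y.reshape(-1, y.shape[-1])
            t = t.reshape(-1, t.shape[-1])
            return -np.sum(t * np.log(y + delta)) / y.shape[0]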

    The optimizer is stochastic gradient descent (SGD), a method for optimizing the loss function. The update equation is:

    W ← W − η(∂L/∂W)

    where W is the weight, η is the learning rate, and ∂L/∂W is the gradient of the loss function L with respect to W.
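    A minimal sketch of such an SGD update, assuming the network keeps weights and gradients in dictionaries keyed by layer name (the class and method names are illustrative; the 0.001 default matches the learning rate used in Section 4):

        class SGD:
            """Vanilla stochastic gradient descent: W <- W - lr * dL/dW."""

            def __init__(self, lr=0.001):
                self.lr = lr

            def update(self, params, grads):
                # apply the update in place to every parameter of the network
                for key in params:
                    params[key] -= self.lr * grads[key]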

    3.2 Hidden State CNN (HSCNN)

    HSCNN consists of three types of models: model-A, model-B, and model-C. Model-A is the same as I-CNN. Fig. 2 presents the overall structure of HSCNN, which is a modified AlexNet with hidden state h_t at convolution layer 5. HSCNN consists of convolution layers, ReLU functions, pooling layers, affine layers, and a softmax function, like a traditional CNN, but it also has the hidden states h_{t-1} and h_t. This distinguishes HSCNN from CNN.

    Figure 2: The overall structure of HSCNN

    Fig. 3 presents the structure of model-A. Model-A loads hidden state h_{t-1} from the parameters of the previous step and saves h_t as a parameter for the next step. It adds convolution layer 6 with weight W2.

    The rectified linear unit (ReLU) activation equation of model-A is:

    h_t = ReLU(W1 * x + W2 * h_{t-1})
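    To make this concrete, the following small NumPy sketch implements a model-A style update with plain matrix products standing in for convolution layers 5 and 6; the function names, shapes, and the omission of bias terms are simplifying assumptions rather than the paper's code:

        import numpy as np

        def relu(z):
            return np.maximum(0, z)

        def model_a_hidden_layer(x, h_prev, W1, W2):
            # convolution layer 5 (weight W1) processes x, and the added
            # convolution layer 6 (weight W2) processes the loaded hidden
            # state h_{t-1}; their sum passes through the ReLU
            return relu(x @ W1 + h_prev @ W2)

        rng = np.random.default_rng(0)
        x, h_prev = rng.normal(size=(1, 256)), np.zeros((1, 256))
        W1, W2 = rng.normal(size=(256, 256)), rng.normal(size=(256, 256))
        h_t = model_a_hidden_layer(x, h_prev, W1, W2)  # saved as a parameter for the next step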

    Fig. 4 presents the structure of model-B. Model-B loads hidden state h_{t-1} from the parameters of the previous step and saves h_t as a parameter for the next step. The model adds h_{t-1} to x, which is the previous output, and the summation of h_{t-1} and x is multiplied by weight W. The ReLU activation equation of model-B is:

    h_t = ReLU(W * (x + h_{t-1}))
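    Under the same simplifications, a model-B style update adds the loaded hidden state to the previous output before applying the single weight (again an illustrative sketch, not the paper's code):

        import numpy as np

        def relu(z):
            return np.maximum(0, z)

        def model_b_hidden_layer(x, h_prev, W):
            # h_{t-1} is added to the previous output x, and the sum is
            # multiplied by the single convolution weight W before the ReLU
            return relu((x + h_prev) @ W)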

    Figure 3: The structure of model-A of HSCNN

    Figure 4: The structure of model-B of HSCNN

    Fig. 5 presents the structure of model-C. Model-C loads hidden state h_{t-1} from the parameters of the previous step and saves h_t as a parameter for the next step. In this case, the convolution layer has two weights, W1 and W2. The ReLU activation equation of model-C is:

    h_t = ReLU(W1 * x + W2 * h_{t-1})
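    Because model-C keeps both weights inside one convolution layer, the sketched update has the same functional form as model-A; the difference lies in where W1 and W2 sit in the layer structure (the code below is again a simplified assumption):

        import numpy as np

        def relu(z):
            return np.maximum(0, z)

        def model_c_hidden_layer(x, h_prev, W1, W2):
            # a single convolution layer holds both W1 (applied to x) and
            # W2 (applied to the loaded hidden state h_{t-1})
            return relu(x @ W1 + h_prev @ W2)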

    Figure 5: The structure of model-C of HSCNN

    4 Experimental Results

    In the experiments, HSCNN used cross-entropy error as the loss function and stochastic gradient descent (SGD) as the optimizer, with a learning rate (lr) of 0.001. HSCNN was implemented in Python with the NumPy library and, for the NVIDIA GPUs employed, the CuPy library.

    4.1 Preparations for Experiments

    The essential items required for the experiments are listed in Tab. 1. The experiments used FaceApp Pro to generate old faces from young faces and Abrosoft FantaMorph Deluxe to generate 100 morphing facial images between the young and old faces.

    Table 1: Items prepared for the experiments

    Facial images of 1,100 persons were selected from the celeb face database, known as the Large-scale CelebFaces Attributes (CelebA) Dataset. Each face was then paired with a similar face; for example, man to man, woman to woman, Asian to Asian, and so on. In each pair, one image was chosen as the young face and the other was changed into the old face. To make these changes, the photo editing application FaceApp Pro was used. For the transition from young face to old face, 100 continually changing faces were created using the photo morphing software FantaMorph.

    The difference between the young and old images of the same person produced by FaceApp Pro was not sufficient, so the experiments instead paired the faces of two different persons; thus, the young face and old face of each pair belong to different persons. Finally, 102 images changing from young to old were created for each of the 550 pairs. Number 1 is the youngest image and number 102 is the oldest, so the 102 images show the aging process over time from young to old. Numbers 1 to 10 are young images used as the first training data, and the last numbers, 101 to 102, are old images used as the final target images. Using the 90 images from numbers 11 to 100, the experiments used 10 steps to learn and test aging changes over time. If 9 images were used in each of the 10 steps, all 90 images would be used; however, in that case learning occurs on the entire data, the training results of the proposed model and CNN are almost the same, and the prediction experiments become meaningless. Therefore, to clearly confirm the results of the prediction experiment, two images are trained for each step, and these two images represent a specific point in the middle of facial aging. This is analogous to testing the face at the final, oldest point with the two final target images. Tab. 2 presents sample images from the dataset and their numbers for each step: the primary step, the target step, and step 1 to step 10. Beyond the primary step, each step has only two samples. There is a substantial difference between images number 60 to number 90; however, after number 90, all facial images look almost identical.
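    As a rough illustration of this bookkeeping, the sketch below maps image numbers to steps under the stated split: numbers 1-10 form the primary step, numbers 11-100 are divided into ten blocks of nine for steps 1-10, and numbers 101-102 form the target step. The paper does not state which two of the nine images represent each step, so choosing the middle pair here is purely an assumption:

        # image numbers 1..102 describe one pair's aging sequence, youngest to oldest
        PRIMARY = list(range(1, 11))   # numbers 1-10: first training data
        TARGET = [101, 102]            # numbers 101-102: final target images

        def step_images(step, per_step=9, first=11):
            """Return the two image numbers assumed to represent a given step (1-10)."""
            start = first + (step - 1) * per_step
            block = list(range(start, start + per_step))
            mid = len(block) // 2
            # assumed choice: the two middle images stand for this step's point in aging
            return block[mid - 1:mid + 1]

        for s in range(1, 11):
            print(s, step_images(s))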

    Table 2: Dataset samples of changed images according to steps

    HSCNN appears to be an extremely efficient method, as it achieved 99.9% accuracy and a 0.01 loss value with only two training images in each step from step 1 to step 10. In addition, HSCNN is trained with the parameters saved in the previous step, and loading these parameters means the training epochs can be shortened. Fig. 6 indicates that the loss value reaches 0.4 in 250 epochs when training without loaded parameters and in just 40 epochs with loaded parameters. Thus, HSCNN achieves efficiency because it uses only two images and trains in 40 epochs with loaded parameters.
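    A minimal sketch of this step-wise procedure, assuming the network parameters (including the hidden state) are kept in a dictionary and persisted between steps with NumPy; the file name, helper functions, shapes, and epoch count are illustrative assumptions, not the paper's code:

        import os
        import numpy as np

        PARAMS_FILE = "hscnn_params.npz"  # assumed file name for the saved parameters

        def init_params():
            rng = np.random.default_rng(0)
            return {"W1": rng.normal(size=(256, 10)),
                    "h": np.zeros((1, 256))}  # hidden state starts at zero in the primary step

        def train_step(params, images, labels, epochs=40):
            # placeholder for the per-step training loop (two images per step);
            # with loaded parameters the loss reached about 0.4 within roughly 40 epochs
            return params

        # load the parameters saved by the previous step, so h_{t-1} is available
        if os.path.exists(PARAMS_FILE):
            with np.load(PARAMS_FILE) as f:
                params = {k: f[k] for k in f.files}
        else:
            params = init_params()

        step_data = np.zeros((2, 256))    # the two training images of the current step
        step_labels = np.zeros((2, 10))
        params = train_step(params, step_data, step_labels)

        np.savez(PARAMS_FILE, **params)   # h_t and the weights become the next step's h_{t-1}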

    4.2 Experiments

    The experiments utilized AlexNet as the traditional CNN and created the three models of HSCNN by modifying AlexNet. All models added hidden states to convolution layer 5 and used the ReLU 5 layer of AlexNet.

    Figure 6: The efficiency of HSCNN

    Fig. 7 presents samples of the training steps of the experiments, showing the accuracy of each training step and the loss values over 150 epochs. In the case of training step 3 of model-A, the graph shows the accuracy of training step 3 and of testing step 4, step 5, step 6, step 7, step 8, step 9, step 10, and the target step. It also shows the loss values according to the epochs. To include the loss values in one graph, they were divided by 10.

    Figure 7: Samples of the training steps of the experiments

    The experiments consisted of two groups: experiment I and experiment II. Experiment I tested the target step, and experiment II tested the step 3 behind the training step. Step 1 employed the parameters saved in the primary step, while step 2 used the parameters saved in step 1.

    Tab. 3 presents the accuracy of testing faces at the target step. Tab. 4 presents the accuracy of testing faces at the step 3 behind, according to the models. Testing the step 3 behind means that HSCNN tested step 4 when training on step 1, and step 5 when training on step 2.

    Table 3: Accuracy (%) of testing the target step (Experiment I)

    Table 4: Accuracy (%) of testing the step 3 behind (Experiment II)

    Based on Tabs. 3 and 4, the results of the experiments are presented in Figs. 8–10. Fig. 8 depicts the differences between model-A and AlexNet. They indicate that model-A achieved more than 10% higher accuracy in step 3 to step 5 of experiment I and in step 1 to step 3 of experiment II. Therefore, model-A can be employed for both long- and short-time changes.

    Fig. 9 depicts the differences between model-B and AlexNet. They indicate that model-B achieved more than 10% higher accuracy in step 4 to step 6 of experiment I and in step 1 to step 2 of experiment II. Therefore, model-B can be employed to predict both long- and short-time changes. Fig. 10 depicts the differences between model-C and AlexNet. They indicate that model-C achieved more than 10% higher accuracy in step 2 to step 3 of experiment II, but not in any step of experiment I. Therefore, model-C is considered an appropriate method for predicting short-time changes.

    Figure 8: The results of the model-A and AlexNet experiments

    Figure 9: The results of the model-B and AlexNet experiments

    Figure 10: The results of the model-C and AlexNet experiments

    5 Conclusion

    This paper proposes a novel model, Hidden State-CNN (HSCNN), which adds to CNN a convolution layer of the hidden state. The aim of the experiments was to train with past data in order to verify future data, and HSCNN exhibited 10 percent higher accuracy on the celeb face database than the traditional CNN. The experiments consisted of two groups: experiment I tested the target step, and experiment II tested the step 3 behind the training step, meaning, for example, that step 4 was tested when training on step 1. Experiment I assessed long-time changes and experiment II relatively short-time changes; both yielded similar results. Furthermore, HSCNN achieved a more efficient use of computing resources because its structure differs from that of other CNN-RNN methods. HSCNN also represents a new and efficient training process for verifying changing faces: it used only two training images in each step and achieved 99.9% accuracy and a 0.01 loss value.

    Funding Statement: This work was supported by the National Research Foundation of Korea (NRF) grant in 2019 (NRF-2019R1G1A1004773).

    Conflicts of Interest: The author declares that he has no conflicts of interest to report regarding the present study.
