
    Prediction of Changed Faces with HSCNN

2022-08-24 03:30:56 Jinho Han
Computers, Materials & Continua, 2022, Issue 5

    Jinho Han

Department of Liberal Studies (Computer), Korean Bible University, Seoul, 01757, Korea

Abstract: Convolutional Neural Networks (CNN) have been successfully employed in the field of image classification. However, a CNN trained on images from several years ago may be unable to identify how such images have changed over time. Cross-age face recognition is, therefore, a substantial challenge. Several efforts have been made to resolve facial changes over time by combining recurrent neural networks (RNN) with CNN. The RNN structure keeps contextual information in a hidden state and transfers the state from the previous step to the next step. This paper proposes a novel model called Hidden State-CNN (HSCNN). It adds to CNN a convolution layer over the hidden state saved as a parameter in the previous step, and it requires no more computing resources than CNN. Previous CNN-RNN models run CNN and RNN separately and then merge the results; their systems therefore consume twice the memory resources and CPU time of HSCNN, which works the same as CNN alone. HSCNN consists of 3 types of models. All models load hidden state h(t-1) from the parameters of the previous step and save h(t) as a parameter for the next step. In addition, model-B adds h(t-1) to x, the previous output, and the summation of h(t-1) and x is multiplied by weight W. In model-C the convolution layer has two weights: W1 and W2. In the experiment, HSCNN is trained with faces of the previous step and tested on faces of the next step. That is, HSCNN trained with past facial data is then used to verify future data. It has been found to exhibit 10 percent greater accuracy than traditional CNN on a celeb face database.

Keywords: CNN-RNN; HSCNN; hidden state; changing faces

    1 Introduction

Face recognition (FR) systems have been continually developed for personal authentication, and these efforts have resulted in FR applications running on mobile phones [1]. Researchers have proposed several ideas for FR systems: eigenfaces [2], independent component analysis [3], linear discriminant analysis [4,5], three-dimensional (3D) methods [6–9], and liveness detection schemes to prevent the misuse of photographic images [10]. Based on data acquisition methodology, Jafri et al. [11] divided FR techniques into three categories: intensity images, video sequences, and 3D or infra-red techniques. They introduced AI approaches as one of the operating methods for intensity images and reported that they worked efficiently for somewhat complex FR scenarios. Such techniques had not previously been utilized for practical everyday purposes.

In 2012, AlexNet [12] was proposed and became a turning point in large-scale image recognition. It was the first CNN, one of the deep learning techniques, and won the ImageNet Large Scale Visual Recognition Challenge (ILSVRC) 2012 with 83.6% accuracy. In ILSVRC 2013, Clarifai was the winner with 88.3% [13,14], whereas in ILSVRC 2014, GoogLeNet won with 93.3% [15]. The latter was an astonishing result because humans trained for annotator comparison exhibited approximately 95% accuracy in ILSVRC [16]. In 2014, using a nine-layer CNN, DeepFace [17] achieved 97.35% accuracy in FR, closely approaching the 97.53% human ability to recognize cropped faces in the Labeled Faces in the Wild (LFW) benchmark [18]. DeepID2 [19] then achieved 99.15% face verification accuracy by balancing identification and verification features on a ConvNet containing four convolution layers. In 2015, DeepID3 [20] achieved 99.53% accuracy using VGGNet (Visual Geometry Group Net) [21], whereas FaceNet [22] achieved 99.63% using only 128 bytes per face.

A CNN consists of convolution layers, pooling layers, and fully connected layers. However, a number of problems still need to be addressed. For instance, a CNN trained with past images fails to verify images that have changed over a time sequence. In their in-depth FR survey, Wang et al. [23] described three types of cross-factor FR algorithms as challenges in real-world applications: cross-pose, cross-age, and makeup. Cross-age FR is a substantial challenge with respect to facial aging over time. Several researchers have attempted to resolve this issue. For instance, Liu et al. [24] proposed a CNN-based age estimation system for faces. Bianco et al. [25] and Khiyari et al. [26] applied CNN to learn cross-age information. Li et al. [27] suggested metric learning in a deep CNN. Other studies have suggested combining CNN with recurrent neural networks (RNN) to verify changed images, because RNN can predict data sequences [28]. An RNN keeps contextual information in a hidden state to transfer the state from the previous step to the next step, and has been found to generate sequences in various domains, including text [29], motion capture data [30], and music [31,32].

This paper proposes a novel model called Hidden State-CNN (HSCNN) and trains this modified CNN with past data to verify future data. HSCNN adds to CNN a convolution layer over the hidden state saved as a parameter. The contributions of the present study are as follows:

First, the proposed model, HSCNN, exhibits 10 percent greater accuracy than traditional CNN on a celeb face database [33]. Facial images of the future were tested after training on facial images of the past. HSCNN adds the hidden state, saved as a parameter in the previous step, to the CNN structure. Further details on this process are provided in Section 4.2.

Secondly, because HSCNN incorporates the hidden state of RNN directly into the proposed architecture, it is efficient in its use of computing resources. Other researchers have run CNN and RNN separately and merged the results, consuming double the resources and processing time. Further details are presented in Section 2.

Thirdly, this paper shows that HSCNN can train with only two images of one person per step. Also, the loss value reached 0.4 in just 40 epochs when training with loaded parameters, versus 250 epochs without. HSCNN therefore achieves efficiency because it uses only two images and trains in 40 epochs with loaded parameters. This is explained further in Section 4.1.

In the remainder of this paper, Section 2 introduces related works, Section 3 outlines the proposed method, Section 4 presents the experimental results, and Section 5 provides the conclusion.

    2 Related Works

Some neural network models can acquire contextual information in various text environments using recurrent layers. The Convolutional Recurrent Neural Network (CRNN) serves as a scene text recognition system that reads scene text in images [34]. Its network architecture contains both convolutional and LSTM recurrent layers and uses the past state and current input to predict the subsequent text. The Recurrent Convolutional Neural Network (RCNN) also uses a recurrent structure to classify text from document datasets [35]. Combined CNN and RNN models exploit relations between phrases and word sequences [36] and are used in the field of natural language processing (NLP) [37].

Regarding changed images, methods combining CNN with RNN have been proposed for image classification [38] and in a medical paper on blood cell images [39]. These authors merged the features extracted from CNN and RNN to determine the long-term dependency and continuity relationship. A CNN-LSTM algorithm was proposed for stock price prediction according to leading indicators [40]. This algorithm employed a sequence array of historical data as the input image of the CNN, and feature vectors extracted from the CNN as the input vector of the LSTM. However, these methods run CNN and RNN separately and merge the result vectors extracted from CNN into RNN. Their systems therefore consume twice the memory resources and CPU time of the proposed system, which works the same as CNN alone. Fig. 1 presents an overview of the models developed by Yin et al. [38] and Liang et al. [39].

Figure 1: Overview of previous CNN-RNN models

Han introduced incremental learning in CNN [41]. Incremental-CNN (I-CNN) was tested using the MNIST dataset. HSCNN references I-CNN, which used hidden states (h(t-1), h(t)) and an added convolution layer. For training, I-CNN used the MNIST database of handwritten digits, comprising 60,000 examples, and changed handwritten digits (CHD), comprising 1,000 images. This paper proposes HSCNN, a new structure combining CNN with RNN. It adds a hidden state of RNN into a convolution layer of CNN. Consequently, HSCNN acts like CNN and performs efficiently for cross-age FR.

3 Proposed Method: Hidden State CNN

The following subsections explain the equations of the cross-entropy error loss function and the stochastic gradient descent optimizer used in the proposed method. Then the three types of Hidden State CNN models are described.

    3.1 Loss Function and Optimizer

In HSCNN, experiments indicated that cross-entropy error (CEE) was the appropriate loss function (also called cost function or objective function). The best model has a CEE of 0; the smaller the CEE, the better the model. The CEE equation is:

E = -Σk tk log yk

where tk is the truth label and yk is the softmax probability for the k-th class. When calculating the log, a minimal delta value near 0.0 is necessary to prevent a log(0) error. In the Python code, the CEE equation is:
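The code snippet itself did not survive in this copy; a standard NumPy implementation consistent with the description above would look like the following sketch (the function name and batch handling are assumptions, not the author's exact code):

```python
import numpy as np

def cross_entropy_error(y, t, delta=1e-7):
    """Cross-entropy error between softmax output y and one-hot labels t.

    delta is the small value near 0.0 that prevents log(0) when a
    predicted probability is exactly zero.
    """
    y = np.atleast_2d(y)   # treat a single sample as a batch of one
    t = np.atleast_2d(t)
    batch_size = y.shape[0]
    return -np.sum(t * np.log(y + delta)) / batch_size

# A confident, correct prediction yields a loss near 0.
y = np.array([0.05, 0.9, 0.05])   # softmax probabilities
t = np.array([0.0, 1.0, 0.0])     # one-hot truth label
print(cross_entropy_error(y, t))  # ≈ 0.105, i.e. -log(0.9)
```

The loss is 0 only when the model assigns probability 1 to the true class, matching the statement that the best model has a CEE of 0.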

Also, the optimizer is stochastic gradient descent (SGD), a method for optimizing the loss function. The update equation is:

W ← W - η(∂L/∂W)

where W is the weight, η is the learning rate, and ∂L/∂W is the gradient of the loss function L with respect to W.
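This update rule can be illustrated in a few lines of NumPy; the dictionary-of-parameters layout is an assumption for illustration, not the paper's code:

```python
import numpy as np

def sgd_update(params, grads, lr=0.001):
    """In-place SGD step: W <- W - lr * dL/dW for every parameter."""
    for key in params:
        params[key] -= lr * grads[key]
    return params

# One step with an exaggerated learning rate to make the change visible.
params = {"W": np.array([1.0, 2.0])}
grads = {"W": np.array([10.0, -10.0])}
sgd_update(params, grads, lr=0.1)
print(params["W"])  # [0. 3.]
```

In the experiments the learning rate η is 0.001, as stated in Section 4.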

    3.2 Hidden State CNN(HSCNN)

HSCNN consists of 3 types of models: model-A, model-B, and model-C. Model-A is the same as I-CNN. Fig. 2 presents the overall structure of HSCNN, which is a modified AlexNet with a hidden state h(t) at convolution layer 5. HSCNN consists of convolution layers, ReLU functions, pooling layers, affine layers, and a softmax function, like a traditional CNN, but it also has the hidden states h(t-1) and h(t). This distinguishes HSCNN from CNN.

Figure 2: The overall structure of HSCNN

Fig. 3 presents the structure of model-A. Model-A loads hidden state h(t-1) from the parameters of the previous step and saves h(t) as a parameter for the next step. It adds convolution layer 6 and weight W2.

The activation function of the rectified linear unit (ReLU) equation of model-A is:

Fig. 4 presents the structure of model-B. Model-B loads hidden state h(t-1) from the parameters of the previous step and saves h(t) as a parameter for the next step. The model adds h(t-1) to x, the previous output. The summation of h(t-1) and x is multiplied by weight W. The activation function of the rectified linear unit (ReLU) equation of model-B is:

Figure 3: The structure of model-A of HSCNN

Figure 4: The structure of model-B of HSCNN

Fig. 5 presents the structure of model-C. Model-C loads hidden state h(t-1) from the parameters of the previous step and saves h(t) as a parameter for the next step. In this case, the convolution layer has two weights: W1 and W2. The activation function of the rectified linear unit (ReLU) equation of model-C is:

Figure 5: The structure of model-C of HSCNN
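The ReLU equations themselves did not survive in this copy, so the three update rules can only be sketched from the descriptions above. The following is an assumed minimal form, reducing the convolutions to matrix products (layer shapes, the absence of biases, and the zero initial state are illustrative choices, not the author's code):

```python
import numpy as np

def relu(z):
    return np.maximum(0.0, z)

# x: output of the preceding layer; h_prev: hidden state h(t-1) loaded
# from the parameters saved in the previous step.

def model_a(x, h_prev, W1, W2):
    # model-A: a separate layer (convolution layer 6 with weight W2)
    # processes h(t-1) alongside the ordinary path through W1.
    return relu(W1 @ x + W2 @ h_prev)

def model_b(x, h_prev, W):
    # model-B: h(t-1) is added to x and the sum shares one weight W.
    return relu(W @ (x + h_prev))

def model_c(x, h_prev, W1, W2):
    # model-C: one layer carries two weights, W1 for x and W2 for h(t-1).
    return relu(W1 @ x + W2 @ h_prev)

x = np.array([1.0, -1.0])
h_prev = np.zeros(2)              # illustrative initial hidden state
h_t = model_b(x, h_prev, np.eye(2))
# h_t would now be saved as a parameter so the next step can load it.
print(h_t)  # [1. 0.]
```

In this reduced form model-A and model-C coincide; in HSCNN they differ in whether W2 sits in a separate convolution layer (model-A) or inside the same layer as W1 (model-C).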

    4 Experimental Results

In the experiment, HSCNN used cross-entropy error as the loss function and stochastic gradient descent (SGD) as the optimizer, with a learning rate (lr) of 0.001. HSCNN was coded in Python using the NumPy library, and the CuPy library for the NVIDIA GPUs employed.

    4.1 Preparations for Experiments

The essential items required for the experiments are listed in Tab. 1. The experiments used FaceApp Pro to make old faces from young faces, and Abrosoft FantaMorph Deluxe to generate 100 morphing facial images between the young and old faces.

Table 1: Items prepared for the experiments

The facial images of 1,100 persons were selected from the celeb face database known as the Large-scale CelebFaces Attributes (CelebA) Dataset. Each face was then paired with a similar face: man to man, woman to woman, Asian to Asian, and so on. One face of each pair was chosen as the young face, and the other was changed into the old face using the photo editing application FaceApp Pro. For the transition from young face to old face, 100 continually changing faces were created using the photo morphing software FantaMorph.

The difference between the young and old images of the same person produced by FaceApp Pro was not sufficient, so the experiments also paired the faces of two different persons; thus, the young face and old face of each pair belonged to different persons. Finally, 102 images changing from young to old were created for each of the 550 pairs. Number 1 is the youngest image and number 102 is the oldest; the 102 images show the aging state over time from young to old. Numbers 1 to 10 are young images used as the first training data, and numbers 101 to 102 are old images used as the final target images. Using the 90 images from numbers 11 to 100, the experiments ran 10 steps to learn and test aging changes over time. If 9 images were trained in each of the 10 steps, all 90 images would be used; however, in that case training covers the entire data, the results of the proposed model and CNN are almost the same, and the prediction experiments become meaningless. Therefore, to clearly confirm the results of the prediction experiment, two images were trained per step, the two images representing a specific point in the middle of facial aging. This matches testing the face at the last, oldest point with the two final target images. Tab. 2 presents sample images from the dataset and their numbers for each step: primary step, target step, and step 1 to step 10. Beyond the primary step, each step has only two samples. There is a substantial difference between images number 60 to number 90; however, after number 90, all facial images look almost identical.

Table 2: Dataset samples of changed images according to steps

HSCNN appears to be an extremely efficient method, as it achieved 99.9% accuracy and a 0.01 loss value with only two training images in each step from step 1 to step 10. HSCNN is also trained with the parameters saved in the previous step loaded, which shortens the training epochs. Fig. 6 indicates that the loss value reaches 0.4 after 250 epochs when training without loading parameters, but after just 40 epochs with loading parameters. Thus, HSCNN achieves efficiency because it uses only two images and trains in 40 epochs with loaded parameters.

    4.2 Experiments

The experiments utilized AlexNet as the traditional CNN and created the three models of HSCNN by modifying AlexNet. All models added hidden states to convolution layer 5 and used the ReLU 5 layer of AlexNet.

Figure 6: The efficiency of HSCNN

Fig. 7 presents samples of the training steps of the experiments, showing the accuracy of each training step and the loss values through 150 epochs. In the case of training step 3 of model-A, the graph shows the accuracy of training step 3 and of testing steps 4, 5, 6, 7, 8, 9, and 10 and the target step. It also shows the loss values according to the epochs; to include the loss values in the same graph, they were divided by 10.

Figure 7: Samples of the training steps of the experiments

The experiments consisted of 2 groups: experiment I and experiment II. Experiment I tested the target step, and experiment II tested the step 3 behind the training step. Step 1 employed the parameters saved in the primary step, while step 2 used the parameters saved in step 1.

Tab. 3 presents the accuracy of testing faces at the target step. Tab. 4 presents the accuracy of the step 3 behind, according to the models. Testing the step 3 behind means that HSCNN tested step 4 when training step 1, and step 5 when training step 2.

Table 3: Accuracy (%) of testing the target step (Experiment I)

Table 4: Accuracy (%) of testing the step 3 behind (Experiment II)

Based on Tabs. 3 and 4, the results of the experiments are presented in Figs. 8–10. Fig. 8 depicts the differences between model-A and AlexNet. These indicate that model-A achieved more than 10% higher accuracy in steps 3 to 5 of experiment I and in steps 1 to 3 of experiment II. Therefore, model-A can be employed for both long- and short-time changes.

Fig. 9 depicts the differences between model-B and AlexNet. These indicate that model-B achieved more than 10% higher accuracy in steps 4 to 6 of experiment I and in steps 1 to 2 of experiment II. Therefore, model-B can be employed to predict both long- and short-time changes. Fig. 10 depicts the differences between model-C and AlexNet. These indicate that model-C achieved more than 10% higher accuracy in steps 2 to 3 of experiment II, but not in any step of experiment I. Therefore, model-C is considered an appropriate method for predicting short-time changes.

Figure 8: The results of the model-A and AlexNet experiments

Figure 9: The results of the model-B and AlexNet experiments

Figure 10: The results of the model-C and AlexNet experiments

    5 Conclusion

This paper proposes a novel model, Hidden State-CNN (HSCNN), which adds to CNN a convolution layer over the hidden state. The aim of the experiments was training with past data to verify future data, and HSCNN exhibited 10 percent higher accuracy on the celeb face database than the traditional CNN. The experiments consisted of 2 groups: experiment I tested the target step, and experiment II tested the step 3 behind the training step, meaning that step 4 was tested when training step 1. Experiment I assessed long-time changes, and experiment II relatively short-time changes. Both yielded similar results. Furthermore, HSCNN achieved a more efficient use of computing resources because its structure differs from that of other CNN-RNN methods. HSCNN also represents a new and efficient training process for verifying changing faces: it used only two training images in each step and achieved 99.9% accuracy and a 0.01 loss value.

Funding Statement: This work was supported by the National Research Foundation of Korea (NRF) grant in 2019 (NRF-2019R1G1A1004773).

Conflicts of Interest: The author declares that he has no conflicts of interest to report regarding the present study.
