
    Generating Cartoon Images from Face Photos with Cycle-Consistent Adversarial Networks

2021-12-15 08:14:42 · Tao Zhang, Zhanjie Zhang, Wenjing Jia, Xiangjian He and Jie Yang
    Computers, Materials & Continua, 2021, Issue 11

    Tao Zhang, Zhanjie Zhang*, Wenjing Jia, Xiangjian He and Jie Yang

    1 School of Artificial Intelligence and Computer Science, Jiangnan University, Wuxi, 214000, China

    2 Key Laboratory of Artificial Intelligence, Jiangsu, 214000, China

    3 The Global Big Data Technologies Centre, University of Technology Sydney, Ultimo, NSW, 2007, Australia

    4 The Institute of Image Processing and Pattern Recognition, Shanghai Jiao Tong University, Shanghai, 201100, China

    Abstract: The generative adversarial network (GAN), first proposed in 2014, is a class of machine learning models that learns to capture a given data distribution; one of its most important applications is style transfer. Style transfer is a class of vision and graphics problems whose goal is to learn the mapping between an input image and an output image. CYCLE-GAN is a classic GAN model that is widely used in style transfer. Thanks to its unsupervised learning characteristics, the mapping between an input image and an output image is easy to learn. However, it is difficult for CYCLE-GAN to converge and generate high-quality images. To solve this problem, spectral normalization is introduced into each convolutional kernel of the discriminator. With spectral normalization, every convolutional kernel satisfies the Lipschitz stability constraint and its values are limited to [0,1], which facilitates the training process of the proposed model. Besides, we use a pretrained model (VGG16) to control the loss of image content in the position of the l1 regularization. To avoid overfitting, both l1 and l2 regularization terms are used in the objective loss function. In terms of the Frechet Inception Distance (FID) score, the proposed model achieves outstanding performance and preserves more discriminative features. Experimental results show that the proposed model converges faster and achieves better FID scores than the state of the art.

    Keywords: Generative adversarial network; spectral normalization; Lipschitz stability constraint; VGG16; l1 regularization term; l2 regularization term; Frechet inception distance

    1 Introduction

    Goodfellow et al. [1,2] proposed a new neural network model in 2014 and named it the generative adversarial network (GAN). Since then, the GAN has developed rapidly and promoted the development of neural networks as a whole. A GAN is composed of two parts: a generator and a discriminator. Although the generator and discriminator are two separate parts, they need to be trained at the same time. The generator generates fake data to deceive the discriminator, while the discriminator tries to identify the generated data. The GAN is now widely used due to its unsupervised learning characteristics [3]; however, it is worth noting that the GAN is prone to collapse during model convergence. Arjovsky et al. [4-7] tried to solve this problem by adjusting the objective loss function. They performed a rigorous mathematical derivation to find out why the model was prone to collapse and introduced the Wasserstein distance. With the development of GANs, the convolutional neural network was first used in DCGAN [8]. After that, GANs were gradually applied in the field of style transfer [9-11]. Style transfer means transferring the style of one image to another image. Since 2016, many style transfer methods have been proposed, such as PIX2PIX [12], CO-GAN [13], CYCLE-GAN [14], STAR-GAN [15,16], CARTOON-GAN [17], DISCO-GAN [18] and DUAL-GAN [19].

    Style transfer has been widely applied in diverse scenes [20-26]. One of the most important remaining challenges is high-resolution [27] and multi-target style transfer, and several new methods have been proposed to reach this goal. Besides, it is difficult for a GAN to achieve a Nash equilibrium, and the model may collapse during style transfer. Model collapse means that the model cannot generate better results and the objective loss function value stops decreasing. For the style transfer of face images, distorted distributions of facial features often occur. Thus, it is necessary to add key-point constraints for effective style transfer.

    For style transfer, the main methods are based on the GAN. On this basis, adjusting the architecture [28] and reconstructing the objective loss function are necessary. Some researchers use classical GANs to replace facial features, expressions [29], clothing [30], etc. Other researchers even break the limitation of a single generator to achieve style transfer between multiple domains, such as STAR-GAN. However, generating high-quality images in style transfer consumes extra hardware and time resources. Our proposed model achieves style transfer based on unpaired images and uses a lightweight neural network to generate better results. In our proposed model, the discriminator relies on embedded normalization [31-38], which reduces the oscillation of the objective loss function during model convergence. To spectrally normalize each convolutional kernel, the singular value of its weight matrix must be obtained, and an iteration method is used to compute it. After spectral normalization, each parameter matrix of the discriminator satisfies the Lipschitz constraint [39] and each change of the parameter matrix is limited to a certain range.

    At present, many problems remain to be solved in style transfer. Many researchers try to solve them by optimizing the neural network structure. Other measures have also been proposed, such as constructing new loss function terms, adding normalization and implementing attention mechanisms [40,41]. In this paper, a novel discriminator for the generative adversarial network is proposed. At the same time, by extracting the high-dimensional features of the generated images, a pretrained model (VGG16 [42]) is used to reduce the loss of image content. In the objective loss function, both l1 and l2 regularization terms [43,44] are used to avoid overfitting.

    In the process of style transfer, feature extraction and feature reconstruction are very important when treating face images with high-dimensional features. Besides, it is easier for a GAN to learn the style when the cartoon images have clear outlines. There is no fixed learning-rate schedule or objective loss function that guarantees good results in style transfer, so multiple attempts at constructing different neural network structures are necessary. It is worth noting that different methods may conflict with each other and prevent the objective loss function from converging. In this paper, we accelerate style transfer by training an end-to-end neural network with a lightweight structure.

    The remainder of this paper is organized as follows. Section 2 introduces related work. Section 3 presents the proposed GAN model. Experimental results on style transfer are presented in Section 4. Finally, conclusions are summarized in Section 5.

    2 Related Works

    Traditional non-parametric image style transfer methods are mainly based on physical models, image rendering and texture synthesis. Non-parametric methods can only extract the low-level features of an image. When processing images with complex colors and textures, the final synthesis result is relatively rough and therefore does not meet practical needs.

    In deep learning, image style transfer methods mainly include image iteration methods and model iteration methods. Image iteration methods present many advantages, such as high-quality composite images, good controllability, convenient parameter adjustment and no need for training data; however, they consume additional computing resources. More specifically, image iteration methods can be divided into maximum mean discrepancy methods [45-47], Markov random field methods [48] and deep image analogy methods [49]. At present, the model iteration method is the mainstream technology in industrial application software and can be used for fast video processing. However, the quality of the generated images needs to be further improved, and a large number of images are needed to train the model. Model iteration methods can be divided into generative model methods [50-52] and image reconstruction decoder methods [53].

    Many generative model methods have been proposed, such as CYCLE-GAN, DISCO-GAN and DUAL-GAN. CYCLE-GAN is based on the cycle consistency method, while DISCO-GAN and DUAL-GAN are inspired by machine translation. These excellent models break the limit of paired training data and successfully realize unsupervised transfer learning. However, GANs remain quite unstable during model convergence, and the discriminator makes it difficult to steer the style transfer in a clear direction. In addition, a GAN performs iterative optimization based on a divergence between image distributions rather than on the content, texture and color of the image, so it is difficult to control the process of style transfer. To address the model instability and improve the quality of the generated images, spectral normalization and a pretrained model (VGG16) are introduced into the proposed model.

    Our proposed model is different from the traditional GAN model. Instead of one generator and one discriminator, two generators and two discriminators are used. Besides, the pretrained model (VGG16) is added to reduce the loss of image content. The proposed model is based on CYCLE-GAN with several improvements. In the generator, the U-NET method is used to extract and reconstruct features. To obtain more discriminative facial features in the generated images, we use a pretrained model (VGG16) to extract high-dimensional features from the generated images and the face images, so that the generated image preserves the image content well. In our discriminator, convolutional neural networks are used instead of fully connected neural networks, and the discriminator is designed according to human visual characteristics. An image content loss term is introduced into the objective loss function.

    An open-source dataset of face images and cartoon images is collected. All the images are divided into a training set and a test set. To reduce computation, all the images are normalized when the proposed model loads the dataset. The proposed model is constructed to transfer style between face images and cartoon images, and the loss function values as well as the generated images are recorded. On the basis of the original GAN model, spectral normalization, an l2 regularization term and a pretrained model (VGG16) are introduced, and the objective loss function is recorded for comparison. After training, the generated image is resized back to the original image size. During the convergence process, images in the training set are randomly selected, and the learning rate changes at each epoch.

    The selected open-source cartoon dataset has simple features and small image sizes. For datasets with more complex features, it is better to construct deeper neural networks [54-56]. We use TensorFlow to save checkpoints during the convergence process and the built-in TensorBoard to plot the loss function curves. The generator adopts the classical architecture of a convolutional network, a residual network and a deconvolutional network, in that order. In order to stabilize the training process of the GAN and prevent image distortion, it is effective to adjust the angles of the face and cartoon images; adjusting the learning rate also helps to obtain better training results. The designed generator first extracts image features through a three-layer convolutional neural network, then learns the features through a nine-layer residual network, and finally reconstructs the image through a three-layer deconvolutional network, as sketched below. We make some improvements on the traditional GAN and propose a new discriminator. However, due to the many unstable factors in the convergence process of the GAN, the model often breaks down. The main reason is the mismatch between the abilities of the generator and the discriminator; it is also possible that the neural network is not deep enough to learn the complex features of the image dataset. Therefore, many attempts to adjust the model structure and manually tune the parameters are necessary, and we worked hard to reconstruct the architecture of the GAN.
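    As a point of reference, the following TensorFlow/Keras sketch shows how a generator of this shape (three convolutional layers, nine residual blocks, three deconvolutional layers) could be assembled. The filter counts, kernel sizes and strides are illustrative assumptions, since the paper does not list them.

```python
import tensorflow as tf
from tensorflow.keras import layers

def residual_block(x, filters=256):
    # A plain residual block; filter count and kernel size are assumptions.
    y = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    y = layers.Conv2D(filters, 3, padding="same")(y)
    return layers.add([x, y])

def build_generator(img_size=128):
    inp = layers.Input((img_size, img_size, 3))
    # Three-layer convolutional encoder (feature extraction)
    x = layers.Conv2D(64, 7, padding="same", activation="relu")(inp)
    x = layers.Conv2D(128, 3, strides=2, padding="same", activation="relu")(x)
    x = layers.Conv2D(256, 3, strides=2, padding="same", activation="relu")(x)
    # Nine residual blocks (feature learning)
    for _ in range(9):
        x = residual_block(x, 256)
    # Three-layer deconvolutional decoder (feature reconstruction)
    x = layers.Conv2DTranspose(128, 3, strides=2, padding="same", activation="relu")(x)
    x = layers.Conv2DTranspose(64, 3, strides=2, padding="same", activation="relu")(x)
    out = layers.Conv2DTranspose(3, 7, padding="same", activation="tanh")(x)
    return tf.keras.Model(inp, out)
```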

    Generally, the main contributions of this paper are summarized as follows:

    · The key point is to obtain the parameter matrix of the convolutional neural network and calculate its singular value. Since computing singular values exactly consumes extra computing resources, the power iteration method is first used to approximate the largest singular value, which is then used to conduct spectral normalization. After adding spectral normalization, the parameter matrix of the convolutional neural network meets the Lipschitz constraint: entries of the parameter matrix greater than 1 are rescaled to values no larger than 1.

    · The proposed discriminator is designed according to human visual characteristics. It has six convolutional layers. Each convolutional kernel is designed to satisfy the Lipschitz stability constraint, and its values are limited to [0,1].

    · We use a pretrained model (VGG16) to extract the high-dimensional features of the face images and the generated images. In this way, the generator can learn the style features from the style images while preserving the content features of the face images, and an l1 loss is applied to these features. In the objective loss function, l1 and l2 regularization terms are used to avoid overfitting (see the sketch after this list).

    · We collected more images of faces and cartoons. When the dataset is loaded, each image is normalized to reduce computation. After style transfer, the generated image is restored to a 128×128 pixel image.
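    The l1 and l2 regularization terms mentioned above can be attached to any base objective by summing penalties over the trainable weights. The sketch below shows one way to do this in TensorFlow; the weighting coefficients are illustrative assumptions, not the paper's settings.

```python
import tensorflow as tf

def regularized_objective(base_loss, model, l1_weight=1e-5, l2_weight=1e-4):
    # Add l1 and l2 penalties over the model's trainable weights to a base loss.
    # The coefficient values here are placeholders, not the paper's settings.
    l1_term = tf.add_n([tf.reduce_sum(tf.abs(w)) for w in model.trainable_weights])
    l2_term = tf.add_n([tf.reduce_sum(tf.square(w)) for w in model.trainable_weights])
    return base_loss + l1_weight * l1_term + l2_weight * l2_term
```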

    3 Proposed Model

    3.1 Basic Model

    A GAN can be divided into two parts: a generator and a discriminator. The generator and discriminator constantly learn from each other as the model converges. In the original GAN theory, it is not necessary to define the generator and discriminator as neural networks, but at present deep neural networks are generally used for both. To obtain better generated images, the training of the generator and discriminator needs to be scheduled asynchronously, and the best strategy is to keep their loss function values close to each other. The generator is trained to generate realistic images to deceive the discriminator, while the discriminator is trained not to be deceived by the generated images. In the first epoch, the generated images look very messy. Subsequently, the discriminator receives fake and real images and learns to distinguish between them, while the generator receives "feedback" from the discriminator through backpropagation and generates better images. In this paper, the face image undergoes feature extraction, feature mapping and feature reconstruction in turn. Through constant learning, the generator produces fake images that are similar to real faces, and the discriminator identifies the generated images. The discriminator continuously improves its discriminating ability and thereby guides the generator, while the generator keeps generating fake images to deceive the discriminator. Finally, the discriminator cannot distinguish whether a generated image is real or fake. After the generator and the discriminator have learned from each other for a period of time, the training of the GAN is finished. The architecture of the GAN model is shown in Fig. 1.

    Figure 1: The model of the generative adversarial network (I. J. Goodfellow)

    Different from traditional neural network models, the loss function of a GAN consists of two parts: the generator loss function and the discriminator loss function, described in Eqs. (1)-(3). G represents the generator, D represents the discriminator, P_z(z) represents the fake data distribution and P_data(x) represents the true data distribution.

    Generator loss function:

    $$L_G=\mathbb{E}_{z\sim P_z(z)}\big[\log\big(1-D(G(z))\big)\big] \tag{1}$$

    Discriminator loss function:

    $$L_D=-\mathbb{E}_{x\sim P_{data}(x)}\big[\log D(x)\big]-\mathbb{E}_{z\sim P_z(z)}\big[\log\big(1-D(G(z))\big)\big] \tag{2}$$

    Object loss function:

    $$\min_G\max_D V(D,G)=\mathbb{E}_{x\sim P_{data}(x)}\big[\log D(x)\big]+\mathbb{E}_{z\sim P_z(z)}\big[\log\big(1-D(G(z))\big)\big] \tag{3}$$

    As shown in Eqs. (1) and (2), the loss functions of the generator and discriminator are treated as two separate parts, and different models require different objective loss functions. Based on backpropagation [57], the parameter matrices of the convolutional neural network are continuously optimized to minimize the objective loss function. The GAN places no specific restrictions on the architecture of the generator and discriminator; it only specifies a framework, and any neural network can be used to implement it.
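    A minimal sketch of how Eqs. (1) and (2) translate into code, assuming the discriminator outputs probabilities (sigmoid activation); in practice many implementations minimize the non-saturating variant -E[log D(G(z))] for the generator instead.

```python
import tensorflow as tf

def discriminator_loss(d_real, d_fake, eps=1e-8):
    # Eq. (2): -E[log D(x)] - E[log(1 - D(G(z)))]
    return (-tf.reduce_mean(tf.math.log(d_real + eps))
            - tf.reduce_mean(tf.math.log(1.0 - d_fake + eps)))

def generator_loss(d_fake, eps=1e-8):
    # Eq. (1): E[log(1 - D(G(z)))], which the generator minimizes
    return tf.reduce_mean(tf.math.log(1.0 - d_fake + eps))
```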

    3.2 Improved Model

    A traditional GAN only requires the generator to generate data that deceives the discriminator and shares similar features with the original data; it cannot achieve style transfer from one domain to another. On the basis of CYCLE-GAN, a content consistency loss function is proposed with the help of the pretrained model (VGG16). In this paper, VGG16 is used to extract the high-dimensional features of the generated image and the face image, so that the content of the generated image is controlled. The proposed model does not need to be trained on paired images, is easy to train, converges fast and has a wide range of applications. Its purpose is to learn a general mapping from image domain X to image domain Y. An overview of the proposed model is shown in Fig. 2.
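    A possible implementation of the VGG16-based content loss is sketched below: features are extracted from the input faces and the generated images with a frozen, pretrained VGG16, and their l1 distance is penalized. The choice of feature layer and the assumption that images lie in [-1, 1] are illustrative, not taken from the paper.

```python
import tensorflow as tf

# Frozen feature extractor built from a pretrained VGG16.
# "block4_conv3" is an assumed feature layer; the paper does not name one.
vgg = tf.keras.applications.VGG16(include_top=False, weights="imagenet")
feature_extractor = tf.keras.Model(vgg.input, vgg.get_layer("block4_conv3").output)
feature_extractor.trainable = False

def content_loss(real_faces, generated):
    # Map images from [-1, 1] back to [0, 255] before VGG16 preprocessing (assumption).
    pre = tf.keras.applications.vgg16.preprocess_input
    f_real = feature_extractor(pre(real_faces * 127.5 + 127.5))
    f_fake = feature_extractor(pre(generated * 127.5 + 127.5))
    return tf.reduce_mean(tf.abs(f_real - f_fake))  # l1 distance between feature maps
```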

    Figure 2: An overview of the proposed model

    The proposed model consists of two discriminators and two generators, and controls the image content loss through the pretrained model. The loss functions are shown in Eqs. (4)-(7). L represents a loss function, X represents an image in domain X and Y represents an image in domain Y.

    For the generator G_X: X → Y and its discriminator D_Y, the loss function is defined as Eq. (4):

    $$L_{GAN}(G_X,D_Y,X,Y)=\mathbb{E}_{y\sim P_{data}(y)}\big[\log D_Y(y)\big]+\mathbb{E}_{x\sim P_{data}(x)}\big[\log\big(1-D_Y(G_X(x))\big)\big] \tag{4}$$

    For the generator G_Y: Y → X and its discriminator D_X, the loss function is defined as Eq. (5):

    $$L_{GAN}(G_Y,D_X,Y,X)=\mathbb{E}_{x\sim P_{data}(x)}\big[\log D_X(x)\big]+\mathbb{E}_{y\sim P_{data}(y)}\big[\log\big(1-D_X(G_Y(y))\big)\big] \tag{5}$$

    The cycle consistency loss function is defined as Eq. (6):

    $$L_{cyc}(G_X,G_Y)=\mathbb{E}_{x\sim P_{data}(x)}\big[\|G_Y(G_X(x))-x\|_1\big]+\mathbb{E}_{y\sim P_{data}(y)}\big[\|G_X(G_Y(y))-y\|_1\big] \tag{6}$$

    The overall objective loss function combines the two adversarial losses with the cycle consistency loss, as defined in Eq. (7):

    $$L(G_X,G_Y,D_X,D_Y)=L_{GAN}(G_X,D_Y,X,Y)+L_{GAN}(G_Y,D_X,Y,X)+\lambda L_{cyc}(G_X,G_Y) \tag{7}$$

    3.3 Spectral Normalization

    In 2017, WGAN pointed out that each convolutional kernel of the discriminator must satisfy a Lipschitz constraint. A straightforward way to enforce this is to clip values greater than 1 in the convolutional kernel to 1, but the discriminator loss function then becomes discontinuous, which leads to difficult optimization problems; various approaches, such as WGAN-GP, have been proposed to address this. In this paper, spectral normalization is used to achieve the Lipschitz stability constraint. The core idea is to spectrally normalize the weight matrix of each convolutional layer by scaling it with its maximum singular value. In the proposed model, the generator adopts convolutional, residual and deconvolutional networks in sequence, and the discriminator adopts a five-layer neural network designed according to human visual characteristics. For the convolutional networks in the proposed model, the values of the weight matrices are limited to [0,1], which helps reduce model oscillation. To spectrally normalize each convolutional parameter matrix, its largest singular value must be computed; in this paper, the power iteration method is used to approximate it. The iteration process is shown in Algorithm 1.

    Algorithm 1: Power iteration method
    1. v_l^0 ← a random Gaussian vector;
    2. WHILE not converged:
    3.   u_l^k ← W_l v_l^{k-1}, normalization: u_l^k ← u_l^k / ‖u_l^k‖;
    4.   v_l^k ← (W_l)^T u_l^k, normalization: v_l^k ← v_l^k / ‖v_l^k‖;
    5. END-WHILE
    6. σ_l(W) = (u_l^k)^T W_l v_l^k;
    7. W W^T u = σ(W)^2 u ⇒ u^T W W^T u = σ(W)^2, as ‖u‖ = 1;
    8. σ(W) = u^T W v, as v = W^T u / ‖W^T u‖;
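    The following NumPy sketch mirrors Algorithm 1, under the common convention that a convolutional kernel is first flattened into a two-dimensional matrix; one power-iteration step per training update is usually sufficient in practice.

```python
import numpy as np

def spectral_normalize(W, n_iters=1):
    # Flatten a conv kernel of shape (kh, kw, c_in, c_out) into a 2-D matrix.
    W2d = W.reshape(-1, W.shape[-1])
    v = np.random.randn(W2d.shape[1])        # random Gaussian start vector
    for _ in range(n_iters):
        u = W2d @ v
        u /= np.linalg.norm(u)               # normalize the left vector
        v = W2d.T @ u
        v /= np.linalg.norm(v)               # normalize the right vector
    sigma = u @ W2d @ v                      # sigma(W) = u^T W v
    return W / sigma                         # spectrally normalized kernel
```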

    The discriminator is mainly composed of a convolutional neural network. It contains a five-layer convolutional network to extract the high-dimensional features of the image, and the kernel size is set to 4 at each layer. The proposed discriminator is designed according to human visual characteristics. It is worth noting that every convolutional kernel is subjected to spectral normalization after each iteration, so that the values of the convolutional kernels satisfy the Lipschitz stability constraint. The architecture of the discriminator is illustrated in Fig. 3.

    Figure 3: The architecture of the discriminator with spectral normalization
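    Under the description above, a spectrally normalized five-layer discriminator could look like the sketch below. The Keras kernel constraint re-estimates sigma(W) with a single power-iteration step after every update, which is a simplification of the usual persistent-estimate implementation; filter counts, strides and the LeakyReLU slope are assumptions.

```python
import tensorflow as tf
from tensorflow.keras import layers, constraints

class SpectralNormConstraint(constraints.Constraint):
    # Rescales a kernel to W / sigma(W) after each weight update,
    # with sigma(W) estimated by one power-iteration step (Algorithm 1).
    def __call__(self, w):
        w2d = tf.reshape(w, [-1, w.shape[-1]])
        v = tf.random.normal([w2d.shape[1], 1])
        u = tf.linalg.l2_normalize(tf.matmul(w2d, v), axis=0)
        v = tf.linalg.l2_normalize(tf.matmul(w2d, u, transpose_a=True), axis=0)
        sigma = tf.matmul(tf.matmul(u, w2d, transpose_a=True), v)
        return w / sigma

def build_discriminator(img_size=128):
    inp = layers.Input((img_size, img_size, 3))
    x = inp
    for filters in (64, 128, 256, 512):        # four strided 4x4 conv layers
        x = layers.Conv2D(filters, 4, strides=2, padding="same",
                          kernel_constraint=SpectralNormConstraint())(x)
        x = layers.LeakyReLU(0.2)(x)
    out = layers.Conv2D(1, 4, padding="same",  # fifth 4x4 conv layer, patch outputs
                        kernel_constraint=SpectralNormConstraint())(x)
    return tf.keras.Model(inp, out)
```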

    4 Results and Analysis

    4.1 Datasets

    In the experiment, we collect 200 face images and 200 cartoon images. The face images are divided into a training set X and a test set X; similarly, the cartoon images are divided into a training set Y and a test set Y. The training and test sets are split at a ratio of 8 to 2. Increasing the number of training images and improving their quality are very helpful for the convergence of the proposed model. If the numbers of images in training set X and training set Y are not the same, the proposed model selects an equal number of images from each. Visual examples of the training images are shown in Fig. 4.
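    A small sketch of the 8:2 split and the equal-size selection described above; the file-listing details are assumptions.

```python
import random

def split_dataset(paths, train_ratio=0.8, seed=100):
    # Shuffle image paths and split them 8:2 into training and test sets.
    # seed=100 mirrors the random seed reported in Section 4.2.
    paths = list(paths)
    random.Random(seed).shuffle(paths)
    n_train = int(len(paths) * train_ratio)
    return paths[:n_train], paths[n_train:]

def equalize(train_x, train_y):
    # If the two training sets differ in size, truncate to the smaller one
    # so the model sees equal numbers of face and cartoon images.
    n = min(len(train_x), len(train_y))
    return train_x[:n], train_y[:n]
```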

    Figure 4: Visual examples of the training set. (a) Face images. (b) Cartoon images

    4.2 The Training Parameters

    The number of epochs is set to 200, and each epoch contains 200 steps. The learning rate is set to 0.0002 and the random seed is set to 100. The Adam optimizer [58] is used, with the default parameter set to 0.5. The loss function value is recorded every 200 steps, and the style transfer result is saved every 1000 steps. During the convergence of the proposed model, the images are normalized to reduce computation. To improve the experimental results, it is effective to use cartoon images with clear outlines.
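    The training configuration above could be expressed as follows; interpreting the "default parameter set to 0.5" as Adam's beta_1 is an assumption on our part, though it is the usual choice for GAN training.

```python
import tensorflow as tf

EPOCHS = 200
STEPS_PER_EPOCH = 200
LEARNING_RATE = 2e-4
SEED = 100

tf.random.set_seed(SEED)

# beta_1 = 0.5 is our reading of "the default parameter set to 0.5".
gen_optimizer = tf.keras.optimizers.Adam(learning_rate=LEARNING_RATE, beta_1=0.5)
disc_optimizer = tf.keras.optimizers.Adam(learning_rate=LEARNING_RATE, beta_1=0.5)
```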

    4.3 Experimental Results

    With spectral normalization, the loss function value converges more easily and oscillates less, which helps reduce the computation time. In Figs. 5-8, the horizontal axis represents the number of steps, and the vertical axis represents the loss value. The figures show the discriminator loss function values in domain X and domain Y, as well as the content consistency loss value and the objective loss value of the proposed model. The number of training steps for the proposed model is 160,000, and the learning rate changes after each epoch in this experiment. For CYCLE-GAN, the loss function values of the discriminators in domain X and domain Y are shown in Fig. 5, and the content consistency loss value and the objective loss value are shown in Fig. 6.

    After adding spectral normalization to the discriminator and introducing the pretrained model (VGG16), the training set and training parameters were kept consistent with the original model. Compared with CYCLE-GAN, the oscillation of the discriminator loss value is significantly reduced, and the objective loss function of the proposed model converges quickly, as shown in Figs. 7 and 8.

    Figure 5: The discriminator loss function value in CYCLE-GAN. (a) Loss function value in domain X. (b) Loss function value in domain Y

    Figure 6: The content consistency loss value and the objective loss value in CYCLE-GAN. (a) Content consistency loss value. (b) Objective loss value

    Figure 7: The discriminator loss function value in the proposed model. (a) Loss function value in domain X. (b) Loss function value in domain Y

    As shown in Fig. 9, the face images are transferred into cartoon images; the results generated by CYCLE-GAN and by our proposed model are shown side by side.

    In this paper, we evaluate the FID [59] scores between the face images and the generated images. The FID comparison between CYCLE-GAN and our model is shown in Tab. 1, where the image number corresponds to the generated cartoon images in Fig. 9.
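    For reference, FID compares the mean and covariance of Inception features of two image sets; a minimal NumPy/SciPy sketch is shown below, assuming the Inception activations have already been extracted.

```python
import numpy as np
from scipy.linalg import sqrtm

def fid_score(feats_real, feats_fake):
    # Frechet Inception Distance between two sets of Inception activations
    # (one row per image, e.g. 2048-d pooled features).
    mu1, mu2 = feats_real.mean(axis=0), feats_fake.mean(axis=0)
    sigma1 = np.cov(feats_real, rowvar=False)
    sigma2 = np.cov(feats_fake, rowvar=False)
    covmean = sqrtm(sigma1 @ sigma2)
    if np.iscomplexobj(covmean):          # drop small imaginary parts from sqrtm
        covmean = covmean.real
    diff = mu1 - mu2
    return float(diff @ diff + np.trace(sigma1 + sigma2 - 2.0 * covmean))
```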

    Figure 8: The content consistency loss value and the objective loss value in the proposed model. (a) Content consistency loss value. (b) Objective loss value

    Figure 9: Visual examples after style transfer. (a) Face images. (b) CYCLE-GAN. (c) Ours

    Table 1: Performance evaluation based on the FID metric (lower is better)

    5 Conclusions

    Style transfer mainly relies on the unsupervised learning characteristics of the GAN: the generator deceives the discriminator by enhancing its ability to fake, while the discriminator strengthens its discriminative ability by constantly judging the generated images. Different from CYCLE-GAN, we propose to add a pretrained model (VGG16) to control the content loss in the position of the l1 loss, and spectral normalization is used to reduce the oscillation of the loss function value. During the convergence of the proposed model, we found that the quality of the cartoon images plays an important role in style transfer, so it is necessary to select a high-quality training set. The higher the resolution of the cartoon images, the deeper the neural network needs to be; to learn complex image features, it is necessary to increase the depth and width of the network. However, the problem of GAN model collapse remains to be solved: notably, the objective loss value may keep dropping while the generated images become very distorted. Therefore, we made many attempts to design a reasonable GAN structure.

    Acknowledgement: We thank all the team members for their efforts.

    Funding Statement: This work is supported by the National Natural Science Foundation of China (No. 61702226); the 111 Project (B12018); the Natural Science Foundation of Jiangsu Province (No. BK20170200); and the Fundamental Research Funds for the Central Universities (No. JUSRP11854).

    Conflicts of Interest: The authors declare that they have no conflicts of interest to report regarding the present study.
