
    Generative Adversarial Network with Separate Learning Rule for Image Generation

    2020-06-04 06:38:50

    YIN Feng(印 峰), CHEN Xinyu(陳新雨), QIU Jie(邱 杰), KANG Yongliang(康永亮)

    1 College of Automation and Electronic Information, Xiangtan University, Xiangtan 411105, China
    2 National Engineering Laboratory of Robot Vision Perception and Control Technology, Changsha 410012, China

    Abstract: Boundary equilibrium generative adversarial networks (BEGANs) are an improved version of generative adversarial networks (GANs). In this paper, an improved BEGAN with a skip-connection technique in the generator and the discriminator is proposed. Moreover, an alternative time-scale update rule is adopted to balance the learning rates of the generator and the discriminator. Finally, the performance of the proposed method is quantitatively evaluated by the Fréchet inception distance (FID) and the inception score (IS). The test results show that the performance of the proposed method is better than that of the original BEGAN.

    Key words: generative adversarial network (GAN); boundary equilibrium generative adversarial network (BEGAN); Fréchet inception distance (FID); inception score (IS)

    Introduction

    As one kind of generative models, generative adversarial networks (GANs)[1] excel at generating realistic images[2-4], creating videos[5-7] and producing text[8-9]. A GAN is a deep convolutional neural network composed of two subnetworks. One subnetwork is used as a generator to synthesize data from random noise. The other is used as a discriminator to separate the synthesized data (also known as fake data) from real data. The competition between the generator and the discriminator drives both to improve until the counterfeits are indistinguishable from real data. As a powerful subclass of generative models, GANs have achieved great success in many fields such as semi-supervised learning[10], semantic segmentation[11] and small object detection[12]. However, they do not perform well on some practical issues. In general, GANs are very hard to train effectively without additional auxiliaries because of the high demand for well-designed network structures and hyper-parameters. There have been many attempts to solve these issues, mainly from three perspectives.

    The first attempt is the improvement of the objective functions. Nowozin et al.[13] noticed that the GAN framework is not tied to the Jensen-Shannon (JS) divergence and can be generalized to other divergences. Least-squares GAN (LSGAN)[14] replaces the sigmoid cross-entropy loss of the standard GAN with a least-squares loss, which improves the quality of generated images and stabilizes training by directly moving generated samples toward the real data distribution. Wasserstein GAN (WGAN)[15] introduces the Earth-Mover (EM) distance, which has superior smoothing characteristics compared with the Kullback-Leibler (KL) divergence and the JS divergence. The EM distance used as a cost function not only solves the problem of unstable training, but also provides a reliable training-process indicator. WGAN-gradient penalty (WGAN-GP) is an improved version of WGAN, in which an approximate 1-Lipschitz constraint on the discriminator is enforced with a gradient penalty. WGAN-GP achieves very good results in further tests.

    The second attempt is the modification of the network structure. Well-designed generators and discriminators are crucial for GANs. The most commonly used structure in the image processing field is the convolutional neural network (CNN)[2]. The core idea of the approach in Ref. [2] is adopting and modifying demonstrated changes to CNN architectures. Experiments show that deep convolutional generative adversarial networks (DCGANs) provide a high-quality generation process and are particularly good at semi-supervised classification tasks. Self-attention generative adversarial networks (SAGANs)[4] use the self-attention paradigm to capture long-range spatial relationships in images to better synthesize new images. With powerful hardware (tensor processing units, TPUs) and a huge number of parameters, large-scale GAN training for high-fidelity natural image synthesis (denoted by BigGAN)[16] increases the batch size and channel count to produce realistic, sharp pictures.

    The third attempt is the use of additional networks or supervision conditions. In addition to different cost functions and network frameworks, additional networks or supervision conditions are often adopted to further improve the performance of GANs. It is interesting to note that the architecture of the generator in GANs does not differ significantly from other approaches such as variational auto-encoders (VAEs)[17]. VAEs, GANs and their variants are three kinds of generation models based on deep learning. It is common practice to combine GANs with auto-encoder networks, as in VAE-GANs[18] and energy-based GANs (EBGANs)[19]. Compared with using a VAE alone, the VAE-GAN, which combines VAEs and GANs, produces clearer pictures. In the VAE-GAN, a discriminator is used to decide whether the input image comes from the real data or from generated samples. In contrast, the discriminator in the EBGAN is adopted to assess the reconstructability of the input image; that is, it remembers what the real data distribution looks like and gives a high score whenever an arbitrary input x is close to a real sample. In other improved GANs, supervision conditions are added. In conditional generative adversarial nets (CGANs), an additional condition variable is introduced into both the generator and the discriminator, and the involved information can be used to guide the data generation process[20]. The information maximizing GAN (Info-GAN) contains a hidden variable c, also known as the latent code. By associating the latent code and the generated data with additional constraints, letting c carry interpretable information about the data helps Info-GAN[21] find an interpretable representation. Warde-Farley and Bengio[22] proposed a denoising feature matching (DFM) technique to guide the generator toward probable configurations of abstract discriminator features; a de-noising auto-encoder is used to greatly improve the GAN image model.

    In this paper, we propose an improved BEGAN with a skip-connection technique in the generator and the discriminator. The skip-connection technique allows feature information to be transmitted directly across layers; its greatest advantage is that it reduces information loss during transmission, and the preserved feature information improves the quality of the generated images. Moreover, an alternative time-scale update rule is adopted to balance the learning rates of the generator and the discriminator. As a result, more realistic pictures can be generated by the proposed method. Finally, we evaluate the performance of the proposed method and compare it with BEGANs[23], improving generative adversarial networks with DFM[22], adversarially learned inference (ALI)[24], improved techniques for training GANs (Improved GANs)[25], and generalization and equilibrium in generative adversarial nets (denoted by MIX+WGAN)[26].

    1 Review of Generative Adversarial Networks

    A GAN involves a generator network and a discriminator network, whose purposes are to map random noise to samples and to discriminate real from generated samples, respectively[16]. Let Gθ denote the generator with parameters θ and Dφ denote the discriminator with parameters φ. Formally, the GAN objective, in its original form, involves finding a Nash equilibrium of the following two-player min-max problem, at which neither player can improve its cost unilaterally. Both players aim to minimize their own cost functions. The cost function for the discriminator is defined as

    JD(θ, φ) = -Ex~p(x)[log Dφ(x)] - Ez~p(z)[log(1 - Dφ(Gθ(z)))],    (1)

    where the distributions of real data x and random noise z are p(x) and p(z), respectively. In the minimax GAN, the discriminator (shown in Eq. (1)) attempts to distinguish generated (fake) images from real images and outputs a probability. Simultaneously, the generator attempts to fool the discriminator and learns to generate samples that have a low probability of being judged fake. The cost function for the generator in the minimax GAN is defined as

    JG(θ, φ) = Ez~p(z)[log(1 - Dφ(Gθ(z)))].    (2)

    To improve the gradient, Goodfellow et al.[1] also proposed a non-saturating GAN with an alternative cost function, where the generator instead aims to maximize the probability of generated samples being judged real[27]. The cost function for the generator in the non-saturating GAN is defined as

    JG(θ, φ) = -Ez~p(z)[log Dφ(Gθ(z))].    (3)

    The original GAN adopts the JS divergence to measure the distance between the real data distribution and the generated data distribution. It is noted that the generator and the discriminator can be various neural networks without any limitations. During GAN training, the goals of the discriminator and the generator are opposite: the former maximizes the discriminative accuracy to better distinguish real data from generated data, whereas the latter minimizes the discriminative accuracy of the discriminator. Generally, without auxiliary stabilization techniques, the training procedure of GANs is notoriously brittle.
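    The three cost functions above can be evaluated numerically. The following is a minimal pure-Python sketch, not the authors' implementation; the mini-batch values for D(x) and D(G(z)) are invented for illustration:

```python
import math

def d_loss(d_real, d_fake):
    # Eq. (1): discriminator cost, averaged over a mini-batch
    return (-sum(math.log(p) for p in d_real) / len(d_real)
            - sum(math.log(1.0 - p) for p in d_fake) / len(d_fake))

def g_loss_minimax(d_fake):
    # Eq. (2): minimax generator cost E[log(1 - D(G(z)))]
    return sum(math.log(1.0 - p) for p in d_fake) / len(d_fake)

def g_loss_nonsat(d_fake):
    # Eq. (3): non-saturating generator cost -E[log D(G(z))]
    return -sum(math.log(p) for p in d_fake) / len(d_fake)

# Toy discriminator outputs: D(x) on real samples, D(G(z)) on fakes.
loss_d = d_loss([0.9, 0.8], [0.1, 0.2])
# The non-saturating loss keeps a strong gradient when the
# discriminator wins, i.e. when D(G(z)) is small.
```

    Comparing g_loss_nonsat at small versus moderate D(G(z)) shows why the non-saturating form is preferred early in training: the loss (and hence its gradient) grows as the discriminator gets better at rejecting fakes.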

    2 Proposed Methods and Architectures

    Under the condition of a near-optimal discriminator, minimizing the loss of the generator is equivalent to minimizing the JS divergence between the real data distribution and the generated distribution. In practice, the two distributions almost never have substantial overlap, which eventually drives the gradient of the generator close to 0; in other words, the gradient vanishes. This problem can be alleviated to some extent by the alternative non-saturating cost function, whose minimization is equivalent to minimizing an unreasonable distance measurement. However, two problems remain to be solved: one is gradient instability, and the other is mode collapse.

    WGANs[15], BEGANs[23] and SAGANs[4] are excellent methods proposed to solve the above problems. WGANs suggest the EM distance, also called the Wasserstein distance, as a measure of the discrepancy between two distributions. BEGANs adopt the distance between loss distributions instead of sample distributions. A self-attention mechanism is added into the SAGAN; moreover, spectral normalization and the two time-scale update rule (TTUR) optimization techniques are used to stabilize GAN training. Next, we will develop an improved BEGAN. The proposed BEGAN-based network is shown in Fig. 1. The generator and the discriminator both adopt an encoder-decoder framework. The architecture of the discriminator D is a deep convolutional neural network. Nx is short for the dimension of x: Nx = H×W×C, where H, W and C are the height, width and number of color channels, respectively; for RGB images, C = 3. The generator G outputs images of dimension H×W×C and uses the same architecture as the decoder of the discriminator. The generator network illustrated in the upper section of Fig. 1 contains nine convolutional layers and three up-sampling convolutional layers.

    Fig. 1 Architectures of the generator networks and the discriminator networks with convolutional kernel size and output channels for each convolutional layer (SL denotes the skip layer; Conv w=(k, k) denotes a convolutional layer with k×k kernel; in d=(a, b), a and b denote input and output filters, respectively; n denotes the number of filters/channels)

    It is noted that the proposed method also uses the auto-encoder as the discriminator and aims to match the auto-encoder loss distributions using a loss derived from the Wasserstein distance. The definitions of the Wasserstein distance and its lower bound are stated as follows.

    Let L(ν) = |ν - D(ν)|η be the auto-encoder loss, where η ∈ {1, 2} is the target norm. Let Γ(u1, u2) be the set of all couplings of u1 and u2, where u1 and u2 are two distributions of auto-encoder losses, and let m1 ∈ R and m2 ∈ R be their respective means. The Wasserstein distance is defined as

    W1(u1, u2) = inf γ∈Γ(u1, u2) E(x1, x2)~γ[|x1 - x2|].    (4)

    By Jensen's inequality, a lower bound of W1(u1, u2) can be derived as

    inf E[|x1 - x2|] ≥ inf |E[x1 - x2]| = |m1 - m2|.    (5)

    Let u1 be the distribution of real data losses and u2 be the distribution of the loss L(G(z)). Equation (5) shows that W1(u1, u2) is bounded below by |m1 - m2|.
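    The bound in Eq. (5) can be checked numerically: for any coupling of the two loss distributions, the mean absolute difference is at least the absolute difference of the means. A small pure-Python sketch with invented loss samples:

```python
def mean(xs):
    return sum(xs) / len(xs)

# Two small samples standing in for draws from the loss distributions u1, u2.
x1 = [1.0, 1.5, 2.0]    # auto-encoder losses on real data
x2 = [2.0, 0.25, 0.75]  # auto-encoder losses on generated data

coupled = list(zip(x1, x2))  # one particular coupling of u1 and u2
lhs = mean([abs(a - b) for a, b in coupled])  # E[|x1 - x2|] under this coupling
rhs = abs(mean(x1) - mean(x2))                # |m1 - m2|
assert lhs >= rhs  # Jensen: every coupling respects the lower bound
```

    Permuting either sample changes the coupling and hence the left-hand side, but the right-hand side, which depends only on the means, stays fixed below it.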

    In order to maximize the distance between the auto-encoder losses of real and generated data, there are only two ways to maximize |m1 - m2|: either W1(u1, u2) ≥ m1 - m2 with m1 → ∞ and m2 → 0, or W1(u1, u2) ≥ m2 - m1 with m1 → 0 and m2 → ∞. The latter is selected, since minimizing m1 naturally leads to auto-encoding the real images. Similar to the BEGAN, the objective of the network training is to meet

    LD = L(x) - kt·L(G(zD)),
    LG = L(G(zG)),
    kt+1 = kt + λk(γL(x) - L(G(zG))) for each training step t,    (6)

    where L(x) is the auto-encoder loss of real data, L(x) = |x - D(x)|η; L(G(zD)) is the auto-encoder loss of generated data, L(G(zD)) = |G(zD) - D(G(zD))|η; the variable kt ∈ [0, 1] controls how much emphasis is placed on the generator loss during gradient descent, and it is initialized as k0 = 0 in this work; λk is the learning rate of kt; and the hyper-parameter γ = E[L(G(z))]/E[L(x)] ∈ [0, 1] balances the two goals of auto-encoding real images and discriminating real images from generated images. At the same time, γ is also an indicator of image diversity, where a lower value means lower image diversity. A global measure of convergence is formulated as the sum of two terms:

    Mglobal = L(x) + |γL(x) - L(G(zG))|,    (7)

    where a lower Mglobal means a better training process. Figure 2 shows the detailed process of image generation. To generate more realistic images, we use Algorithm 1, which converges to a good estimator of pdata given enough capacity and training time; Adam stands for adaptive moment estimation.
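    The bookkeeping in Eqs. (6) and (7) can be sketched as follows. This is a minimal pure-Python illustration with invented loss values; the defaults for λk and γ are placeholders, not the authors' training settings:

```python
def began_step(loss_real, loss_fake_d, loss_fake_g, k, lam_k=0.001, gamma=0.5):
    """One BEGAN bookkeeping step: compute L_D, L_G, update k_t, M_global."""
    l_d = loss_real - k * loss_fake_d            # discriminator objective L_D
    l_g = loss_fake_g                            # generator objective L_G
    balance = gamma * loss_real - loss_fake_g    # proportional-control error
    k = min(max(k + lam_k * balance, 0.0), 1.0)  # clamp k_t to [0, 1]
    m_global = loss_real + abs(balance)          # convergence measure, Eq. (7)
    return l_d, l_g, k, m_global

l_d, l_g, k, m = began_step(0.8, 0.6, 0.6, k=0.0)
```

    The update of kt acts as a proportional controller: whenever the generator loss falls below γ·L(x), kt shrinks, putting more weight back on auto-encoding real images.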

    Fig. 2 Detailed process of image generation

    When applying a GAN, it is required that the discriminating ability of the discriminator be better than the generating ability of the current generator. To achieve this, the usual practice is to update the parameters of the discriminator more often than those of the generator during training. It is noted that the discriminator often learns faster than the generator in practice. To balance the learning speeds, the TTUR[28] is adopted during the training process. Specifically, the TTUR uses the same update frequency but different learning rates for the discriminator and the generator, so only the learning rates need to be adjusted. Here, the learning rates are set to 0.004 and 0.001 for the discriminator and the generator, respectively, because a relatively high learning rate accelerates the learning of the discriminator, while a small learning rate is necessary for the generator to successfully fool the discriminator.
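    The TTUR described above amounts to two optimizer configurations that differ only in learning rate. The schematic pure-Python step below uses the 0.004/0.001 rates from the text; the plain gradient-descent update and the toy parameter values are illustrative stand-ins for the Adam updates used in the paper:

```python
LR_DISCRIMINATOR = 0.004  # higher rate: the discriminator learns faster
LR_GENERATOR = 0.001      # lower rate: the generator changes more slowly

def sgd_step(params, grads, lr):
    """One plain gradient-descent step (a stand-in for Adam)."""
    return [p - lr * g for p, g in zip(params, grads)]

# With equal gradients, the discriminator moves 4x farther per update.
d_params = sgd_step([0.5, -0.2], [1.0, -1.0], LR_DISCRIMINATOR)
g_params = sgd_step([0.3, 0.1], [1.0, 1.0], LR_GENERATOR)
```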

    Furthermore, a strategy of adding skip connections[29] between different layers is applied to strengthen feature propagation and encourage feature reuse. Feature information can be transmitted directly across layers with the help of the additional skip connections, so the integrity of the features is preserved to the greatest extent. The skip-connection structure adopted in our model connects the input of each convolution block additively to its output. These skip connections are only added to the generator and the decoder. It is noted that another skip-connection structure, similar to a dense block, is mentioned in the BEGAN; by comparison, our structure is more suitable for processing large datasets because of its simple connectivity. The data flow diagram of the generator and the discriminator in our model is shown in Fig. 3.
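    The adopted skip-connection structure — the input of each convolution block added elementwise to its output — can be sketched as follows. This is a schematic pure-Python version in which block is a stand-in for the paper's two-layer convolution block:

```python
def block(x):
    """Stand-in for a convolution block (here: a fixed elementwise map)."""
    return [2.0 * v + 1.0 for v in x]

def skip_block(x):
    """Convolution block with a skip connection: output = block(x) + x."""
    return [b + v for b, v in zip(block(x), x)]

y = skip_block([1.0, -1.0])
```

    Because the input is re-added unchanged, the block only has to learn a residual correction, and feature information survives even if the block's own output is small.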

    Fig. 3 Data flow diagram of the generator and the discriminator (L(·) denotes auto-encoder loss)

    3 Experiments

    In this section, a series of experiments were conducted to demonstrate the performance of the proposed method.

    3.1 Parameter settings

    The architecture of the model is shown in Fig. 1. Both the discriminator and the generator use 3×3 convolutions with the exponential linear unit (ELU) activation function at their outputs. Several of these convolution layers constitute a convolution block; specifically, two layers per block are used in this paper. The training of the generator and the discriminator both includes down-sampling and up-sampling phases. Down-sampling is implemented as sub-sampling with stride 2, and up-sampling is done by nearest-neighbour interpolation. The learning approach adopts the Adam optimization algorithm with an initial learning rate of 0.000 1; the learning rate is decayed by a factor of 2 when convergence stalls. The batch size, one of the important parameters, is set to 16. The input images are 64×64. Note that the model also suits resolutions from 32 to 256 by adjusting the number of convolution layers while keeping the final down-sampled image at 8×8.

    All training processes are conducted on an NVIDIA GeForce GTX 1080 Ti GPU using 162 770 face images randomly sampled from the large-scale CelebFaces Attributes (CelebA) dataset and 60 000 images of 10 categories from CIFAR-10. The training images are different from the testing images.

    3.2 Computational experiments

    In this example, the dataset used is CelebA, which has a large variety of facial poses. It is noted that we resize the images to 64×64 to highlight the areas where faces are located. We prefer this because humans are better at identifying flaws on faces. First, we discuss the effect of the hyper-parameter γ ∈ [0, 1] and perform several group-comparison tests. The value of γ is related to the quality and the diversity of the generated images. As shown in Fig. 4, skin colour, expression, moustaches, gender, hair colour, hair style and age can all be observed in the generated images.

    Fig. 4 Comparison of samples randomly generated under different γ : (a) γ=0.3; (b) γ=0.7

    In order to observe the influence of γ conveniently, we vary its value across the range [0, 1] in the tests. Some typical results on image diversity are displayed in Fig. 4. Overall, the generated images appear well behaved. When the parameter is at a lower level, such as γ = 0.3, the generated images look overly uniform: the facial contours become similar and the generated face samples are less diverse. Moreover, the noise is greatly reduced; from Fig. 4(a), it can be seen that the little noise present is concentrated at particular positions, such as near the hair and forehead. Furthermore, more detailed features can be created successfully, such as the beards, blue eyes and bangs highlighted in Fig. 4(b); these features are usually hard to create with other methods.

    Furthermore, we quantitatively evaluate the performance of the proposed method. In this paper, a widely used quantitative measurement, the Fréchet inception distance (FID), is adopted for evaluation. The FID[28] provides a principled and comprehensive metric: it compares the distributions of generated images and real images in the feature space of an inception network, and a lower FID means a closer distance between the synthetic and real data distributions. Figure 5 shows a series of 64×64 randomly generated samples based on the proposed method. From a visual point of view, the generated images are very impressive; even teeth are generated clearly.
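    For reference, the FID is the Fréchet distance between two Gaussians fitted to inception features. The sketch below shows only the one-dimensional case, d² = (μ1 - μ2)² + (σ1 - σ2)², as a minimal illustration; the real FID uses full mean vectors and covariance matrices of inception activations, with a matrix square root in place of the σ1σ2 product:

```python
def frechet_distance_1d(mu1, sigma1, mu2, sigma2):
    """Squared Fréchet (2-Wasserstein) distance between two 1-D Gaussians."""
    return (mu1 - mu2) ** 2 + (sigma1 - sigma2) ** 2

# Identical statistics give distance 0; the distance grows as the
# fitted Gaussians (and hence the two image distributions) diverge.
d_same = frechet_distance_1d(0.0, 1.0, 0.0, 1.0)
d_far = frechet_distance_1d(0.0, 1.0, 2.0, 0.5)
```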

    Fig. 5 Randomly generated samples (γ=0.5)

    It can be seen from Fig. 6 that the FID of our model decreases sharply at the beginning of the iterations and then gradually decreases with mild oscillation. By comparison, the FID of the BEGAN fluctuates greatly and suddenly increases dramatically in the late stages of the iterations. Figure 7 shows the convergence curves of LD; the results show that our method is slightly better. Moreover, the numerical results show that the FID values obtained with the BEGAN and our model are 84.19 and 24.57, respectively; at this point, the performance of our method improves by a factor of about 3.4.

    Fig. 6 Comparison of the convergence curves of the FID for the BEGAN and our model on the CelebA dataset

    Fig. 7 Comparison of the convergence curves of LD for the BEGAN and our model on the CelebA dataset

    Another dataset used in the test is the fashion MNIST dataset, which consists of a training set of 60 000 samples and a test set of 10 000 samples. Unlike the CelebA dataset, each sample in the fashion MNIST dataset is a grayscale image associated with a label from 10 classes. The parameters are set as follows: the input image size is 32×32, the batch size is 64 and the number of iterations is 100 000.

    Figure 8 shows some results of generated random samples based on the proposed method. As can be seen from Fig. 8, a variety of shoe styles can be successfully generated.

    Fig. 8 Random samples generated on the fashion MNIST dataset (the picture contains a variety of shoe styles)

    Furthermore, we compare the FID and the inception score (IS) of the BEGAN and our model. The IS is another widely used quantitative measurement for evaluating the compared methods[25]. It uses an inception network pre-trained on ImageNet to compute the KL divergence between the conditional class distribution and the marginal class distribution; a higher IS indicates better image quality and diversity.
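    The IS computation just described can be sketched in pure Python: given class probabilities p(y|x) for each generated sample, IS = exp(Ex[KL(p(y|x) ‖ p(y))]), where p(y) is the marginal over samples. This is a minimal sketch with invented probabilities, not the pre-trained inception pipeline:

```python
import math

def inception_score(probs):
    """probs: list of per-sample class distributions p(y|x)."""
    n, k = len(probs), len(probs[0])
    marginal = [sum(p[j] for p in probs) / n for j in range(k)]  # p(y)
    kl = 0.0
    for p in probs:  # mean KL(p(y|x) || p(y)) over samples
        kl += sum(pj * math.log(pj / mj)
                  for pj, mj in zip(p, marginal) if pj > 0) / n
    return math.exp(kl)

# Confident AND diverse predictions score high; identical uniform
# predictions collapse the KL term and score exactly 1.
high = inception_score([[1.0, 0.0], [0.0, 1.0]])
low = inception_score([[0.5, 0.5], [0.5, 0.5]])
```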

    As can be seen from Fig. 9(a), the FID of our model is significantly smaller than that of the BEGAN; according to the analysis above, the quality of the images generated by our model is higher. Figure 9(b) shows the IS results. It can be seen that our model obtains a higher IS, and the IS still shows an upward trend at 100 000 iterations.


    Fig. 9 Comparison of (a) FID and (b) IS of the BEGAN and our model in the fashion MNIST dataset

    4 Verification

    In this section, we further compare the performance of the proposed method with common classical methods, including BEGANs, ALIs, DFMs, Improved GANs and MIX+WGAN. In these comparison experiments, we retrain the models on a single NVIDIA GeForce GTX 1080 Ti GPU with the CIFAR-10 dataset for 100 000 iterations with a batch size of 64. The other hyper-parameters keep their default values from the training files. All models are built with TensorFlow.

    The dataset used is CIFAR-10. We calculate the IS of the compared methods as an average of 10 evaluations over 50 000 samples. The final numerical results are shown in Table 1. The test results show that our score is better than those of all methods except the DFM. This seems to confirm experimentally that the DFM is an effective and direct method of matching data distributions. Using an additional network to train de-noising features in combination with our model is a possible avenue for future work.

    Note that the IS can only quantify the diversity of generated samples. To further compare the distributions of target samples, we also evaluate the robustness of the models by calculating the FID. In this example, FIDs are calculated with the 50 000 training images and 10 000 generated samples. The experimental results show that the FIDs obtained with DFMs, BEGANs and our model are 30.02, 77.27 and 57.96, respectively. All in all, our model is slightly inferior to the DFM but better than the BEGAN.

    Table 1 Numerical results of IS

    Figure 10 shows some intermediate results when CIFAR-10 is used to further test our method. As the number of training steps increases, the generated images change from fuzzy to sharp, and the generated image distribution gradually approaches the real image distribution. It is noted that 64 individually generated images are combined into each panel.

    Fig. 10 Random samples generated with different training steps on CIFAR-10: (a) 20 000; (b) 40 000; (c) 60 000; (d) 80 000; (e) 100 000

    5 Conclusions

    An improved BEGAN with an additional skip-connection technique is proposed in this paper. An alternative time-scale update rule is adopted to balance the learning rates of the generator and the discriminator. The results of qualitative visual assessments show that the improved BEGAN creates high-quality images when 0.5<γ<1. Furthermore, the performance of the proposed method is quantitatively evaluated by FID and IS. The FIDs for the proposed method and the BEGAN on the CelebA dataset are 24.57 and 84.19, respectively; at this point, the performance of our method improves by a factor of about 3.4. The tests on the CIFAR-10 dataset show that our FID is 57.96, which is also lower than the 77.27 of the BEGAN. In addition, the ISs for the proposed method and the BEGAN are 6.32 and 5.62, respectively, so our method is again slightly better than the BEGAN. It should be pointed out that the proposed method outperforms all other compared methods except the DFM; this result is predictable because the DFM directly aims to match the data distribution. In short, the experimental results confirm that the use of such imbalanced learning-rate updates and the skip-connection technique can improve the performance of image generation methods. In future work, we will try to add a low-rank constraint to generate high-quality images with lower rank.
