
    UFC-Net with Fully-Connected Layers and Hadamard Identity Skip Connection for Image Inpainting

Computers, Materials & Continua, 2021, Issue 9

Chung-Il Kim, Jehyeok Rew, Yongjang Cho and Eenjun Hwang*

1 Korea Electronics Technology Institute, Seongnam, 13488, Korea

2 School of Electrical Engineering, Korea University, Seoul, 02841, Korea

Abstract: Image inpainting is an interesting technique in computer vision and artificial intelligence for plausibly filling in blank areas of an image by referring to their surrounding areas. Although its performance has been improved significantly using diverse convolutional neural network (CNN)-based models, these models have difficulty filling in some erased areas due to the kernel size of the CNN. If the kernel size is too narrow for the blank area, the models cannot consider the entire surrounding area, only partial areas or none at all. This issue leads to typical problems of inpainting, such as pixel reconstruction failure and unintended filling. To alleviate this, in this paper, we propose a novel inpainting model called UFC-net that reinforces two components in U-net. The first component is the latent networks in the middle of U-net to consider the entire surrounding area. The second component is the Hadamard identity skip connection to improve the attention of the inpainting model on the blank areas and reduce computational cost. We performed extensive comparisons with other inpainting models using the Places2 dataset to evaluate the effectiveness of the proposed scheme, and we report some of the results.

Keywords: Image processing; computer vision; image inpainting; image restoration; generative adversarial nets

    1 Introduction

Image inpainting is one of the image processing techniques used to fill in blank areas of an image based on the surrounding areas. Inpainting can be used in various applications, such as image/video uncropping, rotation, stitching, retargeting, recomposition, compression, super-resolution, and harmonization. Due to its versatility, the importance of image inpainting has been particularly addressed in the fields of computer vision and artificial intelligence [1-3].

Traditional image inpainting methods can be classified into two types: diffusion-based and patch-based methods [4-9]. Diffusion-based methods use a diffusion process to propagate background data into blank areas [4-7]. However, these methods are less effective in handling large blank areas due to their inability to synthesize textures [4]. Patch-based methods fill in blank areas by copying information from similar areas of the image. These methods effectively restore a blank area when its ground truth is a regular and similar pattern. However, they can have difficulty reconstructing an erased area when the ground truth has a complex and irregular pattern [8,9]. As a result, both types of methods have difficulty reconstructing specific patterns, such as natural scenes and urban cityscapes [10].

Recently, deep neural network (DNN)-based methods [11-15] have significantly improved image inpainting performance compared to diffusion-based and patch-based methods. Generally, because DNN-based methods fill in the blank areas using a learned data distribution, they can produce consistent results for blank areas, which has been almost impossible using traditional methods. Among the DNN-based methods, adapting the generative adversarial network (GAN) has become mainstream for image inpainting [11,16]. The GAN estimates the distribution of training data through adversarial training between a generator and a discriminator. Based on this distribution, the GAN reconstructs the blank area realistically in inpainting [11-15,17]. Still, this approach often produces unexpected results, such as blurred restorations and unwanted shapes, when the image resolution is high or the scene is complex [10,11,13].

One plausible approach to solving these shortcomings is to consider spatial support [12]. Spatial support represents the pixel range within the input values necessary to generate one pixel inside blank areas. To fill blank areas effectively, the inpainting model should consider the entire area outside the blank areas. For instance, Iizuka et al. [12] proposed a new inpainting model using dilated convolutions to increase the spatial support from 99 × 99 to 307 × 307. As a result, this model exhibits consistent inpainting performance compared to the Context Encoder (CE) [11,12]. Although several inpainting studies have used this model, it lacks spatial support when the blank areas are extensive [12]. Another approach to improving inpainting model performance is to use the skip connection (SC) [18,19]. In such models, the SC connects earlier values of the neural network to its output to enhance the effect of the input values on the output. By adding the SC to an inpainting model, unwanted shapes can be removed, and the resulting images can be sharper [18]. However, as the earlier values of the neural network carry both spatial information and information about the blank areas, the SC has no significant effect on masks that are not narrow [15]. In addition, because the SC also carries this unnecessary information, using it as-is for inpainting can be a computational burden.

In this paper, we propose a new inpainting model called UFC-net using U-net with fully connected (FC) layers and the SC. The proposed model differs from other models in two respects. First, UFC-net allows full spatial support, which recent inpainting models cannot guarantee [12-15]. Second, UFC-net uses the Hadamard identity skip connection (HISC) to reduce the decoder's computational overhead and focus on reconstructing blank areas. We first perform qualitative and quantitative comparisons with recent inpainting models to verify that these two differences improve inpainting performance. Then, we demonstrate through experiments that HISC is more effective than the SC in inpainting.

This paper is organized as follows. Section 2 reviews the related work, and Section 3 describes UFC-net and HISC. Section 4 presents the quantitative and qualitative results by comparing UFC-net with several state-of-the-art models. We also quantitatively and qualitatively compare the inpainting performance of the HISC and SC. Section 5 concludes this paper and highlights some future plans.

    2 Related Work

Three main approaches have been used to improve the performance of DNN-based inpainting models. The first is to consider spatial support [11,12,14]. The second is to use the SC [18-22], and the third is to improve restoration performance using additional techniques, such as loss functions [23-25], a two-stage model [13,15,23,26], and optional input [15,19]. Fig. 1 lists various inpainting models according to this classification.

Figure 1: Classification of deep learning-based inpainting models

    2.1 Considering Spatial Support

The CE was the first DNN-based inpainting model to use the GAN [11]. The CE comprises three components: an encoder based on AlexNet [27], a decoder composed of multiple de-convolutional layers [28], and a channel-wise FC layer connecting the encoder and decoder. Although the CE can reduce restoration errors, it cannot handle multiple inpainting masks or high-resolution images wider than 227 × 227 [12,14].

To mitigate these problems, Iizuka et al. [12] proposed a new model consisting of an encoder, four dilated convolutional layers [29], and a decoder. The encoder down-samples an input image twice, and the decoder up-samples the image to its original size. Due to the dilated convolution, their model considers a wider surrounding area when generating a pixel than vanilla convolution does [30]. They called this spatial support and demonstrated that it could extend the area from 99 × 99 to 307 × 307. However, their model was only effective for filling in blank areas using regular masks (25% of the image size in the center), not irregular masks with diverse shapes, sizes, and rotations.
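For readers who want to check receptive-field claims like this one, the arithmetic is mechanical. Below is a minimal sketch; the layer stacks are illustrative, not the exact configurations from [12]:

```python
def receptive_field(layers):
    """Receptive field of a stack of convolutions.

    Each layer is (kernel_size, stride, dilation). A layer grows the
    receptive field by (k - 1) * d * j, where j is the cumulative
    stride (the "jump") of all layers below it.
    """
    rf, jump = 1, 1
    for k, s, d in layers:
        rf += (k - 1) * d * jump
        jump *= s
    return rf

# Replacing plain 3x3 convolutions with dilated ones widens spatial
# support without adding parameters or depth (hypothetical stacks):
plain = [(3, 1, 1)] * 4                                  # four vanilla 3x3 convs
dilated = [(3, 1, 2), (3, 1, 4), (3, 1, 8), (3, 1, 16)]  # dilations 2, 4, 8, 16

print(receptive_field(plain))    # 9
print(receptive_field(dilated))  # 61
```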

Liu et al. [14] applied U-net [20] for both inpainting irregular masks and increasing the region of spatial support. Although their model exhibited more consistent inpainting performance than Iizuka's model or the CE, its spatial support was not sufficient for filling in both regular and irregular masks.

    2.2 Skip Connection

The SC has been studied to address three main problems arising from DNN training: weakening of the effect of input values, vanishing or exploding gradients, and performance degradation with increasing network depth. The SC was used in U-net to enhance the effects of input values in image segmentation. DenseNet [21] attempts to mitigate both the vanishing or exploding gradient problem and the weakening of input value effects by connecting the output of each layer to the input of every other layer in a feed-forward network. He et al. [22] suggested and implemented a shortcut connection in every block of the model to alleviate degradation when the network depth increases. Boundless [18] and SC-FEGAN [19] used the SC to provide spatial information, improving inpainting performance compared to the same models without the SC. However, in [15], the authors suggested that the SC is not effective when blank areas are large.

    2.3 Other Techniques for Improving Inpainting Performance

An extra loss function can be used to improve inpainting performance. For instance, adversarial loss can be used as a reasonable loss function to estimate the distribution and generate plausible samples according to the distribution [11,31]. Following this, adversarial loss has become one of the most important factors in DNN-based inpainting models [12-15]. Additionally, several recent studies on inpainting [13,15,23] have attempted to reduce the frequency of undesired shapes that often occur in inpainted data by using perceptual loss [24] and style loss [25].

Alternatively, two-stage models have been proposed to improve reconstruction performance [13,15]. In the first stage, the models usually restore blank areas coarsely by training a generator using reconstruction loss. Then, in the second stage, they restore blank areas finely by training another generator using reconstruction loss and adversarial loss. DeepFill v1 [13] is a two-stage inpainting model in which a contextual attention layer is added to the second generator to improve inpainting performance further. The contextual attention layer learns where to borrow or copy feature information from known background patches to generate the blank patches. Yu et al. [15] proposed a gated convolution (GC)-based inpainting model, DeepFill v2, to improve on DeepFill v1. This model creates soft masks automatically from the input so that the network learns a dynamic feature selection mechanism. In their experiments, DeepFill v2 was superior to Iizuka's model, DeepFill v1, and Liu's model, but some filled areas were still blurry [19].

Nazeri et al. [23] proposed another two-stage inpainting model called EdgeConnect. This model was inspired by the way a real artist works. In the first stage, the model draws edges in the given image. In the second stage, blank areas are filled in based on the results of the first stage. Although the model exhibits higher reconstruction performance than Liu's model and Iizuka's model, it often fails to reconstruct a smooth transition [32]. StructureFlow [26] follows the two-stage modeling approach. The first stage reconstructs edge-preserved smooth images, and the second stage restores the texture in the output of the first stage to match the original. StructureFlow is very good at reproducing textures but sometimes fails to generate plausible results [33].

Lastly, inpainting performance can be improved using additional conditions as input. For instance, DeepFill v2 allows the user to provide sparse sketches selectively as conditional channels inside the mask to obtain more desirable inpainting results [15]. In SC-FEGAN, users can input not only sketches but also color. Both DeepFill v2 and SC-FEGAN are one step closer to interactive image editing [19].

    3 Approach

In this section, we present details of the proposed model, UFC-net, including the discriminator, loss function, and spatial support. We first describe the effects of the FC layers in an inpainting model and then introduce UFC-net in detail. Afterward, we discuss the discriminator and loss function for the training process.

    3.1 Effects of Fully Connected Layers

Unlike other recent inpainting models [12-15,19], we append FC layers to the inpainting model to achieve two effects. The first effect is that the model has enough spatial support to account for all input areas, and the second is that the model can provide sharp inpainting results. We explain these two effects in turn.

An FC layer is connected to all input areas, allowing the model to account for the entire surrounding area. Recent inpainting models [12-15], which are composed only of convolutional neural networks (CNNs), cannot consider all input areas. For a more detailed explanation, we demonstrate the difference between the plain U-net model, which is popularly adopted as an inpainting model [14,19], and the U-net model with FC layers.

For example, for the 512 × 512 sample image with a 384 × 384 area erased in Fig. 2a, the images in Figs. 2b and 2c show two pixels generated by the U-net model and their spatial support, which covers a 767 × 767 area.

Fig. 2b illustrates the case where the spatial support can cover the surrounding area. In contrast, Fig. 2c depicts the case where the spatial support cannot cover any of the surrounding image, even though the spatial support is the same size. In this case, the U-net model fills the blank area regardless of the surrounding area because CNN-based models, such as U-net, construct spatial support with the generated pixel as its center point.

Figure 2: A data sample with a vertex-aligned 384 × 384 square mask and the spatial support of each of two given pixels with the data sample in U-net: (a) data sample, (b) spatial support of the center-aligned pixel with the data sample, and (c) spatial support of the top-right pixel with the data sample

Unlike the original U-net, U-net with an FC layer can consider all input areas because the FC layer uses all inputs to calculate the output. As a result, inpainting models based on U-net with an FC layer recover all blank regions more effectively by considering all surrounding areas, regardless of the position of the generated pixel, as displayed in Figs. 3b and 3c.

Figure 3: An image sample with a vertex-aligned 384 × 384 square mask and the spatial support of two pixels inside the image by U-net with an FC layer: (a) image sample, (b) spatial support of the pixel at the center of the image, and (c) spatial support of the top-right pixel of the image

Another effect of the FC layer is to naturally transform the input image distribution, including blank areas, into the original image distribution without any blank areas. As typical convolutions operate with the same filters for both blank and surrounding areas, several problems, such as color discrepancy, blurriness, and visible mask edges, have been observed in CNN-based inpainting models [14,15]. Karras et al. [34] reported that applying the FC layer makes it easier for the generator to generate plausible images because the input distribution is flexibly modified to the desired output distribution. They also revealed that an inpainting model without an FC layer often fails to generate plausible images.

Although partial convolution (PC) and GC can alleviate typical convolution problems, they have their own limitations. For instance, PC becomes insensitive to the erased area as the layers become deep [15], and GC must perform two convolutions per layer. In contrast, the FC layer enables the inpainting model to mitigate the typical convolution problems in inpainting while avoiding the problems of PC and GC. The FC layer has trainable weights that can learn from both the blank and surrounding areas, which PC cannot do. In addition, inpainting models based on U-net with an FC layer are lighter than GC-based inpainting models.

    3.2 UFC-Net

We constructed an inpainting model called UFC-net that incorporates FC layers into U-net to employ the benefits of the FC layer in inpainting. Fig. 4 presents the overall architecture of UFC-net, which has full spatial support and can naturally transform the input distribution into the original image distribution. The generator model receives masked images, masks, and sketches as input data, where the sketches are optional. A DNN-based generator usually carries the risk that the gradient used for learning may vanish [25-27], so the generator in UFC-net uses batch normalization [35] in every layer except the last.

Figure 4: The UFC-net architecture

The UFC-net consists of three components: the encoder, latent networks, and decoder. The encoder consists of nine convolutional layers that compute feature maps over input images with a stride of 2. Tab. 1 describes some encoder details.

After the encoding process, encoded features pass through eight FC layers to smoothly transform the input distribution into the corresponding output distribution. Tab. 2 presents some hyperparameters of the latent networks in the generator model.

Table 1: Hyperparameters of the UFC-net encoder
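Since the bodies of Tabs. 1 and 2 did not survive extraction, the following sketch shows the shape of this encoder-plus-latent design in TensorFlow, the framework the authors name in Section 4.1. The channel widths and the 5-channel input (masked image + mask + sketch) are placeholder assumptions, not the paper's hyperparameters:

```python
import tensorflow as tf

def build_encoder_and_latent(input_shape=(512, 512, 5)):
    """Nine stride-2 convolutions shrink 512x512 to 1x1, then eight
    fully connected layers act on the flattened features, giving
    every latent unit full spatial support over the input."""
    x = inputs = tf.keras.Input(shape=input_shape)
    widths = [64, 128, 256, 256, 512, 512, 512, 512, 512]  # hypothetical
    for w in widths:                       # encoder: nine conv layers
        x = tf.keras.layers.Conv2D(w, 3, strides=2, padding="same")(x)
        x = tf.keras.layers.BatchNormalization()(x)
        x = tf.keras.layers.LeakyReLU()(x)
    x = tf.keras.layers.Flatten()(x)       # 1x1x512 -> 512-dim vector
    for _ in range(8):                     # latent networks: eight FC layers
        x = tf.keras.layers.Dense(512)(x)
        x = tf.keras.layers.LeakyReLU()(x)
    return tf.keras.Model(inputs, x)
```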

The decoder consists of eight Hadamard identity blocks (HIB). Fig. 5 presents the difference between U-net's SC and HIB. A typical SC takes the latent value of the encoder and concatenates it channel-wise to the decoder. In the case of HIB, however, the value of the nonblank area is replaced by the latent value of the encoder. The HISC can be defined by Eq. (1):

HISC(α, β) = α ⊙ M + β ⊙ (1 − M),   (1)

Figure 5: Two convolutional neural networks with skip connections (SC): (a) SC and a couple of convolutional layers, and (b) Hadamard identity block

where ⊙ denotes the Hadamard (element-wise) product, β represents the result of the previous neural networks, and M is the mask (0 for holes and 1 for filled areas). In addition, α is the latent value received from the encoder.

As HISC replaces the decoder latent value with the encoder latent value for nonblank areas, the gradient between one HIB and the next is not calculated in these regions. Thus, the HISC reduces the computational cost and lets the generator focus on the erased area. Tab. 3 lists some hyperparameters of the decoder networks in UFC-net.
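A minimal sketch of Eq. (1) and one HIB follows, assuming the mask has already been resized to the feature-map resolution; the transposed convolution and its width are placeholders for the decoder layers of Tab. 3:

```python
import tensorflow as tf

def hisc(alpha, beta, mask):
    """Hadamard identity skip connection, Eq. (1).

    alpha: encoder latent value (same shape as beta)
    beta:  output of the previous decoder layers
    mask:  0 for holes, 1 for filled (nonblank) areas

    In nonblank regions the output is exactly the encoder latent, so
    no gradient reaches the decoder branch there; the generator's
    capacity is spent only on the erased area."""
    return alpha * mask + beta * (1.0 - mask)

def hadamard_identity_block(beta, alpha, mask, width=256):
    """One HIB sketch: HISC followed by an up-sampling convolution."""
    x = hisc(alpha, beta, mask)
    x = tf.keras.layers.Conv2DTranspose(width, 3, strides=2, padding="same")(x)
    return tf.keras.layers.LeakyReLU()(x)
```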

Table 2: Hyperparameters of latent networks in UFC-net

    3.3 Discriminator and the Loss Function

Many inpainting models have used the patchGAN discriminator [36] as their discriminator [12-14,23]. However, due to the adversarial training process in the GAN, GAN-based inpainting models often exhibit unstable training [34,37,38]. This problem must be addressed for the discriminator to be usable in GAN-based models. Furthermore, spectral normalization has the property that the generated data closely resemble the training data [37]. Therefore, we applied spectral normalization to the patchGAN discriminator and used the result as the discriminator of UFC-net. Tab. 4 presents the hyperparameters of the patchGAN discriminator.
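A sketch of a spectrally normalized patchGAN discriminator in TensorFlow is shown below. The depth and widths are placeholders for Tab. 4; the SpectralNormalization wrapper is available in tf.keras.layers in recent TensorFlow releases (earlier versions shipped it in tensorflow_addons.layers):

```python
import tensorflow as tf

def build_sn_patchgan_discriminator(input_shape=(512, 512, 4)):
    """patchGAN discriminator with spectral normalization wrapped
    around every convolution. The final 1-channel feature map scores
    each patch of the input; these per-patch scores feed the hinge
    adversarial loss."""
    sn = tf.keras.layers.SpectralNormalization
    x = inputs = tf.keras.Input(shape=input_shape)   # image + mask channels
    for w in [64, 128, 256, 256, 256]:               # hypothetical widths
        x = sn(tf.keras.layers.Conv2D(w, 5, strides=2, padding="same"))(x)
        x = tf.keras.layers.LeakyReLU(0.2)(x)
    # No dense head: each spatial position of the output is a patch score.
    outputs = sn(tf.keras.layers.Conv2D(1, 5, strides=1, padding="same"))(x)
    return tf.keras.Model(inputs, outputs)
```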

We used reconstruction loss, adversarial loss, perceptual loss, and style loss to train our model. Reconstruction loss is essential for image reconstruction and is defined by Eq. (2). We used the hinge loss from [15] as the adversarial loss; it effectively restores sharp results [11,12] and is defined by Eq. (3). Both perceptual loss and style loss are used to mitigate unintended shapes [14,23] and are defined by Eqs. (4) and (5), respectively:

Table 3: Hyperparameters of decoder networks in UFC-net

Table 4: Hyperparameters of the patchGAN discriminator

where x, x̂, m, and s represent samples of the original data, erased data, mask, and sketch, respectively. The generator G receives z, which is the channel-wise concatenation of x̂, m, and s, and generates the fake data G(z). The discriminator D receives two types of samples: fake data samples G(z) from the fake distribution p_data(z) and real data samples x from p_data(x). The discriminator outputs D(G(z)) and D(x) for the fake and real data samples, respectively. In addition, φ_j(x) ∈ R^(C_j × H_j × W_j) is the activation map of the relu j_1 layer calculated using the given data x in the VGG-19 model pretrained on the ImageNet dataset. Moreover, G_j^φ(x) ∈ R^(C_j × C_j) is a Gram matrix constructed from φ_j(x). To summarize, our final loss function is defined by Eq. (6).
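The equation bodies for Eqs. (2)-(6) were lost in extraction. For reference, the standard forms matching the descriptions above are sketched here; this is a reconstruction under common conventions (the hinge loss follows the convention of [15]), and the weighting coefficients λ in Eq. (6) are not recoverable from the text:

```latex
% Hedged reconstruction of Eqs. (2)-(6); standard forms only.
\begin{align}
\mathcal{L}_{rec}   &= \lVert G(z) - x \rVert_1 \tag{2}\\
\mathcal{L}_{adv}^G &= -\,\mathbb{E}_{z}\bigl[D(G(z))\bigr], \quad
\mathcal{L}_{adv}^D  = \mathbb{E}_{x}\bigl[\mathrm{ReLU}(1 - D(x))\bigr]
                     + \mathbb{E}_{z}\bigl[\mathrm{ReLU}(1 + D(G(z)))\bigr] \tag{3}\\
\mathcal{L}_{perc}  &= \sum_{j} \frac{1}{C_j H_j W_j}
  \bigl\lVert \phi_j(G(z)) - \phi_j(x) \bigr\rVert_1 \tag{4}\\
\mathcal{L}_{style} &= \sum_{j}
  \bigl\lVert G^{\phi}_j(G(z)) - G^{\phi}_j(x) \bigr\rVert_1 \tag{5}\\
\mathcal{L}_{total} &= \lambda_{rec}\mathcal{L}_{rec}
  + \lambda_{adv}\mathcal{L}_{adv}^G
  + \lambda_{perc}\mathcal{L}_{perc}
  + \lambda_{style}\mathcal{L}_{style} \tag{6}
\end{align}
```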

    4 Experiments

To evaluate the inpainting performance of the proposed model, we conducted various experiments. We first present the environment and hyperparameters for the experiments and then describe the effectiveness of the spatial support and HISC used in UFC-net. In addition, we demonstrate the effect of the sketch input in the proposed model.

    4.1 Experimental Setting

As the dataset for the experiments, we used the Places2 [17] dataset, which contains 18 million scene photographs labeled with scene categories. Fig. 6 presents some of the images in the dataset.

    Figure 6:Images from the Places2 dataset

We employed two types of masks for training: regular and irregular masks. Regular masks were square with a fixed size (25% of total image pixels), centered at a random location within the image. Irregular masks used the same dataset as Liu et al. [14]. We applied the Canny edge algorithm [39] to the Places2 dataset to obtain the sketch dataset. Before training, all weights in the generator and discriminator were initialized with samples from a normal distribution with a mean of 0 and a standard deviation of 0.02. For training, we used Adam [40] as the optimizer. The models were implemented with the TensorFlow framework and run on an Nvidia GTX 1080ti and an Nvidia RTX Titan, with batch sizes of 4 and 8, respectively. Both the generator and discriminator used a learning rate of 0.002, with one million training iterations. We updated the generator weights twice after updating the discriminator weights once [41].
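The following sketch ties this setup together in TensorFlow. The loss helpers (discriminator_hinge_loss, total_generator_loss) and the data layout are assumed names for illustration, not the authors' code:

```python
import tensorflow as tf

# Weight initializer and optimizers per the setup above; pass `init`
# as kernel_initializer= when building the layers.
init = tf.keras.initializers.RandomNormal(mean=0.0, stddev=0.02)
g_opt = tf.keras.optimizers.Adam(learning_rate=0.002)
d_opt = tf.keras.optimizers.Adam(learning_rate=0.002)

@tf.function
def train_step(generator, discriminator, x, x_masked, mask, sketch):
    # Channel-wise concatenation of erased image, mask, and sketch -> z.
    z = tf.concat([x_masked, mask, sketch], axis=-1)
    # One discriminator update...
    with tf.GradientTape() as tape:
        d_loss = discriminator_hinge_loss(discriminator(x),        # assumed helper, Eq. (3)
                                          discriminator(generator(z)))
    grads = tape.gradient(d_loss, discriminator.trainable_variables)
    d_opt.apply_gradients(zip(grads, discriminator.trainable_variables))
    # ...followed by two generator updates [41].
    for _ in range(2):
        with tf.GradientTape() as tape:
            g_loss = total_generator_loss(generator(z), x, mask)   # assumed helper, Eq. (6)
        grads = tape.gradient(g_loss, generator.trainable_variables)
        g_opt.apply_gradients(zip(grads, generator.trainable_variables))
```

Sketch inputs can be produced offline with OpenCV's Canny detector, e.g. `cv2.Canny(image, 100, 200)` (thresholds illustrative).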

    4.2 Quantitative Comparison

The proposed model's primary goals are to widen the spatial support and restore the blank areas for more effective inpainting. Therefore, for comparison, we considered three models that are closely related to these two properties: DeepFill v1 [13], the model of Liu et al. [14], and DeepFill v2 [15].

In addition, we used the L1 loss, L2 loss, total variation (TV) loss [14], and variance as the evaluation metrics, which are defined by Eqs. (7)-(10) as follows:

where R is the region of one-pixel dilation of the hole region, y is |G(z) − x|, N is the number of elements of the nonmask areas in y, and y(i,j) represents the pixel at spatial position (i,j) in y.

The L1 loss, also known as the least absolute error, measures the absolute difference between the target and estimated values. Similarly, the L2 loss measures the sum of the squares of the differences between the target and estimated values. These two loss functions are often used to evaluate the performance of inpainting models; smaller values indicate better generative performance. The TV loss expresses the amount of change from the surrounding area based on each pixel of the L1 error. If the TV loss is low, the error does not change rapidly, making it difficult to detect the error visually. The variance indicates the gap between the L1 loss and L2 loss in each model. Tab. 5 presents the L1 loss, TV loss, L2 loss, and variance of the four models for both regular and irregular masks. The proposed model produced the lowest L1 and TV loss errors, which indicates that our model outperforms PC or GC in handling blank areas. However, the proposed model did not achieve the lowest L2 loss and variance. Nevertheless, it yields the best inpainting results to the human eye, as we demonstrate in the next section.
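As with Eqs. (2)-(6), the bodies of Eqs. (7)-(10) were lost. Reading them back from the definitions above, one plausible reconstruction is sketched below; Eq. (9) follows the TV loss form of [14], and Eq. (10) takes "variance" as the usual second-moment gap, which is an assumption:

```latex
% Hedged reconstruction of Eqs. (7)-(10) from the stated definitions.
\begin{align}
\mathcal{L}_{1}  &= \frac{1}{N}\,\lVert y \rVert_1 \tag{7}\\
\mathcal{L}_{2}  &= \frac{1}{N}\,\lVert y \rVert_2^2 \tag{8}\\
\mathcal{L}_{tv} &= \sum_{(i,j)\in R} \frac{\lVert y(i,j+1)-y(i,j) \rVert_1}{N}
                  + \sum_{(i,j)\in R} \frac{\lVert y(i+1,j)-y(i,j) \rVert_1}{N} \tag{9}\\
\mathrm{Var}     &= \mathcal{L}_{2} - \mathcal{L}_{1}^{2} \tag{10}
\end{align}
```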

Table 5: Quantitative inpainting results. Bold indicates the smallest value (smaller is better) when comparing models on each evaluation metric

Table 6: Accuracy comparison of HISC and SC in inpainting

    4.3 Qualitative Comparison

Fig. 7 illustrates some of the inpainting results of the four models. Overall, our model outperformed the other models visually. For instance, Liu's model produced pixels whose colors differed from the original, especially in the background. DeepFill v2 produced some edges or regions in the first and fourth images that were not in the ground truth, although it exhibited reasonable restoration performance. In contrast, the proposed model exhibited excellent restoration results for all images.

Figure 7: Comparison of inpainting results for the Places2 test dataset

4.4 Skip Connection vs. Hadamard Identity Skip Connection

We compared the performance of UFC-net with HISC and UFC-net with the SC to validate the effectiveness of HISC. We used the same conditions as in Sections 4.2 and 4.3, except for the sketch condition: we concatenated sketches during both training and testing with a 50% probability. Tab. 6 lists the evaluation results. The HISC outperformed the conventional SC in most cases, particularly for irregular masks. Fig. 8 illustrates the actual visual effects of HISC and SC in UFC-net. The SC-based model generated images in which the mask area and its surroundings were visually separated. In addition, the model adopting the SC technique often produced unintended shapes or colors, whereas HISC did so less often.

Figure 8: Inpainting results of HISC and SC for the Places2 test dataset. The top two images were generated without sketches, and the bottom two images were generated with sketches

Table 7: Quantitative result comparison with 1, 2, 4, 8, and 16 latent fully connected layers

    4.5 Effectiveness of the Latent Network and Sketch Input

In this experiment, we evaluated the accuracy of the model according to the number of latent network layers and summarized the results in Tab. 7. Eight FC layers achieved the best performance in L1 loss and TV loss. In contrast, 16 FC layers exhibited the lowest L2 loss. Fig. 9 illustrates the results of applying a sketch to our model. The image edges followed the sketch, which indicates that the proposed model can perform sketch-based interactive image editing, like DeepFill v2 [15] and SC-FEGAN [19].

Figure 9: Example of using a sketch (black line) in the erased gray area of the original image

    5 Conclusion

In this paper, we proposed an inpainting model that appends FC layers and HISC to the U-net. Our model not only extends the scope of spatial support but also smoothly transforms the input distribution into the output distribution using FC layers. In addition, HISC improved the reconstruction performance and reduced the computational cost compared to the original SC. Through extensive experiments using the Places2 dataset, we found that the proposed model outperformed state-of-the-art inpainting models in terms of L1 loss and TV loss on diverse sample images. We also verified that HISC achieves better performance than the original SC for regular and irregular masks. In the near future, we will consider other datasets for testing and improve UFC-net to cover larger blank areas.

Funding Statement: This research was supported in part by an NRF (National Research Foundation of Korea) Grant funded by the Korean Government (No. NRF-2020R1F1A1074885) and in part by the Brain Korea 21 FOUR Project in 2021.

Conflicts of Interest: The authors declare that they have no conflicts of interest to report regarding the present study.
