
    Prior-guided GAN-based interactive airplane engine damage image augmentation method

CHINESE JOURNAL OF AERONAUTICS, 2022, Issue 10

    Rui HUANG, Bokun DUAN, Yuxiang ZHANG, Wei FAN

    School of Computer Science and Technology, Civil Aviation University of China, Tianjin 300300, China

KEYWORDS: Airplane engine; Damage detection; Data augmentation; GAN; Interactive

Abstract Deep learning-based methods have achieved remarkable success in object detection, but this success requires the availability of a large number of training images. Collecting sufficient training images is difficult in detecting damages of airplane engines. Directly augmenting images by rotation, flipping, and random cropping cannot further improve the generalization ability of existing deep models. We propose an interactive augmentation method for airplane engine damage images using a prior-guided GAN to augment training images. Our method can generate many types of damages on arbitrary image regions according to the strokes of users. The proposed model consists of a prior network and a GAN. The prior network generates a shape prior vector, which is used to encode the information of user strokes. The GAN takes the shape prior vector and random noise vectors to generate candidate damages. Final damages are pasted on the given positions of background images with an improved Poisson fusion. We compare the proposed method with traditional data augmentation methods by training airplane engine damage detectors with state-of-the-art object detectors, namely, Mask R-CNN, SSD, and YOLO v5. Experimental results show that training with images generated by our proposed data augmentation method achieves better detection performance than training with images from traditional data augmentation methods.

    1. Introduction

The quality and quantity of the available training data are important for modern deep learning-based methods. Given the huge number of ImageNet images, deep learning methods (e.g., VGG [1] and ResNet [2]) achieve state-of-the-art performances in image classification. However, collecting sufficient training images to train a deep damage detector is difficult in detecting damages of airplane engines [3].

Various data augmentation methods have been proposed to artificially increase the number of training images for attaining satisfactory performance. One kind of data augmentation technique is based on different transformations, such as geometric transformation [4,5], color transformation [6], random cropping [7,8], random erasure [9], and neural style transfer [10,11]. Although this kind of method can extend training images, it only changes the appearances and positions of targets and cannot generate novel targets. Another kind of data augmentation technique is based on synthesis, such as image mixing [12-14], feature space enhancement [15], and images generated by Generative Adversarial Networks (GANs) [16-19]. This kind of method can generate targets according to a given dataset. However, the quality of the targets is poor, which makes generated images easily distinguishable from real images. Previous work [20] has proven that generated targets need to match the image context, or they might hurt detection performance.

Fig. 1 shows examples of airplane engine damage images augmented by traditional methods and by our proposed method. Fig. 1(a) is a real damage image. The ellipses in Fig. 1(b) mark cracks generated by the proposed method. Fig. 1(c) is the label of Fig. 1(b). Figs. 1(d), 1(e), and 1(f) are augmented images generated by traditional methods. Only one crack is present in the original image, and it occupies a small ratio of the entire image pixels. Traditional data augmentation methods, such as random rotation, color dithering, and cropping, only change image appearance and viewpoint. The essential factors for the generalization ability of a deep learning-based damage detector are the diversity of damages (i.e., quality) and the number of damages (i.e., quantity) [21].

In this study, we aim to generate high-quality damages in healthy regions to increase the number of damages. We propose an interactive data augmentation method for airplane engine damage images using a prior-guided GAN. A user draws a simple stroke, and our proposed method generates several candidate damages that match the stroke and the context of the image. To achieve this goal, we first train a generator that takes random noise as input and outputs an artificial damage image, which can be formulated as a generative adversarial process. Then, we fix the generator to train a prior network that encodes a user stroke into a shape prior vector of fixed length. We add random noise to the shape prior vector to improve the randomness of the damage. Finally, the generated damage is pasted on the background by our improved Poisson fusion to obtain a natural transition at the boundary of the damage. Our method generates a damage label by adjustable label generation, which alleviates human labeling effort. After training the generator and the shape prior network, we build an interactive system that lets a user draw a stroke and control the threshold of adjustable label generation. We verify the proposed method on airplane engine damage detection. Compared with traditional data augmentation methods, the proposed method not only generates high-quality damages but also boosts the detection performance of Mask R-CNN [22], SSD [7], and YOLO v5 [23]. The contributions are three-fold:

    (1) We propose an interactive data augmentation method for airplane engine damage images, which uses simple user interaction and generates artificial damages that are close to real damages. The augmented data can improve the performance of modern object detectors in airplane engine damage detection.

(2) The proposed method can encode a user stroke into a shape prior vector, which enables the damage generator to generate a damage image that matches the shape of the user stroke. We also add random noise to the shape prior vector to increase the diversity of the damage.

(3) The proposed adjustable label generation can generate high-quality labels of damage images and control the final shape of the damage. Improved Poisson fusion is used to ensure a natural transition at the boundaries of the damage, which makes an augmented damage image look like a real one.

    2. Related work

    Data augmentation is an essential step in training deep learning models, and corresponding methods can be classified into transformation and generative methods. Transformation methods use different image transformations to increase the number of images. Generative methods try to generate synthetic images.

    2.1. Transformation methods

Most deep learning-based classifiers and detectors utilize transformation-based methods, such as geometric transformations [4,5], to conduct flipping, cropping, translation, scaling, and rotation for achieving translation, scale, and rotation invariance. Several basic transformations can be combined, for example, cropping after translation or rotation. Apart from geometric transformations, appearance transformations change the appearances of images to mimic images captured under different lighting conditions or degraded by noise or blur [6]. Moreno-Barea et al. [24] tested different noise injection methods and found that adding noise to images could make Convolutional Neural Networks (CNNs) learn more robust features. Kang et al. [25] used filters to blur image patches and proposed a patch shuffle regularization, which decreased the classification error rate from 6.33% to 5.66% on CIFAR-10.

Transformation-based data augmentation methods are easy to implement. They have been introduced into nearly all deep learning-based classification, detection, and segmentation problems to improve the generalization ability of models. However, transformation-based methods cannot fundamentally solve the problems of data imbalance and scarcity.

    2.2. Generative methods

Recent studies have focused on synthesizing images by random erasing [9], Cutout [26], Mixup [12], CutMix [14], AugMix [13], and style transfer [10,11] to enrich the diversity of existing training data. DeVries et al. [26] randomly cut out image regions to simulate occlusion and increase the robustness of models to occluded objects. Zhong et al. [9] erased a rectangular area to increase the difficulty of the data and force a network to handle partially occluded objects. However, neither cutout nor erasing keeps the balance between deleted and retained information. Thus, a structured drop operation was proposed by Chen et al. [27]; this operation deletes multiple evenly distributed small square regions and controls the balance by density and size parameters. Different from Cutout, Mixup [12] merges two images at a certain ratio to generate a new image (see the sketch after this paragraph). The representative work CutMix [14] adopts a local fusion idea and fills cutout regions with pixels of another image. Hendrycks et al. [13] proposed AugMix, which fuses an original image with a transformed image according to a certain ratio. Dwibedi et al. [28] randomly combined foreground and background images to enlarge a training image set. Kisantal et al. [29] used different methods to copy and paste small targets for improving the performance of small-target detection. Random combinations may result in unrealistic images, which leads to unpredictable results for object detection [20].
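To make the image-mixing idea concrete, a minimal Mixup sketch follows; the alpha value is illustrative and not taken from any cited implementation.

```python
# Minimal Mixup sketch: a new sample is a convex combination of two images
# with ratio lam ~ Beta(alpha, alpha); classification labels mix the same way.
import numpy as np

def mixup(x1: np.ndarray, x2: np.ndarray, alpha: float = 0.2):
    lam = float(np.random.beta(alpha, alpha))
    mixed = lam * x1.astype(np.float32) + (1.0 - lam) * x2.astype(np.float32)
    return mixed, lam
```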

Style transfer has been studied for data augmentation [10,30]. Shen et al. [11] proposed a transfer network that could transfer an image into given styles to increase image diversity. Some researchers used a GAN to generate novel images for augmenting training sets [31-33]. Motamed and Khalvati [34] proposed a semi-supervised augmentation of chest X-rays by a novel GAN with inception and residual blocks. Gurumurthy et al. [35] trained a GAN by reparameterizing a latent generative space as parameters of a mixture and learning a mixture model with limited data. Georgakis et al. [36] augmented hand-labeled training data with synthetic examples carefully composed onto scenes. Automatic data augmentation technologies [37,38] learn optimization strategies from data and search for appropriate augmentation strategies.

The work most related to ours was proposed by Lahiri et al. [39], which uses a prior network to build a relation between noise and a masked image; a GAN then takes the generated noise to produce an inpainted image. There are four differences: (A) our method is an interactive data augmentation method, while the prior-guided GAN of Ref. [39] aims at inpainting; (B) we use a prior network to encode user strokes and add random noise to increase damage diversity, while Ref. [39] utilizes a prior network to encode the contents of a masked image; (C) we use the l2 distance of convolutional features to train the prior network, while Ref. [39] adopts the l2 distance of image pixels; (D) our method adopts improved Poisson fusion to fuse generated damages with background images, while Ref. [39] directly completes the contents of the masked region.

    3. Proposed method

We aim to generate a novel damage on an existing airplane engine image given a user stroke and a damage type. Two questions need to be answered: how to generate a damage of a given type, and how to encode a user stroke. In this work, we encode a user stroke into a shape prior vector with a prior network P and generate a damage with a generative network G. Fig. 2 shows the framework of our proposed interactive data augmentation method.

    3.1. Damage generation

    3.1.1. Generative network

We assume that a generator G(z) can produce damages similar to real damages by taking a random noise z. The training process of G is the same as that of a GAN [40], that is, alternately training a generator G and a discriminator D. G is a counterfeiter, which produces a fake image to fool D. Meanwhile, D is a judge, which distinguishes the fake image from the real image. The generator and the discriminator play a two-player min-max game with the standard GAN objective function [40]:

$$\min_G \max_D V(D,G) = \mathbb{E}_{y\sim p_{\mathrm{data}}(y)}[\log D(y)] + \mathbb{E}_{z\sim p_z(z)}[\log(1-D(G(z)))] \qquad (1)$$
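For a concrete picture of this alternating optimization, below is a minimal PyTorch sketch of one training step (an illustration of the standard procedure, not the authors' code), assuming G maps a 100-D noise vector to a 64 × 64 patch and D ends with a sigmoid.

```python
# Minimal sketch of one alternating GAN update, assuming
# G: (B, 100) noise -> (B, 3, 64, 64) patch and D ends with a sigmoid
# producing a (B, 1) real/fake probability.
import torch
import torch.nn.functional as F

def gan_step(G, D, opt_G, opt_D, real, z_dim=100):
    b = real.size(0)
    ones = torch.ones(b, 1, device=real.device)
    zeros = torch.zeros(b, 1, device=real.device)
    # Update D: push D(y) toward 1 and D(G(z)) toward 0.
    z = torch.randn(b, z_dim, device=real.device)
    fake = G(z).detach()                    # detach: no gradient into G here
    loss_D = F.binary_cross_entropy(D(real), ones) \
           + F.binary_cross_entropy(D(fake), zeros)
    opt_D.zero_grad(); loss_D.backward(); opt_D.step()
    # Update G with the non-saturating objective: push D(G(z)) toward 1.
    z = torch.randn(b, z_dim, device=real.device)
    loss_G = F.binary_cross_entropy(D(G(z)), ones)
    opt_G.zero_grad(); loss_G.backward(); opt_G.step()
    return loss_D.item(), loss_G.item()
```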

    3.1.2. Prior network

Although G can learn the distribution of damages, it cannot generate damages with user-wanted shapes. We propose to learn a prior network P that generates a shape prior vector p from a user stroke I_shape by Eq. (2), enabling G to generate a damage with a specified shape:

$$p = P(I_{\mathrm{shape}}; \theta_P) \qquad (2)$$

where θ_P denotes the parameters of the prior network. The shape prior vector p encodes the shape information of the user stroke.

A pre-trained generator G can generate a damage with a shape similar to the user stroke based on p. Let x′ denote a damage generated by G(p) and y a real damage; I_shape can be extracted from y. The distance between x′ and y should be sufficiently small to ensure that x′ resembles y, which forces the prior network P to encode the shape information of I_shape into the shape prior vector p. We extract the features of the fourth convolutional layer of AlexNet [41], F_{x′} and F_y, from x′ and y, respectively. The l2 distance between F_{x′} and F_y is used to evaluate the similarity between x′ and y. The optimal parameters of the prior network P can be obtained by

$$\theta_P^{*} = \arg\min_{\theta_P} \frac{1}{N}\sum_{i=1}^{N} \left\| F_{x'_i} - F_{y_i} \right\|_2^2 \qquad (3)$$

    where N is the number of training image pairs.
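A sketch of this feature-space loss in PyTorch is given below; the slice of torchvision's AlexNet used to reach the fourth convolutional layer (features[:9]) is our assumption about the layer boundary, as is the weight-loading API.

```python
# Sketch of the prior-network loss of Eq. (3) (our reading, not the authors'
# code). P maps a stroke image to a 100-D prior vector; G is the frozen,
# pre-trained generator producing 3-channel 64x64 patches.
import torch
import torchvision

# On torchvision < 0.13 use: torchvision.models.alexnet(pretrained=True)
conv4 = torchvision.models.alexnet(weights="IMAGENET1K_V1").features[:9].eval()
for param in conv4.parameters():
    param.requires_grad_(False)             # the feature extractor stays fixed

def prior_loss(P, G, stroke, y):
    """stroke: binary label standing in for the user stroke; y: real damage."""
    p = P(stroke)                           # shape prior vector p
    x_gen = G(p)                            # generated damage x'
    f_gen, f_real = conv4(x_gen), conv4(y)  # conv4 features F_x' and F_y
    return torch.mean((f_gen - f_real) ** 2)  # squared l2 feature distance
```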

3.1.3. Training policy

The proposed model has a generative network G and a shape prior network P. Directly training the whole model is difficult. Thus, we adopt three stages to train the proposed model. We train G in the first stage, and train P after G has converged. Fig. 3 shows the training process of the first and second stages. We first train G to generate a damage from random noise. Then we train P with G fixed, forcing P to encode user stroke information into a shape prior vector. Finally, we jointly tune P and G to optimize the whole model in an end-to-end manner.

(1) Training G. Directly training all types of damage in a single generator produces confused damage images, so we train each type of damage separately to obtain high-quality damages. DCGAN [42] is used to train the damage generator G, with the same training setup as proposed in DCGAN [42].

(2) Training P. After G is trained, we fix G to train P. The training image pairs are strokes and their corresponding image patches. Because no user stroke is available for a real damage, we replace the user stroke with the binary label of the damage. The network architecture of P is similar to the discriminator of DCGAN [42], with the final output of the discriminator replaced by a 100-D vector so that the network is suitable for generating a shape prior vector (see the sketch after this list).

(3) Jointly tuning P and G. Fixing G while training P only forces P to generate vectors suited to the fixed G; the parameters of G cannot be updated when training P. We therefore jointly tune P and G to obtain an optimal model, minimizing the loss of Eq. (3). After joint tuning, G can generate a damage from a given stroke more stably than with separate training.
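The sketch below illustrates one plausible architecture for P as described in step (2): a DCGAN-discriminator-style encoder whose final output is replaced by a 100-D vector. The single-channel stroke input, channel widths, and Tanh squashing are our assumptions; the paper only states the DCGAN-like structure and the 100-D output.

```python
# One plausible architecture for P (step (2) above), a sketch rather than the
# authors' network: a DCGAN-discriminator-style encoder whose last layer
# outputs a 100-D vector instead of a real/fake score.
import torch.nn as nn

class PriorNet(nn.Module):
    def __init__(self, z_dim=100):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 64, 4, 2, 1), nn.LeakyReLU(0.2, True),      # 64 -> 32
            nn.Conv2d(64, 128, 4, 2, 1), nn.BatchNorm2d(128),
            nn.LeakyReLU(0.2, True),                                  # 32 -> 16
            nn.Conv2d(128, 256, 4, 2, 1), nn.BatchNorm2d(256),
            nn.LeakyReLU(0.2, True),                                  # 16 -> 8
            nn.Conv2d(256, 512, 4, 2, 1), nn.BatchNorm2d(512),
            nn.LeakyReLU(0.2, True),                                  # 8 -> 4
            nn.Conv2d(512, z_dim, 4, 1, 0), nn.Tanh(),                # 4 -> 1
        )

    def forward(self, x):                   # x: (B, 1, 64, 64) stroke image
        return self.net(x).flatten(1)       # (B, 100) shape prior vector
```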

    3.1.4. Inference

When the training process is completed, we can generate a damage x′ from a given user stroke I_shape by

$$x' = G(P(I_{\mathrm{shape}}; \theta_P)) \qquad (4)$$
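In practice, several candidates can be produced from one stroke by perturbing the prior vector with small noise. A minimal sketch follows; the 0.1 scale mirrors the p + 0.1n setting reported later in Table 2 and is otherwise an assumption.

```python
# Inference sketch for Eq. (4): k candidate damages from one stroke by
# perturbing the shape prior vector with small random noise.
import torch

@torch.no_grad()
def generate_candidates(P, G, stroke, k=4, noise_scale=0.1):
    p = P(stroke)                                       # shape prior vector p
    return torch.cat([G(p + noise_scale * torch.randn_like(p))
                      for _ in range(k)], dim=0)        # k candidate patches
```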

    3.2. Adjustable label generation

The generated patch x′ contains not only a damage but also normal background. Thus, we cannot directly treat the whole region of x′ as a label. In addition, the shape of the damage does not strictly agree with the user stroke, so we cannot use the user stroke as the label either. We propose an adjustable label generation method to generate suitable labels for damages.

As shown in Fig. 4, adjustable label generation consists of Canny edge detection [43], a morphological dilation operation, a morphological close operation, and finding the largest connected area. We adjust the minimal threshold T_min for Canny edge detection; the maximal threshold T_max is set to 3 × T_min. A 5 × 5 square is used as the structural element in the morphological dilation operation to generate a dilated image. Thereafter, we adopt a morphological close operation on the dilated image. The final label is generated by finding the maximal connected area.
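This pipeline maps directly onto standard OpenCV primitives. Below is a sketch under that assumption; it is not the authors' code, and the empty-edge-map guard is our addition.

```python
# Adjustable label generation sketch: Canny with T_max = 3 * T_min, 5x5
# dilation, morphological closing, then keep the largest connected component.
import cv2
import numpy as np

def adjustable_label(patch_gray: np.ndarray, t_min: int) -> np.ndarray:
    edges = cv2.Canny(patch_gray, t_min, 3 * t_min)
    kernel = np.ones((5, 5), np.uint8)
    dilated = cv2.dilate(edges, kernel)
    closed = cv2.morphologyEx(dilated, cv2.MORPH_CLOSE, kernel)
    n, labels, stats, _ = cv2.connectedComponentsWithStats(closed)
    if n <= 1:                              # only background: return empty label
        return np.zeros_like(patch_gray)
    largest = 1 + int(np.argmax(stats[1:, cv2.CC_STAT_AREA]))
    return np.where(labels == largest, 255, 0).astype(np.uint8)
```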

    3.3. Improved Poisson fusion

Directly pasting the damage image on the background image makes the fused image look unnatural. In this work, we propose to fuse the damage image with the background image by Poisson fusion [44]. Original Poisson fusion changes the pixel values of the entire foreground image, whereas for a crack it is unnecessary to change the content inside the crack.

Fig. 5 demonstrates our improved Poisson fusion. Let g denote the generated damage and f* the background image. g′ lies inside the region of g and is generated by resizing the area of g with a ratio of 0.8. The region between g and g′ is a transitional region, denoted Ω. Our improved Poisson fusion merely changes the appearance of Ω by

$$\min_f \iint_{\Omega} \left\| \nabla f - \mathbf{v} \right\|^2 \mathrm{d}\Omega, \quad \text{s.t.}\; f|_{\partial\Omega_1} = f^{*}|_{\partial\Omega_1},\; f|_{\partial\Omega_2} = g|_{\partial\Omega_2} \qquad (5)$$

where f is the fused image, v is the gradient field of g, ∇ is the gradient operator, and ∂Ω1 and ∂Ω2 are the boundaries of g and g′, respectively. Eq. (5) gives the transitional region the same texture as the generated damage. The two constraints enable the fused image f to have identical pixel values to f* and g at ∂Ω1 and ∂Ω2, respectively. Notably, g is related to the damage label, and it can be obtained by adjustable label generation. The improved Poisson fusion can not only retain the original pixels of the fusion area but also smooth the fusion boundary.
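Exactly solving Eq. (5) requires a Poisson solver with the two boundary conditions. The sketch below is a rough OpenCV approximation rather than the authors' implementation: cv2.seamlessClone smooths the outer boundary, after which the pixels inside the shrunken inner region g′ are restored, so effectively only the transitional ring Ω is altered. The erosion amount used to obtain g′ is a heuristic stand-in for the 0.8 resize.

```python
# Rough OpenCV approximation of the improved Poisson fusion (not an exact
# solver for Eq. (5)).
import cv2
import numpy as np

def improved_poisson_paste(damage, label, background, center):
    """damage: HxWx3 patch; label: HxW uint8 mask; center: (x, y) in background."""
    blended = cv2.seamlessClone(damage, background, label, center, cv2.NORMAL_CLONE)
    k = max(1, int(0.1 * np.sqrt(cv2.countNonZero(label))))   # ring thickness
    inner = cv2.erode(label, np.ones((k, k), np.uint8))       # inner region g'
    h, w = damage.shape[:2]
    x0, y0 = center[0] - w // 2, center[1] - h // 2           # patch placement
    roi = blended[y0:y0 + h, x0:x0 + w]     # assumes the patch lies fully inside
    roi[inner > 0] = damage[inner > 0]      # keep generated texture inside g'
    return blended
```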

    3.4. Interactive damage augmentation

Given a damage type and a user stroke, our proposed model can generate several candidate damages. We fuse a generated damage at the user-given position by improved Poisson fusion, and a corresponding label of the damage is generated by adjustable label generation. The whole process of our interactive damage generation is depicted in Algorithm 1.

    4. Experiment

    4.1. User interface

The type and shape of a damage are closely related to its position on an airplane engine. To make generated damages realistic enough, we design a user interface to facilitate user interaction. Fig. 6 shows our user interface. Users can select a damage type and draw the shape of a damage on an image. The damage shape is the user stroke that records the movement of the mouse. With the user stroke, our method can generate a realistic damage with the given shape at the specified position. The final fusion effect can be changed by setting different values of the threshold T_min.

    4.2. Setup

4.2.1. Dataset

We build an airplane engine damage image dataset, named AEDID, to validate the proposed method. This dataset consists of 363 crack images, 316 burn images, 41 worn images, and 508 missing-tbc images. Each image has been annotated with a pixel-level label.

We extract sub-images that contain damages and their corresponding labels, which are resized to 64 × 64, to train the proposed network. Finally, we obtain 550 crack, 902 burn, 206 worn, and 1712 missing-tbc image pairs.

    4.2.2. Implementation detail

The proposed method is run on a laptop with an RTX 3000 GPU. We implemented our code with PyTorch 1.5.1, CUDA 10.2, and cuDNN 7.6. The parameters of the networks are updated by the Adam optimizer. The base learning rate is set to 0.0002, the batch size is set to 128, and the momentum is set to 0.999.
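For reference, these hyperparameters correspond to the following Adam configuration; β1 = 0.5 is the common DCGAN setting and is our assumption, since the text only reports the learning rate, the batch size, and a momentum of 0.999 (read here as Adam's β2).

```python
# Adam configuration matching the stated hyperparameters; beta1 = 0.5 is an
# assumption (the usual DCGAN choice), beta2 = 0.999 follows the text.
import torch

def make_optimizer(module: torch.nn.Module) -> torch.optim.Adam:
    return torch.optim.Adam(module.parameters(), lr=2e-4, betas=(0.5, 0.999))
```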

    4.3. Result analysis

    4.3.1. Results of the proposed method and traditional data augmentation methods

Fig. 7 shows some examples from traditional data augmentation methods, such as random rotation and cropping, and from our proposed interactive data augmentation method. Traditional data augmentation methods only simulate different viewpoints and cannot generate novel damages. Our method can generate diverse damages of a given type at user-given positions. Fig. 7 shows that our proposed interactive data augmentation method can generate diverse damages and increase the number of damage pixels in images, which is the essence of addressing the data imbalance problem. In addition, our generated images can be further augmented by traditional data augmentation methods.

4.3.2. Results of the proposed method and image-to-image translation methods

To demonstrate the effectiveness of the proposed method, we conduct crack generation with three image-to-image translation methods: pix2pix [45], iSketchNFill [46], and SPADE [47]. pix2pix [45] learns a mapping from a source domain to a target domain by introducing inverse mapping and a cycle consistency loss, and it can be used for several tasks, including style transfer, object transfiguration, season transfer, and photo enhancement. Directly training pix2pix on image patches with a background cannot generate the wanted cracks. Thus, we train pix2pix on image patches without a background.

iSketchNFill [46] is an interactive GAN-based sketch-to-image translation method, which allows users to generate distinct classes from a single generator network. As with pix2pix, we train iSketchNFill on image patches without a background.

SPADE [47] is a semantic image synthesis method with spatially adaptive normalization to propagate semantic information throughout the network. The training images of SPADE need to be labeled into different semantic categories. We train SPADE with crack images and their corresponding semantic labels.

All compared methods are trained on the same training dataset as the proposed method for crack generation. Fig. 8 shows the crack images generated by the different methods with given user strokes. Although the cracks generated by pix2pix, iSketchNFill, and SPADE match the shapes of the user strokes, they lack the features of real cracks. Besides, SPADE fails to generate the background contents of crack images. In contrast, our method can generate realistic cracks.

4.3.3. Performances of different detectors with our data augmentation method

We conduct object detection with state-of-the-art object detectors, namely Mask R-CNN [22], YOLO v5 [23], and SSD [7], to quantitatively analyze the merits of the proposed interactive data augmentation method.

All detectors are trained to detect crack damages because cracks are among the damages of greatest concern. The crack images are partitioned into 144 training images, 31 validation images, and 32 test images. For comparison, we adopt traditional data augmentation methods, namely flipping, rotation, random cropping, Gaussian blur, and Gaussian noise, to augment the training images by 10 times, as sketched below. We also augment the training images by 10 times with our proposed data augmentation method.
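A plausible torchvision pipeline for this traditional 10× baseline is sketched below; the exact parameter ranges are our assumptions, and for detector training the geometric operations must also be applied to the box and mask annotations, which this image-only sketch omits.

```python
# Illustrative traditional augmentation pipeline (flip, rotation, random crop,
# Gaussian blur, Gaussian noise); parameter ranges are assumptions.
import torch
from torchvision import transforms

traditional_aug = transforms.Compose([
    transforms.RandomHorizontalFlip(),
    transforms.RandomRotation(degrees=30),
    transforms.RandomResizedCrop(size=64, scale=(0.8, 1.0)),
    transforms.GaussianBlur(kernel_size=5, sigma=(0.1, 2.0)),
    transforms.ToTensor(),
    # additive Gaussian noise, clipped back to the valid range
    transforms.Lambda(lambda t: (t + 0.02 * torch.randn_like(t)).clamp(0, 1)),
])
# Applying the pipeline 10 times per training image yields the 10x set.
```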

Fig. 9 shows some detection results of Mask R-CNN, SSD, and YOLO v5 on images augmented by different data augmentation methods. Fig. 9(a) is the result of Mask R-CNN, Fig. 9(b) that of SSD, and Fig. 9(c) that of YOLO v5. Mask R-CNN trained with the different training data can detect all cracks in the last three images but not in the first one. Unlike Mask R-CNN, SSD and YOLO v5 only output bounding box predictions. SSD fails to detect the small crack on the left boundary of the first image with or without data augmentation. SSD trained with the original data, or with data augmented by the traditional methods, cannot detect the cracks in the third image. Without data augmentation, YOLO v5 obtains poor detection results on the first three images. In contrast, YOLO v5 trained with the proposed data augmentation method can detect all the cracks in all four images. Fig. 9 demonstrates that (A) data augmentation is helpful for crack detection, and (B) generating damages is more effective for damage detection than conducting global transformations on images.

Table 1 reports the detection performances of different detectors trained with different data augmentation methods. We can find that (A) state-of-the-art object detectors cannot obtain promising performances on crack detection without data augmentation; (B) traditional data augmentation methods improve the detection performances of all detectors by enlarging the number of training images; (C) our method further improves the performances of the object detectors compared with traditional data augmentation methods. Specifically, training Mask R-CNN, SSD, and YOLO v5 with images generated by the proposed method achieves 49.86%, 21.98%, and 116.74% relative mean Average Precision (mAP) improvements over training with the original images, respectively, and 7.26%, 6.17%, and 13.11% relative mAP improvements over training with images generated by traditional data augmentation methods, respectively.

    Table 1 Comparison of the performances of different detectors on training images with different data augmentation methods.

Apart from performance improvements, the proposed data augmentation method accelerates the convergence of the models. Mask R-CNN converges at 45,000 iterations with images generated by our data augmentation method, while it converges at 50,000 iterations with training images augmented by traditional methods. SSD takes 100,000 and 120,000 iterations with training images generated by our method and by traditional methods, respectively.

    4.4. Ablation study

    4.4.1. Replacing AlexNet with VGG-19

We replace AlexNet with VGG-19 as the backbone to study the effect of using different backbone networks. As shown in Fig. 10, the difference between the damage images generated with AlexNet and with VGG-19 is very small. However, AlexNet costs less computation than VGG-19. Although many networks could serve as the backbone, AlexNet is sufficient for generating damage images.

4.4.2. Results of generator G with a random noise vector

Fig. 11 shows data augmentation results of G with a random noise vector n on four types of damages. After being trained with each type of damage image, G can capture the distribution of the corresponding damage type.

Generated worn damages are similar in texture but different in color. The other types of generated damages have different shapes and textures. Obviously, G cannot control the shape of a damage with the random noise vector n alone. However, damages are tightly related to the positions where they occur, which demands strict shape control. We conduct various experiments on crack images in the following sections to demonstrate the shape control ability of our proposed method.

    4.4.3. Results of generator G with prior network P

We show some results generated by G with the shape prior network P on crack images in Fig. 12, where x is the original damage image, I_shape is the shape of the damage, and x′ is the damage image generated from I_shape by our proposed method. The shapes input into P are generated by extracting the skeletons of the corresponding labels of the damage images. The generated damage images are similar to their shapes, which verifies that the shape prior network encodes shape information into the shape prior vector. We draw strokes in images, as shown in Fig. 13, to generate cracks that capture the shapes of the strokes. Small random noise is added to the shape prior vector to increase the diversity of the generated cracks. Images in the first column of Fig. 13 are user strokes; x′_1, x′_2, ..., x′_4 are damage images generated with four random noises. The generated cracks follow the shapes of the strokes and show diversity for the same stroke.

Table 2 Comparison of the performances of training YOLO v5 on the images generated by using n and p + 0.1n.

Method      mAP     AP50
n           0.444   0.707
p + 0.1n    0.466   0.806

    4.4.4. Effects of the segmentation threshold

We show generated labels and their corresponding augmented results at T_min = 0, 45, 140, and 225 to study the effect of the threshold T_min on adjustable label generation. As shown in Fig. 14, we can find that (A) all pixels in the patch are labeled as crack when T_min = 0; (B) increasing T_min reduces the normal regions, and at T_min = 140 we obtain the best segmentation label for this example; (C) increasing T_min further removes more normal pixels and even pixels inside the crack; (D) a very high value, such as T_min = 225, results in a black label (all pixels labeled as background). The threshold T_min can thus be adjusted according to the generated damage image to produce a suitable augmented image.

    4.4.5. Failure cases

We show four failure cases of generated crack damages in Fig. 15. The damages shown in the first row capture the shapes of the strokes but have large translations in the horizontal and vertical directions. The shapes of the damages shown in the second row differ from their corresponding strokes because the random noise n disturbs the shape prior vector p. With a disturbed prior vector, generator G cannot always produce promising cracks. However, such failure cases rarely occur in real usage.

    5. Conclusions

In this study, we propose an interactive data augmentation method with a prior-guided GAN. The proposed model has two networks: a generator G and a prior network P. We first train G to generate images from random noise. Then, we train P with G fixed; this training enables P to encode user stroke information and produce a prior vector that makes G generate the wanted damages. Random noise is added to the shape prior vector to obtain varied generated damages. We propose an adjustable label generation method to adjust the segmented regions, and we use an improved Poisson fusion method based on a generated label to paste a damage image on a background image. We conduct various experiments to analyze the proposed method. Compared with traditional data augmentation methods, our method can generate novel damages on existing training images and thus effectively increase the number and diversity of training images. Experimental results on object detection demonstrate that the proposed data augmentation method can improve detection performance by a large margin. Our data augmentation method can be applied to other computer vision-based detection problems, such as deterioration detection on roads, to increase the number of training images and mitigate the data imbalance problem.

    Declaration of Competing Interest

    The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

    Acknowledgement

    This work was supported by the Natural Science Foundation of Tianjin, China (No. 20JCQNJC00720).
