
Non-identical residual learning for image enhancement via dynamic multi-level perceptual loss

High Technology Letters, 2022, No.2

    HU Ruiguang(胡瑞光), HUANG Li

    (Beijing Aerospace Automatic Control Institute, Beijing 100854, P.R.China)

Abstract Residual learning based deep generative networks have achieved promising performance in image enhancement. However, due to the large color gap between a low-quality image and its high-quality version, the identical mapping in conventional residual learning cannot explore the elaborate detail differences, resulting in color deviations and texture losses in enhanced images. To address this issue, an innovative non-identical residual learning architecture is proposed, which views image enhancement as two complementary branches, namely a holistic color adjustment branch and a fine-grained residual generation branch. In the holistic color adjustment branch, an adjusting map is calculated for each input low-quality image in order to regulate the low-quality image toward the high-quality representation in an overall way. In the fine-grained residual generation branch, a novel attention-aware recursive network is designed to generate residual images. This design alleviates the overfitting problem by reusing parameters and promotes the network's adaptability to different input conditions. In addition, a novel dynamic multi-level perceptual loss based on the error feedback ideology is proposed. Consequently, the proposed network can be dynamically optimized by the hybrid perceptual loss provided by a well-trained VGG, so as to improve the perceptual quality of enhanced images in a guided way. Extensive experiments conducted on publicly available datasets demonstrate the state-of-the-art performance of the proposed method.

    Key words: image enhancement, deep residual network, adversarial learning

    0 Introduction

Image enhancement, as a classical computer vision task, aims at recovering a high-quality image from its low-quality version. High-quality images should have abundant color, clear texture, and satisfactory perception. It is an important task that can facilitate various industrial communities, e.g. satellite[1], medical, and 4K television[2] imaging. Many traditional enhancement methods, including Gaussian smoothing and bilateral filtering, have been proposed without supervised information. With the flourishing of deep neural networks, convolutional neural networks (CNNs) have shown powerful capability in image enhancement by learning from pairwise training patches. Some existing methods mainly focus on solving the image enhancement problem from specific aspects, such as enhancing illumination, adjusting contrast, and denoising.

It can be noticed that low-quality images and their high-quality targets have great similarity in content; thus their detail differences, i.e. texture, edge and color, are important for image enhancement. Consequently, residual learning has become a successful method to excavate those details by building an identical mapping from low-quality to high-quality images. Later, generative adversarial network (GAN) based image enhancement frameworks were proposed. They adopt a deep residual network as the generative model for enhancing low-quality images, and use multiple loss functions, e.g. a perceptual loss and an adversarial loss, to optimize the network for promoting visual quality. However, those methods still have three deficiencies. (1) A low-quality image and its high-quality version exhibit a large gap in holistic color; the identical mapping in residual learning cannot force generative models to accurately capture the detailed information. (2) Generative models usually have a large number of parameters, causing great storage cost and raising the risk of overfitting. (3) Although one- or multi-level perceptual losses are widely applied for network optimization, the loss weight allocated to each level is fixed, resulting in unpleasant artifacts or unfavorable color representations in enhanced images.

To address the above-mentioned issues, non-identical residual learning is first considered to adjust low-quality images toward the high-quality style. Hence, a novel image enhancement framework is proposed, which consists of two complementary branches: holistic color adjustment and fine-grained residual generation. In the fine-grained residual generation branch, recursive structures are employed to construct the proposed network with fewer parameters while alleviating overfitting. However, the feature representations are still limited by model capacity, and the network lacks flexibility to adapt to different image scenes. Consequently, a lightweight attention-aware recursive network is proposed. It is composed of a fully multi-scale feature extraction stage, which extracts more representative primary features, and a recursive convolutional function, which collocates multi-level channel-wise attention to promote the flexibility of the network by dynamically excavating color information. The holistic color adjustment can adjust global information and facilitate the generative network in learning local details. The overall residuals between low-quality images and high-quality images are first computed; then, an adjusting map is estimated adaptively for each input low-quality image. Meanwhile, low-level feature maps extracted from a well-trained network contain abundant color information, while the extracted high-level feature maps contain more spatial and texture information. Optimizing a single one-level perceptual loss cannot comprehensively promote enhanced quality. Therefore, a multi-level perceptual loss is considered to comprehensively optimize the proposed network. However, the loss weight of each level cannot be easily determined, and a fixed weighting lacks flexibility during the training process. Consequently, a dynamic multi-level perceptual loss is introduced for optimization based on error feedback. Specifically, feature contents of high-quality and enhanced images are extracted from the max-pooling layers of VGG16, and content errors between high-quality and enhanced features are computed. According to the value of the errors, a weight is decided for the perceptual loss of each level. Thus, enhanced images will have rational color representations and textures.

    In summary, the main contributions of this paper are as follows.

(1) A novel non-identical residual learning framework is tailored for image enhancement, in which an adjusting map is carefully computed to adjust the global color toward the high-quality target.

    (2) A novel attention-aware recursive network is proposed to adaptively enhance residual details according to input low-quality images.

(3) An innovative dynamic multi-level perceptual loss (DPL) is presented to approximate the color representation of high-quality images, hence promoting the perceptual effect in a more comprehensive way.

(4) Extensive experiments on publicly available datasets show the state-of-the-art performance of the proposed method, both quantitatively and qualitatively.

The rest of this paper is organized as follows. Section 1 overviews related work. Section 2 describes the enhancement architecture. Experimental results and their analysis are presented in Section 3. Section 4 concludes this paper.

    1 Related work

    1.1 Image enhancement

Pioneering image enhancement work often concentrated on improving image contrast, such as histogram equalization (HE) and its variant bi-HE. Ref.[3] proposed a low-light image enhancement method by estimating illumination maps. However, those methods do not use external information, and their performance is usually inadequate and limited. An external example-based approach was proposed for low-light image enhancement in Ref.[4], which adopts an autoencoder to learn a mapping function. Ref.[5] proposed a unified image enhancement framework, which combines learning based methods with reconstruction based methods. Some works enhance images in specific conditions, e.g. hyper-spectral images and underwater images.

In recent years, CNNs have shown promising performance in many image enhancement sub-tasks, e.g. image super-resolution, image denoising[6] and image colorization. In Ref.[7], a reconstruction-based pairwise depth dataset for depth image enhancement was proposed. A CNN for weakly illuminated image enhancement was proposed in Ref.[3]. Deep residual learning was proposed in Ref.[8], and it showed effectiveness for deep network construction. However, those deep networks significantly increase the number of parameters, so the overfitting problem is highly likely. Recursive structures have become an effective way to relieve overfitting thanks to their fewer parameters. Ref.[9] proposed DRRN, which combines residual learning for easy training in a 52-layer network, showing promising performance in image super-resolution. In this work, the recursive structure is employed to construct a lightweight model for image enhancement. However, those methods are limited by optimizing a single MSE loss, which causes blurry and unrealistic enhancement results.

    1.2 Deep residual learning

Deep learning first attracted great attention in Ref.[10], which showed significant improvements in image classification tasks. Then, VGG networks were presented in Ref.[11], and they have become universal feature extraction models. Ref.[12] proposed the Inception network to introduce multi-scale feature representation in CNNs, and demonstrated that a deeper network can accordingly achieve better performance. Afterwards, many works focused on increasing the depth of CNNs to promote performance. However, when deeper networks are able to start converging, a degradation problem is exposed: with the network depth increasing, the performance gets saturated and then degrades rapidly. Besides, the vanishing gradient problem still limits the performance of CNNs.

Residual learning tries to solve those problems by constructing an identical mapping, and the depth of CNNs is substantially increased. It can be written as y = x + F(x), where x and y are the input and output vectors of the layers, and F represents the residual mapping to be learned. The ideology of residual learning can be integrated into many previous networks[8], and many image-to-image translation tasks also adopt the residual learning method to abridge the gap between generated images and input images. Ref.[8] proposed a residual learning based CNN for image denoising. In Ref.[6], a residual dense network was proposed for image super-resolution. However, residual learning has some bottlenecks in image enhancement. The identical mapping x cannot force F(x) to learn the detailed difference between low-quality and high-quality images. Therefore, a non-identical mapping is considered to adjust the input x to an appropriate value.
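As a toy numerical illustration of why the identical mapping struggles with a large global color gap, the following sketch (our own, not from the paper) compares the residual that F(x) must learn under an identical mapping with the residual left after a simple global adjustment absorbs the mean color shift:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.random((3, 4, 4))                # low-quality image, channels-first (C, H, W)
y = np.clip(x + 0.3, 0.0, 1.0)           # synthetic target with a large global color shift

# Identical mapping y = x + F(x): F must absorb the whole gap, including the global shift.
residual_identical = y - x

# Non-identical mapping y = (x + a) + F(x): a global adjustment a absorbs the mean shift,
# so F only has to model the remaining fine-grained details.
a = (y - x).mean()
residual_non_identical = y - (x + a)

# The residual left for F is never larger on average after the global adjustment.
assert np.abs(residual_non_identical).mean() <= np.abs(residual_identical).mean()
```

The adjusting map in the paper is estimated per image rather than being a single scalar; the scalar here only makes the size of the remaining residual easy to see.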

    1.3 Perceptual loss

A high-quality image should have clear textures, abundant colors, and conform to human perception. Thus, Ref.[13] introduced a pre-trained VGG network to compute a perceptual loss for improving the quality of generated images. Ref.[9] proposed an enhancement method based on perceptual loss, which enriches the high-frequency information of enhanced images. Ref.[14] proposed generative adversarial nets (GANs), which have become an effective way for image generation. A conditional GAN was proposed in Ref.[15] for the image-to-image translation task. Ref.[16] proposed cycle-consistent adversarial networks for style transfer. Super-resolution based GANs adopt a generator, a feature extractor, and a discriminator to optimize a hybrid loss, and they also achieve state-of-the-art performance in human perception. However, real-world image enhancement is a universal task covering various image transformations (texture, luminance and resolution). In Ref.[17], universal image enhancement frameworks were proposed. They published a new large-scale image enhancement dataset based on a DSLR camera, and a multi-term loss function composed of color, texture and content terms, allowing an efficient image quality estimation. For image enhancement, optimizing a high-level perceptual loss tends to extrude the shape of objects, while optimizing a low-level perceptual loss can generate color-bright images. However, the conventional multi-level perceptual loss lacks flexibility in balancing those two aspects, because it allocates a fixed loss weight to each level. In this work, those weights are dynamically controlled to promote flexibility.

    2 The proposed method

    2.1 General framework

The architecture of non-identical residual learning for image enhancement via dynamic multi-level perceptual loss is shown in Fig.1. The holistic color adjustment globally adjusts the low-quality image toward the high-quality target. The fine-grained residual generation can recover texture and color details. Conventional residual learning[17] for image enhancement can be represented as X_e = X + F(X), where X is the input low-quality image, X_e is the enhanced result, and F(·) is the learned residual mapping. The proposed non-identical residual learning instead adjusts the input with a holistic adjusting map before adding the residual:

X_e = (X + Y) + F(X)

where Y ∈ R^(3×H×W) is a trainable matrix rather than a single value. In the fine-grained residual generation, an attention-aware recursive network is proposed to generate fine residuals, and it is composed of three components. In the first component, the fully multi-scale block (FMSB) aims to extract multi-scale primary features. By N-step recursions in the recursive block, deep feature representations can be exploited. Finally, the reconstruction component converts the deep features to a residual image. The generated residual image is added to the adjusted image to obtain the final enhanced image. In network training, three losses, i.e. MSE loss, dynamic multi-level perceptual loss (DPL) and adversarial loss, are utilized.
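The composition of the two branches can be sketched as follows. The adjusting-map values and the residual generator below are toy stand-ins (in the paper, the residual comes from the attention-aware recursive network and the adjusting map is estimated per image), so this only illustrates the data flow:

```python
import numpy as np

def enhance(x, adjusting_map, generate_residual):
    """Non-identical residual enhancement: adjusted input plus a generated fine residual."""
    adjusted = x + adjusting_map              # holistic color adjustment branch
    residual = generate_residual(x)           # fine-grained residual generation branch (stubbed)
    return np.clip(adjusted + residual, 0.0, 1.0)

rng = np.random.default_rng(0)
x = rng.random((3, 16, 16)) * 0.5             # toy low-quality patch in [0, 0.5]
adjusting_map = np.full_like(x, 0.2)          # toy global adjustment
enhanced = enhance(x, adjusting_map, lambda img: 0.05 * np.sin(10.0 * img))
assert enhanced.shape == x.shape
```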

Fig.1 Framework of the proposed method (the holistic color adjustment globally adjusts the low-quality image, so the fine-grained residual generation tends to generate elaborate details. FMSB denotes fully multi-scale block, and ⊕ denotes element-wise addition. DPL denotes dynamic multi-level perceptual loss, and ADV is the adversarial loss)

    2.2 Holistic color adjustment

    Fig.2 Flow diagram of average residual computation

    2.3 Fine-grained residual generation

Fully multi-scale block. The success of the Inception network[4] has shown that multi-scale information can provide multiple views for perceiving one image, so the extracted features can benefit final image reconstruction. Motivated by Ref.[4], a fully multi-scale block is designed to extract primary features. It is composed of two multi-scale convolutional layers and a compressive layer, as shown in Fig.3. In the first layer, convolutional kernels W^(1)_(i×i) (i ∈ {1, 3, 5}) of three sizes are adopted to extract multi-scale features, with PReLU selected as the activation function[19]. In the second layer, each convolutional kernel takes in all multi-scale features from the first layer, thereby utilizing features extracted by three kinds of receptive field, whereas the conventional multi-scale block only utilizes one kind of receptive field. Thus, FMSB can obtain more abundant information from the first layer to bring diverse representations. Finally, a 1×1 convolutional kernel is used to compress the feature maps and perform a non-linear mapping.

Fig.3 Comparison of the proposed fully multi-scale block against the conventional multi-scale block

Recursive block. Recently, residual recursive structures have been proposed and show promising performance in super-resolution tasks[9]. They can construct large receptive fields by reusing convolutional layers. However, a well-behaved image enhancement model should flexibly consider different input conditions (light, color, etc.), and feature representations in conventional recursive structures are limited due to parameter sharing. Therefore, dynamic factors are introduced into this structure by adaptively selecting appropriate channels according to the input images. Based on this motivation, attention mechanisms are employed[20] and three kinds of attention-aware recursive units are built.

Design of recursive units. A recursive block consists of multiple recursive units. Fig.4(a) shows a typical recursive structure proposed in Ref.[8], which has no attention mechanism. In this work, three kinds of attention based recursive units are designed to explore the effectiveness of dynamic factors at different weighting scopes. Their architectures are shown in Fig.4(b), (c), and (d). Fig.4(b) is the residual recursive attentive unit (RRAU), which aims to effectively extract local discriminative features by directly weighting the convolutional features of the input image X. Fig.4(c) is the attentive residual recursive unit (ARRU), which aims to adaptively select global recursive features by weighting the residual recursive representations. Fig.4(d) is the residual attentive recursive unit (RARU), which aims to enhance the mutual information between convolutional features and inputs via second-order residual attentive weighting. According to the experiments, RARU is the most appropriate for the image enhancement task; hence, RARU is used in block construction. Two stacked 3×3 convolutions with ReLU constitute the shared convolutional function.

    Fig.4 Four types of recursive units

The local residual learning is designed to always start from X^(0) for efficient and stable training[19]. Notably, since the global non-identical residual learning can absorb the above-mentioned residual gap, local identical residual learning is adopted in the recursive block.
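A minimal sketch of the recursive block's control flow is given below. The attention gate and the convolutional function are simplified stand-ins for the learned modules (the paper uses channel-wise attention and double 3×3 convolutions with ReLU); the point is that one shared parameterized function is reused for N steps and each step restarts the local residual from X^(0):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def channel_attention(feat):
    """Channel-wise attention: squeeze each channel to a scalar, then reweight the maps."""
    pooled = feat.mean(axis=(1, 2))                 # global average pooling, shape (C,)
    gates = sigmoid(pooled)                         # stand-in for the learned gating network
    return feat * gates[:, None, None]

def conv_fn(feat):
    """Stand-in for the shared double-conv + ReLU function reused at every recursion."""
    return np.maximum(0.9 * feat + 0.05, 0.0)

def recursive_block(x0, n_steps):
    feat = x0
    for _ in range(n_steps):
        # local residual learning always restarts from x0 for efficient and stable training
        feat = x0 + channel_attention(conv_fn(feat))
    return feat

x0 = np.random.default_rng(1).random((64, 8, 8))    # toy primary features from FMSB
deep_feat = recursive_block(x0, n_steps=6)
assert deep_feat.shape == x0.shape
```

Because the same `conv_fn` and gate are reused at every step, the parameter count of the block is independent of N, which is what keeps the recursive design lightweight.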

    2.4 Dynamic multi-level perceptual loss

Optimization based on an individual MSE loss usually leads to blurry and unrealistic results in image-to-image translation[16]. Inspired by Ref.[17], generative adversarial nets (GANs) are considered. They are trained with an adversarial loss, which minimizes the KL-divergence between the distribution of images produced by the generator and the distribution of images in the training dataset. An adversarial learning framework based on the dynamic multi-level perceptual loss is proposed, which mainly contains an attention-aware recursive generator, a pre-trained VGG-19-based feature extractor, and a CNN-based discriminator. Specifically, the feature extractor and the discriminator are used as two constraints to optimize the enhanced images generated by the generator from low-quality images. Among them, the feature extractor provides the dynamic multi-level perceptual loss of hierarchical content, and the discriminator provides a measure of similarity between the generated images and the corresponding ground truths.

The feature extractor provides a perceptual loss based on the content error between enhanced images and their high-quality versions. However, conventional GAN based methods for image enhancement usually optimize a high-level perceptual loss, losing accuracy in color representation. Based on the observation that optimizing a high-level perceptual loss is beneficial for recovering spatial and texture information, while optimizing a low-level perceptual loss is helpful for color reconstruction[9], hierarchical features are utilized, which are widely applied to classification[21] and detection[22] tasks. Instead of solely optimizing a high-level content loss, five content losses taken from the output of each max-pooling layer are optimized cooperatively. In this way, the generated patches tend to be more consistent with human perception. The overall multi-level perceptual loss is formulated as

L_P = Σ_(i=1..5) a_i L_i

where L_i denotes the content loss computed at the i-th max-pooling level and a_i is its weight.

However, the weight a_i is usually hard to design due to the uncertainty of the importance of each level's perceptual loss. Although equally allocating the weights is an intuitive choice, the importance of each level's perceptual loss also varies dynamically with the training process. Hence, the weight a_i is computed in a dynamic way based on the error feedback ideology. Firstly, the i-th average content error z_i between generated patches P_G and high-quality patches P_H is computed. Then the weight a_i is obtained via a softmax function normalizing those errors z_i:

a_i = exp(z_i) / Σ_(j=1..5) exp(z_j)
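The error-feedback weighting can be sketched directly from the description above. Whether the average content error uses an L1 or L2 norm is not specified in the text, so L1 is assumed here for illustration; the five feature levels mimic VGG max-pooling outputs with toy channel counts and spatial size:

```python
import numpy as np

def dynamic_level_weights(enhanced_feats, target_feats):
    """Softmax-normalize per-level average content errors z_i into loss weights a_i."""
    z = np.array([np.mean(np.abs(e - t))            # average content error per level (L1 assumed)
                  for e, t in zip(enhanced_feats, target_feats)])
    exp_z = np.exp(z - z.max())                     # numerically stable softmax
    return exp_z / exp_z.sum()

rng = np.random.default_rng(0)
channels = (64, 128, 256, 512, 512)                 # VGG-like channel counts per level
enhanced = [rng.random((c, 8, 8)) for c in channels]
target = [rng.random((c, 8, 8)) for c in channels]
weights = dynamic_level_weights(enhanced, target)
assert np.isclose(weights.sum(), 1.0) and len(weights) == 5
```

Since larger errors map to larger weights, levels where the enhanced features deviate most from the high-quality features receive more optimization pressure at that training step.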

    3 Experiments

    3.1 Dataset and metrics

Following Ref.[17], the classic DSLR photo enhancement dataset (DPED) is adopted to train and test the method. DPED is specially collected for image enhancement tasks, and its image quadruples are captured by cameras of different qualities. Specifically, DPED contains 4549 photos from a Sony smartphone, 5727 photos from an iPhone, and 6015 photos from a BlackBerry. The peak signal-to-noise ratio (PSNR) and the structural similarity (SSIM) index are selected as two prevailing evaluation criteria. In the experiments, PSNR and SSIM are both calculated in RGB space. In the ablation studies, the most challenging iPhone dataset[25] is selected for validation, and 400 pairs of patches are randomly selected for testing. Without loss of generality, NRL is optimized by the MSE loss in the ablation studies. The detailed training settings are introduced in subsection 3.4.
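For reference, PSNR over RGB space can be computed as below. This is a standard definition shown for a [0, 1] dynamic range, our own sketch of the metric rather than the paper's exact evaluation code:

```python
import numpy as np

def psnr_rgb(img_a, img_b, peak=1.0):
    """PSNR computed jointly over all RGB channels for images in [0, peak]."""
    mse = np.mean((img_a.astype(np.float64) - img_b.astype(np.float64)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

a = np.zeros((3, 4, 4))
b = np.full((3, 4, 4), 0.1)                  # uniform error of 0.1 → MSE = 0.01
assert np.isclose(psnr_rgb(a, b), 20.0)      # 10 * log10(1 / 0.01) = 20 dB
```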

    3.2 Ablation study for network architectures

The fully multi-scale block (FMSB) can extract features better than the conventional multi-scale block (MSB). Four other designs are compared: (1) a double convolutional layer with 3×3 kernel size (Conv3); (2) a large convolutional layer with 9×9 kernel size (Conv9); (3) the multi-scale block (MSB) shown in Fig.3; (4) a fully multi-scale block in which the second-layer convolutions are all 3×3 (FMSB-3). In line with the settings in Ref.[12], ResNet is used as the backbone network. As shown in Table 1, FMSB achieves the highest scores in both PSNR and SSIM, 0.04 dB PSNR and 0.0025 SSIM higher than MSB.

    Table 1 Results of different blocks on the iPhone dataset

This demonstrates the superiority of FMSB. Comparing Conv3 and Conv9, a large kernel size tends to achieve higher SSIM but lower PSNR. FMSB-3 attains a PSNR very close to FMSB, but FMSB has kernels of different sizes in the second layer, which can better exploit holistic color information. In summary, the proposed FMSB is an effective block for primary feature extraction.

To verify the effectiveness of attention combined with recursive architectures, the different designs shown in Fig.4 are compared. Besides, RRU+A is introduced, which adds a single attention mechanism at the last recursive step. Without loss of generality, PSNR is compared to verify modeling capability at 1, 3, 6, 12, and 24 recursive steps (N steps). Experimental results are listed in Table 2.

    Table 2 PSNR results of different recursive steps

It can be seen that except for N = 1 and N = 3, RARU achieves the best performance among all units. From N = 6 to N = 12, simply increasing the recursive steps can sometimes decrease performance; in particular, RRAU degrades as N increases. The setting of 6-step recursion is cost-effective: the 6-step RARU achieves promising performance with relatively few recursive steps, outperforming RRU, RRAU and ARRU by 0.36, 0.05, and 0.05 dB PSNR, respectively. Training curves under 6-step recursion are exhibited in Fig.5. The pink curve is the conventional ResNet structure as illustrated in Ref.[17]. It obtains similar results before 20 epochs, while RARU stably increases PSNR, resulting in the best performance. Therefore, RARU is chosen as the final structure.

    Fig.5 PSNR testing results with different recursive structures

    3.3 Evaluation for non-identical residual learning

For evaluating the non-identical residual learning (denoted as NRL), three control groups are set: residual learning not utilized in the holistic color adjustment (denoted as no-RL), conventional residual learning (denoted as RL), and incomplete non-identical residual learning as illustrated in Eq.(6) (denoted as NRL-). Experimental results are shown in Table 3. RL outperforms no-RL by 0.47 dB PSNR and 0.0097 SSIM on the iPhone dataset, clearly demonstrating the advantage of residual learning. The performance of NRL- is slightly lower than RL, because some pixels cannot be accurately adjusted. NRL achieves the best performance, outperforming RL by 0.09 dB PSNR and 0.013 SSIM on the Sony dataset, respectively. Although RL and NRL achieve an identical PSNR value (22.54 dB) on the BlackBerry dataset, NRL precedes RL by 0.0027 SSIM, showing the effectiveness of NRL. The training curves of RL and NRL are visualized in Fig.6. It can be seen that the NRL lines are higher than the RL lines in most conditions. Though both RL and NRL produce unstable PSNR curves on the Sony dataset, NRL is still higher than RL at the peak values. Conclusively, non-identical residual learning is an effective method, superior to conventional residual learning in the image enhancement task.

    Table 3 Experimental results of residual learning

    Fig.6 Comparisons of non-identical residual learning and conventional residual learning with N=6

    3.4 Comparisons with state-of-the-art methods

The proposed method is compared with state-of-the-art methods, with the Apple photo enhancer (APE) taken as a baseline. Ref.[23] is a 3-layer CNN optimized by MSE. Ref.[9] is a classical image-to-image translation method based on perceptual losses. Refs[17,24] are state-of-the-art enhancement methods. Ref.[25] is an adversarial learning framework, whose generator is replaced by the attention-aware recursive network for a fair comparison. NRL denotes the non-identical residual learning framework optimized by the individual MSE loss. Besides, NRL combined with the proposed dynamic multi-level perceptual loss is denoted as NRL-DPL. Experimental results are shown in Table 4, where NRL-DPL achieves the highest SSIM among all methods. Concretely, NRL-DPL outperforms Ref.[17] by 0.0072 and Ref.[24] by 0.0046 SSIM on the iPhone dataset, respectively. It also outperforms Ref.[23] by 3.82 dB and Ref.[25] by 1.14 dB PSNR on the Sony dataset, respectively. This demonstrates the state-of-the-art performance of the method for image enhancement. NRL alone also achieves favourable PSNR results compared with the others, revealing the strong generalization ability of the network architecture. NRL-DPL outperforms NRL except for PSNR on the BlackBerry dataset, showing the superiority of the proposed DPL and the adversarial learning strategy. According to Fig.7, the method achieves better visual effects with fewer unpleasant artifacts. In the first group comparison, the bag is enhanced more distinctly by the method.

    Table 4 Comparisons with the state-of-the-art methods in PSNR/SSIM

    Fig.7 Examples of visual enhancement comparisons on DPED

In the second group, the edges of the window have fewer artifacts compared with Ref.[17]. In the last group, the method achieves a very high-quality result in both overall appearance and local details. Notably, the model has only 190×10³ parameters compared with 400×10³ in Ref.[17], which also demonstrates the advantage of the proposed recursive architecture.

The proposed framework is trained with 6 recursive steps and without batch normalization (BN). All channel numbers in the recursive block are set to 64. One-sixth of the patches in the training dataset are randomly selected as one epoch. Adam is adopted for optimizing the network, with an initial learning rate of 0.0005 and a training batch size of 32. Every 5 epochs, the learning rate is decreased by a scale of 0.95, and training is stopped at 100 epochs. Experiments are performed on two NVIDIA Titan XP GPUs for training and testing. The training process costs about 14 h for 100 epochs, and the average testing speed for a 256×256 patch is 0.04 s.
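The stepwise schedule described above (initial rate 0.0005, multiplied by 0.95 every 5 epochs) can be written as:

```python
def learning_rate(epoch, base_lr=5e-4, decay=0.95, step=5):
    """Learning rate after `epoch` epochs under the stepwise decay schedule."""
    return base_lr * decay ** (epoch // step)

assert learning_rate(0) == 5e-4                       # epochs 0-4 use the initial rate
assert learning_rate(5) == 5e-4 * 0.95                # first decay at epoch 5
```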

    3.5 User study

Previous classical work is followed to perform mean opinion score (MOS) tests, which quantify the ability of different approaches to reconstruct perceptually convincing images. 100 low-quality images are selected from VOC2012 (VOC2012-LQ100) for testing. Specifically, 29 raters are asked to assign an integral score from 1 (bad quality) to 5 (excellent quality). The scoring criterion is four-fold: Color (Col), Texture (Tex), Luminance (Lumin), and Overall (Over). Four methods are evaluated, i.e., Ref.[17], Ref.[23], Ref.[25] and the proposed NRL-DPL. They are all trained on the iPhone dataset to evaluate their adaptability. According to the results in Table 5, the proposed method achieves the highest average MOS scores. Although Ref.[17] achieves the highest luminance score, it causes many harsh textures. Overall, the proposed method achieves the best scores in color, texture and overall feeling.

Fig.8 shows some visual examples. Apparently, the enhanced images obtained by the method exhibit more perceptual comfort and textural softness.

    4 Conclusions

In this paper, non-identical residual learning for image enhancement via dynamic multi-level perceptual loss is proposed, which views image enhancement as two branches. In the first branch, a holistic color adjustment method is designed to adjust the global color representation toward the high-quality target. It forces the second branch to accurately capture color and texture details by learning the elaborate differences. In the second branch, an attention-aware recursive network is proposed to adaptively transform features according to image color conditions, as well as to mitigate the overfitting problem. Last but not least, a dynamic multi-level content loss is designed to approximate the color effect of high-quality images. Extensive experiments conducted on publicly available datasets demonstrate the state-of-the-art performance of the proposed method.

    Table 5 MOS testing results on VOC2012-LQ100

    Fig.8 The selected visual demonstration on VOC2012-LQ100

亚洲色图 男人天堂 中文字幕| 妹子高潮喷水视频| 久久久久久亚洲精品国产蜜桃av| 成年动漫av网址| 成人影院久久| 国产精品一区二区在线观看99| 黑人操中国人逼视频| 51午夜福利影视在线观看| 变态另类成人亚洲欧美熟女 | 久久久久久人人人人人| 一本色道久久久久久精品综合| 精品人妻在线不人妻| 亚洲欧美日韩高清在线视频 | 9热在线视频观看99| 18在线观看网站| 亚洲国产成人一精品久久久| 91麻豆av在线| 宅男免费午夜| 亚洲国产欧美网| 国产精品 国内视频| 夜夜骑夜夜射夜夜干| 国产精品.久久久| 国产av国产精品国产| 免费黄频网站在线观看国产| 亚洲色图 男人天堂 中文字幕| 不卡av一区二区三区| 啦啦啦 在线观看视频| 9色porny在线观看| 少妇被粗大的猛进出69影院| 三级毛片av免费| 国产亚洲av高清不卡| 天天添夜夜摸| 色播在线永久视频| 国产91精品成人一区二区三区 | 国产精品久久久久久精品古装| 亚洲五月婷婷丁香| 在线观看66精品国产| cao死你这个sao货| 免费在线观看视频国产中文字幕亚洲| 国产三级黄色录像| av国产精品久久久久影院| 亚洲成av片中文字幕在线观看| 搡老熟女国产l中国老女人| 老鸭窝网址在线观看| 国产一区二区三区视频了| 欧美黄色淫秽网站| 另类亚洲欧美激情| 中文字幕av电影在线播放| 满18在线观看网站| 热re99久久精品国产66热6| 日韩一卡2卡3卡4卡2021年| 在线永久观看黄色视频| 丁香六月欧美| 日韩人妻精品一区2区三区| 天堂俺去俺来也www色官网| 国产国语露脸激情在线看| 欧美在线黄色| 欧美久久黑人一区二区| 99国产综合亚洲精品| 中文亚洲av片在线观看爽 | 免费在线观看日本一区| 精品久久久久久久毛片微露脸| 亚洲专区国产一区二区| 女警被强在线播放| 久久亚洲精品不卡| 国产成人精品久久二区二区免费| 在线看a的网站| 欧美日韩亚洲高清精品| 一级片'在线观看视频| 亚洲精品美女久久久久99蜜臀| 久久精品亚洲熟妇少妇任你| 最近最新免费中文字幕在线| 国产在线一区二区三区精| 自拍欧美九色日韩亚洲蝌蚪91| 中文字幕人妻丝袜一区二区| 欧美激情久久久久久爽电影 | 国产精品影院久久| www.熟女人妻精品国产| 女人久久www免费人成看片| 手机成人av网站| 麻豆国产av国片精品| 欧美在线黄色| 真人做人爱边吃奶动态| 日本a在线网址| 欧美日韩亚洲高清精品| 色综合欧美亚洲国产小说| 国产欧美日韩精品亚洲av| 老司机亚洲免费影院| 青草久久国产| 日韩 欧美 亚洲 中文字幕| 国产日韩一区二区三区精品不卡| 久久中文字幕一级| 日韩一区二区三区影片| 国产成人av教育| 18禁国产床啪视频网站| 91精品国产国语对白视频| 老鸭窝网址在线观看| 欧美+亚洲+日韩+国产| 亚洲第一欧美日韩一区二区三区 | 一级片'在线观看视频| 精品人妻熟女毛片av久久网站| 夜夜骑夜夜射夜夜干| 麻豆国产av国片精品| 亚洲第一av免费看| 99精品在免费线老司机午夜| a级毛片在线看网站| 人人妻,人人澡人人爽秒播| 熟女少妇亚洲综合色aaa.| 人妻 亚洲 视频| 久久久久久亚洲精品国产蜜桃av| 大片电影免费在线观看免费| 国产精品影院久久| e午夜精品久久久久久久| 在线观看人妻少妇| 菩萨蛮人人尽说江南好唐韦庄| 乱人伦中国视频| 热99re8久久精品国产| 亚洲少妇的诱惑av| 熟女少妇亚洲综合色aaa.| av视频免费观看在线观看| 制服人妻中文乱码| 欧美激情 高清一区二区三区| 我的亚洲天堂| 精品人妻熟女毛片av久久网站| 午夜日韩欧美国产| 欧美在线一区亚洲| 亚洲专区中文字幕在线| 夫妻午夜视频| 中文字幕人妻丝袜制服| 超碰97精品在线观看| 欧美日韩亚洲国产一区二区在线观看 | 午夜两性在线视频| 一本色道久久久久久精品综合| 亚洲精品美女久久av网站| 97人妻天天添夜夜摸| 国产欧美日韩综合在线一区二区| 少妇被粗大的猛进出69影院| 9191精品国产免费久久| 桃红色精品国产亚洲av| 国产淫语在线视频|