
    Adaptive deep residual network for single image super-resolution

2019-02-27 10:37:14
Computational Visual Media, 2019, Issue 4

    Shuai Liu, Ruipeng Gang, Chenghua Li, and Ruixia Song

Abstract In recent years, deep learning has achieved great success in the field of image processing. In the single image super-resolution (SISR) task, the convolutional neural network (CNN) extracts image features through deeper layers and has achieved impressive results. In this paper, we propose a single image super-resolution model based on adaptive deep residuals, named ADR-SR, which uses the Input Output Same Size (IOSS) structure and removes the dependence on upsampling layers found in existing SR methods. Specifically, the key element of our model is the Adaptive Residual Block (ARB), which replaces the commonly used constant factor with an adaptive residual factor. Experiments prove the effectiveness of our ADR-SR model, which not only reconstructs images with better visual effects but also achieves better objective performance.

Keywords single image super-resolution (SISR); adaptive deep residual network; deep learning

    1 Introduction

Single Image Super-Resolution (SISR) is a classic and important task in the field of computer vision. Its main purpose is to reconstruct a High Resolution (HR) image from a Low Resolution (LR) image through Super-Resolution (SR) technology. SISR is widely applicable to safety monitoring, medical treatment, automatic driving, etc.

In essence, SISR is an irreversible process. At present, simple and fast super-resolution methods mostly use light-field, patch-based, and interpolation approaches [1-6], all of which rely on the assumption of smooth transitions between adjacent pixels. However, interpolation methods cause aliasing and ringing effects at image discontinuities [7].

With the development of deep learning in recent years, Convolutional Neural Networks (CNNs) have made breakthroughs in computer vision tasks such as classification [8], detection [9], and semantic segmentation [10]. In the field of super-resolution, the main strength of CNN-based methods is that they fit the complex mapping between LR and HR images more directly, which enables better recovery of missing high-frequency information (such as edges and textures); their performance therefore goes beyond many classic methods.

Based on the EDSR [11] model, we propose a single image super-resolution model named ADR-SR, as shown in Fig. 1(b), which is a new SR model whose input and output have the same size. ADR-SR removes the dependence on upsampling layers found in existing deep learning SR methods and constructs a one-to-one mapping from LR pixels to HR pixels. The Adaptive Residual Block (ARB) is embedded in ADR-SR to enhance its adaptive ability and improve its objective performance.

    In summary, the main contributions of this paper are as follows:

· We propose an Input Output Same Size (IOSS) structure for the same-size super-resolution task, which removes the dependence on upsampling layers found in existing deep learning SR methods. IOSS solves the SR task when the input and output sizes are the same, matching actual needs.

Fig. 1 Comparison of (a) the EDSR-baseline structure with (b) our ADR-SR structure. Note that our ADR-SR has no upsampling layers and uses the Adaptive Residual Block (ARB). The position of the global residual is modified, and the depth and width of the network are also modified.

· We propose an Adaptive Residual Block (ARB) based on an adaptive residual factor, which solves the poor adaptability caused by a constant residual factor. Each channel in the ARB has its own adaptive residual factor, and both adaptive ability and learning ability improve considerably.

· We propose a new idea for super-resolution network design. In some cases, increasing the width of the network yields a significant performance improvement and faster convergence.

    2 Related works

    2.1 Super-resolution model

Depending on whether the input and output sizes are the same, deep-learning-based super-resolution models fall into two types: models with different input and output sizes, and models with the same input and output size.

The first type, models with different input and output sizes, such as SRResNet [12], LapSRN [13], and EDSR [11], reconstructs a large image from a small one. The key operation is to increase the image size with an upsampling layer in order to obtain a high-resolution output image. Commonly used upsampling layers include pixel shuffle and transposed convolution. In essence, the first type builds a one-to-many mapping from LR pixels to HR pixels. The upsampling layer of EDSR is placed at the end of the entire network, and the features after the upsampling layer form the output image, so EDSR increases the dependence on the upsampling layer. The one-to-many mapping is very unstable, and such models cannot easily be adapted to the second type of task.
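To make the one-to-many mapping concrete, the following is a minimal PyTorch sketch (illustrative, not the paper's code) of a typical first-type tail: a convolution expands the channels by the square of the scale factor, and a pixel-shuffle layer rearranges them so that each LR pixel produces scale² HR pixels.

import torch
import torch.nn as nn

scale = 2
tail = nn.Sequential(
    nn.Conv2d(64, 64 * scale ** 2, kernel_size=3, padding=1),  # expand channels by scale^2
    nn.PixelShuffle(scale),  # (B, 64*4, H, W) -> (B, 64, 2H, 2W)
)

lr_features = torch.randn(1, 64, 24, 24)
hr_features = tail(lr_features)
print(hr_features.shape)  # torch.Size([1, 64, 48, 48]): each input pixel yields 4 output pixels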

The second type, models with the same input and output size, such as SRCNN [14], DRRN [15], and VDSR [16], is more suitable for practical applications such as mobile phones, cameras, and other mobile devices. Because camera quality is limited, the photos we take are not sharp, which means super-resolution processing is needed. Importing a captured photo directly into the network to reconstruct a high-resolution photo of the same size better matches the needs of camera equipment. This second type of task is the focus and difficulty of future super-resolution research and applications, but there are few studies at present; it has only begun to attract attention in recent years. When constructing a dataset, the high-resolution images are down-sampled and then up-sampled using bicubic interpolation to obtain low-resolution images of the same size, as sketched below. Since the input and output are the same size, no additional upsampling layer is needed in the network, and thus we can construct a one-to-one mapping from each LR pixel to its corresponding HR pixel, which is more stable than a one-to-many mapping.
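The same-size LR construction described above can be sketched as follows (a minimal example assuming Pillow; hr_path and scale are illustrative names):

from PIL import Image

def make_same_size_lr(hr_path: str, scale: int = 2) -> Image.Image:
    # Bicubic down-sampling followed by bicubic up-sampling back to the
    # original size: the LR image matches the HR image in size but has
    # lost its high-frequency content.
    hr = Image.open(hr_path).convert("RGB")
    w, h = hr.size
    small = hr.resize((w // scale, h // scale), Image.BICUBIC)
    return small.resize((w, h), Image.BICUBIC)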

The comparison between the two types in Fig. 2 clearly shows that the first type of model reconstructs 4 output pixels from 1 input pixel when the scale is 2. The pixel ratio of input to output is 1:4 (1:16 when the scale is 4), so the input information is seriously insufficient, the spatial position of each output pixel also needs to be learned, and the pressure on the network is large, making it unstable. The second type of model reconstructs 1 output pixel from 1 input pixel, preserves the spatial position, and reduces the pressure on the network, so the overall performance of the network is greatly improved.

    2.2 Residual block and residual scale factor

The residual block proposed by He et al. [17] adds the learned features to the residual path, which alleviates gradient vanishing and gradient explosion in deep networks, allows deeper networks to be trained successfully, and performs well. SRResNet [12] first used the residual block in the SR task and deleted the ReLU activation layer between connected residual blocks. EDSR [11] modified the residual block of SRResNet by deleting the batch normalization (BN) layer and multiplying the learned features by a constant residual scale factor (0.1 by default) before adding them to the residual path. This suppresses the features to reduce the change in the residual, which is conducive to fast convergence in the early stage of training. However, multiplying all features by a constant residual scale factor forms a simple linear mapping, and the lack of a nonlinear factor leaves the network unable to handle more complex situations and reduces its learning ability.
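The EDSR-style block described above can be sketched as follows (a minimal PyTorch version; the channel width is illustrative). Note that the constant factor res_scale is a fixed linear suppression applied to every channel alike:

import torch.nn as nn

class EDSRBlock(nn.Module):
    def __init__(self, channels: int = 64, res_scale: float = 0.1):
        super().__init__()
        # No batch normalization, and no ReLU after the second convolution.
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
        )
        self.res_scale = res_scale  # constant residual scale factor

    def forward(self, x):
        return x + self.body(x) * self.res_scale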

    Fig. 2 Comparison of input and output between the first type and the second type of network. (a) The first type of network, which has different input and output sizes, reconstructs 4 output pixels from 1 input pixel. (b) The second type of network, which has the same input and output size, reconstructs 1 output pixel from 1 input pixel.

    2.3 Squeeze and excitation module

CNNs are characterized by a series of convolution layers, nonlinear layers, and down-sampling layers. This structure enables CNNs to extract features with global receptive fields. Moreover, the performance of CNNs can be greatly enhanced by adding multi-scale (Inception [18]), attention [19], context (Inside-Outside [20]), and other spatial feature enhancement mechanisms.

The Squeeze-and-Excitation Network (SENet [21]) enhances feature extraction by building a Squeeze-and-Excitation (SE) module, which explicitly models the relationships between the feature channels of a convolution layer. The SE module consists of two operations: squeeze and excitation. The squeeze operation compresses each 2-dimensional feature channel into a single value by global average pooling, producing an output vector with global receptive-field features (its dimension equals the number of channels, say C). The excitation operation learns the relationship between channels as a weight vector (its dimension is still C). The SE module then uses this weight vector to enhance or suppress individual feature channels.
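A minimal PyTorch sketch of the SE module as described (the reduction ratio of 16 follows the common SENet setting and is an assumption here, not a value from this paper):

import torch.nn as nn

class SEModule(nn.Module):
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.squeeze = nn.AdaptiveAvgPool2d(1)  # squeeze: (B, C, H, W) -> (B, C, 1, 1)
        self.excite = nn.Sequential(            # excitation: learn a C-dim weight vector
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),                       # per-channel weights in [0, 1]
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        w = self.excite(self.squeeze(x).view(b, c)).view(b, c, 1, 1)
        return x * w  # enhance or suppress each feature channel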

Since different feature maps encode different image characteristics [8] (such as contours, colors, and regions), different features have different importance for the super-resolution task. Therefore, the feature-map recalibration performed by the SE module is bound to improve the performance of a super-resolution model. This is one of the main motivations of this paper.

    2.4 Deeper and wider model

For the classification task, the residual network (ResNet [17]) won the ILSVRC [22] championship and greatly improved model accuracy. ResNet has 152 layers; deeper layers mean deeper semantic features, which strongly aid the network's understanding. In the super-resolution task, SRCNN [14] was the first network to use a CNN, with only about 3 convolution layers; SRResNet [12] embeds residual blocks in the network, with 15 residual blocks; VDSR [16] uses a global residual structure to perform residual learning on the high-frequency information of the image and uses gradient clipping to stabilize gradient propagation. Its authors also proposed the theory of "the deeper, the better", so VDSR has 20 convolution layers. EDSR [11] modifies SRResNet and has 32 residual blocks, but its training time also increases.

    3 Proposed method

We choose EDSR-baseline [11] as the base model (shown in Fig. 1(a)). EDSR (Enhanced Deep Residual Network) is a modification of SRResNet [12]: not only is the number of parameters reduced, but the performance is also significantly improved. EDSR won first place in the internationally renowned NTIRE 2017 Super-Resolution Challenge, representing the highest level in the super-resolution field at the time. However, EDSR cannot solve the same-size super-resolution task and has poor adaptability. To make up for these shortcomings, we propose a new super-resolution network named ADR-SR, which uses the Input Output Same Size (IOSS) structure to ensure that input and output have the same size (see Section 3.1), embeds the Adaptive Residual Block (ARB) in the network to enhance adaptive ability (see Section 3.2), and follows a new design idea that increases the width of the network (see Section 3.3).

    3.1 Network structure

In this paper, we propose an Input Output Same Size structure, named IOSS, for the second type of task (Section 2.1). The upsampling layer of the base model is redundant because no upsampling operation is required. The convolution layer before the upsampling layer expands the number of feature maps so that they can be upsampled properly, but it too becomes redundant once the upsampling layer is deleted. IOSS deletes these redundant layers, which reduces both the complexity of the network and its number of parameters. In addition, IOSS moves the global residual connection from the first layer to the network input, in order to better accommodate the second type of task. The gray layers of the base model in Fig. 1(a) are the redundant layers to be deleted. The IOSS structure can be applied not only to the super-resolution task but also to other image processing tasks.

    3.2 Adaptive residual block

To restore the nonlinear mapping that the base model loses by using a constant residual scale factor, we propose an Adaptive Residual Block, named ARB. As shown in Fig. 1(b), ARB uses the SE module to obtain the importance of the different feature channels (adaptive residual scale factors), which replace the constant scale factor, so that each channel has its own adaptive residual scale factor, enhancing adaptivity and nonlinearity. Because the features are still suppressed, the advantage of rapid convergence at the beginning of training is preserved.

Therefore, the ARB can be expressed as
$$P_1 = \sigma(K^*(B_{i-1})), \qquad B_i = B_{i-1} + \mathrm{SE}(K(P_1))$$
where $B_{i-1}$ is the output of the $(i-1)$-th residual block, $K^*$ is a convolution operation whose channel width is 192, and $\sigma$ denotes the ReLU activation function. A 3×3 convolution operation $K$ is applied to $P_1$ inside the local residual branch, mixing and compressing the feature maps into 32 channels. This output then enters the SE module, denoted $\mathrm{SE}$. Finally, the result is added to the block input $B_{i-1}$ to obtain the local residual.
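The two equations above translate directly into the following sketch (reusing the SEModule sketched in Section 2.3; the kernel size of K* and other layer options are assumptions):

import torch.nn as nn

class ARB(nn.Module):
    def __init__(self, in_channels: int = 32, width: int = 192):
        super().__init__()
        self.expand = nn.Conv2d(in_channels, width, 3, padding=1)    # K*: width 192
        self.act = nn.ReLU(inplace=True)                             # sigma
        self.compress = nn.Conv2d(width, in_channels, 3, padding=1)  # K: back to 32 channels
        self.se = SEModule(in_channels)                              # adaptive residual scale factors

    def forward(self, x):                        # x = B_{i-1}
        p1 = self.act(self.expand(x))            # P_1 = sigma(K*(B_{i-1}))
        return x + self.se(self.compress(p1))    # B_i = B_{i-1} + SE(K(P_1))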

The global residual of this paper can be expressed as
$$y = x + K_l(B_n)$$
where $B_0$ is the input of the local residual blocks, $B_n$ is the output of the last residual block, and $K_l$ is a convolution operation with 3 output feature channels. Adding this output to the LR image $x$ forms the global residual and yields the final super-resolution image $y$.
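Putting IOSS, the ARBs, and the global residual together gives the following end-to-end sketch (head and tail layer details beyond those stated in the text are assumptions):

import torch.nn as nn

class ADRSR(nn.Module):
    def __init__(self, n_blocks: int = 16, channels: int = 32):
        super().__init__()
        self.head = nn.Conv2d(3, channels, 3, padding=1)                   # B_0 = head(x)
        self.body = nn.Sequential(*[ARB(channels) for _ in range(n_blocks)])
        self.tail = nn.Conv2d(channels, 3, 3, padding=1)                   # K_l: 3 output channels

    def forward(self, x):  # x: LR image, same size as the output
        # No upsampling layer anywhere (IOSS); the global residual
        # connects the network input directly to the output.
        return x + self.tail(self.body(self.head(x)))  # y = x + K_l(B_n)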

    Fig. 3 Effects of adding SE module in different positions on the model.

Adding the SE module after either the first or the second convolution layer in the residual block produces the feature-suppression effect, but in the former case the suppressed features also pass through the activation function and the second convolution layer, which weakens the suppression again. In the latter case, the SE module suppresses the features after the second convolution layer, and since the residual addition then follows directly, the suppression effect remains intact. The comparison of the different configurations is shown in Fig. 3. The performance with the SE module after the second convolution layer differs little from the other cases; however, although the PSNR on the validation set is lower at the initial stage of training, the model converges faster and more stably. Therefore, the SE module is placed after the second convolution layer in our ARB. It is worth noting that the PSNR of the model without the SE module is relatively low, while the additional complexity brought by the SE module is minimal (2%-10% additional parameters, <1% additional computation [21]), which verifies the effectiveness of our ARB.

    As shown in Fig. 4, we compare the residual block structure of different models including the original residual block, the SRResNet residual block, the EDSR residual block, and our ARB.

    3.3 The increase of channel width

For the super-resolution task, when constructing a mapping from LR pixels to HR pixels, a wider network can in some cases achieve similar or even better results than a deeper network. Excessive network depth does not bring large gains but does increase training cost.

In this section, we compare the effects of increasing the width versus increasing the depth of the model, with the number of parameters kept approximately the same (about 0.3M). As shown in Fig. 5(a), the horizontal axis is the number of training epochs and the vertical axis is the PSNR on the validation dataset. The model with depth 16 and width 32 is the control group. The model with depth 16 and width 64 (depth unchanged, width doubled) clearly improves numerically, whereas the model with depth 32 and width 32 (depth doubled, width unchanged) performs slightly below the control group. In Fig. 5(b), we provide another set of experiments that verifies these points.

Fig. 4 Comparison of different residual block structures. (a) The original residual block proposed in ResNet. (b) The SRResNet residual block, which removes the last activation function from the original residual block. (c) The EDSR residual block, which removes batch normalization from the SRResNet residual block and adds a fixed factor of 0.1. (d) Our Adaptive Residual Block (ARB), which replaces the fixed factor with the SE module to increase adaptability.

    Fig. 5 Effects of increasing depth and increasing width on the model.

Based on the above conclusions, we propose a new idea for super-resolution model design: compared with increasing the depth of the network, increasing its width better suits the image restoration task, since the reconstruction benefits as much or more from additional shallow features as from deeper ones. Compared with the base model, we increase the width inside the residual block about 3 times, from 64 to 192, so that the model has more shallow features. In addition, to balance the number of parameters and the training time, we halve the number of input feature channels of the residual block, from 64 to 32. The constant n is the number of residual blocks, and our ADR-SR and the EDSR-baseline have the same number of residual blocks (n = 16).

    4 Experiment

    4.1 Datasets and evaluation performances

Following the settings of EDSR [11], we train our ADR-SR on the DIV2K [23] dataset and evaluate it on four standard benchmark datasets (Set5 [24], Set14 [25], B100 [26], and Urban100 [27]). DIV2K contains 1000 2K high-definition images: 800 training images, 100 validation images, and 100 testing images. To construct the LR training images, we first use bicubic interpolation to reduce the original HR images by different scales and then interpolate them back to the original size. In this paper, objective quality is evaluated by two metrics: Peak Signal-to-Noise Ratio (PSNR) and Structural Similarity Index (SSIM).

    4.2 Training details

During training, RGB images are randomly cropped into 96×96 pixel patches as input. Data augmentation includes random horizontal flips, random vertical flips, and random 90-degree rotations. Pre-processing consists of mean removal (subtracting the mean of the training set so that the input has zero mean) and normalization (dividing by the variance of the training set so that the input has unit variance). The optimizer is Adam [28] with hyperparameters β1 = 0.9, β2 = 0.999, and ε = 10^-8. The batch size is 16; the learning rate is initialized to 0.001 and is halved at 200k, 400k, 600k, and 800k iterations. The loss function is the L1 loss. Training runs on NVIDIA Titan Xp GPUs with the PyTorch framework.
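The training setup above can be sketched as follows (assuming the ADRSR model sketched earlier and an existing loader of (lr_batch, hr_batch) pairs; names are illustrative):

import torch

model = ADRSR().cuda()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3,
                             betas=(0.9, 0.999), eps=1e-8)
# Halve the learning rate at 200k, 400k, 600k, and 800k iterations.
scheduler = torch.optim.lr_scheduler.MultiStepLR(
    optimizer, milestones=[200_000, 400_000, 600_000, 800_000], gamma=0.5)
criterion = torch.nn.L1Loss()

def train_step(lr_batch, hr_batch):
    optimizer.zero_grad()
    loss = criterion(model(lr_batch), hr_batch)  # L1 loss
    loss.backward()
    optimizer.step()
    scheduler.step()  # stepped per iteration to match the milestones above
    return loss.item()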

    4.3 Experimental results

As shown in Table 1, we test the performance of different algorithms on the standard benchmark datasets and give quantitative results. The compared models include LapSRN [13], VDSR [16], DRRN [15], SRResNet [12], SRDenseNet [29], CARN [30], MCAN [31], EDSR-baseline [11], and our ADR-SR. The first and second columns give the benchmark dataset and the corresponding scale. The table reports the quantitative (PSNR/SSIM) results of the various models on different datasets and at different scales, with the best results shown in bold.

To ensure the fairness of the experimental data, we reproduce the test results of the comparison models ourselves, using pre-trained models taken from their open-source releases. The datasets are constructed with bicubic interpolation, and PSNR and SSIM are calculated on the three channels of RGB space. There are slight deviations from the originally published numbers, because some comparison models use different data construction methods and some of the original papers calculate PSNR and SSIM on the Y channel of YCbCr space.
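For reference, PSNR over all three RGB channels can be computed as in the following sketch (images assumed to be scaled to [0, 1]):

import torch

def psnr_rgb(sr: torch.Tensor, hr: torch.Tensor, max_val: float = 1.0) -> float:
    mse = torch.mean((sr - hr) ** 2)  # mean over batch, RGB channels, height, width
    return (10 * torch.log10(max_val ** 2 / mse)).item()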

Table 1 Quantitative comparison with state-of-the-art methods for ×2 and ×4 SR with the bicubic degradation model

Table 1 shows that, at scale 2, our ADR-SR is optimal in objective performance on the different test sets, with PSNR and SSIM higher than the second-best method. Owing to the error caused by the different data construction methods, on the DIV2K validation set at scale 4 the PSNR and SSIM of ADR-SR are slightly lower than those of the EDSR-baseline model, by 0.02 and 0.001 respectively, but on the other datasets our ADR-SR is clearly higher.

Experiments show that our ADR-SR achieves comparatively good visual effects and objective performance at different scales on the different standard benchmark datasets, with clear advantages in image clarity, spatial similarity, and texture detail.

To further verify the validity of ADR-SR, we select some images from the Urban100 and DIV2K datasets and compare the results with LapSRN, VDSR, DRRN, CARN, MCAN, and the EDSR-baseline; the bicubic interpolation result is also shown as a reference. As shown in Fig. 6 and Fig. 7, the red dotted boxes highlight the clear advantages of our ADR-SR. The experimental results show that ADR-SR achieves a better super-resolution effect than the other models when dealing with object edges: edges are more distinct, detail missing from many other models is reconstructed, and the visual effect is greatly improved.

    5 Conclusions

In summary, we propose a single image super-resolution model named ADR-SR based on adaptive deep residuals, which can be used for super-resolution tasks where input and output images have the same size. The visual effects and objective performance of the experiments demonstrate the effectiveness of ADR-SR. The specific innovations are: (1) the Input Output Same Size (IOSS) structure for the same-size super-resolution task; (2) the Adaptive Residual Block (ARB), which greatly improves adaptive ability and convergence speed; (3) a new idea for super-resolution network design, which increases the width of the network instead of its depth to obtain additional performance improvements.

    Fig. 6 Comparison of experimental effects of Urban100 dataset.

    Acknowledgements

    This work was supported in part by National Natural Science Foundation of China (No. 61571046)and National Key R&D Program of China (No.2017YFF0209806).

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made.

    The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use,you will need to obtain permission directly from the copyright holder.

    To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

Other papers from this open access journal are available free of charge from http://www.springer.com/journal/41095. To submit a manuscript, please go to https://www.editorialmanager.com/cvmj.
