
    Adaptive deep residual network for single image super-resolution

Computational Visual Media, 2019, Issue 4

    Shuai Liu, Ruipeng Gang, Chenghua Li, and Ruixia Song

Abstract In recent years, deep learning has achieved great success in the field of image processing. In the single image super-resolution (SISR) task, the convolutional neural network (CNN) extracts image features through deeper layers and has achieved impressive results. In this paper, we propose a single image super-resolution model based on adaptive deep residuals, named ADR-SR, which uses the Input Output Same Size (IOSS) structure and removes the dependence on upsampling layers found in existing SR methods. Specifically, the key element of our model is the Adaptive Residual Block (ARB), which replaces the commonly used constant factor with an adaptive residual factor. Experiments prove the effectiveness of our ADR-SR model, which not only reconstructs images with better visual effects but also achieves better objective performance.

Keywords single image super-resolution (SISR); adaptive deep residual network; deep learning

    1 Introduction

Single Image Super-Resolution (SISR) is a classic and important task in the field of computer vision. Its main purpose is to reconstruct a High Resolution (HR) image from a Low Resolution (LR) image through Super-Resolution (SR) technology. SISR is widely applicable to safety monitoring, medical treatment, automatic driving, etc.

In essence, SISR is an irreversible process. At present, simple and fast super-resolution methods mostly use light-field, patch-based, and interpolation techniques [1-6], all of which rely on the assumption of smooth transitions between adjacent pixels. However, interpolation methods cause aliasing and ringing effects at image discontinuities [7].

With the development of deep learning in recent years, the Convolutional Neural Network (CNN) has made breakthroughs in computer vision tasks such as classification [8], detection [9], and semantic segmentation [10]. In the field of Super-Resolution, CNN-based methods can fit the complex mapping between LR and HR images more directly, enabling better recovery of missing high-frequency information (such as edges and textures), so their performance surpasses that of many classic methods.

Based on the EDSR [11] model, we propose a single image super-resolution model named ADR-SR, as shown in Fig. 1(b), which is a new SR model whose input and output have the same size. ADR-SR removes the dependence on upsampling layers found in existing deep learning SR methods and constructs a one-to-one mapping from LR pixels to HR pixels. The Adaptive Residual Block (ARB) is embedded in ADR-SR to enhance adaptive ability and improve objective performance.

    In summary, the main contributions of this paper are as follows:

·We propose an Input Output Same Size (IOSS) structure for the same-size super-resolution task, which removes the dependence on upsampling layers found in existing deep learning SR methods. IOSS solves the SR task in which input and output sizes are identical, matching practical needs.

Fig. 1 Comparison of (a) the EDSR-baseline structure with (b) our ADR-SR structure. Note that our ADR-SR has no upsampling layers and uses the Adaptive Residual Block (ARB). The position of the global residual is modified, and the depth and width of the network are also changed.

·We propose an Adaptive Residual Block (ARB) based on an adaptive residual factor, which solves the poor adaptability caused by a constant residual factor. Each channel in the ARB has its own adaptive residual factor, improving both adaptability and learning ability.

·We propose a new idea for Super-Resolution network design. In some cases, increasing the width of the network yields a significant performance improvement and faster convergence.

    2 Related works

    2.1 Super-resolution model

According to whether the input and output sizes are the same, deep-learning-based Super-Resolution models fall into two types: models with different input and output sizes, and models with the same input and output size.

The first type of task uses models with different input and output sizes, such as SRResNet [12], LapSRN [13], and EDSR [11], which reconstruct a large image from a small one. The key operation is to increase the image size with an upsampling layer in order to obtain a high-resolution output image. Commonly used upsampling layers include pixel shuffle and transposed convolution. The essence of the first type of task is to build a one-to-many mapping from LR pixels to HR pixels. The upsampling layer of EDSR is placed at the end of the entire network, and the feature map after the upsampling layer is the output image, so EDSR increases the dependence on the upsampling layer. This one-to-many mapping is unstable and adapts poorly to the second type of task. A sketch of such an upsampling tail is shown below.
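For illustration, here is a minimal PyTorch sketch (the paper trains with PyTorch) of a first-type ×2 upsampling tail built around pixel shuffle; the layer sizes and names are our assumptions, not taken from any specific model.

```python
import torch
import torch.nn as nn

# A convolution expands the channel count by scale^2, then PixelShuffle
# rearranges those channels into spatial positions, so each LR pixel
# becomes scale^2 HR pixels (the one-to-many mapping described above).
scale = 2
upsample_tail = nn.Sequential(
    nn.Conv2d(64, 64 * scale ** 2, kernel_size=3, padding=1),  # expand channels
    nn.PixelShuffle(scale),                                    # (C*s^2, H, W) -> (C, sH, sW)
    nn.Conv2d(64, 3, kernel_size=3, padding=1),                # map features to RGB
)

x = torch.randn(1, 64, 48, 48)   # an LR feature map
print(upsample_tail(x).shape)    # torch.Size([1, 3, 96, 96]): 1 pixel -> 4 pixels
```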

The second type of task uses models with the same input and output size, such as SRCNN [14], DRRN [15], and VDSR [16]. These are more suitable for practical applications on mobile phones, cameras, and other mobile devices. Because camera quality is limited, captured photos are often unclear, which calls for Super-Resolution processing. Importing the captured photo directly into the network to reconstruct a high-resolution photo of the same size fits the needs of camera equipment well. The second type of task is the focus and the difficulty of future Super-Resolution research and application, but there are few studies at present, and it has only begun to attract attention in recent years. When constructing a dataset, the high-resolution images are down-sampled and then up-sampled using bicubic interpolation to obtain low-resolution images of the same size. Since the input and output are the same size, no additional upsampling layer is needed in the network, and thus we can construct a one-to-one mapping from each LR pixel to the corresponding HR pixel, which is more stable than a one-to-many mapping.

The comparison between the two types shown in Fig. 2 makes this clear: the first type of model reconstructs 4 output pixels from 1 input pixel when the scale is 2. The pixel ratio between input and output is 1:4 (1:16 when the scale is 4), so the input information is seriously insufficient, the spatial position of each output pixel must also be learned, and the network is under large pressure and unstable. The second type of model reconstructs 1 output pixel from 1 input pixel, preserves spatial position, and reduces the pressure on the network, so overall performance improves greatly.

    2.2 Residual block and residual scale factor

The residual block proposed by He et al. [17] adds the learned features to the residual, which alleviates gradient vanishing and gradient explosion in deep networks, allows deeper networks to be trained successfully, and performs well. SRResNet [12] first used the residual block in the SR task and deleted the ReLU activation layer between connected residual blocks. EDSR [11] modified the residual block of SRResNet by deleting the batch normalization (BN) layers and multiplying the learned features by a constant residual scale factor (0.1 by default) before adding them to the residual. Suppressing the features reduces the change in the residual, which aids fast convergence in the early stage of training. However, multiplying all features by a constant residual scale factor forms a simple linear mapping; the lack of a nonlinear factor prevents the network from handling more complex situations and reduces its learning ability.
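A minimal PyTorch sketch of the EDSR-style residual block described above; the module name is ours, and the structure follows the description (conv-ReLU-conv, no BN, constant residual scale factor of 0.1).

```python
import torch.nn as nn

class EDSRBlock(nn.Module):
    """EDSR-style residual block: conv-ReLU-conv with no batch normalization;
    the learned features are multiplied by one constant scale factor before
    being added back, which is the purely linear mapping criticized above."""
    def __init__(self, channels=64, res_scale=0.1):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
        )
        self.res_scale = res_scale  # the same constant for every channel

    def forward(self, x):
        return x + self.body(x) * self.res_scale
```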

    Fig. 2 Comparison of input and output between the first type and the second type of network. (a) The first type of network, which has different input and output sizes, reconstructs 4 output pixels from 1 input pixel. (b) The second type of network, which has the same input and output size, reconstructs 1 output pixel from 1 input pixel.

    2.3 Squeeze and excitation module

CNN is characterized by a series of convolution layers, nonlinear layers, and down-sampling layers. This structure enables a CNN to extract features with global receptive fields. Moreover, CNN performance can be greatly enhanced by adding multi-scale (Inception [18]), attention [19], context (Inside-Outside [20]), and other spatial feature enhancement mechanisms.

The Squeeze and Excitation Network (SENet [21]) enhances feature extraction through a Squeeze and Excitation (SE) module, which explicitly models the relationships between the feature channels of a convolution layer. The SE module consists of two operations, squeeze and excitation. The squeeze operation compresses each 2-dimensional feature channel into a single value by global average pooling, producing an output vector with globally aggregated features (its dimension equals the number of channels, assumed to be C). The excitation operation learns the relationship between channels by learning a weight vector (its dimension is still C). The SE module then uses this weight vector to enhance or suppress individual feature channels.
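A minimal PyTorch sketch of the SE module as described; the reduction ratio of 16 is SENet's default and an assumption here, since the paper does not state the value it uses.

```python
import torch.nn as nn

class SEModule(nn.Module):
    """Squeeze: global average pooling turns each (H, W) feature map into one
    value, giving a C-dimensional descriptor. Excitation: two linear layers
    learn a C-dimensional weight vector that rescales the channels."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.squeeze = nn.AdaptiveAvgPool2d(1)
        self.excitation = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),  # per-channel weights in (0, 1)
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        w = self.squeeze(x).view(b, c)           # (B, C, H, W) -> (B, C)
        w = self.excitation(w).view(b, c, 1, 1)  # learned channel weights
        return x * w                             # enhance or suppress each channel
```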

Since different feature maps encode different image characteristics [8] (such as contour, color, and region), different features have different importance for the Super-Resolution task. Therefore, the per-channel recalibration performed by the SE module is bound to improve the performance of a Super-Resolution model. This is one of the main motivations of this paper.

    2.4 Deeper and wider model

For the classification task, the residual network (ResNet [17]) won the ILSVRC [22] championship and greatly improved model accuracy; ResNet has 152 layers. Deeper layers produce deeper semantic features, which strongly aid the network's understanding. In the Super-Resolution task, SRCNN [14] was the first network to use a CNN, with only about 3 convolution layers; SRResNet [12] embedded residual blocks in the network, with 15 residual blocks; VDSR [16] used a global residual structure to perform residual learning on the high-frequency information of the image and used gradient clipping to stabilize gradient propagation; its authors also proposed the theory of "the deeper, the better", so VDSR has 20 convolution layers. EDSR [11] modified SRResNet and has 32 residual blocks, but the training time also increases.

    3 Proposed method

We choose EDSR-baseline [11] as the base model (shown in Fig. 1(a)). EDSR (Enhanced Deep Residual Networks) was modified from SRResNet [12]; not only is the number of parameters reduced, but the performance is also significantly improved. EDSR won first place in the NTIRE 2017 Super-Resolution Challenge, representing the state of the art in Super-Resolution at the time. However, EDSR cannot solve the same-size super-resolution task and has poor adaptability. To make up for these shortcomings, we propose a new super-resolution network named ADR-SR, which uses the Input Output Same Size (IOSS) structure to ensure that input and output have the same size (see Section 3.1), embeds the Adaptive Residual Block (ARB) in the network to enhance adaptive ability (see Section 3.2), and follows a new design idea that increases the width of the network (see Section 3.3).

    3.1 Network structure

In this paper, we propose an Input Output Same Size structure named IOSS for the second type of task (Section 2.1). The upsampling layer in the base model is redundant because no upsampling operation is required. The convolution layer before the upsampling layer, which expands the number of feature maps so that they can be upsampled properly, also becomes redundant once the upsampling layer is deleted. IOSS deletes these redundant layers, which reduces both the complexity of the network and the number of parameters. In addition, IOSS moves the starting point of the global residual from the first layer to the network input, to better accommodate the second type of task. The gray layers of the base model in Fig. 1(a) are the redundant layers to be deleted. The IOSS structure can be applied not only to the Super-Resolution task but also to other image processing tasks. A sketch of the resulting structure follows.
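A minimal PyTorch sketch of the IOSS idea under our own simplifications (a single head/tail convolution, a generic block type standing in for the ARB): no upsampling layer appears anywhere, and the global residual is taken from the network input.

```python
import torch.nn as nn

class IOSSNet(nn.Module):
    """Same-size SR network: head conv, a chain of residual blocks, tail conv
    back to 3 channels, and a global skip connection from the network input,
    so the output has exactly the input's spatial size."""
    def __init__(self, block, n_blocks=16, channels=32):
        super().__init__()
        self.head = nn.Conv2d(3, channels, 3, padding=1)
        self.body = nn.Sequential(*[block(channels) for _ in range(n_blocks)])
        self.tail = nn.Conv2d(channels, 3, 3, padding=1)

    def forward(self, x):
        # global residual from the input itself (the IOSS modification),
        # not from the output of the first convolution layer
        return self.tail(self.body(self.head(x))) + x
```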

    3.2 Adaptive residual block

To restore the nonlinear mapping that the base model loses by using a constant residual scale factor, we propose an Adaptive Residual Block named ARB. As shown in Fig. 1(b), the ARB uses the SE module to obtain the importance of each feature channel (adaptive residual scale factors), which replaces the constant scale factor, so that each channel has its own adaptive residual scale factor, enhancing adaptability and nonlinearity. Because the features are still suppressed, the advantage of rapid convergence at the beginning of training is preserved.

Therefore, the ARB can be expressed as

    P_1 = σ(K*(B_{i-1})),    B_i = SE(K(P_1)) + B_{i-1}

where B_{i-1} is the output of the (i-1)-th residual block, K* is a convolution operation whose channel width is 192, and σ is the ReLU activation function. We apply a 3×3 convolution operation K to P_1 inside the local residual block, mixing and compressing the feature maps into 32 channels. This output then enters the SE module, denoted SE. Finally, the SE output is added to the block input B_{i-1} to obtain the local residual.
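Putting the pieces together, a minimal PyTorch sketch of the ARB consistent with the expression above; it assumes the SEModule sketch from Section 2.3 is in scope, and the channel sizes (32 in, 192 wide) follow the text.

```python
import torch.nn as nn

class ARB(nn.Module):
    """Adaptive Residual Block: K* expands 32 -> 192 channels, ReLU gives P1,
    K compresses 192 -> 32 channels, and the SE module supplies a learned
    per-channel residual scale in place of EDSR's constant 0.1."""
    def __init__(self, channels=32, width=192):
        super().__init__()
        self.k_star = nn.Conv2d(channels, width, 3, padding=1)  # K*
        self.act = nn.ReLU(inplace=True)                        # sigma
        self.k = nn.Conv2d(width, channels, 3, padding=1)       # K (3x3)
        self.se = SEModule(channels)                            # adaptive factors

    def forward(self, x):
        p1 = self.act(self.k_star(x))   # P_1 = sigma(K*(B_{i-1}))
        return x + self.se(self.k(p1))  # B_i = SE(K(P_1)) + B_{i-1}
```

Note that the SE module sits after the second convolution, matching the placement study discussed below.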

The global residual of this paper can be expressed as

    y = K_l(B_n) + x

where B_0 is the input of the local residual blocks, B_n is the output of the last one, and K_l is a convolution operation with a feature channel number of 3. We add this output to the LR image x to form the global residual and obtain the final super-resolution image y.

    Fig. 3 Effects of adding SE module in different positions on the model.

Adding the SE module after either the first or the second convolution layer in the residual block suppresses the features, but in the former case the suppressed features still pass through the activation function and the second convolution layer, which weakens the suppression effect again. In the latter case, the SE module added after the second convolution layer suppresses the features, and the subsequent addition required by the residual structure leaves the suppression effect unchanged. The comparison of the different placements is shown in Fig. 3. The final performance with the SE module after the second convolution layer differs only a little from the other cases; however, its validation PSNR is lower in the initial stage of training, and the model converges faster and more stably. Therefore, the SE module is placed after the second convolution layer in our ARB. It is worth noting that the PSNR of the model without the SE module is relatively low, while the additional complexity brought by the SE module is minimal (2%-10% additional parameters, <1% additional computation [21]), which also verifies the effectiveness of our ARB.

    As shown in Fig. 4, we compare the residual block structure of different models including the original residual block, the SRResNet residual block, the EDSR residual block, and our ARB.

    3.3 The increase of channel width

For the Super-Resolution task, when we construct a mapping from LR pixels to HR pixels, a wider network can in some cases achieve similar or even better results than a deeper network. Excessive network depth does not bring large gains but does increase training cost.

In this section, we compare the effects of increasing the width and increasing the depth of the model, where the number of parameters is approximately the same (about 0.3M). As shown in Fig. 5(a), the horizontal axis represents the number of training epochs, and the vertical axis represents the PSNR on the validation dataset. The model with depth 16 and width 32 is the control group. It can be clearly seen that the model with depth 16 and width 64 (depth unchanged, width doubled) improves significantly in numerical terms. However, the model with depth 32 and width 32 (depth doubled, width unchanged) performs slightly below the control group. In Fig. 5(b), we provide another set of experiments to verify these points.

Fig. 4 Comparison of different residual block structures. (a) Original residual block, as proposed in ResNet. (b) SRResNet residual block, which removes the last activation function from the original residual block. (c) EDSR residual block, which removes batch normalization from the SRResNet residual block and adds a fixed factor of 0.1. (d) Our Adaptive Residual Block (ARB), which replaces the fixed factor with the SE module to increase adaptability.

    Fig. 5 Effects of increasing depth and increasing width on the model.

Based on the above conclusions, we propose a new idea for Super-Resolution model design: compared with increasing the depth of the network, increasing the width adapts better to the image restoration task. For reconstruction, increasing the shallow features brings equal or better gains than increasing the deep features. Compared with the base model, we increase the width inside the residual block by about 3 times, from 64 to 192, so that the model has more shallow features. In addition, to balance the number of parameters and the training time, we halve the number of input feature channels of the residual block, from 64 to 32. The constant n is the number of residual blocks, and our ADR-SR and the EDSR-baseline have the same number of residual blocks (n = 16). A quick parameter count for the control configuration appears below.
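As a sanity check on the quoted parameter budget, here is a quick count for the control group of Fig. 5(a) (depth 16, width 32), assuming plain two-convolution residual blocks and ignoring the head and tail layers:

```python
def resblock_params(width, kernel=3):
    # one block = two width->width 3x3 convolutions, each with a bias term
    per_conv = width * width * kernel * kernel + width
    return 2 * per_conv

# control group from Fig. 5(a): depth 16, width 32
print(16 * resblock_params(32))  # 295936, i.e., about 0.3M parameters
```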

    4 Experiment

    4.1 Datasets and evaluation performances

Following the settings in EDSR [11], we train our ADR-SR on the DIV2K [23] dataset and evaluate it on four standard benchmark datasets (Set5 [24], Set14 [25], B100 [26], and Urban100 [27]). DIV2K contains 1000 2K-resolution images: 800 for training, 100 for validation, and 100 for testing. To construct the LR training images, we first use bicubic interpolation to reduce the original HR images by different scales and then interpolate them back to the original size. Objective quality is evaluated by two metrics: Peak Signal-to-Noise Ratio (PSNR) and Structural Similarity Index (SSIM).
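A minimal PyTorch sketch of this LR construction (the paper specifies only "bicubic interpolation", so the exact resampler used below is our assumption):

```python
import torch
import torch.nn.functional as F

def make_same_size_lr(hr: torch.Tensor, scale: int = 4) -> torch.Tensor:
    """Bicubic-downscale an HR batch (B, 3, H, W) by `scale`, then bicubic-
    upscale back to the original size, so the LR/HR pair has identical
    dimensions, as the second-type task requires."""
    h, w = hr.shape[-2:]
    small = F.interpolate(hr, size=(h // scale, w // scale),
                          mode="bicubic", align_corners=False)
    up = F.interpolate(small, size=(h, w), mode="bicubic", align_corners=False)
    return up.clamp(0.0, 1.0)  # bicubic can overshoot the [0, 1] range
```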

    4.2 Training details

During training, RGB images are randomly cropped into 96×96 pixel patches as input. Data augmentation includes random horizontal flips, random vertical flips, and random 90-degree rotations. Pre-processing consists of mean subtraction (subtracting the training-set mean so that the input mean is 0) and normalization (dividing by the training-set standard deviation so that the input variance is 1). The optimizer is Adam [28] with hyperparameters β1 = 0.9, β2 = 0.999, and ε = 10^-8. The training batch size is 16; the learning rate is initialized to 0.001 and halved at 200k, 400k, 600k, and 800k iterations. The loss function is the L1 loss. Training runs on NVIDIA Titan XP GPUs with the PyTorch framework.
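These settings translate directly into PyTorch; below is a minimal sketch with a placeholder model and random tensors standing in for ADR-SR and the DIV2K crops:

```python
import torch
import torch.nn as nn

model = nn.Conv2d(3, 3, 3, padding=1)  # placeholder standing in for ADR-SR
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3,
                             betas=(0.9, 0.999), eps=1e-8)
# learning rate halved at 200k, 400k, 600k, and 800k iterations
scheduler = torch.optim.lr_scheduler.MultiStepLR(
    optimizer, milestones=[200_000, 400_000, 600_000, 800_000], gamma=0.5)
criterion = nn.L1Loss()  # L1 loss, as in the paper

# one training iteration on a batch of 16 same-size 96x96 crops
lr_img = torch.rand(16, 3, 96, 96)
hr_img = torch.rand(16, 3, 96, 96)
optimizer.zero_grad()
loss = criterion(model(lr_img), hr_img)
loss.backward()
optimizer.step()
scheduler.step()  # stepped once per iteration, matching the milestone counts
```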

    4.3 Experimental results

As shown in Table 1, we test the performance of different algorithms on standard benchmark datasets and give quantitative evaluation results. The first row lists the compared models, including LapSRN [13], VDSR [16], DRRN [15], SRResNet [12], SRDenseNet [29], CARN [30], MCAN [31], EDSR-baseline [11], and our ADR-SR. The first and second columns give the benchmark datasets and the corresponding scales. The table reports the quantitative (PSNR/SSIM) results of the various models on different datasets and scale settings, with the best results shown in bold.

To ensure fairness, we reproduce the test results of the comparison models ourselves; all pre-trained comparison models come from their open-source releases. The datasets are constructed with the bicubic interpolation function, and PSNR and SSIM are calculated on the three channels of RGB space. Slight deviations from the originally reported numbers arise because some comparison models construct their data differently and some original papers compute PSNR and SSIM on the Y channel of YCbCr space.

Table 1 Quantitative comparison with state-of-the-art methods for ×2 and ×4 SR with the bicubic degradation model

Table 1 shows that on the scale-2 tasks across the different test sets, our ADR-SR is the best in objective performance, with PSNR and SSIM higher than those of the second-best method. Owing to errors caused by different data construction methods, on the DIV2K validation set at scale 4, ADR-SR is slightly lower than the EDSR-baseline, by 0.02 in PSNR and 0.001 in SSIM, but on the other datasets ADR-SR is clearly higher.

The experiments show that our ADR-SR achieves comparatively good visual effects and objective performance at different scales on the standard benchmark datasets, with clear advantages in image sharpness, spatial similarity, and texture detail.

To verify the validity of ADR-SR, we select images from the Urban100 and DIV2K datasets and compare the results with LapSRN, VDSR, DRRN, CARN, MCAN, and EDSR-baseline; the bicubic interpolation method is also shown as a reference. As shown in Figs. 6 and 7, the red dotted boxes highlight the clear advantages of our ADR-SR. The experimental results show that ADR-SR produces a better Super-Resolution effect than the other models when handling object edges: the edges are more distinct, detail missing from many other models is reconstructed, and the visual effect is greatly improved.

    5 Conclusions

In summary, we propose a single image super-resolution model named ADR-SR based on adaptive deep residuals, which can be used for super-resolution tasks in which the input and output images have the same size. The visual effects and objective performance of the experiments demonstrate the effectiveness of ADR-SR. The specific innovations are: (1) the Input Output Same Size (IOSS) structure for the same-size super-resolution task; (2) the Adaptive Residual Block (ARB), which greatly improves adaptability and convergence speed; (3) a new idea for super-resolution network design that increases the width of the network instead of the depth to obtain additional performance improvements.

    Fig. 6 Comparison of experimental effects of Urban100 dataset.

    Acknowledgements

This work was supported in part by the National Natural Science Foundation of China (No. 61571046) and the National Key R&D Program of China (No. 2017YFF0209806).

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution, and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made.

The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.

    To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

Other papers from this open access journal are available free of charge from http://www.springer.com/journal/41095. To submit a manuscript, please go to https://www.editorialmanager.com/cvmj.
