
    Adaptive deep residual network for single image super-resolution

Computational Visual Media, 2019, Issue 4

    Shuai Liu, Ruipeng Gang, Chenghua Li, and Ruixia Song

Abstract In recent years, deep learning has achieved great success in the field of image processing. In the single image super-resolution (SISR) task, the convolutional neural network (CNN) extracts image features through deep layers and has achieved impressive results. In this paper, we propose a single image super-resolution model based on an adaptive deep residual network, named ADR-SR, which uses the Input Output Same Size (IOSS) structure and removes the dependence on upsampling layers found in existing SR methods. Specifically, the key element of our model is the Adaptive Residual Block (ARB), which replaces the commonly used constant factor with an adaptive residual factor. Experiments prove the effectiveness of our ADR-SR model, which can not only reconstruct images with better visual effects but also achieve better objective performance.

Keywords single image super-resolution (SISR); adaptive deep residual network; deep learning

    1 Introduction

Single Image Super Resolution (SISR) is a classic and important task in the field of computer vision. Its main purpose is to reconstruct a High Resolution (HR) image from a Low Resolution (LR) image through Super Resolution (SR) technology. SISR is widely applicable to safety monitoring, medical treatment, automatic driving, etc.

In essence, SISR is an irreversible process. At present, the simple and fast super-resolution methods mostly use light-field, patch-based, and interpolation methods [1-6], all of which rely on the assumption of smooth transitions between adjacent pixels. However, interpolation methods cause aliasing and ringing effects at image discontinuities [7].

With the development of deep learning in recent years, Convolutional Neural Networks (CNNs) have made breakthroughs in computer vision tasks such as classification [8], detection [9], and semantic segmentation [10]. In the field of super-resolution, the main advantage of CNN-based methods is that they fit the complex mapping between LR and HR images more directly, which enables better recovery of missing high-frequency information (such as edges and textures); their performance therefore goes beyond many classic methods.

Based on the EDSR [11] model, we propose a single image super-resolution model named ADR-SR, as shown in Fig. 1(b), which is a new SR model with the same input and output size. ADR-SR removes the dependence on upsampling layers found in existing deep learning SR methods and constructs a one-to-one mapping from LR pixels to HR pixels. The Adaptive Residual Block (ARB) is embedded in ADR-SR to enhance adaptive ability and improve objective performance.

    In summary, the main contributions of this paper are as follows:

·We propose an Input Output Same Size (IOSS) structure for the same-size super-resolution task, which removes the dependence on upsampling layers found in existing deep learning SR methods. IOSS can solve the SR task whose input and output sizes are the same, as actual applications require.

Fig. 1 Comparison of (a) the EDSR-baseline structure with (b) our ADR-SR structure. Note that our ADR-SR does not have any upsampling layers and uses the Adaptive Residual Block (ARB). The position of the global residual is modified, and the depth and width of the network are also modified.

·We propose an Adaptive Residual Block (ARB) based on an adaptive residual factor, which solves the problem of poor adaptability caused by a constant residual factor. Each channel in the ARB has a different adaptive residual factor, and both adaptive ability and learning ability improve considerably.

·We propose a new idea for super-resolution network design. In some cases, adding width to the network yields a significant performance improvement, and convergence is faster.

    2 Related works

    2.1 Super-resolution model

According to whether the input and output sizes are the same, deep learning super-resolution models are divided into two types: models with different input and output sizes, and models with the same input and output size.

The first type of task: models with different input and output sizes, such as SRResNet [12], LapSRN [13], and EDSR [11], which reconstruct a large image from a small one. The key operation is to increase the image size with an upsampling layer in order to obtain a high-resolution output image. Currently, the commonly used upsampling layers include pixel shuffle and transposed convolution. The essence of the first type of task is to build a one-to-many mapping from LR pixels to HR pixels. The upsampling layer of EDSR is set at the end of the entire network, and the feature after the upsampling layer is the output image, so EDSR increases the dependence on the upsampling layer. A one-to-many mapping is very unstable and cannot be adapted well to the second type of task. A sketch of such an upsampling tail follows below.
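
As a concrete illustration of the one-to-many mapping, here is a minimal PyTorch sketch of a pixel-shuffle upsampling tail of the kind used by first-type models; the layer sizes are illustrative assumptions, not taken from any particular paper.

    import torch
    import torch.nn as nn

    # A convolution expands the channels by scale^2, then PixelShuffle
    # rearranges them spatially, so each input pixel yields scale x scale
    # output pixels (the one-to-many mapping described above).
    scale, channels = 2, 64
    upsample = nn.Sequential(
        nn.Conv2d(channels, channels * scale ** 2, kernel_size=3, padding=1),
        nn.PixelShuffle(scale),  # (B, C*s^2, H, W) -> (B, C, H*s, W*s)
    )

    x = torch.randn(1, channels, 48, 48)  # an LR feature map
    print(upsample(x).shape)              # torch.Size([1, 64, 96, 96])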

The second type of task: models with the same input and output size, such as SRCNN [14], DRRN [15], and VDSR [16]. They are more suitable for practical applications, such as mobile phones, cameras, and other mobile devices. Because camera quality is limited, the photos we take are not clear, which means super-resolution processing is needed. Importing the captured photo directly into the network to reconstruct a high-resolution photo of the same size is more in line with the needs of camera equipment. The second type of task is the focus and difficulty of future super-resolution research and application, but there are few studies at present, and it has only begun to attract attention in recent years. When constructing a dataset, the high-resolution images are down-sampled and then up-sampled using bicubic interpolation to obtain low-resolution images of the same size (see the sketch below). Since the input and output are the same size, no additional upsampling layer is needed in the network, and thus we can construct a one-to-one mapping from each LR pixel to its corresponding HR pixel, which is more stable than a one-to-many mapping.
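
To make the dataset construction concrete, the following is a minimal sketch, assuming a (B, 3, H, W) image tensor, of the down-then-up bicubic degradation just described; the function name is ours.

    import torch
    import torch.nn.functional as F

    def make_same_size_lr(hr: torch.Tensor, scale: int) -> torch.Tensor:
        # Down-sample the HR image by the scale factor, then up-sample it
        # back to the original size with bicubic interpolation, so the LR
        # input has the same size as the HR target.
        small = F.interpolate(hr, scale_factor=1.0 / scale,
                              mode="bicubic", align_corners=False)
        return F.interpolate(small, size=hr.shape[-2:],
                             mode="bicubic", align_corners=False)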

The comparison between the first and second types shown in Fig. 2 clearly illustrates that the first type of model reconstructs 4 output pixels from 1 input pixel when the scale is 2. The pixel ratio of input to output is 1:4 (1:16 when the scale is 4); the input information is seriously insufficient, the spatial position of each output pixel also needs to be learned, and the pressure on the network is large, making it unstable. The second type of model reconstructs 1 output pixel from 1 input pixel, preserves spatial position, and reduces the pressure on the network. The overall performance of the network is greatly improved.

    2.2 Residual block and residual scale factor

The residual block proposed by He et al. [17] adds the learned features to the residual, alleviates vanishing and exploding gradients in deep networks, allows us to train deeper networks successfully, and performs well. SRResNet [12] first used the residual block in the SR task and deleted the ReLU activation layer between connected residual blocks; EDSR [11] modified the residual block of SRResNet, deleting the batch normalization (BN) layer, multiplying the learned features by a constant residual scale factor (0.1 by default), and then adding them to the residual. This suppresses the features to reduce the change of the residual, which is conducive to fast convergence in the early stage of training. However, multiplying all features by a constant residual scale factor forms a simple linear mapping; the lack of a nonlinear factor makes the network unable to handle more complex situations and reduces its learning ability.
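
For reference, here is a minimal PyTorch sketch of the EDSR-style residual block described above (conv-ReLU-conv, no BN, a constant residual scale factor of 0.1); it paraphrases the description, not the official EDSR code.

    import torch.nn as nn

    class EDSRBlock(nn.Module):
        def __init__(self, channels: int = 64, res_scale: float = 0.1):
            super().__init__()
            self.body = nn.Sequential(
                nn.Conv2d(channels, channels, 3, padding=1),
                nn.ReLU(inplace=True),
                nn.Conv2d(channels, channels, 3, padding=1),
            )
            self.res_scale = res_scale

        def forward(self, x):
            # Every channel is multiplied by the same constant factor:
            # a simple linear mapping with no adaptivity.
            return x + self.body(x) * self.res_scale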

    Fig. 2 Comparison of input and output between the first type and the second type of network. (a) The first type of network, which has different input and output sizes, reconstructs 4 output pixels from 1 input pixel. (b) The second type of network, which has the same input and output size, reconstructs 1 output pixel from 1 input pixel.

    2.3 Squeeze and excitation module

A CNN is characterized by a series of convolution layers, nonlinear layers, and down-sampling layers. This structure enables a CNN to extract features with global receptive fields. Moreover, the performance of a CNN can be greatly enhanced by adding multi-scale (Inception [18]), attention [19], context (Inside-Outside [20]), and other spatial feature enhancement mechanisms.

The Squeeze and Excitation Network (SENet [21]) enhances feature extraction by building a Squeeze and Excitation (SE) module, which explicitly models the relationship between different feature channels in a convolution layer. The SE module consists of two operations: squeeze and excitation. The squeeze operation compresses each 2-dimensional feature channel into a 1-dimensional value by global average pooling, producing an output vector with global features (its dimension equals the number of channels, say C). The excitation operation learns the relationship between channels by learning a weight vector (its dimension is still C). Afterwards, the SE module uses the weight vector to enhance or suppress individual feature channels.
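
The following is a minimal PyTorch sketch of an SE module as described above (the reduction ratio of 16 is SENet's default, assumed here); it is reused in the ARB sketch in Section 3.2.

    import torch.nn as nn

    class SEModule(nn.Module):
        def __init__(self, channels: int, reduction: int = 16):
            super().__init__()
            self.pool = nn.AdaptiveAvgPool2d(1)  # squeeze: C maps -> C values
            self.fc = nn.Sequential(             # excitation: weight vector of dim C
                nn.Linear(channels, channels // reduction),
                nn.ReLU(inplace=True),
                nn.Linear(channels // reduction, channels),
                nn.Sigmoid(),
            )

        def forward(self, x):
            b, c, _, _ = x.shape
            w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
            return x * w  # enhance or suppress individual channels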

Since different feature maps encode different image characteristics [8] (such as contour, color, and region), different features have different importance to the super-resolution task. Therefore, the recalibration of feature maps performed by the SE module is bound to improve the performance of a super-resolution model. This is one of the main motivations of this paper.

    2.4 Deeper and wider model

For the classification task, the residual network (ResNet [17]) won the championship in ILSVRC [22], and model accuracy improved greatly. ResNet has 152 layers; deeper layers mean deeper semantic features, which strongly affect the network's understanding. In the super-resolution task, SRCNN [14] was the first network to use a CNN, with only about 3 convolution layers; SRResNet [12] embeds residual blocks in the network and has 15 of them; VDSR [16] uses a global residual structure to perform residual learning on the high-frequency information of the image, uses gradient clipping to improve gradient propagation, and proposes the theory of "the deeper, the better", so VDSR has 20 convolution layers; EDSR [11] modifies SRResNet and has 32 residual blocks, but training time also increases.

    3 Proposed method

We choose EDSR-baseline [11] as the base model (as shown in Fig. 1(a)). EDSR (Enhanced Deep Residual Networks) is a modification of SRResNet [12]; not only is the number of parameters reduced, but performance is also significantly improved. EDSR won first place in the internationally renowned NTIRE 2017 Super-Resolution Challenge, representing the highest level in the super-resolution field at the time. However, EDSR cannot solve the same-size super-resolution task and has poor adaptability. To make up for these shortcomings, we propose a new super-resolution network named ADR-SR, which uses the Input Output Same Size (IOSS) structure to ensure the same input and output size (see Section 3.1), embeds the Adaptive Residual Block (ARB) into the network to enhance adaptive ability (see Section 3.2), and follows a new design idea that increases the width of the network (see Section 3.3).

    3.1 Network structure

In this paper, we propose an Input Output Same Size structure named IOSS for the second type of task (Section 2.1). The upsampling layer in the base model is redundant because no upsampling operation is required. The convolution layer before the upsampling layer is used to expand the number of feature maps so that they can be upsampled well, but it too becomes redundant once the upsampling layer is deleted. IOSS deletes these redundant layers, which not only reduces the complexity of the network but also reduces the number of parameters. In addition, IOSS moves the global residual from the first layer to the network input, in order to better accommodate the second type of task. The gray layers of the base model in Fig. 1(a) are the redundant layers to be deleted. The IOSS structure can be applied not only to the super-resolution task but also to other image processing tasks.

    3.2 Adaptive residual block

To restore the nonlinear mapping missing from the base model because of its constant residual scale factor, we propose an Adaptive Residual Block named ARB. As shown in Fig. 1(b), ARB uses the SE module to obtain the importance of different feature channels (adaptive residual scale factors), which replace the constant scale factor, so that each channel has a different adaptive residual scale factor, enhancing adaptability and nonlinearity. Because features are still suppressed, the advantage of rapid convergence at the beginning of training is preserved.

Therefore, the ARB can be expressed as

P1 = σ(K*(B_{i-1})),    B_i = B_{i-1} + SE(K(P1))

where B_{i-1} is the output of the (i-1)-th residual block, K* is a convolution operation whose channel width is 192, and σ denotes the ReLU activation function.

We apply the 3×3 convolution operation K to P1 inside the local residual block, mixing and compressing the feature maps into 32 channels. This output then enters the SE module, denoted SE. Finally, its result is added to the block input B_{i-1} (the output of the previous residual block) to form the local residual.
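
Putting the pieces together, a minimal PyTorch sketch of the ARB consistent with the equations above might look as follows (SEModule as sketched in Section 2.3); this is our reading of the description, not the authors' released code.

    import torch.nn as nn

    class ARB(nn.Module):
        def __init__(self, in_ch: int = 32, mid_ch: int = 192):
            super().__init__()
            self.k_star = nn.Conv2d(in_ch, mid_ch, 3, padding=1)  # K*, width 192
            self.relu = nn.ReLU(inplace=True)                     # sigma
            self.k = nn.Conv2d(mid_ch, in_ch, 3, padding=1)       # K, back to 32
            self.se = SEModule(in_ch)  # per-channel adaptive residual factors

        def forward(self, x):               # x = B_{i-1}
            p1 = self.relu(self.k_star(x))
            return x + self.se(self.k(p1))  # B_i = B_{i-1} + SE(K(P1))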

The global residual of this paper can be expressed as

y = K_l(B_n) + x,    with B_i = ARB(B_{i-1}), i = 1, …, n

where B_0 is the input of the chain of local residual blocks, B_n is the output of the last block, K_l is a convolution operation with 3 feature channels, and x is the LR image. We add the output to the LR image to form the global residual and obtain the final super-resolution image y.
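
Combining IOSS, the ARBs, and the global residual, the overall forward pass can be sketched as follows (ARB as above); the head convolution mapping the 3-channel input to the 32-channel trunk is our assumption.

    import torch.nn as nn

    class ADRSR(nn.Module):
        def __init__(self, n_blocks: int = 16, width: int = 32):
            super().__init__()
            self.head = nn.Conv2d(3, width, 3, padding=1)  # assumed entry conv
            self.body = nn.Sequential(*[ARB(width, 192) for _ in range(n_blocks)])
            self.k_l = nn.Conv2d(width, 3, 3, padding=1)   # K_l, 3 feature channels

        def forward(self, x):         # x: LR image, same size as the output
            b = self.body(self.head(x))
            return self.k_l(b) + x    # global residual to the network input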

    Fig. 3 Effects of adding SE module in different positions on the model.

Adding the SE module after either the first or the second convolution layer in the residual block has a feature suppression effect, but in the former case the suppressed feature also passes through the activation function and the second convolution layer, which weakens the suppression effect again. In the latter case, the SE module suppresses the feature after the second convolution layer, and the addition is then performed by the residual structure, so the suppression effect remains unchanged. The comparison of the different placements is shown in Fig. 3. The final performance with the SE module after the second convolution layer differs little from the other cases; however, its validation PSNR varies less at the initial stage of training, and the model converges faster and more stably. Therefore, the SE module is placed after the second convolution layer in our ARB. It is worth noting that the PSNR of the model without the SE module is relatively low, and the additional complexity brought by the SE module is minimal (2%-10% additional parameters, <1% additional computation [21]), which also verifies the effectiveness of our ARB.

    As shown in Fig. 4, we compare the residual block structure of different models including the original residual block, the SRResNet residual block, the EDSR residual block, and our ARB.

    3.3 The increase of channel width

For the super-resolution task, a wider network can achieve similar or even better results than a deeper network in some cases, when we construct a mapping from LR pixels to HR pixels. Excessive network depth does not bring a large improvement but increases training cost.

In this section, we compare the effects of increasing width versus increasing depth, where the number of parameters is approximately the same (about 0.3M). As shown in Fig. 5(a), the horizontal axis represents the number of training epochs, and the vertical axis represents the PSNR on the validation dataset. The model with depth 16 and width 32 is the control group. It can be clearly seen that the model with depth 16 and width 64 (depth unchanged, width doubled) improves significantly in numerical terms. However, the model with depth 32 and width 32 (depth doubled, width unchanged) is slightly worse than the control group. In Fig. 5(b), we provide another set of experiments to verify these points.

Fig. 4 Comparison of different residual block structures. (a) Original residual block, proposed in ResNet. (b) SRResNet residual block, which removes the last activation function from the original residual block. (c) EDSR residual block, which removes batch normalization from the SRResNet residual block and adds a fixed factor of 0.1. (d) Our Adaptive Residual Block (ARB), which replaces the fixed factor with the SE module to increase adaptability.

    Fig. 5 Effects of increasing depth and increasing width on the model.

Based on the above conclusions, we propose a new idea for super-resolution model design: compared with increasing the depth of the network, increasing its width better suits the image restoration task, since for reconstruction, increasing the shallow features brings equal or better enhancement than increasing the deep features. Compared with the base model, we increase the width inside the residual block about 3 times, from 64 to 192, so that the model has more shallow features. In addition, to balance the number of parameters and the training time, we also halve the number of input feature channels of the residual block, from 64 to 32, as the count below illustrates. The constant n is the number of residual blocks; our ADR-SR and the EDSR-baseline have the same number of residual blocks (n = 16).
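
A quick back-of-envelope count (3×3 convolution weights only; biases and SE parameters ignored) illustrates how widening inside the block while halving the trunk channels keeps the per-block parameter count in the same range:

    def conv_params(c_in: int, c_out: int, k: int = 3) -> int:
        # Weights of a k x k convolution, biases ignored.
        return k * k * c_in * c_out

    # EDSR-baseline residual block: two 3x3 convs at 64 channels.
    baseline = 2 * conv_params(64, 64)                 # 73,728
    # ADR-SR residual block: 32 -> 192 -> 32 channels.
    adr = conv_params(32, 192) + conv_params(192, 32)  # 110,592
    print(baseline, adr)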

    4 Experiment

    4.1 Datasets and evaluation performances

Following the setting in EDSR [11], we train our ADR-SR on the DIV2K [23] dataset and evaluate it on four standard benchmark datasets (Set5 [24], Set14 [25], B100 [26], and Urban100 [27]). DIV2K has 1000 2K-resolution images, including 800 training images, 100 validation images, and 100 testing images. To construct the LR training images, we first use bicubic interpolation to reduce the original HR images by different scales, and then interpolate them back to the original size. In this paper, objective quality is evaluated by two metrics: Peak Signal to Noise Ratio (PSNR) and Structural Similarity Index (SSIM).
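
As a concrete example of the first metric, a minimal sketch of PSNR computed over all three RGB channels (matching the evaluation protocol in Section 4.3) could be:

    import torch

    def psnr_rgb(sr: torch.Tensor, hr: torch.Tensor,
                 max_val: float = 255.0) -> torch.Tensor:
        # Mean squared error over the three RGB channels, then PSNR in dB.
        mse = torch.mean((sr.float() - hr.float()) ** 2)
        return 10.0 * torch.log10(max_val ** 2 / mse)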

    4.2 Training details

During training, RGB images are randomly cropped into 96×96 pixel patches as input. Data augmentation includes random horizontal flips, random vertical flips, and random 90-degree rotations. The pre-processing operations are mean subtraction (subtracting the mean of the training set so that the input mean is 0) and normalization (dividing by the variance of the training set so that the input variance is 1). The optimizer is Adam [28] with hyperparameters β1 = 0.9, β2 = 0.999, and ε = 10^-8. The training batch size is 16; the learning rate is initialized to 0.001 and halved at 200k, 400k, 600k, and 800k iterations. The loss function is the L1 loss. The training environment is NVIDIA Titan Xp GPUs with the PyTorch framework.
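
The setup above translates into roughly the following PyTorch configuration; the dummy loader stands in for the actual DIV2K data pipeline, and ADRSR refers to the sketch in Section 3.2.

    import torch

    model = ADRSR()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3,
                                 betas=(0.9, 0.999), eps=1e-8)
    # Halve the learning rate at 200k/400k/600k/800k iterations.
    scheduler = torch.optim.lr_scheduler.MultiStepLR(
        optimizer, milestones=[200_000, 400_000, 600_000, 800_000], gamma=0.5)
    criterion = torch.nn.L1Loss()

    # Stand-in for a DIV2K loader yielding batches of 16 random 96x96 crops.
    loader = [(torch.randn(16, 3, 96, 96), torch.randn(16, 3, 96, 96))]

    for lr_img, hr_img in loader:
        optimizer.zero_grad()
        loss = criterion(model(lr_img), hr_img)
        loss.backward()
        optimizer.step()
        scheduler.step()  # the schedule counts iterations, not epochs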

    4.3 Experimental results

As shown in Table 1, we test the performance of different algorithms on standard benchmark datasets and give quantitative evaluation results. The first row lists the comparison models, including LapSRN [13], VDSR [16], DRRN [15], SRResNet [12], SRDenseNet [29], CARN [30], MCAN [31], EDSR-baseline [11], and our ADR-SR. The first and second columns give the benchmark datasets and the corresponding scales. The table gives the quantitative results (PSNR/SSIM) of the various models on different datasets and scale settings, with the best results in bold.

To ensure the fairness of the experimental data, we reproduce the test results of the comparison models; all pre-trained comparison models are taken from their open-source releases. The datasets are constructed with bicubic interpolation, and PSNR and SSIM are calculated on the three channels of RGB space. There is a slight deviation from the originally reported numbers because some comparison models construct their data differently and some of the original papers calculate PSNR and SSIM on the Y channel of YCbCr space.

    Table 1 Quantitative comparison with the state-of-the-art methods based on ×2, ×4 SR with bicubic degradation model

Table 1 shows that in the scale-2 tasks on the different test sets, our ADR-SR is optimal in objective performance, with PSNR and SSIM higher than the second-best method. Due to the error caused by different data construction methods, on the DIV2K validation dataset at scale 4, ADR-SR is slightly lower than the EDSR-baseline model, by 0.02 dB in PSNR and 0.001 in SSIM, but on the other datasets our ADR-SR is clearly higher.

Experiments show that our ADR-SR achieves relatively good visual effects and objective performance across different scales on different standard benchmark datasets, and ADR-SR has obvious advantages in image clarity, spatial similarity, and image texture details.

To verify the effectiveness of ADR-SR visually, we select some images from the Urban100 and DIV2K datasets and compare the results with LapSRN, VDSR, DRRN, CARN, MCAN, and EDSR-baseline; bicubic interpolation is also shown as a reference. As shown in Figs. 6 and 7, the red dotted boxes highlight the obvious advantages of our ADR-SR. It can be clearly seen from the experimental results that ADR-SR has a better super-resolution effect than the other models when dealing with object edges: the edges are more distinct, detail information missing from many other models is reconstructed, and the visual effect is greatly improved.

    5 Conclusions

In summary, we propose a single image super-resolution model named ADR-SR based on an adaptive deep residual network, which can be used for super-resolution tasks where the input and output images have the same size. The visual effects and objective performance of the experiments demonstrate the effectiveness of ADR-SR. The specific innovations are: (1) the Input Output Same Size (IOSS) structure for the same-size super-resolution task; (2) the Adaptive Residual Block (ARB), which greatly improves adaptive ability and convergence speed; (3) a new idea for super-resolution network design that increases the width of the network instead of its depth to obtain additional performance improvements.

Fig. 6 Comparison of experimental results on the Urban100 dataset.

    Acknowledgements

This work was supported in part by the National Natural Science Foundation of China (No. 61571046) and the National Key R&D Program of China (No. 2017YFF0209806).

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made.

    The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use,you will need to obtain permission directly from the copyright holder.

    To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

Other papers from this open access journal are available free of charge from http://www.springer.com/journal/41095. To submit a manuscript, please go to https://www.editorialmanager.com/cvmj.
