
    Lightweight Image Super-Resolution via Weighted Multi-Scale Residual Network

2021-06-18 03:27:30
IEEE/CAA Journal of Automatica Sinica, 2021, Issue 7

    Long Sun, Zhenbing Liu, Xiyan Sun, Licheng Liu, Rushi Lan, and Xiaonan Luo

    I. INTRODUCTION

IMAGE super-resolution (SR), which aims to recover a visually high-resolution (HR) image from its low-resolution (LR) input, is a classical and fundamental problem in low-level vision. SR algorithms are widely used in many practical applications, such as computational photography [1], scene classification [2], and object recognition [3], [4]. SR technology can be roughly divided into two categories based on the number of input frames: multi-frame super-resolution [1], [5], [6] and single image super-resolution (SISR) [7]–[9]. In this work, we focus on SISR.

Although numerous approaches have been proposed for image SR from both interpolation [10] and learning perspectives [7], [11]–[13], it remains a challenging task because multiple HR images can map to the same degraded LR observation. Recently, owing to their powerful feature representation capability, deep learning-based models have achieved superior performance in low-level vision tasks, such as image denoising [14], [15], image deblurring [16], image deraining [17], image colorization [18], and image super-resolution [8], [19]–[28]. These methods learn a nonlinear mapping from a degraded LR input to its corresponding visually pleasing output.

Observing the advanced SISR algorithms reveals a general trend: most existing convolutional neural network (CNN)-based SISR networks rely heavily on increasing model depth to enhance reconstruction performance. However, such methods are rarely deployed to solve practical problems because many devices cannot provide enough computing resources. Therefore, it is crucial to design a fast and lightweight architecture to mitigate this problem [29].

To build an efficient network, we propose a weighted multi-scale residual network (WMRN) (Fig. 1) for SISR in this work. Specifically, instead of using conventional convolution operations, we first introduce depthwise separable convolutions (DS Convs) to reduce the number of model parameters and the computational complexity (i.e., Multi-Adds). To exploit and enrich multi-scale representations, we then construct weighted multi-scale residual blocks (WMRBs), which adaptively filter information from different scales. By stacking several WMRBs, the representation capability can be improved. Moreover, global residual learning is adopted to add high-frequency details for reconstructing better visual results. The comparative results indicate that WMRN achieves state-of-the-art performance with high efficiency and a small model size.

    In summary, the main contributions of this work include:

    1) A novel weighted multi-scale residual block (WMRB) is proposed, which can not only effectively exploit multi-scale features but also dramatically reduce the computational burden.

2) A global residual shortcut is deployed, which adds high-frequency features to generate clearer details and promotes gradient information propagation.

    3) Extensive experiments show that the WMRN model utilizes only a modest number of parameters and operations to achieve competitive SR performance on different benchmarks with different upscaling factors (see Fig. 2 and Table I).

The rest of this paper is organized as follows: Section II presents a brief review of related works, Section III details the proposed method, and Section IV evaluates the proposed algorithm from different aspects. Section V finally draws the conclusions.

Fig. 2. Multi-Adds vs. peak signal-to-noise ratio (PSNR). The PSNR values are evaluated on the B100 dataset for ×2 SR. The Multi-Adds are calculated by assuming that the spatial resolution of the output image is 720p. The proposed WMRN strikes a balance between reconstruction accuracy and computational operations.

    II. RELATED WORKS

    A. Deep Architecture for Super-Resolution

Since Dong et al. [19], [30] first introduced a CNN-based method (named SRCNN) to the SR task, a series of deep learning-based works [8], [20]–[27], [31] have demonstrated impressive performance in recent years by jointly optimizing the feature extraction, nonlinear mapping, and image reconstruction stages in an end-to-end manner [31].

SRCNN was a three-Conv-layer network that achieved superior performance over conventional example-based and reconstruction-based methods. Later, to ease the training difficulty of such a plain architecture, Kim et al. [20] employed a global residual learning strategy to build a very deep super-resolution (VDSR) framework and showed a significant improvement over the SRCNN model. The following works, such as DRCN [21], DRRN [22], and MemNet [32], adopted a similar manner to improve the final results. Although the above methods obtained good performance, they all operated on the bicubic-interpolated HR space, which added computational cost and introduced artifacts.

To reduce the computational cost caused by the pre-processed input, FSRCNN [33] and ESPCN [23] directly extract features in the LR space and adopt an upscaling layer in the final phase to reconstruct images. They explored two different upsampling methods: the transposed convolutional layer [34] and sub-pixel convolution (i.e., pixel shuffling). As a trade-off for speed, both FSRCNN [33] and ESPCN [23] limit the model size of the network for learning complex nonlinear mappings [35]. Furthermore, the VDSR method [20] proved that building a deeper network with residual learning could achieve better reconstruction performance. After that, an increasing number of works focused on designing more complex CNN architectures to improve performance. Lim et al. [8] proposed a wide residual network (EDSR) and a multi-scale deep model (MDSR) for enhancing SR performance and made a significant improvement. Zhang et al. [36] combined residual learning and a channel attention mechanism to build the SR model with the largest depth (more than 400 layers) and achieved a great improvement in terms of PSNR.

Although various techniques have been proposed for SISR, most existing CNN-based models improve reconstruction performance by increasing model complexity with deeper networks, while neglecting the resulting higher inference time and computational burden; this limits their real-life applications. As a result, it is desirable to design a lightweight framework with considerable performance for the SISR problem. In this paper, we attempt to explore an available solution for this purpose.

Recently, there has been rising interest in building lightweight and efficient neural networks for solving the SISR problem [9], [25], [26], [29], [37]. The CARN model utilized a cascading mechanism upon a residual network for efficient SR and achieved competitive results. Hui et al. [37] proposed an information distillation network to gradually extract features for the reconstruction of HR images. Wang et al. [9] introduced an adaptive weighted residual unit for fusing multi-scale features and obtained better reconstruction performance. Unlike [9], which only uses the multi-scale scheme in the upsampling part, we carefully design a multi-scale module based on dilated convolution, which is stacked to form a backbone network for effective and efficient feature extraction.

TABLE I PUBLIC BENCHMARK RESULTS. AVERAGE PSNR/SSIM VALUES FOR MAGNIFICATION FACTORS ×2, ×3, AND ×4 ON DATASETS SET5 [53], SET14 [54], B100 [55], URBAN100 [13], AND MANGA109 [56]

Fig. 3. The proposed building blocks. From left to right: (a) ResBlock, used for extracting shallow features; (b) WMRB, the core module of our model, which generates the weighted multi-scale representations.

    B. Loss Functions for Super-Resolution

Loss functions are generally used to measure the difference between super-resolved HR outputs and reference HR images, and to guide the model optimization [38]. ℓ1 and ℓ2 are the most widely used loss functions in the SR field. Researchers tended to employ the ℓ2 loss in early times, but later empirically found that the ℓ1 loss could achieve better performance and faster convergence [8], [37]. While obtaining high PSNR values, HR images produced with the above functions often lack high-frequency details and exhibit overly smooth textures [39].

To improve perceptual quality, the content loss [39], [40] was introduced into super-resolution to generate more visually perceptible results. Generative adversarial network-based (GAN-based) SR algorithms are usually trained with a content loss and an adversarial loss [39], [41] to create more realistic details. So far, however, the training process of GANs is still unstable.

In this work, we introduce the total variation penalty ℓTV [14], [25], [42] to constrain the smoothness of ℓ1-processed HR images. The experimental results (see Fig. 1) show that the linear combination of ℓ1 and ℓTV achieves a trade-off between visual quality and PSNR values.

    III. METHOD

In this section, we introduce the proposed weighted multi-scale residual network (WMRN) in detail. An overall framework description is given first, and then the main building blocks of WMRN are introduced. The training strategy is presented at the end of this section.

    A. Network Structure

The goal of SISR is to estimate the high-resolution image I_SR from the given low-resolution counterpart I_LR. As shown in Fig. 1, the proposed approach can be divided into three components: feature extraction (FE), nonlinear mapping (NLM), and image reconstruction.

Specifically, we utilize a residual block (ResBlock) (see Fig. 3(a)) to extract low-level feature information. To express this formally, let f denote a Conv layer (e.g., f_ds refers to the depthwise separable convolution operator, described in the next section) and Φ an activation function. Then, the FE procedure can be formulated as

F0 = f(I_LR), F_FE = H_FE(I_LR) = f(Φ(F0)) + F0,

where H_FE represents the feature extraction function, F0 is the output of the first convolution in the ResBlock, and F_FE denotes the feature maps from the FE module. Then, F_FE is sent to the nonlinear mapping (NLM) module, which contains several WMRBs. To improve the high-frequency details of I_SR, we additionally employ global residual learning. This can be defined as

F_NLM = H_NLM(F_FE) + F_FE,

where H_NLM is the nonlinear mapping function, to be discussed in detail in Section III-C, and F_NLM denotes the generated deep intermediate features. Finally, an image recovery subnet is adopted to handle the feature maps produced by the FE and NLM functions, which aims to recover the HR image I_SR:

I_SR = H_REC(F_NLM) = f_HR(f_UP(f_REC(F_NLM))),

where H_REC is the reconstruction module, f_HR represents the last convolution operating in the HR space, f_UP corresponds to the upsampling module, and f_REC denotes a Conv layer in the reconstruction subnet.
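The three-stage flow described above (feature extraction, nonlinear mapping with a global residual, and reconstruction) can be traced with a minimal sketch. The stage functions below are toy scalar stand-ins of my own, not the paper's actual convolutional modules; the sketch only illustrates the data flow.

```python
def wmrn_forward(i_lr, h_fe, h_nlm, h_rec):
    """Skeleton of the WMRN data flow: feature extraction, nonlinear
    mapping with a global residual shortcut, then reconstruction."""
    f_fe = h_fe(i_lr)            # shallow features from the ResBlock
    f_nlm = h_nlm(f_fe) + f_fe   # stacked WMRBs plus global residual
    return h_rec(f_nlm)          # upsample and recover I_SR

# Toy scalar stand-ins for the three modules, just to trace the flow.
i_sr = wmrn_forward(2.0,
                    h_fe=lambda x: x + 1,
                    h_nlm=lambda x: 2 * x,
                    h_rec=lambda x: x * 10)
print(i_sr)  # (2+1)=3; 2*3+3=9; 9*10=90 -> 90.0
```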

    B. Depthwise Separable Convolution

To improve efficiency, we need to balance the relationship among parameters, Multi-Adds, and performance. Thus, parameter-efficient operations are introduced to achieve this purpose. In this work, we extensively apply the depthwise separable convolution (DS Conv) [43]–[46], which consists of a depthwise convolution (DW Conv), i.e., a single spatial filter applied per input channel, followed by a pointwise layer (i.e., a 1×1 convolution that computes linear combinations of the intermediate features).

Let κ be the convolutional kernel size, let c_in and c_out denote the numbers of input and output channels of the tensor, respectively, and let h×w be the size of the feature map. The computational cost of a standard convolutional layer is h·w·c_in·c_out·κ·κ, while the cost C_DS of a DS Conv is given by

C_DS = h·w·c_in·κ·κ + h·w·c_in·c_out.

By replacing the regular convolutional operation, we can reduce the computation by the ratio R of

R = C_DS / (h·w·c_in·c_out·κ·κ) = 1/c_out + 1/κ².
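As a sanity check on the cost analysis above, the per-layer Multi-Adds of a standard convolution and a DS Conv can be compared in a few lines of Python. The layer sizes here are illustrative choices of mine, not configurations taken from the paper.

```python
def conv_cost(h, w, c_in, c_out, k):
    """Multi-Adds of a standard k x k convolution on an h x w feature map."""
    return h * w * c_in * c_out * k * k

def ds_conv_cost(h, w, c_in, c_out, k):
    """Multi-Adds of a depthwise separable convolution: a depthwise pass
    (h*w*c_in*k*k) plus a pointwise 1x1 pass (h*w*c_in*c_out)."""
    return h * w * c_in * k * k + h * w * c_in * c_out

# Example: a 48-channel 3x3 layer on a 48x48 feature map.
h = w = 48
c = 48
std = conv_cost(h, w, c, c, 3)
ds = ds_conv_cost(h, w, c, c, 3)
ratio = ds / std  # equals 1/c_out + 1/k^2 per the ratio R above
print(std, ds, round(ratio, 4))  # 47775744 6303744 0.1319
```

With 48 channels and a 3×3 kernel, the DS Conv needs roughly 13% of the Multi-Adds of a standard convolution, matching R = 1/48 + 1/9.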

Unlike the original DS Conv utilized in [46], we use a dilated depthwise convolution in this work. It enlarges the receptive field without introducing additional computational cost.
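The receptive-field gain from dilation can be quantified with the standard effective-kernel-size identity (this identity is common knowledge about dilated convolutions, not a formula stated in the paper): a k×k kernel with dilation rate d covers k + (k − 1)(d − 1) pixels per side while its Multi-Adds stay those of a k×k kernel.

```python
def effective_kernel(k, d):
    """Per-side extent covered by a k x k convolution with dilation rate d."""
    return k + (k - 1) * (d - 1)

# A 3x3 kernel at dilation rates 1, 2, 3 covers 3, 5, 7 pixels per side,
# all at the cost of a plain 3x3 convolution.
for d in (1, 2, 3):
    print(d, effective_kernel(3, d))
```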

    C. Weighted Multi-Scale Residual Block (WMRB)

The overall structure of the WMRB is shown in Fig. 3(b). It mainly includes three parts: extracting multi-scale features, weighting the feature representations, and fusing the hierarchical information.

The multi-scale representations correspond to feature maps obtained under multiple receptive fields. To exploit different feature scales, we first employ a DS Conv to process the input tensor τ_i and then adopt two parallel subnets. Let τ_{i+1} be the output of the first DS Conv. The left branch applies two asymmetric convolutions [43] to capture vertical- and horizontal-orientation features and fuses the result with τ_{i+1} via a residual connection within the block.

To enlarge the receptive field, we apply several dilated DS Convs with different rates in the right branch. Inspired by [25], [43], [44], [47], [48], and to avoid the gridding effect caused by dilated filtering [15], [47], [49], we fuse this multi-scale information by element-wise summation and then concatenate the results for subsequent processing. Similarly, a shortcut connection is used to preserve the previous features and propagate gradient information.

After the multi-scale processing, an adaptive weighting factor pair (α1, α2) is employed to scale the intermediate results, and the weighted features are fused with the original input tensor by element-wise addition. The weight factors are initialized to 0.5 in this work. Lastly, a DS Conv is applied for further filtering:

τ_out = f_ds(α1·τ_left + α2·τ_right + τ_i),

where τ_left and τ_right denote the outputs of the two branches.
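Numerically, the weighting-and-fusion step reduces to scaling the two branch outputs by the learnable factors and adding the block input before the final filtering. A toy sketch on plain floats (the branch outputs and the final filter `filt` are stand-ins of mine, not the actual tensors or DS Conv):

```python
def wmrb_fuse(tau_in, left, right, alpha1=0.5, alpha2=0.5, filt=lambda x: x):
    """Weighted fusion in a WMRB: scale the two branch outputs by the
    adaptive factors, add the block input element-wise, then apply the
    final DS Conv (modeled here as the placeholder `filt`)."""
    return filt(alpha1 * left + alpha2 * right + tau_in)

# With the initial weights of 0.5: 0.5*4 + 0.5*2 + 1 = 4.0
print(wmrb_fuse(1.0, 4.0, 2.0))
```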

    D. Loss Function

To train the proposed CNN model, rather than employing the ℓ1 or ℓ2 loss function alone, we adopt the objective function of [25] (i.e., ℓ1 with total variation (TV) regularization), under the assumption that the TV penalty constrains the smoothness of I_SR. Denoting I_GT as the reference image, we have

ℓ_total = ℓ1(I_SR, I_GT) + λ·ℓTV(I_SR),

where λ is the balancing weight. We empirically find that λ = 1×10^−5 works well. We analyze the effect of different losses (e.g., the Charbonnier loss [6] and perceptual loss [50]) in Section IV.
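The combined objective of an ℓ1 term plus a λ-weighted TV penalty can be sketched in pure Python on small 2-D grayscale arrays (nested lists). This is a minimal sketch assuming an anisotropic TV defined as the mean absolute difference between neighboring pixels; the paper does not spell out its exact TV formulation, so the helper names and normalization are mine.

```python
def l1_loss(sr, gt):
    """Mean absolute error between super-resolved and reference images."""
    n = len(sr) * len(sr[0])
    return sum(abs(s - g) for rs, rg in zip(sr, gt) for s, g in zip(rs, rg)) / n

def tv_penalty(img):
    """Anisotropic total variation: mean absolute difference of
    horizontally and vertically adjacent pixels."""
    h, w = len(img), len(img[0])
    diffs = [abs(img[i][j + 1] - img[i][j]) for i in range(h) for j in range(w - 1)]
    diffs += [abs(img[i + 1][j] - img[i][j]) for i in range(h - 1) for j in range(w)]
    return sum(diffs) / len(diffs)

def total_loss(sr, gt, lam=1e-5):
    """l1 reconstruction term plus lambda-weighted TV smoothness penalty."""
    return l1_loss(sr, gt) + lam * tv_penalty(sr)

sr = [[0.0, 1.0], [1.0, 0.0]]
gt = [[0.0, 1.0], [0.0, 0.0]]
print(round(total_loss(sr, gt), 6))  # 0.25 (l1) + 1e-5 * 1.0 (TV) = 0.25001
```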

    IV. EXPERIMENTS

    This section provides detailed experimental results to evaluate the effectiveness of the proposed model from different aspects. The implementation details are first presented, and then we offer a brief introduction to the used datasets. After that, experiments are carried out to verify the core component’s efficiency and the effect of different loss functions. Subsequently, quantitative and qualitative comparisons with several state-of-the-art approaches are conducted. Finally, experiments on real low-resolution images are also given.

    A. Implementation Details

In each training mini-batch, we randomly crop 16 color patches of size 48×48 from the LR images as input. We augment the training set with random 90° rotations and horizontal flips. Except for the input and output layers, which have three channels because the proposed model processes red-green-blue (RGB) images, all internal layers have 48 channels.
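The augmentation described above (random 90° rotations plus horizontal flips) can be written for a 2-D patch represented as a list of rows. Applying a random number of rotations and an optional flip per patch is my assumption about how such a pipeline is usually wired, not a detail from the paper.

```python
import random

def rot90(patch):
    """Rotate a 2-D patch (list of rows) 90 degrees counter-clockwise."""
    return [list(row) for row in zip(*patch)][::-1]

def hflip(patch):
    """Flip a 2-D patch horizontally (reverse each row)."""
    return [row[::-1] for row in patch]

def augment(patch, rng=random):
    """Apply 0-3 quarter rotations and, with probability 0.5, a flip."""
    for _ in range(rng.randrange(4)):
        patch = rot90(patch)
    if rng.random() < 0.5:
        patch = hflip(patch)
    return patch

p = [[1, 2], [3, 4]]
print(rot90(p))  # [[2, 4], [1, 3]]
print(hflip(p))  # [[2, 1], [4, 3]]
```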

    B. Datasets

We train the WMRN model on the DIV2K dataset [52], which includes 800 2K-resolution images for the training set and another 200 pictures for the validation and test sets. The LR images are downscaled from the reference HR images using bicubic downsampling. During the testing phase, we use five standard benchmark datasets for evaluation: Set5 [53], Set14 [54], B100 [55], Urban100 [13], and Manga109 [56]. All PSNR and SSIM [57] results are calculated on the Y channel of the transformed YCbCr color space.
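Evaluating PSNR on the Y channel amounts to applying a luma transform and then the standard PSNR formula with an 8-bit peak of 255. A small pure-Python check, assuming the common ITU-R BT.601 luma coefficients (the paper does not state which YCbCr variant it uses):

```python
import math

def y_channel(r, g, b):
    """Luma of an RGB pixel under the BT.601 coefficients (assumed here)."""
    return 0.299 * r + 0.587 * g + 0.114 * b

def psnr(y_sr, y_gt, peak=255.0):
    """PSNR in dB between two flat lists of Y-channel values."""
    mse = sum((a - b) ** 2 for a, b in zip(y_sr, y_gt)) / len(y_sr)
    return float("inf") if mse == 0 else 10.0 * math.log10(peak ** 2 / mse)

y1 = [y_channel(120, 130, 140), y_channel(60, 70, 80)]
y2 = [v + 1.0 for v in y1]  # uniform off-by-one error -> mse = 1
print(round(psnr(y1, y2), 2))  # 10*log10(255^2) = 48.13
```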

TABLE II INVESTIGATION OF THE WMRB MODULE. WE EXAMINE THE PSNR (DB) ON SET14 (×2) AND EVALUATE THE AVERAGE RUNTIME ON 100 DIV2K VALIDATION IMAGES (×2) WITH THE SAME TRAINING SETTINGS

TABLE III QUANTITATIVE EVALUATION OF DIFFERENT OBJECTIVE FUNCTIONS. WE COMPARE SEVERAL LOSSES ON THE B100, URBAN100, AND MANGA109 DATASETS (×4). OVERALL, THE LOSS FUNCTION ℓ_TOTAL HELPS TO IMPROVE THE VISUAL EFFECT

    C. Analysis

Weighted multi-scale residual block: To show the effectiveness of the WMRB module, we validate the contributions of its different components (mainly the DS Conv, residual learning (RL), and weight scaling (WS)) in the proposed building block. The baseline is obtained without DS Conv (replaced by a regular Conv layer), RL, or WS. This model gains better PSNR results but has a large number of parameters. To demonstrate the efficiency of the DS Conv, we train our model with the DS Conv-based WMRB module (denoted as M_DS) and compare its performance with the baseline. The quantitative results in Table II show that M_DS greatly reduces the parameters and Multi-Adds while achieving considerable performance. We then investigate the effect of residual learning (written as M_RL) by adding three identity connections on the backbone part, subnet1, and subnet2, respectively, based on M_DS. As illustrated in Table II, M_RL outperforms M_DS by 0.04 dB. Furthermore, by combining weight scaling with M_RL, our model shows a significant performance improvement (e.g., 0.41 dB on Set14, beyond the baseline), which confirms the effectiveness of our weighted multi-scale residual block design.

    D. Comparisons With State-of-the-Arts

We compare the proposed method with 11 advanced CNN-based SR methods: SRCNN [19], FSRCNN [33], VDSR [20], DRCN [21], LapSRN [31], DRRN [22], MemNet [32], SRMDNF [58], IDN [37], CARN [26], and AWSRN [9]. The numbers of parameters and Multi-Adds are used to measure model complexity, and we assume an output SR image with spatial resolution 1280×720 to calculate Multi-Adds. The geometric self-ensemble strategy [8], [11] is used for further evaluation and marked with “+”. Since the source code of IDN is a TensorFlow implementation (https://github.com/Zheng222/IDN-tensorflow), we rebuild it with the PyTorch framework; the resulting model size is close to the original version (e.g., 579K vs. 590K (×2), 587.9K vs. 590K (×3), 600K vs. 590K (×4)).

We show the reconstruction performance versus the Multi-Adds of CNN-based SR algorithms on the B100 dataset in Fig. 2, from which we can see that the proposed WMRN model is efficient in terms of computational cost and achieves superior performance among these CNN-based methods. Specifically, the WMRN model requires about 50% fewer Multi-Adds than CARN [26] and 60% fewer than IDN [37]. This efficiency mainly comes from the introduced novel module (i.e., the weighted multi-scale residual block (WMRB)) and the post-upsampling structure that is widely used in recent works [8], [23], [26].

Quantitative comparisons with the state-of-the-art algorithms are listed in Table I. Note that we mainly compare models with roughly 2M parameters, which have an approximately similar footprint to ours. Our WMRN model employs fewer parameters and Multi-Adds, while performing favorably against the existing models across different scaling factors and benchmarks. Taking ×2 SR on the B100 set as an example, using only about half the number of operations of CARN, WMRN gains reconstruction performance comparable to that computationally expensive model.

As shown in Fig. 4, we present visual comparisons on the Urban100 dataset for the ×4 scale. The qualitative results demonstrate that the images recovered by our proposed method are more visually pleasing.

    E. Experiments on Real-World Photos

Fig. 4. Visual comparison for ×4 SR on benchmark test sets. Our method reduces spatial aliasing artifacts and generates more faithful and clearer details.

Figs. 5 and 6 show an application of super-resolving real-world LR images with compression artifacts [35], [58] (e.g., the images Chip and Historical-007). The high-quality reference images and the degradation model are not available in these cases. SelfEx [13] is used as a representative machine learning-based method, and an advanced CNN-based algorithm, AWSRN [9], is also included for comparison.

It can be observed from the examples that WMRN reconstructs sharper and more accurate images than the competing methods. Specifically, as shown in Fig. 6, the SelfEx and AWSRN results are affected by JPEG compression artifacts and tend to produce blurred edges, while our method reconstructs clearer rails. Although WMRN obtains better SR performance than the state-of-the-art approaches, it fails to recover very sharp details, as shown in Fig. 5. This phenomenon may be caused by training with a simple degradation model, such as bicubic downsampling. To achieve better real-world SR performance, we need to develop a more general observation model and collect real-world datasets in the future.

    V. CONCLUSIONS

In this work, we propose an effective weighted multi-scale residual network (WMRN) for real-time and accurate image super-resolution. The proposed weighted multi-scale residual block (WMRB) adaptively utilizes feature representations at different scales via dilated convolutions with different rates. In addition, a global residual connection is adopted to ease the flow of information and add high-frequency details. Comprehensive experiments show the effectiveness of the proposed model.

    Fig. 5. Comparison of real-world image “Chip” for ×3 SR. The proposed WMRN reconstructs more accurate results but generates smoothed details.

    Fig. 6. Comparison of historical image “Historical-007” for ×3 SR. The proposed WMRN restores the rails without compression artifacts.
