
    More Than Lightening: A Self-Supervised Low-Light Image Enhancement Method Capable for Multiple Degradations

IEEE/CAA Journal of Automatica Sinica, 2024, Issue 3

Han Xu, Jiayi Ma, Yixuan Yuan, Hao Zhang, Xin Tian, and Xiaojie Guo

Abstract—Low-light images suffer from low quality due to poor lighting conditions, noise pollution, and improper camera settings. To enhance low-light images, most existing methods rely on normal-light images for guidance, but the collection of suitable normal-light images is difficult. In contrast, a self-supervised method breaks free from the reliance on normal-light data, resulting in more convenience and better generalization. Existing self-supervised methods primarily focus on illumination adjustment and design pixel-based adjustment methods, resulting in remnants of other degradations, uneven brightness, and artifacts. In response, this paper proposes a self-supervised enhancement method, termed SLIE. It can handle multiple degradations, including illumination attenuation, noise pollution, and color shift, all in a self-supervised manner. Illumination attenuation is estimated based on physical principles and local neighborhood information. Noise removal and color shift correction are realized solely with noisy images and images with color shifts. As a result, the comprehensive and fully self-supervised approach achieves better adaptability and generalization. It is applicable to various low-light conditions and can reproduce the original color of scenes in natural light. Extensive experiments conducted on four public datasets demonstrate the superiority of SLIE over thirteen state-of-the-art methods. Our code is available at https://github.com/hanna-xu/SLIE.

I. INTRODUCTION

DUE to low photon counts, images captured in low-light environments usually suffer from multifaceted degradations, e.g., low contrast, poor visibility, complex noise, and color shift/distortion. These degradations result in a low signal-to-noise ratio (SNR) and undesirable image quality, causing poor visibility of objects. Low-light image enhancement aims to remove these degradations and generate high-quality normal-light images, which are more conducive to subsequent applications, e.g., surveillance, human and object recognition, and automated driving [1], [2]. Thus, enhancement performance is a critical challenge in image processing [3], [4].

To remove these degradations, traditional methods have been proposed over the past years. Value-based methods operate on pixel values while failing to consider local consistency, resulting in unevenly exposed and unrealistic results. Model-based methods are limited by the capacity of models and cannot deal with extreme/challenging situations, e.g., wide illumination variation and extremely dark illumination. In such cases, they tend to generate under- or over-exposed results.

To ameliorate these drawbacks, learning-based methods have been proposed. These methods can be roughly divided into three categories, namely supervised, unsupervised, and self-supervised methods, as shown in Fig. 1. Supervised methods require paired low-light and normal-light images for supervised learning or Retinex decomposition [5]-[9]. Unsupervised methods use unpaired low-light and normal-light data for training [10], [11]. By comparison, self-supervised methods merely utilize low-light images for training, eliminating the need for external information from normal-light images.

A more specific overview and analysis of each type of method follows. As illustrated in Fig. 1(a), supervised methods require paired low-light and normal-light images. However, it is challenging to capture paired images of the same scene: i) Simultaneous shooting is troublesome, as it is hard for a scene to be completely static; ii) Ground truth does not always exist. Even with appropriate exposure settings, images still exhibit a variety of exposure levels. As shown in Fig. 2, images in which all regions are well-exposed are not always available.

To reduce dependence on paired data, some unsupervised methods use unpaired low-light and normal-light data, as illustrated in Fig. 1(b). For example, generative adversarial networks (GANs) are applied to pull enhanced results close to normal-light data from the perspective of probability distribution. Even though they overcome the need for static scenes and simultaneous shooting, they still rely on normal-light data. The capture and selection of normal-light images are still time-consuming and laborious, and the performance is determined by the distribution of the selected normal-light images.

Fig. 2. Examples of images with normal light at different exposure levels.

To completely obviate the need for normal-light images, several self-supervised methods have been proposed. These methods approach enhancement by estimating curve parameters [12], mapping matrices of the image-to-curve transform [13], or adjusting the histogram distribution [14]. While these methods employ a self-supervised approach to brighten low-light images, they do not address other degradations to achieve higher image quality. Even when noise is considered, denoising is achieved by established operations rather than a self-supervised network. As illustrated in Fig. 3, they cannot handle severe noise well and seldom account for color shifts, leading to poor visual outcomes. Primarily focusing on self-supervised brightness adjustment and overlooking other degradations, these methods rely on pixel values and neglect local consistency, resulting in uneven exposure. Consequently, developing a self-supervised low-light image enhancement method that can comprehensively address multiple degradations to produce evenly exposed, high-quality enhanced results remains a formidable task.

In this paper, we propose a self-supervised low-light image enhancement method capable of handling multiple degradations. Firstly, as illustrated in Fig. 1(c), it only utilizes low-light images for training, eliminating the need for external information from normal-light images. Secondly, it comprehensively addresses various degradations, including low light, noise, and color shift, with each being handled by a self-supervised network. For each specific degradation: a self-supervised illumination adjustment block considers light distribution, object geometry, and local smoothness; it estimates an illumination attenuation map and brightens an image for an evenly exposed outcome. A self-supervised denoising block handles complex noise while looking only at noisy images, without clean reference data. A self-supervised color correction block trains a network to estimate and then correct color shifts, ensuring that the original colors under natural light are faithfully represented. The contributions are summarized as follows:

1) We propose a self-supervised low-light image enhancement method. Compared to existing self-supervised methods, this method considers and addresses multiple degradations (including low illumination, noise, and color shift) to achieve more comprehensive enhancement performance.

2) For illumination adjustment, different from pixel-based methods, the proposed self-supervised adjustment network considers the physical basis (including light distribution, object geometry, and local smoothness). It shows better adaptability and produces more even and natural adjustment results.

3) A self-supervised color correction block is designed. It performs color correction based solely on images with color shifts, breaking free from the reliance on white-balanced images.

4) The proposed method overcomes limitations imposed by supervised/unsupervised learning and generalizes better to various low-light conditions. Its results on four publicly available datasets demonstrate superiority over state-of-the-art methods. Moreover, it achieves a balance between parameters and performance.

The remainder of this paper is organized as follows. Section II discusses related work. Section III provides the problem formulation, loss functions, and network architectures. Section IV compares SLIE with state-of-the-art methods on four datasets; intermediate results, an ablation study, and parameter analysis are also presented. Section V points out the limitations of the method and future directions of work. Section VI summarizes the paper.

II. RELATED WORK

    A. Traditional Methods

Fig. 3. Enhancement results of the state-of-the-art supervised method KinD++ [9], the self-supervised method DUNP [15], and the proposed SLIE.

Typical traditional methods include value-based mapping and model-based optimization methods. Value-based mapping methods directly increase the dynamic range by mapping low pixel values to higher values with nonlinear transformations, such as histogram equalization [16], gamma correction, and their variants [17], [18]. Most model-based optimization methods adopt Retinex theory, assuming that an image can be decomposed into reflectance and illumination. Some early methods treat the decomposed reflectance as the enhanced image [19]. Afterward, some methods multiply the reflectance by an adjusted illumination to make the enhanced image natural [20], [21].
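For concreteness, the following is a minimal numpy sketch of the two classic value-based mappings mentioned above (gamma correction and histogram equalization); the gamma value of 0.4 is an arbitrary illustrative choice, not a setting from any cited method.

```python
import numpy as np

def gamma_correction(img, gamma=0.4):
    # Brighten by raising normalized pixel values to a power < 1;
    # a per-pixel mapping that ignores local consistency.
    x = img.astype(np.float32) / 255.0
    return (np.power(x, gamma) * 255.0).astype(np.uint8)

def histogram_equalization(gray):
    # Stretch the dynamic range of a single-channel uint8 image by
    # remapping intensities through the normalized cumulative histogram.
    hist = np.bincount(gray.ravel(), minlength=256)
    cdf = hist.cumsum().astype(np.float32)
    cdf = (cdf - cdf.min()) / max(cdf.max() - cdf.min(), 1.0)
    lut = (cdf * 255.0).astype(np.uint8)
    return lut[gray]
```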

    B. Deep Learning-Based Methods

1) Supervised/Unsupervised Methods With Paired/Unpaired Normal-Light Images: In deep learning-based methods, Retinex theory is widely applied. The deep models usually use paired data to decompose images into reflectance and illumination. For instance, DeepUPE [22] uses under-exposed image pairs to estimate an image-to-illumination mapping and then takes the illumination map to light up the under-exposed image. RetinexNet [7], LightenNet [23], RDGAN [24], and KinD++ [9] also estimate illumination based on Retinex theory. In these methods, achieving decomposition accuracy inevitably requires multi-exposed images of the same scene. Some methods perform other forms of decomposition. For instance, Ren et al. decomposed images into scene information and edge details [25]. Some methods directly use paired images with ground truth for supervised learning. EnlightenGAN [10] uses unpaired data for training. It trains a dual discriminator to direct global and local information. Besides, a self feature preserving loss is applied to maintain textures and structures. However, the dependence on normal-light data in these methods sets high requirements for training data and limits network performance. Moreover, most of these methods are only applicable to a specific type of noise. In this work, we comprehensively consider Gaussian and Poisson noise to better deal with the noise in real noisy images. Specifically, we adopt a self-supervised mechanism that only looks at noisy images and still performs denoising well.

2) Self-Supervised Methods Without the Need for Normal-Light Images: These methods apply a self-supervised manner to get rid of the reliance on normal-light data. Zero-DCE [12] reformulates enhancement as a pixel-wise, high-order curve estimation problem by training a network to learn the adjustable parameters of curves. The dynamic range is adjusted through multiple iterations, while its effectiveness is limited in some extreme scenarios. Garg et al. [14] assumed that the enhanced histogram distribution of the maximum channel should follow that of a histogram-equalised low-light image. Wu et al. [13] executed the image-to-curve transform by learning mapping matrices. The methods of [14] and [13] are based on pixel values and ignore local consistency, resulting in unevenly exposed results. Zhao et al. [26] proposed Enlighten Anything, which enhances and fuses the semantic intent of SAM segmentation with low-light images to obtain fused images with good visual perception. In other research fields, some self-supervised methods have also been proposed. For example, Li et al. proposed a method for image super-resolution (SR) with multi-channel constraints. It integrates clustering, collaborative representation, and progressive multi-layer mapping relationships to reconstruct high-resolution color images [27]. They also employed the geometrical structure of an image and statistical priors based on clustering and local structure priors [28]. In summary, these low-light image enhancement methods only focus on self-supervised brightness adjustment but cannot handle other degradations such as noise. In addition to the aforementioned shortcomings, color correction is not considered and remains a challenging task to be solved; the color shifts in low-light images still remain in the final results.

In SLIE, firstly, to address the uneven brightness caused by the pixel-wise adjustment of existing methods, we formulate the illumination degradation by considering its physical basis (light distribution, object geometry, and local smoothness). Secondly, the self-supervised manner is used not only for brightness adjustment but also for denoising and color correction, which are not considered in existing self-supervised enhancement methods.

III. PROPOSED METHOD

    In this section, we provide the problem formulation and details of three blocks in the proposed method, including loss functions and network architectures.

    A. Problem Formulation

Given a low-light image of size $H \times W \times 3$, the goal is to generate an enhanced image. When deriving the proposed model, column-vector forms are tentatively applied for convenience. Denoting the low-light and enhanced images as $I$ and $E \in \mathbb{R}^{HW \times 3}$, respectively, according to Retinex theory [29], the low-light image can be decomposed as

$$I = R \odot L + Z \tag{1}$$

where $R, L, Z \in \mathbb{R}^{HW \times 3}$ stand for reflectance, illumination, and noise, respectively, and $\odot$ denotes element-wise multiplication. $R$ is related to intrinsic properties of objects, e.g., materials, textures, and colors. It is not influenced by external light conditions and remains the same across multi-exposure images. $L$ is independent of reflectance and is determined by factors such as the color of the light source, the light intensity distribution, and the geometry of objects. From the perspective of shooting equipment, a camera has an onboard integrated signal processor (ISP). It applies a series of color rendering manipulations to generate standard RGB images. White balance is one of these manipulations, ensuring that objects appear the same color under different illumination conditions [30]-[33]. However, low light and low SNR may cause improper rendering parameters, resulting in color shifts in processed images. Some works aim to process RAW camera images captured in night scenes to produce a photo-finished output image encoded in the standard RGB (sRGB) space [34]-[36]. In this work, the degradations in low-light conditions are modeled separately by type. The color shifts caused not only by the ISP but also by the color of the light source are summarized as the total color shift in $L$.

From the above formulation, the degradations of a low-light image exist in $L$ and $Z$. The degradation of $L$ comes from two aspects. i) Low light results in the attenuation of pixel intensity. Denoting the normal illumination by $L_E \in \mathbb{R}^{HW \times 3}$, the degraded illumination is formulated as $L_E \odot A$, where $A \in \mathbb{R}^{HW \times 3}$ denotes the intensity attenuation map. When formulating the intensity attenuation, the color shift is not considered for the time being. A gray-scale attenuation map is concatenated along the channel dimension to attenuate the RGB channels identically. ii) The color of the light source and improper rendering parameters in the ISP jointly cause the color shift in $L$. Considering that the color cast is consistent within an image and independent of the scene content and the rendering color correction process in the image signal processor (ISP), the color shift is formulated as a diagonal matrix $C \in \mathbb{R}^{3 \times 3}$

$$C = \mathrm{diag}(r_R, r_G, r_B) \tag{2}$$

where $r_R, r_G, r_B > 0$ scale the RGB channels to varying degrees, respectively. By disassembling $L$ as $L = (L_E \odot A)C$, (1) can be rewritten as

$$I = R \odot \big((L_E \odot A)C\big) + Z. \tag{3}$$

As $C$ is a diagonal matrix, we rewrite the formula as

$$I = \big((R \odot L_E)C\big) \odot A + Z. \tag{4}$$

The element-wise multiplication of the reflectance $R$ and the normal illumination $L_E$ is the enhanced image $E$ to be solved. Thus, (4) becomes

$$I = (EC + \tilde{Z}) \odot A \tag{5}$$

where $Z = \tilde{Z} \odot A$. We aim to remove $\tilde{Z}$ in the enlightened image rather than $Z$ in the low-light image, to avoid erroneously filtering valuable signals in the case of low signal values. The enhancement problem is then decomposed into illumination adjustment, denoising, and color correction, corresponding to the solution of $A$, $\tilde{Z}$, and $C$, respectively. The proposed SLIE solves the three subproblems progressively. The overall framework of the proposed method is summarized in Fig. 4. We present the details of each block in the following.
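To make the formulation concrete, the following numpy sketch implements the forward degradation model in (5) under the stated assumptions (images in [0, 1], a grayscale attenuation map replicated across channels, and an illustrative diagonal color shift); `synthesize_low_light` is a hypothetical helper for illustration, not part of the released code.

```python
import numpy as np

def synthesize_low_light(E, A, c_diag, Z_tilde):
    """Forward model of (5): I = (EC + Z_tilde) * A.
    E: normal-light image (H, W, 3) in [0, 1].
    A: attenuation map (H, W) in (0, 1], applied identically to RGB.
    c_diag: diagonal of the 3x3 color-shift matrix C, e.g. [0.9, 1.0, 1.2].
    Z_tilde: noise defined in the brightened domain, shape (H, W, 3)."""
    shifted = E * np.asarray(c_diag, dtype=E.dtype)  # EC for diagonal C
    return np.clip((shifted + Z_tilde) * A[..., None], 0.0, 1.0)

# SLIE inverts this model progressively:
#   1) estimate A and compute I_A = I / A        (illumination adjustment)
#   2) remove Z_tilde: I_Z = D(I_A)              (denoising)
#   3) estimate C and compute E = I_Z C^{-1}     (color correction)
```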

    B. Illumination Adjustment Block

The illumination adjustment block takes the low-light image $I$ as input and estimates the intensity attenuation map $A$. The framework of this block is summarized in Fig. 5. According to Retinex theory, the degradation of illumination is related to the light intensity and scene geometry. The light intensity is proportional to the pixel intensity. On the one hand, the attenuation map should be correlated with the pixel intensity: where a region shows poor pixel intensity, the attenuation map should also be dark, indicating strong attenuation; after division, the image can then be brightened to a greater extent. On the other hand, the attenuation map should consider the actual texture smoothness under different illumination levels. For instance, regions with appropriate pixel intensity are well-textured due to appropriate light intensity, while dark regions usually exhibit smoother textures. This motivates us to estimate the attenuation map $A$ according to the original pixel values of $I$.

1) Loss Function: Whether considering the light intensity or the scene geometry, the attenuation map should be independent of surface textures and be locally consistent. A smoothness constraint is defined to ensure the regional smoothness of the attenuation map $A$, which is the channel-dimension concatenation of the output of the attenuation network. Jointly considering the self-supervised reference information and smoothness, the loss is defined as

$$\mathcal{L}_{illu}(\theta_A) = \|A - g(I)\|_2^2 + \alpha \mathcal{L}_{smooth}(\theta_A) \tag{6}$$

where $\theta_A$ denotes the parameters of the attenuation network, $g(I)$ denotes the gray version of $I$, and $\alpha$ is a hyper-parameter to control the trade-off between the two terms.

Considering the smoothness constraint, regions with different illumination should be treated with distinction. More concretely, dark regions suffer from tiny and inconspicuous textures due to the limited dynamic range. In this case, the intensity loss (the first term in (6)) is expected to dominate and enlighten the textures hidden in the low dynamic range. By comparison, regions with relatively appropriate illumination exhibit clearer textures with a higher dynamic range. In this situation, the introduction of obvious textures in the generated $A$ would distort textures in the illumination-adjusted image. Thus, for regions with rich textural details benefiting from appropriate illumination, the main task is to improve the overall brightness as a whole, and the smoothness loss should be strictly minimized to ensure regional consistency. The smoothness loss is therefore concretized as

$$\mathcal{L}_{smooth}(\theta_A) = \frac{1}{HW} \sum_{p} \varphi(A_p) \sum_{q \in \mathcal{N}(p)} |A_p - A_q| \tag{7}$$

where $A_p$ denotes the value of pixel $p$ in $A$ and $\mathcal{N}(p)$ is the set of neighbors of $p$. $H$ and $W$ are the height and width, respectively. $\varphi(A_p)$ denotes the weighting function applied according to the pixel intensity of $p$.

In this work, we employ a sigmoid-like function to model the correlation between weight and pixel intensity. The parametric form of $\varphi(x)$ is defined as

$$\varphi(x) = \frac{1}{1 + e^{-k(x - b)}}. \tag{8}$$

As shown in Fig. 6, $k$ determines the length of the transition interval and $b$ controls the central pixel intensity of the transition interval. We empirically set $k = 20$ and $b = 0.2$. According to (5), we define the image after illumination adjustment as $I_A$, formulated as

$$I_A = I \oslash c(A) \tag{9}$$

where $\oslash$ is element-wise division and $c(\cdot)$ denotes concatenating the map along the channel dimension.
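A small numpy sketch of this step, assuming the standard logistic form for the "sigmoid-like" $\varphi$ in (8) and a single-channel attenuation map; the clipping and epsilon are illustrative safeguards rather than details from the paper.

```python
import numpy as np

def phi(x, k=20.0, b=0.2):
    # Sigmoid-like weight over pixel intensity x in [0, 1]: near 0 in dark
    # regions (intensity loss dominates) and near 1 in bright regions
    # (smoothness strictly enforced). k sets the transition sharpness,
    # b its center, matching the empirical settings k = 20, b = 0.2.
    return 1.0 / (1.0 + np.exp(-k * (x - b)))

def brighten(I, A, eps=1e-6):
    # I_A = I / c(A): divide each RGB channel of I (H, W, 3) by the
    # single-channel attenuation map A (H, W) tiled across channels.
    return np.clip(I / (A[..., None] + eps), 0.0, 1.0)
```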

2) Network Architecture and Operations: The network architecture is shown in Fig. 5. The deep network and the large kernel sizes in the third to seventh layers expand the receptive field and produce a smooth result independent of surface details. As the division operation directly raises the whole image to a bright interval, the results suffer from limited light and shadow variation and low saturation. We therefore decay the result as a whole and apply an exponential transformation to stretch the variations between different brightness levels. Moreover, going back to its definition, saturation is decided by the achromatic and chromogenic components as

$$s_{h,w} = 1 - \frac{3\min(r_{h,w}, g_{h,w}, b_{h,w})}{r_{h,w} + g_{h,w} + b_{h,w}} \tag{10}$$

Fig. 6. The behavior of the function φ(x), with x denoting the pixel intensity.

where $r, g, b$ denote the RGB channels. The subscript $h, w$ represents the position in the $h$-th row and $w$-th column of each channel. When different channels are attenuated identically, the saturation stays at the original limited level. Hence, we focus on reducing the achromatic component and increasing the chromogenic component. We generate an achromatic mask $m$ by removing the chromogenic component as $m = 1 - g\big(c(r - g(I_A),\, g - g(I_A),\, b - g(I_A))\big)$. The RGB channels are adjusted with the same operation to avoid introducing chromatic aberration.

With a ratio $\rho \in [0,1]$ controlling the adjustment level, the adjusted red channel is represented as $r' = r \odot (1 + \rho \cdot m) - g(I_A) \odot (\rho \cdot m)$. The other channels are processed in the same way. The ratio $\rho$ is adaptively set according to the original saturation as $\rho = (1 - l_s)/2$. The final $I_A$ is adjusted to $I_A = c(r', g', b')$.
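The saturation-restoring operations above can be sketched as follows; `to_gray` (a simple channel mean standing in for $g(\cdot)$) and the use of the mean HSI-style saturation for $l_s$ are assumptions made for illustration.

```python
import numpy as np

def to_gray(img):
    # g(.): gray version of an RGB image in [0, 1]; a channel mean is
    # used here, the paper's exact g(.) may differ.
    return img.mean(axis=2)

def saturation_boost(I_A, eps=1e-6):
    # Reduce the achromatic component and amplify the chromogenic one,
    # applying the same operation to every channel to avoid color aberration.
    gray = to_gray(I_A)
    chroma = I_A - gray[..., None]            # chromogenic components
    m = 1.0 - to_gray(chroma)                 # achromatic mask
    # Adaptive ratio rho = (1 - l_s) / 2, with l_s taken as the mean
    # HSI-style saturation of I_A (an assumption).
    sat = 1.0 - 3.0 * I_A.min(axis=2) / (I_A.sum(axis=2) + eps)
    rho = (1.0 - sat.mean()) / 2.0
    # r' = r * (1 + rho * m) - g(I_A) * (rho * m), same for g and b.
    out = I_A * (1.0 + rho * m[..., None]) - gray[..., None] * (rho * m[..., None])
    return np.clip(out, 0.0, 1.0)
```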

    C. Denoising Block

This block aims to remove the noise in brightened images, as shown in Fig. 7. There are several types of noise in real-world noisy images, produced both in image acquisition and in signal transmission. In terms of noise distribution, the stochastic noise can be modeled with a Gaussian distribution. In practice, the signal also fluctuates in time due to the discrete nature of photons [37]; the number of photons determines the shot noise. The variance of noise in dark regions is usually higher than that in bright regions. As the arrival of photons is not a steady stream, the shot noise can be modeled with a Poisson distribution. In this block, we take Gaussian and Poisson noise into account and focus on denoising the brightened result $I_A$.

1) Loss Function: As only noisy images are available, we adopt an unsupervised denoising approach. As a representative unsupervised denoising method, Noise2Noise (N2N) [38] can infer clean images from noisy images: with a large number of samples, since the noise is unpredictable and randomized, minimizing the denoising loss drives the network to output the expectation of all possible outputs, i.e., the clean signal. However, for different types of noise, N2N needs to redefine the loss function and retrain the network, lacking generalization to mixed complex noise. To solve this problem, similar to N2N but different from it, we corrupt $I_A$ with four independent noises $\{Z_1, \ldots, Z_4\}$, which are random combinations of Gaussian and Poisson noise. This makes the method more suitable for realistic scenes than N2N. Moreover, we introduce a total variation regularization to further remove noise. With the denoising function denoted as $D(\cdot)$, the loss function is

$$\mathcal{L}_{den}(\theta_D) = \sum_{i \neq j} \Big( \big\|D\big(N(I_A, Z_i)\big) - N(I_A, Z_j)\big\|_2^2 + \beta\,\mathrm{TV}\big(D(N(I_A, Z_i))\big) \Big) \tag{11}$$

Fig. 7. Framework of the denoising block (kernel size: 3×3, stride: 1). Once the denoising network has been optimized, we apply it to denoise IA.

where $\beta$ is a hyper-parameter and $\theta_D$ represents the parameters of the denoising network. $N(\cdot,\cdot)$ denotes corrupting the signal with noise. For each sample, the standard deviation of the Gaussian noise and the parameter of the Poisson noise are sampled from uniform distributions over [0, 40] and [5, 40], respectively. The output of this block is $I_Z = EC = D(I_A)$. The overall framework is summarized in Fig. 7.
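A PyTorch sketch of this self-supervised denoising objective, under the stated noise ranges; pairing exactly two independent corruptions per step and the anisotropic TV form are simplifying assumptions, since the paper draws four noises $\{Z_1, \ldots, Z_4\}$.

```python
import torch

def corrupt(x, sigma_max=40.0 / 255.0, lam_lo=5.0, lam_hi=40.0):
    # N(x, Z): corrupt images in [0, 1] with a random mix of Gaussian noise
    # (std ~ U[0, 40] on the 0-255 scale) and Poisson shot noise
    # (rate parameter ~ U[5, 40]).
    sigma = torch.rand(()) * sigma_max
    lam = torch.empty(()).uniform_(lam_lo, lam_hi)
    y = x + sigma * torch.randn_like(x)                       # Gaussian part
    y = torch.poisson(torch.clamp(y, 0.0, 1.0) * lam) / lam   # Poisson part
    return torch.clamp(y, 0.0, 1.0)

def total_variation(x):
    # Anisotropic TV of an NCHW batch, used as a smoothness regularizer.
    return (x[..., 1:, :] - x[..., :-1, :]).abs().mean() + \
           (x[..., :, 1:] - x[..., :, :-1]).abs().mean()

def denoising_loss(D, I_A, beta=0.03):
    # N2N-style objective: map one noisy view of I_A to another independent
    # noisy view; with enough samples the network regresses the clean signal.
    y1, y2 = corrupt(I_A), corrupt(I_A)
    out = D(y1)
    return torch.mean((out - y2) ** 2) + beta * total_variation(out)
```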

2) Network Architecture: The network architecture of the denoising block is shown in Fig. 7. We adopt the U-Net architecture [39] as the denoising network, followed by a convolutional layer without an activation function to generate an estimated noise map. The kernel size of all convolutional layers is set to 3×3 and the stride is set to 1. Once the denoising network has been optimized with multiple types of uncorrelated noisy data, we apply it to denoise $I_A$.

    D. Color Correction Block

This block aims to obtain the achromatic estimation for color correction. The achromatic estimation is removed from the denoised image to obtain the final enhanced result. The framework is summarized in Fig. 8. Inspired by [40], we train a color weight network to yield a weight map $W$, which represents the contribution weight of each pixel in the input image to the achromatic estimation. However, [40] is quasi-unsupervised and requires many balanced images without color cast for training. In our method, lacking balanced data, we realize color correction in an unsupervised manner and train the color weight block on unbalanced images with color cast.

1) Loss Function: To train the color weight network, we assume that the brightest pixel is the most reliable for deriving the color shift. If the image were white-balanced, this pixel should have no chromogenic component; in other words, the RGB values of the brightest pixel should be as identical as possible. Thus, we rely on the divergence between the estimated color vector $c$ and the gray axis for optimization. Denoting the parameters of this block as $\theta_C$, the loss function can be defined as

where $\epsilon = 10^{-6}$ is used to make the equation stable. $p_r$, $p_g$, and $p_b$ are the RGB values of the brightest pixel, respectively.

2) Network Architecture: The network architecture of the color weight block is shown in Fig. 8. It consists of seven convolutional layers, each with a kernel size of 3×3 and a stride of 1. The final layer is activated by the sigmoid function to generate a weight map representing contribution; each pixel of the weight map corresponds to the weight assigned to the corresponding pixel of the input image.
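A PyTorch sketch of how the weight map might be pooled into the color vector $c$ and how the gray-axis principle could be expressed; the weighted-average pooling and the variance-style penalty are our assumptions about the stated idea, not the paper's exact loss (13).

```python
import torch

def achromatic_estimate(W, I_Z, eps=1e-6):
    # Pool the per-pixel contribution map W (N, 1, H, W) into a global
    # color vector c (N, 3) as a weighted average of I_Z (N, 3, H, W).
    w = W / (W.sum(dim=(2, 3), keepdim=True) + eps)
    return (w * I_Z).sum(dim=(2, 3))

def color_loss(c, I_Z, eps=1e-6):
    # After dividing the brightest pixel by c, its RGB values should be
    # (nearly) identical, i.e., the pixel should sit on the gray axis.
    flat = I_Z.flatten(2)                          # (N, 3, H*W)
    idx = flat.sum(dim=1).argmax(dim=1)            # brightest pixel per image
    p = flat[torch.arange(flat.shape[0]), :, idx]  # (N, 3) RGB values
    q = p / (c + eps)                              # corrected brightest pixel
    return ((q - q.mean(dim=1, keepdim=True)) ** 2).mean()
```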

IV. EXPERIMENTS AND ANALYSIS

In this section, we report the implementation details, the compared state-of-the-art methods, and the publicly available datasets. Then, qualitative and quantitative comparisons with state-of-the-art methods on several low-light image datasets are performed to validate the effectiveness of the proposed method. Besides, intermediate results are shown to illustrate the specific function of each of the three blocks. Finally, an ablation study, hyper-parameter analysis, and parameter comparison are conducted.

Fig. 8. Framework of the color correction block. The achromatic estimation in this figure is the tiled version of c, obtained by tiling it along the width and height of IZ.

Algorithm 1: Overall Description of SLIE
Notation: $I$: low-light image; $A$: attenuation map; $I_A$: image after illumination adjustment; $\tilde{Z}$: noise in $I_A$; $I_Z$: image after removing the noise in $I_A$; $C$: color shift; $E$: enhanced image.
1. Model $I$ as $I = I_A \odot A = (EC + \tilde{Z}) \odot A$.
2. Initialize parameters, i.e., the attenuation network ($\theta_A$), denoising network ($\theta_D$), and color weight network ($\theta_C$).
3. Update $\theta_A$ by minimizing $\mathcal{L}_{illu}(\theta_A)$ defined in (6) to generate $A$.
4. Generate the brightened image $I_A = EC + \tilde{Z}$ based on $A$ and the operations defined in Section III-B.
5. Update $\theta_D$ by minimizing $\mathcal{L}_{den}(\theta_D)$ defined in (11) to obtain the denoising function $D(\cdot)$.
6. Generate $I_Z = EC = D(I_A)$.
7. Update $\theta_C$ by minimizing $\mathcal{L}_{color}(\theta_C)$ defined in (13) to generate the achromatic estimation $c$.
8. Perform color correction according to Section III-D to obtain $E$.

    A. Implementation Details

The hyper-parameters are set to $\alpha = 80$ and $\beta = 0.03$. The batch size is 16. We use part of the images in the AGLIE, SID, and LOL datasets for training, comprising 200 low-light images. These low-light images from several datasets are pooled together to build the training dataset. Patches of size 256×256 are randomly cropped from the images and flipped to form the training data. The numbers of steps used to train the attenuation, denoising, and color weight networks are 5000, 3000, and 600, respectively. We use the Adam optimizer with a learning rate exponentially decaying from 0.0002 to 0.0001. The specific training procedure is summarized in Algorithm 1. Experiments are performed on an NVIDIA GeForce GTX Titan X GPU and a 2.4 GHz Intel Core i5-1135G7 CPU.
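As one concrete way to realize the stated schedule, the sketch below decays the learning rate exponentially from 0.0002 to 0.0001 over a block's training steps using a standard PyTorch scheduler; the per-step decay granularity is our assumption.

```python
import torch

def make_optimizer(params, total_steps, lr_start=2e-4, lr_end=1e-4):
    # Per-step decay factor so that lr_start * gamma**total_steps == lr_end.
    gamma = (lr_end / lr_start) ** (1.0 / total_steps)
    opt = torch.optim.Adam(params, lr=lr_start)
    sched = torch.optim.lr_scheduler.ExponentialLR(opt, gamma=gamma)
    return opt, sched

# Usage: call sched.step() after each of the 5000/3000/600 training steps
# of the attenuation, denoising, and color weight networks, respectively.
```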

    B. Competitors and Datasets

We compare SLIE with state-of-the-art competitors, including three traditional methods (SRIE [20], LIME [41], and Robust Retinex [42]) and ten deep learning-based methods (RetinexNet [7], Zero-DCE [12], EnlightenGAN [10], CSDNet UPE [43], KinD++ [9], RUAS [44], DUNP [15], Diff-Retinex [45], CLIP-LIT [46], and NeRCo [47]). Among these methods, Zero-DCE and DUNP are self-supervised.

To validate their generalization to various brightness levels, noise, and color shifts, we test them on four publicly available datasets: AGLIE [48], See-in-the-Dark (SID) [5], MEF [49], and LIME [41]. AGLIE is a synthetic dataset with a diverse exposure curve distribution and severe noise. SID contains indoor and outdoor raw short-exposure low-light images, which are darker due to less light during shooting and are contaminated by real noise. These images are illuminated by moonlight or street lights at night, or taken in enclosed rooms with the lights turned off and only faint indirect illumination. AGLIE and SID provide paired low-light and realistic reference images, while MEF and LIME do not.

    C. Qualitative Comparison

Qualitative comparison results on the four datasets are shown in Figs. 9-12. For each dataset, two intuitive examples are reported. Compared with the competitors, SLIE shows three distinctive advantages corresponding to its three blocks, i.e., illumination adjustment, denoising, and color correction.

1) SLIE outperforms the competitors with more appropriate illumination. With appropriate brightness, our results present more scene information and look more natural than those of the competitors. Specifically, as shown in the second examples in Figs. 9 and 11, although some competitors can brighten the whole image to some extent, some dark regions are still not properly adjusted. The contents of these regions remain difficult to recognize and are not conducive to the human visual system. In more extreme cases, for some images in the AGLIE dataset that are darker than images in other datasets, most competitors fail to brighten the images. As shown in the second example in Fig. 10, for this extreme case, only a small number of competitors and our method can properly adjust the illumination. In these examples, our results are the most similar to the ground truth. The reason is that when low-light and normal-light images are used for training, the network usually aims to learn the mapping from low illumination to normal illumination. The learned mapping is determined by the training data and may suffer from overfitting, resulting in inapplicability to various low-light conditions. In decomposition-based methods, the accuracy of the decomposition also influences the enhancement performance and the details in the enhanced results. By comparison, our self-supervised method relies only on the information in low-light images. It avoids the influence of overfitting, decomposition accuracy, and inappropriateness of selected reference images on the enhanced results.

Fig. 9. Qualitative comparison on three images in the AGLIE dataset with state-of-the-art low-light image enhancement methods. For a more intuitive comparison, some regions are highlighted and shown below the images.

Fig. 10. Qualitative comparison on two images in the SID dataset with state-of-the-art low-light image enhancement methods.

Fig. 11. Qualitative comparison on two images in the MEF dataset with state-of-the-art low-light image enhancement methods.

Fig. 12. Qualitative comparison on two images in the LIME dataset with state-of-the-art low-light image enhancement methods.

2) SLIE is more effective in suppressing noise in low-light images, whether real or synthetic. As shown in Fig. 9, for the severe synthetic noise in images of the AGLIE dataset, our results contain significantly less noise than the others. As also shown in the first examples in Figs. 10 and 12, the low-light images suffer from real noise, which is hidden in the dark and becomes more obvious after illumination adjustment. Some methods, such as RetinexNet, EnlightenGAN, RUAS, and CLIP-LIT, fail to remove the noise. Even compared with competitors that include denoising operations, such as RetinexNet and KinD++, our results still show smoother scenes and higher similarity to the ground truth.

3) Our results remove the degradation caused by color shift. As shown in the first example in Fig. 9, the second examples in Figs. 10 and 12, and the first example in Fig. 11, the low-light images suffer from color shifts. Some are caused by improper settings in the camera's ISP (e.g., Figs. 9 and 10) and some by the light color (e.g., Figs. 11 and 12). By comparison, SLIE can correct the color shifts and reproduce the original colors of the scenes at a normal color temperature. Specifically, the colors of the examples in Figs. 9 and 10 are similar to those of the ground truth. It should also be emphasized that the formulation of the color correction block assumes that the color shift is caused by improper rendering parameters in the ISP or wide-range ambient lighting and is thus globally consistent. The color shifts shown in Figs. 9-12 illustrate this situation. When the color cast is caused by local ambient lighting, the assumption and formulation are not entirely in line with the actual situation.

4) As a self-supervised method, SLIE is further compared against the self-supervised competitors (i.e., Zero-DCE and DUNP). As shown in the results in Figs. 9 and 10, Zero-DCE can brighten low-light images through its estimated set of light-enhancement curves. However, the estimated curves depend on the quality of the low-light images. When the input image is extremely dark, the curves fail to adjust the illumination well, as shown in its results in Fig. 10. Besides, noise and color shift are ignored in Zero-DCE, resulting in residual degradations in its results. DUNP exploits untrained neural network priors for enhancing low-light images with noise; the enhancement is done by jointly optimizing the Retinex decomposition and illumination adjustment. However, it introduces artifacts in Fig. 11 and uneven illumination adjustment in Fig. 10. By comparison, our method shows better enhancement performance within a self-supervised mechanism.

Remark 1 (Low-light images captured by mobile phones): The low-light images in the above datasets were captured by professional cameras. In real scenarios, low-light images are also captured by mobile phones in dark environments, such as at nighttime. We therefore also compare the effectiveness of different methods on such images. The low-light images provided in the LSRW dataset [50], which were captured by a Huawei mobile phone, are used for evaluation. The results on two scenes are shown in Fig. 13. As can be seen, our results exhibit more uniform brightness than those of other methods, so more scene content is revealed. Moreover, the colors of our results are more vibrant and bright, which is evident in the second group of results. The results on low-light images captured by a mobile phone further confirm the generalization of our method and its applicability to actual scenarios.

    D. Quantitative Evaluation

We use both full-reference and no-reference metrics for objective evaluation. The AGLIE and SID datasets contain corresponding normal-light images that can be taken as ground truth. Full-reference metrics include the peak signal-to-noise ratio (PSNR), structural similarity index measure (SSIM), visual information fidelity (VIF) [51], and lightness order error (LOE) [52]. These metrics measure the similarity between the result and the ground truth. PSNR is the ratio of peak signal power to noise power, measuring the intensity differences between the enhanced image and the ground truth; thus, it reflects distortion. SSIM measures structural similarity via three components: correlation distortion, luminance distortion, and contrast distortion. VIF measures the information fidelity of the result and is consistent with the human visual system. LOE uses the relative lightness order to measure the naturalness of the enhanced image. The larger the PSNR, SSIM, and VIF, the closer the result is to the ground truth; the lower the LOE, the better the enhanced image preserves natural lightness.
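For reference, the full-reference scores can be computed with scikit-image as below (a usage sketch; the paper does not specify its evaluation implementation).

```python
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def full_reference_scores(enhanced, gt):
    # enhanced, gt: uint8 RGB arrays of the same shape; higher is better.
    psnr = peak_signal_noise_ratio(gt, enhanced, data_range=255)
    ssim = structural_similarity(gt, enhanced, channel_axis=2, data_range=255)
    return psnr, ssim
```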

TABLE I QUANTITATIVE RESULTS ON THE AGLIE AND SID DATASETS WITH FULL-REFERENCE METRICS. MEAN AND STANDARD DEVIATION ARE SHOWN (RED: OPTIMAL, BLUE: SUBOPTIMAL)

TABLE II QUANTITATIVE COMPARISON ON THE AGLIE, SID, MEF AND LIME DATASETS EVALUATED BY NIQE (RED: OPTIMAL, BLUE: SUBOPTIMAL, PINK: THIRD-OPTIMAL)

The statistical results on the AGLIE and SID datasets are reported in Table I. From this table, it can be concluded that SLIE outperforms the other methods with obvious superiority in most cases. This is consistent with the visual effects in Figs. 9-10, as our results show less noise and color distortion.

For the MEF and LIME datasets, there are no normal-light images available for reference. In this case, we adopt the natural image quality evaluator (NIQE) [53] to evaluate enhancement performance. NIQE calculates the distance between the multivariate Gaussian (MVG) model parameters of the enhanced result and the pre-obtained model parameters of natural images. A lower NIQE indicates better image quality. This metric is additionally used to evaluate the results on the AGLIE and SID datasets. The NIQE results are reported in Table II. Our method achieves the third-optimal results on the SID, MEF, and LIME datasets and comparable results on the AGLIE dataset. These quantitative results show that our results exhibit satisfactory natural quality, and the comparable results across multiple datasets also demonstrate the generalization of the proposed SLIE.

    E. Effectiveness of Each Block

1) Intermediate Results: Qualitative intermediate examples are presented in Figs. 14-16 to show the effectiveness of the three blocks. As shown in Fig. 14, the estimated attenuation maps contain few texture details. The edges mainly exist at the junctions of bright and dark regions of the low-light images. In the attenuation maps, the dark regions help brighten the under-exposed regions, and their smoothness prevents textures from being weakened after adjustment. The bright regions prevent overexposure of the corresponding regions in the low-light images. Figs. 15 and 16 validate the effectiveness of the denoising and color correction blocks, respectively. Their results are in accordance with human visual perception.

Fig. 14. Qualitative intermediate results to validate the effectiveness of the illumination adjustment block.

Fig. 15. Qualitative intermediate results to validate the effectiveness of the denoising block.

Fig. 16. Qualitative intermediate results to validate the effectiveness of the color correction block.

2) Ablation Study: To validate the effectiveness of each block, we compare the enhanced results with and without each block. The qualitative results are shown in Fig. 17. Without the illumination adjustment block, the result still suffers from poor visibility due to low brightness, although the noise is significantly alleviated. Without the denoising block, the result shows brighter illumination, but the visual effect is still affected by noise. Without the color correction block, even though the image quality is greatly improved, the overall tone of the image remains abnormal, appearing reddish. Under the combined effect of all three blocks, the enhanced result exhibits the best image quality and visual effect.

3) Application of the Proposed Denoising/Color Correction Block(s) to Previous Methods: We further validate the effectiveness of the proposed denoising and color correction blocks by applying them to some state-of-the-art enhancement methods. For the competitors mentioned in Section IV-B, if a competitor does not consider image denoising, we incorporate it with the proposed denoising block; similarly, if color correction is not considered, the proposed color correction block is incorporated. The details of these incorporations are reported in Table III. For methods such as SRIE and EnlightenGAN, which consider both image denoising and color correction, we use their original versions for comparison and do not apply any incorporation.

Some qualitative results are shown in Fig. 18, where the first row shows the original results of some competitors. The second row provides the results after applying the denoising/color correction block(s) to these competitors. By incorporating the denoising block, LIME and Zero-DCE show clearer scenes and less noise. Moreover, there is an obvious red color cast in the original results of LIME, RetinexNet, Robust Retinex, and RUAS compared with the result of Zero-DCE, which is inconsistent with the tone this scene should have. By applying the color correction block to these methods, their results show more natural colors than the original results.

The quantitative experiment is performed on the AGLIE dataset, which contains ground-truth enhanced images. The quantitative results before and after incorporating our block(s) are reported in Table III. By applying the denoising/color correction block(s), all the incorporated competitors show improvements in SSIM and PSNR, demonstrating better enhancement performance. On this basis, the proposed SLIE still performs better than the incorporated competitors, which further illustrates the effectiveness and superiority of the proposed illumination adjustment block.

    F. Ablation Study and Hyper-Parameter Analysis

We regularize the illumination adjustment loss with $\mathcal{L}_{smooth}(\theta_A)$ in (6). We perform an ablation study to verify its impact and set $\alpha$ to 10, 30, 50, 80, and 120 for hyper-parameter analysis. As shown in Fig. 19(a), when the smoothness loss is not introduced ($\alpha = 0$), the maps are similar to the original gray low-light images. The abundant details in the maps smooth out the brightened results. As $\alpha$ increases, the attenuation maps only retain large luminance differences, and their local smoothness enables the brightened images to present more details. However, when $\alpha$ is large enough to make $\mathcal{L}_{smooth}(\theta_A)$ dominant, the maps become almost globally consistent, resulting in inappropriate adjustment in some local regions. For $\alpha = 30$, 50, and 80, the differences between the brightened images are unnoticeable, so we also report a quantitative comparison in Fig. 19(b). Based on this quantitative analysis, we set $\alpha = 80$ in our method.

    G. Efficiency and Parameter Comparisons

The running times of the methods on the four publicly available datasets are reported in Table IV. The traditional methods are tested on a desktop with a 2.4 GHz Intel Core i5-1135G7 CPU. The deep learning-based methods are tested on an NVIDIA GeForce GTX Titan X GPU. The re-optimization for each image and the large iteration number in DUNP [15] impose the main computational burden; thus, the computational cost of DUNP for each image ranges from over ten minutes up to tens of minutes, significantly longer than the other methods. Similarly, Diff-Retinex requires a few seconds of inference time for each image. As shown in this table, even though the proposed SLIE consists of three main blocks, it still achieves suboptimal efficiency, second only to CLIP-LIT.

Fig. 17. Qualitative results with only two of the three blocks active.

Fig. 18. Qualitative results of state-of-the-art enhancement methods before and after incorporating the proposed denoising and/or color correction block(s).

TABLE III QUANTITATIVE COMPARISON OF INCORPORATING THE DENOISING/COLOR CORRECTION BLOCK(S) WITH COMPETITORS ON THE AGLIE DATASET

TABLE IV MEAN COMPUTATIONAL COST COMPARISON ON FOUR PUBLICLY AVAILABLE DATASETS (IN SECONDS)

For all deep learning-based enhancement methods, we plot the relationship between the number of parameters and performance in Fig. 20 to visualise the impact of increased parameter counts on performance. The proposed method has the fifth-smallest number of parameters, more than those of RUAS, DUNP, Zero-DCE, and CLIP-LIT. The slightly larger parameter count of our method is due to its three blocks serving different purposes. However, along the vertical dimension, SLIE exhibits significant performance advantages over these methods. Therefore, our method realizes a better balance between computational complexity and performance.

V. LIMITATIONS AND FUTURE WORK

The proposed method performs color correction based on the assumption that the brightest pixel in an image should not contain a color shift, i.e., its RGB values should be as identical as possible. However, an individual pixel is subject to randomness: it can largely, but not completely accurately, represent the color shift. In future work, we expect to improve the stability of color correction through statistical values. Moreover, the computational complexity of the proposed method is higher than that of some competitors because it contains three different networks to address different types of degradations. To reduce complexity, it may be possible to use a single backbone to extract shared features, with degradation/task-specific heads attached to produce outputs for the different degradations, thereby reducing the number of parameters.

Fig. 20. Performance and parameter comparisons of deep learning-based methods.

VI. CONCLUSION

We propose a self-supervised network for low-light image enhancement, termed SLIE. It relies only on low-light images for training and does not require external information from paired/unpaired multi-exposed images. Specifically, we model the degradations in low-light images as illumination attenuation, noise pollution, and color shift, and design three blocks to remove them. An illumination adjustment block estimates attenuation maps based on the light intensity, scene geometry, and local smoothness of low-light images. A denoising block copes with complex and severe noise. A color correction block corrects color shifts and restores the original colors under natural light. Extensive experiments conducted on four publicly available datasets demonstrate the superiority of SLIE over thirteen state-of-the-art competitors. Finally, SLIE achieves a balance between parameters and performance, showing better enhancement while using fewer parameters than most existing deep enhancement networks.
