
    Attention-Based Deep Learning Model for Image Desaturation of SDO/AIA


Xinze Zhang, Long Xu, Zhixiang Ren, Xuexin Yu, and Jia Li

1 State Key Laboratory of Space Weather, National Space Science Center, Chinese Academy of Sciences, Beijing 100190, China; lxu@nao.cas.cn

2 University of Chinese Academy of Sciences, Beijing 100049, China

3 Peng Cheng Laboratory, Shenzhen 518000, China

4 Department of Automation, Tsinghua University, Beijing 100084, China

5 State Key Laboratory of Virtual Reality Technology and Systems, School of Computer Science and Engineering, Beihang University, Beijing 100191, China

Abstract The Atmospheric Imaging Assembly (AIA) onboard the Solar Dynamics Observatory (SDO) captures full-disk solar images in seven extreme ultraviolet wave bands. When a violent solar flare occurs, the incoming photon flux may exceed the threshold of the optical imaging system, resulting in regional saturation/overexposure of images. Fortunately, the lost signal can be partially retrieved from non-local unsaturated regions of an image according to the scattering and diffraction principles, which is well matched by the attention mechanism in deep learning. Thus, an attention augmented convolutional neural network (AANet) is proposed in this paper to perform image desaturation of SDO/AIA. It is built on a U-Net backbone network with partial convolution and adversarial learning. In addition, a lightweight attention model, namely criss-cross attention, is embedded between every two convolution layers to enhance the backbone network. Experimental results validate the superiority of the proposed AANet over state-of-the-art methods in both quantitative and qualitative comparisons.

Key words: techniques: image processing – Sun: atmosphere – Sun: flares

1. Introduction

The Atmospheric Imaging Assembly (AIA) (Lemen et al. 2012) onboard the Solar Dynamics Observatory (SDO) (Pesnell et al. 2012) captures full-disk solar images over seven extreme ultraviolet (EUV) wave bands (94 Å, 131 Å, 171 Å, 193 Å, 211 Å, 304 Å, 335 Å) with a temporal cadence of 12 s and an angular resolution of about 1.5″, providing an unprecedented high-definition observation of the solar atmosphere, especially the fine-grained dynamic evolution of solar activities.

However, in the case of big solar flares, the incoming photon flux may exceed the threshold of the charge-coupled device (CCD) of SDO/AIA, resulting in saturation/overexposure of the flare's core region. The imaging system of SDO/AIA is characterized by two processes, diffraction and diffusion, which can be more or less explained by Figure 1. Diffraction replicates the core peak to generate diffraction fringes as shown in Figure 1(a), which results from the convolution with the point-spread function (PSF) of SDO/AIA shown in Figure 1(b). Diffusion causes a blurring effect in the local area when the input signal goes through the central part of the PSF, as shown in Figure 1(c). The diffraction artifact becomes apparent against the background when the intensity of the image core is high, which frequently induces saturation. More precisely, saturation consists of primary saturation and blooming/secondary saturation, which arise for two different reasons. The former refers to the fact that the CCD pixels cannot accommodate additional charge from the incoming photon flux, while the latter refers to the fact that primary saturation causes charge to spill into neighboring pixels. The overall effect of saturation is to flatten and threshold the brightest core of an image anisotropically (in the north–south direction), as shown in Figure 1(d).

Figure 1. An example of diffraction and diffusion effects, where diffraction fringes scatter to other regions beyond the core peak, while the diffusion effect causes some degree of blur in the core peak due to the central part of the PSF.

The imaging process described above is formulated as the convolution between the incoming photon flux and the PSF,

I = (Ac + Ad) ⊗ f,    (1)

where f and I are the incoming photon flux and the signal recorded by SDO/AIA respectively, Ac and Ad represent the diffusion component and the diffraction component of the PSF respectively, and ⊗ represents the convolution operator. Ac and Ad are illustrated in Figures 1(c) and (b), where Ac is a core peak and Ad is a set of replications of the core peak. The diffraction fringes, which are the result of Ad ⊗ f, can be observed in Figure 1(a). The effect of diffraction fringes comes from a regular, peripheral diffraction pattern of varying intensity in Ad, as shown in Figure 1(b). In particular, the diffraction fringes become more apparent against the background when the peak of f increases. The other term, Ac ⊗ f, in Equation (1) results in image saturation, which is split into primary saturation and secondary saturation/blooming. As discussed in Guastavino et al. (2019), the blooming cannot be restored, while the primary saturation may be present in the diffraction fringes due to the diffraction effect. In detail, the signal f is coherently and linearly scattered to other regions, presenting as diffraction fringes given by Ad ⊗ f, as shown in Figure 1(a) (Guastavino et al. 2019). Thus, the lost signal can be partially retrieved from the diffraction fringes (Schwartz et al. 2014; Torre et al. 2015).
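As a rough illustration of Equation (1), the following NumPy sketch simulates the forward imaging model with a purely synthetic flux map and hypothetical PSF components; the shapes, amplitudes and saturation threshold below are illustrative choices, not the calibrated SDO/AIA PSF or full-well value.

```python
import numpy as np
from scipy.signal import fftconvolve

# Synthetic incoming photon flux f: a bright compact core on a quiet background.
f = np.full((256, 256), 10.0)
f[120:136, 120:136] = 5e4  # flare core

# Hypothetical PSF components (illustrative only, not the calibrated AIA PSF):
# A_c -- a narrow Gaussian core causing local diffusion/blur,
# A_d -- weak off-center replicas producing diffraction fringes.
yy, xx = np.mgrid[-32:33, -32:33]
A_c = np.exp(-(xx**2 + yy**2) / (2 * 1.5**2))
A_d = np.zeros_like(A_c)
for k in range(4, 33, 4):                      # regular peripheral fringe pattern
    A_d[32 + k, 32 + k] = A_d[32 - k, 32 - k] = 1e-3 / k
psf = (A_c + A_d) / (A_c + A_d).sum()          # normalize the total PSF

# Equation (1): recorded signal I = (A_c + A_d) convolved with f,
# followed by clipping that mimics primary saturation of the CCD.
I = fftconvolve(f, psf, mode="same")
I_saturated = np.clip(I, None, 1.6e4)          # clip near the 14-bit digitization limit (illustrative)
```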

The recovery of a lost signal from a degraded signal is traditionally described as an inverse problem. To resolve an inverse problem, an extra constraint is additionally required. Usually, the extra constraint is given by typical image priors, such as sparsity, non-locality and total variation. This processing is well known as the regularization method, which optimizes both a data fidelity term and a regularization (prior) term. In DESAT (Schwartz et al. 2015), lost signal recovery in saturated regions was first formulated as an inverse diffraction problem,

Id = Ad ⊗ f + Bd,    (2)

where Id is the known recorded image in the diffraction regions and Bd is the unknown image background related to the diffraction fringes. Then, the regularization method was employed to resolve Equation (2) and recover f. In Schwartz et al. (2015), Bd is estimated from the interpolation of two neighboring unsaturated images which are provided by the short-time exposures of SDO/AIA. These short-time exposures of SDO/AIA are automatically triggered once a solar flare occurs. However, in the case of a large solar flare, the neighboring short-time-exposure images are also saturated, resulting in failure of DESAT. To address this problem, a Sparsity-Enhancing DESAT (SE-DESAT) (Guastavino et al. 2019) was proposed to estimate Bd from only the current image instead of its neighbors. Nevertheless, the desaturation result is limited by the segmentation of the diffraction fringes and primary saturation regions and by the estimation of the background. In addition, the blooming regions cannot in principle be restored by either DESAT or SE-DESAT.

Inspired by the great success of deep learning, Mask-Pix2Pix (Zhao et al. 2019), PCGAN (Yu et al. 2021) and MCNet (Yu et al. 2022) have been proposed in our previous efforts to desaturate solar images. Different from DESAT (Schwartz et al. 2015) and SE-DESAT (Guastavino et al. 2019), which explicitly model the recovery of saturation (desaturation) as an inverse diffraction problem, our models implicitly describe desaturation as an image inpainting task with the help of deep learning. In addition, relative to DESAT and SE-DESAT, our models can compensate both the primary and secondary saturation with advanced image generation techniques of deep learning. Moreover, it is not necessary to segment primary saturation and blooming, or to estimate the background from the image superposed with diffraction fringes, which were two big challenges for both DESAT and SE-DESAT. Besides, partial convolution (PC) (Liu et al. 2018) was used in PCGAN (Yu et al. 2021) instead of standard convolution for processing invalid pixels within a convolution block.

As discussed in Equation (1) and Figure 1, a peripheral regular diffraction pattern Ad replicates the core peak to generate diffraction fringes distributed outside of the core peak. These diffraction fringes carry information about the core region that is scattered outside of it. They can be utilized to restore the saturated region through an inverse process of diffraction, which has been successfully formulated by convolutional neural networks (CNNs) in our previous efforts (Yu et al. 2021, 2022). However, due to the small receptive field of a CNN, the diffraction fringes spread throughout the entire image cannot be exploited efficiently, resulting in compromised desaturation. In this work, considering the non-local property of diffraction fringes, a lightweight attention module, namely criss-cross attention (Huang et al. 2019), is employed to enhance CNNs to exploit global diffraction fringes for desaturation. This attention model has a receptive field covering the entire image, so it can efficiently synthesize the information of the entire image through different weights.

The rest of the paper is organized as follows. Section 2 introduces the network architecture, convolutions and loss functions of the proposed AANet in detail. Experimental results are provided in Section 3. Conclusions and discussion are given in Section 4.

2. Method

The desaturation problem has been formulated as an inverse problem as given in Equation (2). The traditional solution was the regularization method (Guastavino et al. 2019), which was however challenged by the determination of the primary saturation region and the estimation of the background signal. Deep learning has been widely acknowledged as a universal approximator (Cybenko 1989; Hornik 1991) and has achieved great success in a variety of image processing tasks, such as image denoising, enhancement, super-resolution, inpainting and deconvolution. In this section, an attention augmented convolutional neural network (AANet) is constructed to exploit the attention mechanism for image desaturation. First, the network architecture of the proposed AANet is presented in detail. Second, the loss function used to optimize the proposed model is presented and discussed.

2.1. Network Architecture

The overall network of the proposed model is a generative adversarial network (GAN), as shown in Figure 2, consisting of a generator and a discriminator. The generator is a U-Net whose architecture is shown in Figure 3. It consists of an encoder of eight convolutional layers and a decoder of eight deconvolutional layers. The basic modules of the generator include criss-cross attention (Huang et al. 2019), PC, regional composite normalization (RCN) (Wang et al. 2021), and ReLU/LeakyReLU. They are stacked repeatedly in the generator. In addition, skip connections link the encoder and the decoder at each layer. The detailed parameters are listed on the left side of the network architecture, where the name and the volume of each module are provided. Moreover, the PC instead of the standard convolution is employed for processing invalid pixels in a convolution block. As illustrated in Figure 3, a mask image is provided to indicate "normal" and "saturated" pixels in an image for guiding the PC. In the encoder, the invalid region gradually becomes smaller along with the mask updating scheme of Liu et al. (2018). Finally, all pixels become valid, and the mask image converges to an all-ones matrix. During the convolution process, the mask image is provided to the image branch to guide the extraction of image features.
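To make the encoder–decoder data flow concrete, the toy PyTorch sketch below shows a two-level encoder-decoder with a skip connection. It is only a schematic skeleton under assumed channel widths and depth: standard convolutions stand in for the partial convolutions, and the RCN and criss-cross attention modules described above are omitted, whereas the actual AANet generator uses eight layers per branch.

```python
import torch
import torch.nn as nn

class TinyUNet(nn.Module):
    """Schematic encoder-decoder with a skip connection; standard Conv2d is used here purely
    to show the data flow -- in AANet each convolution would be a partial convolution and a
    criss-cross attention module would follow selected layers (see Sections 2.2 and 2.3)."""
    def __init__(self, c=64):
        super().__init__()
        self.enc1 = nn.Sequential(nn.Conv2d(1, c, 4, 2, 1), nn.LeakyReLU(0.2))
        self.enc2 = nn.Sequential(nn.Conv2d(c, 2 * c, 4, 2, 1), nn.BatchNorm2d(2 * c), nn.LeakyReLU(0.2))
        self.dec2 = nn.Sequential(nn.ConvTranspose2d(2 * c, c, 4, 2, 1), nn.BatchNorm2d(c), nn.ReLU())
        self.dec1 = nn.ConvTranspose2d(2 * c, 1, 4, 2, 1)   # input channels doubled by the skip connection

    def forward(self, x):
        e1 = self.enc1(x)                            # H/2
        e2 = self.enc2(e1)                           # H/4
        d2 = self.dec2(e2)                           # H/2
        d1 = self.dec1(torch.cat([d2, e1], dim=1))   # skip connection from encoder to decoder
        return torch.sigmoid(d1)
```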

Figure 2. The overview of the proposed model.

Figure 3. The generator of the AANet, which learns a mapping from Im and Id to Igt. The discriminator supervises the learning process of the generator through an adversarial loss discriminating fake {Id, Ig, Igg} and real {Id, Igt, Igtg} pairs. It finally minimizes the distance between the two probability distributions of {Ig} and {Igt}.

The discriminator is a general CNN consisting of convolution layers. Specifically, it is a PatchGAN (Isola et al. 2017; Zhu et al. 2017), which means that each image is divided into small patches (e.g., 8×8) rather than judged as a whole for discriminating real/fake (real: positive, fake: negative). The output of the discriminator is an H/8×W/8 map, where values toward "1" and "0" indicate each patch being judged real or fake, respectively. As shown in Figure 4, the mean square error (MSE) is computed to measure the loss of the discriminator.
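A minimal sketch of a PatchGAN-style discriminator with a least-squares (MSE) adversarial loss in the spirit of Mao et al. (2017) is given below; the number of input channels, layer widths and kernel sizes are illustrative assumptions rather than the exact configuration used in the paper (in practice the input would be the stacked image triplet described in Figure 3).

```python
import torch
import torch.nn as nn

class PatchDiscriminator(nn.Module):
    """Small PatchGAN-style discriminator: three stride-2 convolutions map an H x W input to an
    H/8 x W/8 map of patch-wise real/fake scores (layer widths here are illustrative)."""
    def __init__(self, c_in=1, c=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(c_in, c, 4, 2, 1), nn.LeakyReLU(0.2),
            nn.Conv2d(c, 2 * c, 4, 2, 1), nn.LeakyReLU(0.2),
            nn.Conv2d(2 * c, 4 * c, 4, 2, 1), nn.LeakyReLU(0.2),
            nn.Conv2d(4 * c, 1, 3, 1, 1),
        )

    def forward(self, x):
        return self.net(x)                     # shape (B, 1, H/8, W/8)

# Least-squares (MSE) adversarial losses: real patches are pushed toward 1, fake patches toward 0.
mse = nn.MSELoss()

def d_loss(d_real, d_fake):
    return mse(d_real, torch.ones_like(d_real)) + mse(d_fake, torch.zeros_like(d_fake))

def g_loss(d_fake):
    return mse(d_fake, torch.ones_like(d_fake))
```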

2.2. Attention

Concerning image inpainting, the attention mechanism is of great importance for exploring global/contextual information in an image, which is equivalent to exploring the non-local prior in traditional image processing. In the literature, there have been many attention models, including non-local attention (Wang et al. 2018), SENet (Hu et al. 2020), GCNet (Cao et al. 2019), CCNet (Huang et al. 2019) and the transformer (Vaswani et al. 2017).

In this work, a lightweight attention model, namely criss-cross attention (CCNet) (Huang et al. 2019), is employed for its low computational complexity. It exploits contextual information to further augment convolutional image features for the recovery of the saturated region of an image. The diagram of a CCNet is given in Figure 5, where an input image is first passed through convolution layers to produce feature maps. Then, these feature maps are fed to a criss-cross attention module for further enhancement, producing new feature maps. The criss-cross attention aggregates contextual information for each pixel along its criss-cross path, which means only the pixels in the same row and column as the current pixel are involved. It is worth pointing out that the aggregation over the criss-cross path is applied twice in a CCNet. Thus, for each pixel, a CCNet actually aggregates the contextual information of all pixels of an image block. Such a recurrent aggregation is named recurrent criss-cross attention (RCCA). To explore global contextual information over the local feature representation, the non-local attention module (Wang et al. 2018) generates a dense attention map of size H×W for each pixel, as shown in Figure 5(a), while the criss-cross attention module generates a sparse attention map of size only H+W−1. However, after two criss-cross operations, each pixel of the final output feature map can gather contextual information from all pixels, as shown in Figure 5(b).

Figure 5. Diagrams of the non-local and criss-cross attention modules. (Their computational complexities are O((H×W)×(H×W)) and O((H×W)×(H+W−1)), respectively.)
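The following PyTorch sketch is a minimal, unoptimized implementation of the criss-cross attention idea, in which each pixel attends only to its own row and column; the channel-reduction factor and the zero-initialized residual scaling parameter are common choices borrowed from Huang et al. (2019), not values taken from this paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CrissCrossAttention(nn.Module):
    """Minimal criss-cross attention (after Huang et al. 2019): each pixel attends only to the
    pixels in its own row and column, so the attention map has H + W - 1 entries per pixel
    instead of H x W. Applying the module twice lets information reach every pixel."""
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.q = nn.Conv2d(channels, channels // reduction, 1)
        self.k = nn.Conv2d(channels, channels // reduction, 1)
        self.v = nn.Conv2d(channels, channels, 1)
        self.gamma = nn.Parameter(torch.zeros(1))   # residual scaling, learned from zero

    def forward(self, x):
        b, c, h, w = x.shape
        q, k, v = self.q(x), self.k(x), self.v(x)
        # Row attention: within each row, every pixel attends to all pixels of that row.
        q_r = q.permute(0, 2, 3, 1).reshape(b * h, w, -1)
        k_r = k.permute(0, 2, 3, 1).reshape(b * h, w, -1)
        v_r = v.permute(0, 2, 3, 1).reshape(b * h, w, -1)
        a_r = torch.bmm(q_r, k_r.transpose(1, 2))                  # (b*h, w, w)
        # Column attention: the same construction along the height dimension.
        q_c = q.permute(0, 3, 2, 1).reshape(b * w, h, -1)
        k_c = k.permute(0, 3, 2, 1).reshape(b * w, h, -1)
        v_c = v.permute(0, 3, 2, 1).reshape(b * w, h, -1)
        a_c = torch.bmm(q_c, k_c.transpose(1, 2))                  # (b*w, h, h)
        # Joint softmax over the H + W candidates on each pixel's criss-cross path.
        a_r = a_r.reshape(b, h, w, w)
        a_c = a_c.reshape(b, w, h, h).permute(0, 2, 1, 3)          # (b, h, w, h)
        attn = F.softmax(torch.cat([a_r, a_c], dim=-1), dim=-1)    # (b, h, w, w + h)
        out_r = torch.einsum("bhwk,bhkc->bhwc", attn[..., :w], v_r.reshape(b, h, w, -1))
        out_c = torch.einsum("bhwk,bwkc->bhwc", attn[..., w:], v_c.reshape(b, w, h, -1))
        out = (out_r + out_c).permute(0, 3, 1, 2)                  # back to (b, c, h, w)
        return x + self.gamma * out
```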

2.3. Convolutions

Using deep learning, image restoration is accomplished by referring to the degraded image itself and the statistical distribution of massive unimpaired images. In this work, the desaturation of solar images is regarded as an image inpainting task. Solar images are impaired by saturated regions/holes when big flares happen. In image inpainting, convolution across the intersection region between valid and invalid pixels needs to be designed specifically. First, invalid pixels should be excluded from the standard convolution, which leads to partial convolution (PC) (Liu et al. 2018). Second, the deviation caused by PC should be compensated so that the output energy of PC remains the same relative to the standard convolution. Given the input degraded image/feature map x, convolution weight w and bias b, the standard convolution is described as

y(i, j) = Σ_{(Δi, Δj)∈R} w(Δi, Δj) x(i + Δi, j + Δj) + b,    (3)

where y(i, j) denotes the (i, j)-th position of the output feature map y, and R confines the receptive field of the convolution. For example, a 3×3 receptive field is formulated as R = {(−1, −1), (−1, 0), ..., (0, 1), (1, 1)}. When the receptive field of the convolution slides across the boundary of an impaired hole, both valid and invalid pixels participate in the convolution operation.

To solve this issue, the PC (Liu et al. 2018) is introduced, which is described as

y(i, j) = [Σ_{(Δi, Δj)∈R} w(Δi, Δj) x(i + Δi, j + Δj) m(i + Δi, j + Δj)] · sum(1)/sum(m) + b if sum(m) > 0, and y(i, j) = 0 otherwise,    (4)

where m represents the mask image in which "0" stands for saturated pixels and "1" stands for normal pixels, sum(m) = Σ_{(Δi, Δj)∈R} m(i + Δi, j + Δj) counts the valid pixels within the current receptive field, and sum(1) = |R|. The symbol 1 is a constant matrix with all entries equal to 1, of the same size as m. The mask is updated by Equation (5) after each PC operation,

m′(i, j) = 1 if sum(m) > 0, and m′(i, j) = 0 otherwise.    (5)

From Equations (4) and (5), PC depends only on valid pixels, since the mask excludes invalid pixels from the convolution, while the deviation caused by invalid pixels is calibrated by scaling the output of PC, with a scaling factor inversely proportional to the number of valid pixels in the receptive field.
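A compact sketch of a partial convolution layer with the mask update of Equation (5) is shown below; it follows the spirit of Liu et al. (2018) but simplifies the bookkeeping (the valid-pixel count is computed from the single-channel mask), so it is an illustrative implementation rather than the one used in the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PartialConv2d(nn.Conv2d):
    """Minimal partial convolution: only valid (mask = 1) pixels contribute, the output is
    rescaled by |R| / sum(mask) inside each window, and the mask is updated so that any
    window containing at least one valid pixel becomes valid."""
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        # All-ones kernel used to count valid pixels per receptive field (not trainable).
        self.register_buffer("ones_kernel",
                             torch.ones(1, 1, self.kernel_size[0], self.kernel_size[1]))

    def forward(self, x, mask):
        # mask: (B, 1, H, W), 1 for normal pixels, 0 for saturated pixels.
        with torch.no_grad():
            valid_count = F.conv2d(mask, self.ones_kernel,
                                   stride=self.stride, padding=self.padding)
            window_size = self.kernel_size[0] * self.kernel_size[1]
            scale = window_size / valid_count.clamp(min=1)   # Equation (4) scaling factor
            new_mask = (valid_count > 0).float()             # Equation (5) mask update
        out = super().forward(x * mask)                      # exclude invalid pixels
        if self.bias is not None:
            bias = self.bias.view(1, -1, 1, 1)
            out = (out - bias) * scale * new_mask + bias * new_mask
        else:
            out = out * scale * new_mask
        return out, new_mask
```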

2.4. Loss Functions

To optimize a neural network for image generation, a hybrid loss function is usually employed. It includes both pixel-level and feature-level losses, encouraging high fidelity and a photorealistic effect respectively. The pixel-level loss is represented by the L1 norm and L2 norm, i.e., the mean absolute error (MAE) and mean square error (MSE) functions. It measures the pixel-level difference between the generated image and the ground-truth in supervised learning. Inspired by classical image priors in image processing, the image gradient and image smoothness are of great importance to the perception of the human visual system. Thus, two additional losses, namely the gradient loss (Ma et al. 2020) and the total variation loss (Johnson et al. 2016), are included in the loss function of the proposed model. Relative to the pixel-level loss, the feature-level loss can well describe the photorealistic property of an image, but ignores the pixel-level difference. In this work, the perceptual loss (Johnson et al. 2016) and the style loss (Gatys et al. 2016) are employed to measure the feature-level difference between the generated image and the ground-truth. Last but not least, an adversarial loss (Mao et al. 2017) is included in the loss function, which optimizes the generator through a zero-sum game against the discriminator (Goodfellow et al. 2014).

Let Id be the input degraded image, Im the initial binary mask image, Ig the generated image, and Igt the ground-truth. The pixel-level loss given by the L1 norm is defined as

L_pixel = λh ∥(1 − Im) ⊙ (Ig − Igt)∥1 + λv ∥Im ⊙ (Ig − Igt)∥1,    (6)

where ∥·∥1 denotes the L1 norm, and λh and λv are two weights for combining the recovered saturated region and the normal region. They are empirically set to 100 and 10, respectively, indicating that more weight is allocated to the recovered saturated region in Equation (6).
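One plausible implementation of the weighted pixel loss of Equation (6) is sketched below; using a per-pixel mean in place of a raw sum is a common normalization and an assumption of this sketch.

```python
import torch

def pixel_loss(I_g, I_gt, I_m, lambda_h=100.0, lambda_v=10.0):
    """Weighted L1 pixel loss as in Equation (6): the recovered saturated region (I_m = 0) is
    weighted by lambda_h = 100, the normal region (I_m = 1) by lambda_v = 10."""
    diff = torch.abs(I_g - I_gt)
    hole_term = ((1.0 - I_m) * diff).mean()   # mean over pixels, an assumed normalization
    valid_term = (I_m * diff).mean()
    return lambda_h * hole_term + lambda_v * valid_term
```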

The gradient loss (Ma et al. 2020) is adopted to ensure that the generated image has sharp structures and object edges. It is defined as the L1 loss over the image gradient map as

where ∇ represents a gradient operator (Ma et al. 2020) that computes the image gradient, and λh′ and λv′ are two weights (empirically set to 300 and 10) assigning different importance to the saturated regions and the normal regions.
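The gradient loss can be sketched in the same weighted form; simple forward differences stand in for the gradient operator of Ma et al. (2020), which is an assumption of this sketch.

```python
import torch

def image_gradient(img):
    """Forward-difference gradients along x and y (one simple stand-in for the gradient operator)."""
    dx = img[..., :, 1:] - img[..., :, :-1]
    dy = img[..., 1:, :] - img[..., :-1, :]
    return dx, dy

def gradient_loss(I_g, I_gt, I_m, lambda_h=300.0, lambda_v=10.0):
    """Weighted L1 loss over gradient maps: saturated regions (I_m = 0) get lambda_h,
    normal regions (I_m = 1) get lambda_v."""
    gx_g, gy_g = image_gradient(I_g)
    gx_t, gy_t = image_gradient(I_gt)
    # Crop the mask so it aligns with the forward-difference gradient maps.
    m_x, m_y = I_m[..., :, 1:], I_m[..., 1:, :]
    dx, dy = torch.abs(gx_g - gx_t), torch.abs(gy_g - gy_t)
    hole = ((1 - m_x) * dx).mean() + ((1 - m_y) * dy).mean()
    valid = (m_x * dx).mean() + (m_y * dy).mean()
    return lambda_h * hole + lambda_v * valid
```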

The total variation loss (Johnson et al. 2016) is included to ensure image smoothness, especially around the boundary between the normal and saturated regions. It is defined as

where P indicates the region connecting the saturated and normal regions.
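A minimal sketch of the total variation term restricted to a boundary band P is given below; building P by dilating the hole mask is one plausible choice, not necessarily the construction used in the paper.

```python
import torch
import torch.nn.functional as F

def boundary_region(I_m, width=1):
    """One simple way to build P: dilate the hole mask (1 - I_m) with max pooling so that P
    covers the saturated region plus a thin band of surrounding normal pixels."""
    hole = 1.0 - I_m
    return F.max_pool2d(hole, kernel_size=2 * width + 1, stride=1, padding=width)

def tv_loss(I_c, P):
    """Total variation (L1 of neighboring differences) restricted to the region mask P."""
    tv_h = torch.abs(I_c[..., :, 1:] - I_c[..., :, :-1]) * P[..., :, 1:]
    tv_v = torch.abs(I_c[..., 1:, :] - I_c[..., :-1, :]) * P[..., 1:, :]
    return tv_h.mean() + tv_v.mean()
```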

The adversarial loss (Mao et al. 2017) is adopted to ensure a photorealistic effect of the generated image at the feature level, which is formulated as

The perceptual loss (Johnson et al. 2016) is adopted to capture high-level semantic information and alleviate grid-shaped artifacts in the recovered regions (Liu et al. 2018), which is formulated as

where Ic = (1 − Im) ⊙ Ig + Im ⊙ Igt indicates the combination of the recovered region and the normal region extracted directly from the ground-truth. It can be seen that the perceptual loss computes the L1 norm in the feature domain for Ig and Ic, respectively, where Ψi represents the feature map of the i-th pooling layer of VGG-16 (Simonyan & Zisserman 2015). In this work, the first three pooling layers (T = 3) are used in Equation (10).
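A sketch of the perceptual loss using the first three pooling layers of VGG-16 is shown below; the torchvision weights argument and the assumption of three-channel inputs (a single-band solar image would need to be replicated across channels) are implementation details not specified in the paper.

```python
import torch
import torchvision

class VGGFeatures(torch.nn.Module):
    """Feature maps of the first three pooling layers of VGG-16 (pool1, pool2, pool3)."""
    def __init__(self):
        super().__init__()
        vgg = torchvision.models.vgg16(weights="IMAGENET1K_V1").features.eval()
        for p in vgg.parameters():
            p.requires_grad_(False)
        # Indices 4, 9 and 16 of vgg16.features are the first three MaxPool2d layers.
        self.slices = torch.nn.ModuleList([vgg[:5], vgg[5:10], vgg[10:17]])

    def forward(self, x):
        feats = []
        for s in self.slices:
            x = s(x)
            feats.append(x)
        return feats

def perceptual_loss(psi, I_g, I_c, I_gt):
    """L1 distance in VGG feature space for both the raw output I_g and the composite I_c."""
    f_g, f_c, f_gt = psi(I_g), psi(I_c), psi(I_gt)
    return sum(torch.abs(a - t).mean() + torch.abs(b - t).mean()
               for a, b, t in zip(f_g, f_c, f_gt))
```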

The style loss (Gatys et al. 2016) has been proved effective for capturing image semantic information. It first computes the Gram matrix of each feature map of VGG-16 and then calculates the L1 norm of the difference between Gram matrices. It is therefore defined as

where Ki is a scaling weight given by 1/(Hi·Wi·Ci) for the i-th layer of VGG-16, and Ψi(I) denotes the Gram-matrix operator applied to the i-th feature map, which has size Hi×Wi×Ci.
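The style loss can be sketched on top of the same VGG features; normalizing the Gram matrix by 1/(Hi·Wi·Ci) follows the text, while applying it to both Ig and the composite Ic mirrors the perceptual loss and is an assumption here.

```python
import torch

def gram_matrix(feat):
    """Gram matrix of a (B, C, H, W) feature map, scaled by 1 / (H * W * C) as in the text."""
    b, c, h, w = feat.shape
    f = feat.reshape(b, c, h * w)
    return torch.bmm(f, f.transpose(1, 2)) / (h * w * c)

def style_loss(psi, I_g, I_c, I_gt):
    """L1 distance between Gram matrices of VGG features (psi is a feature extractor such as
    the VGGFeatures module sketched above), for both I_g and the composite I_c."""
    loss = 0.0
    for a, b_, t in zip(psi(I_g), psi(I_c), psi(I_gt)):
        g_t = gram_matrix(t)
        loss = loss + torch.abs(gram_matrix(a) - g_t).mean() + torch.abs(gram_matrix(b_) - g_t).mean()
    return loss
```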

3. Experimental Results

To evaluate the proposed AANet, experiments are performed to first compare our model with two state-of-the-art desaturation methods, PCGAN (Yu et al. 2021) and MCNet (Yu et al. 2022). Then, the effectiveness of the criss-cross attention module is verified by an ablation study. The source code of AANet can be accessed via GitHub (https://github.com/filterbank/AANet).

3.1. Dataset

For training deep learning models, a new large-scale data set beyond the previous one (Yu et al. 2021) is established in this work. In this new data set, raw data in 14-bit FITS format instead of "png" images are included, so that the high fidelity of scientific computing and the physical plausibility of the results can be guaranteed. Each sample of the data set consists of a ground-truth given by a short-time-exposure image without overexposure, a mask image labeling saturated pixels provided by the long-time-exposure image, and a manually overexposed image flattened by a preset threshold. We gather M-class and X-class solar flare data at 193 Å of SDO/AIA (Lemen et al. 2012) from 2010 to 2017. The short-time-exposure images closest to the overexposed long-time-exposure ones are taken as the ground-truths. In addition, they are normalized by the long exposure time. A sample of the data set is shown in Figure 6, where the saturated image Isat is only used to deduce a realistic mask (denoted by Im) of the saturated region by imposing a threshold on Isat, the degraded image Id results from Igt ⊙ Im (⊙ represents the element-wise multiplication operator), and Igt is given by the short-time-exposure image closest to the long-time overexposed one. During model training, the triplet {Id, Im, Igt} is fed to the proposed network to optimize the model parameters. The whole data set contains about 18,700 samples. To train and test the network with multiple splittings of the data set, we split it into eight equal portions and alternately select seven of them for training and the remaining one for testing. Thus, a mean and standard deviation (STD) of the performance measures can be provided.
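The construction of one training triplet can be sketched as follows; the helper function name, the saturation threshold and the omission of the exposure-time normalization are placeholders, since the exact values depend on the AIA observations used.

```python
import numpy as np
from astropy.io import fits

def make_training_triplet(gt_path, sat_path, sat_threshold=1.5e4):
    """Builds one {I_d, I_m, I_gt} sample as described in the text (threshold is illustrative;
    the exposure-time normalization of the ground-truth is omitted here)."""
    I_gt = fits.getdata(gt_path).astype(np.float32)     # short-exposure, unsaturated image
    I_sat = fits.getdata(sat_path).astype(np.float32)   # long-exposure, saturated image
    I_m = (I_sat < sat_threshold).astype(np.float32)    # 1 = normal pixel, 0 = saturated pixel
    I_d = I_gt * I_m                                     # degraded image Id = Igt element-wise Im
    return I_d, I_m, I_gt
```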

Figure 6. A sample in the desaturation data set established by Yu et al. (2021), which is composed of four images: Isat, Igt, Im and Id. Isat is a saturated image and Igt is the nearest unsaturated image of short-time exposure. Im is a binary mask which indicates unsaturated and saturated pixels of Isat by 1 and 0, respectively. Id is the simulated degraded image obtained by Igt ⊙ Im.

3.2. Implementation Details

We evaluate the AANet on our established data set. It should be pointed out that there are two versions of AANet: one is PCGAN plus criss-cross (CC) attention and the other is MCNet plus CC. In our experiments, we employ well-known data augmentation techniques to augment the training data set, including randomly cropping the input image triplet (degraded image, corresponding mask and ground-truth) from 350×350 to 256×256, and randomly rotating (no rotation, 90°, 180° and 270°) and flipping them. The proposed AANet is implemented with the well-known PyTorch package and trained on an NVIDIA GeForce RTX 3090 GPU with a batch size of 28 for 200 epochs. The convolution weights are initialized by the method of He et al. (2015) and optimized by the ADAM algorithm (Kingma & Ba 2014) with β1 = 0.500 and β2 = 0.999. The initial learning rate is set to 2e−4 and is halved at the 100th and 150th epochs successively.
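The optimization setup described above corresponds to the following sketch; the stand-in module and the skeleton training loop are placeholders for the actual AANet generator and data pipeline.

```python
import torch
import torch.nn as nn

# A stand-in module; in practice this would be the AANet generator of Section 2.1.
generator = nn.Conv2d(1, 1, 3, padding=1)

def init_weights(m):
    if isinstance(m, (nn.Conv2d, nn.ConvTranspose2d)):
        nn.init.kaiming_normal_(m.weight)      # initialization of He et al. (2015)
generator.apply(init_weights)

optimizer = torch.optim.Adam(generator.parameters(), lr=2e-4, betas=(0.5, 0.999))
# Halve the learning rate at the 100th and 150th epochs, as described above.
scheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer, milestones=[100, 150], gamma=0.5)

for epoch in range(200):
    # ... iterate over batches of 28 {I_d, I_m, I_gt} triplets, compute the hybrid loss,
    # back-propagate and step the optimizer ...
    scheduler.step()
```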

3.3. Comparisons with State-of-the-Art Methods

The proposed AANet is compared to three benchmarks: SE-DESAT (Guastavino et al. 2019), PCGAN (Yu et al. 2021) and MCNet (Yu et al. 2022).

For the subjective comparison, seven samples with saturated regions of different sizes are selected from the data set, as shown in Figure 7. It can be observed that both the AANet and the two benchmarks can recover the whole image well with sharp object edges and rich image structures, but MCNet (Yu et al. 2022) and AANet produce structures more consistent with the ground-truth. MCNet (Yu et al. 2022) and AANet can generate finer texture structures, while PCGAN sometimes produces slight blocking artifacts at the peak of saturation, indicating the benefit of non-local information for image desaturation. Specifically, MCNet employs validness migratable convolution (VMC) to exploit non-local information by copying surrounding pixels before the convolution operation, while AANet refers to non-local pixels through a lightweight attention module. For the objective evaluation, the peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) (Wang et al. 2004) are computed over the testing set for each splitting of the data set, and their means and STDs are listed in Table 1. It can be seen that the two AANets outperform the two benchmarks respectively, achieving PSNR improvements of 1.0 dB and 0.54 dB over PCGAN and MCNet. The success of the two AANets is due to the exploration of non-local information. In addition, the "CC" module is more beneficial to PCGAN than to MCNet, since the latter has already explored non-local information through the VMC. We also give the STDs of PSNR and SSIM in Table 1. It can be observed that the STDs of PSNR/SSIM are quite small, indicating that the models are stable. In addition, they are comparable among all the tested models.
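PSNR and SSIM over a testing split can be computed as sketched below; normalizing each image pair by the ground-truth maximum is one plausible choice of data range, not necessarily the convention used in the paper.

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def evaluate(pred, gt):
    """PSNR and SSIM between a desaturated image and its ground-truth, both scaled to [0, 1]."""
    data_max = gt.max()
    pred, gt = pred / data_max, gt / data_max
    psnr = peak_signal_noise_ratio(gt, pred, data_range=1.0)
    ssim = structural_similarity(gt, pred, data_range=1.0)
    return psnr, ssim

# Mean and standard deviation over the eight train/test splits described above, e.g.:
# scores = np.array([evaluate(p, g) for p, g in test_pairs])
# print(scores.mean(axis=0), scores.std(axis=0))
```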

Table 1. Quantitative Comparison of PSNR/SSIM (Mean/STD) between Two AANet Models (PCGAN+CC and MCNet+CC) and Two Benchmarks (PCGAN, MCNet)

Figure 7. Subjective quality comparison among SE-DESAT (Guastavino et al. 2019), PCGAN (Yu et al. 2021), MCNet (Yu et al. 2022) and AANet ("PCGAN+CC" and "MCNet+CC," where "CC" represents criss-cross attention).

Comparing AANet with MCNet, the two have different mechanisms for exploring non-local information: the former works through the popular attention module, while the latter works through a specially designed convolution operator, namely the VMC. From Table 1, the former is slightly superior to the latter with respect to both PSNR and SSIM. In addition, the attention module is more flexible for exploring non-local information: it can easily be embedded in any backbone neural network and easily optimized to adapt to the specific task.

We also apply the trained AANet to real saturated images of long-time exposure to evaluate its performance in a real scenario. The visual quality comparison among SE-DESAT (Guastavino et al. 2019), PCGAN, MCNet and AANet ("PCGAN+CC," "MCNet+CC") is shown in Figure 8. It can be observed that the saturated region can be repaired to some extent by all of the competing models, where the size of the saturated region shrinks obviously. Compared to PCGAN and MCNet, AANet demonstrates a more natural and appealing visual effect, especially for large holes.

Figure 8. Subjective quality comparison for real saturated images among SE-DESAT (Guastavino et al. 2019), PCGAN (Yu et al. 2021), MCNet (Yu et al. 2022) and AANet ("PCGAN+CC" and "MCNet+CC," where "CC" represents criss-cross attention).

3.4. Exploring Attention

To study how the attention module should be embedded for the best trade-off between efficiency and complexity, we performed a set of experiments by embedding the attention module in different layers of the network. The backbone network of AANet is a U-Net, where both the encoder and the decoder can embed the attention module in different layers. The experiments are listed in Table 2, where "E" and "D" represent the encoder and the decoder respectively, "CC" represents criss-cross attention, and the numbers in parentheses indicate the layers where the "CC" module is embedded.

Table 2. Quantitative Comparison with State-of-the-Art Methods on the Testing Set

From Table 2, we can conclude the following: (1) the attention module contributes more to low-level image features, i.e., the shallow layers of a neural network; placing the attention module in the first three layers brings an obvious improvement; (2) the encoder benefits more than the decoder from the attention module, since the former can encode the original pixel-level information into compressed features through the attention module, while the latter can only access high-level image features; this is consistent with the result of the VMC in MCNet (Yu et al. 2022); (3) tests over individual layers and combined layers demonstrate that embedding the attention module into the first two layers achieves a good trade-off between efficiency and complexity.

4. Conclusions and Discussion

This paper proposes a criss-cross attention augmented deep neural network, namely AANet, to repair saturated images of SDO/AIA. The experimental results verify that the attention mechanism really makes a difference in the image desaturation task. Unlike general image denoising or enhancement, the information in saturated regions is completely rather than partially lost in our task. Thus, an attention module which borrows information from non-local regions is significantly important for recovering the lost information. Compared to the benchmarks, AANet performs better in both qualitative and quantitative comparisons, which is attributed to the criss-cross attention module efficiently exploring non-local information.

    Acknowledgments

This work was supported by the National Key R&D Program of China (Nos. 2021YFA1600504 and 2022YFE0133700), and the National Natural Science Foundation of China (NSFC) (Nos. 11790305, 11873060 and 11963003).
