
    Image Desaturation for SDO/AIA Using Mixed Convolution Network


    Xuexin Yu, Long Xu, Zhixiang Ren, Dong Zhao, and Wenqing Sun

    1 National Astronomical Observatories, Chinese Academy of Sciences, Beijing 100101, China; lxu@nao.cas.cn

    2 University of Chinese Academy of Sciences, Beijing 100049, China

    3 Peng Cheng National Laboratory, Shenzhen 518000, China

    4 National Space Science Center, Chinese Academy of Sciences, Beijing 100190, China

    5 State Key Laboratory of Virtual Reality Technology and Systems, School of Computer Science and Engineering, Beihang University, Beijing 100191, China

    Received 2022 March 16; revised 2022 April 6; accepted 2022 April 22; published 2022 May 20

    Abstract The Atmospheric Imaging Assembly (AIA) onboard the Solar Dynamics Observatory (SDO) provides full-disk solar images with high temporal cadence and spatial resolution over seven extreme ultraviolet (EUV) wave bands. However, when violent solar flares happen, images captured in the EUV wave bands may saturate in active regions, resulting in signal loss. In this paper, we propose a deep learning model to restore the lost signal in saturated regions by referring to both the unsaturated/normal regions within a solar image and a statistical probability model learned from massive normal solar images. The proposed model, namely the mixed convolution network (MCNet), is built on a conditional generative adversarial network (GAN) and the combination of partial convolution (PC) and validness migratable convolution (VMC). These two convolutions were originally proposed for image inpainting. They operate only on unsaturated/valid pixels, followed by compensation for the deviation of PC/VMC relative to standard convolution. Experimental results demonstrate that the proposed MCNet achieves favorable desaturation results for solar images and outperforms state-of-the-art methods both quantitatively and qualitatively.

    Key words: Sun: activity – Sun: atmosphere – Sun: chromosphere

    1. Introduction

    The Atmospheric Imaging Assembly (AIA) (Lemen et al. 2012) onboard the Solar Dynamics Observatory (SDO) (Pesnell et al. 2012) is composed of four dual-channel normal-incidence telescopes that capture full-disk images of the Sun's atmosphere over seven extreme ultraviolet (EUV) wave bands (94 Å, 131 Å, 171 Å, 193 Å, 211 Å, 304 Å, 335 Å) with a spatial resolution of 4096×4096 and a temporal cadence of 12 s. These data provide an unprecedented EUV view for studying the structure and dynamics of the solar atmosphere.

    However, when a solar flare occurs, the images captured by SDO/AIA in the EUV wave bands may present two kinds of artifacts, saturation and diffraction, as shown in Figure 1(a), which are closely associated with the imaging process of AIA. Concretely, an image of AIA is the result of convolution between the incoming photon flux and the point-spread function (PSF), which describes the response of AIA to an ideal point source. This process is formulated as

    $$I = (A_c + A_d) \otimes f = A_c \otimes f + A_d \otimes f, \tag{1}$$

    where I is an image recorded by AIA, ⊗ denotes the convolution operator, f denotes the actual incoming photon flux, and A_c and A_d are the diffusion component and diffraction component of the PSF, respectively. As shown in Figures 1(b) and (c), A_c is a core peak, and A_d is a peripheral regular diffraction pattern of varying intensity that replicates the core peak. A diffraction fringe is the convolution between A_d and f. It becomes apparent above the background with increasing intensity of a given peak in f. Saturation happens in the A_c ⊗ f term, and is actually categorized into primary saturation and secondary saturation/blooming. The former occurs because the charge-coupled device (CCD) pixels cannot accommodate the additional charges of the incoming photon flux f, while the latter results when the additional charges spill into neighboring pixels. From Equation (1), intense incoming flux may lead to signal loss in the A_c ⊗ f component in case of saturation, but the flux is also coherently and linearly scattered to other regions due to diffraction (A_d ⊗ f) (Guastavino et al. 2019). Therefore, the signal lost to primary saturation is present in the diffraction fringes to some extent, so it can be partially retrieved from them (Schwartz et al. 2014; Torre et al. 2015). In principle, DESAT (Schwartz et al. 2015) formulated the recovery of lost signal in saturated regions as an inverse diffraction problem, which is described as

    $$I_d = A_d \otimes f + B_d, \tag{2}$$

    Figure 1. (a) An example of a saturated image in active region 12130 for SDO/AIA at the 193 Å wave band, with the over-saturated region and the diffraction fringes highlighted by blue bounding boxes (the event occurred at 14:47:06 UT, 2014 August 1). (b) The complete point-spread function (PSF) of the 193 Å wave band. (c) Zoomed-in central part of the PSF, where the region highlighted by the red bounding box is the diffusion component denoted by A_c, and the rest is the diffraction component denoted by A_d.

    where I_d is the known recorded image in the diffraction regions, and B_d is the unknown saturated image background related to the diffraction fringes. In Schwartz et al. (2015), B_d is estimated by interpolation of two neighboring unsaturated images. These two unsaturated images are obtained by reducing the exposure time, which is automatically triggered by the feedback system of SDO/AIA during solar flares. However, DESAT becomes ineffective for large solar flares because the neighboring images of short exposure time may also be saturated, e.g., during the super storm of 2017 September. To solve this issue, Guastavino et al. (2019) proposed Sparsity-Enhancing DESAT (SE-DESAT), where the saturated image background is estimated not from consecutive unsaturated images but from the current saturated image itself. However, in both methods, the segmentation of diffraction fringes and primary saturation regions and the estimation of the background affect the desaturated results. In addition, the blooming regions cannot be restored by either method.

    With the significant success of deep learning in image inpainting, two learning-based approaches, Mask-Pix2Pix (Zhao et al. 2019) and PCGAN (Yu et al. 2021), were proposed to desaturate solar images in our previous efforts. They differ from DESAT (Schwartz et al. 2015) and SE-DESAT (Guastavino et al. 2019) in three aspects. First, DESAT and SE-DESAT explicitly model the problem and resolve it under the assumption that the signal lost in saturated regions may be present in the diffraction fringes of unsaturated regions, while Mask-Pix2Pix and PCGAN model the problem implicitly by using a neural network. Specifically, a neural network first learns the distribution of unsaturated images from massive data, and then infers the lost signal in saturated regions from the well-learned distribution. Second, Mask-Pix2Pix and PCGAN have stronger representation ability than DESAT and SE-DESAT because neural networks can theoretically approximate any complex function (Hornik et al. 1989; Cybenko 1989; Leshno et al. 1993). Third, compared with DESAT and SE-DESAT, Mask-Pix2Pix and PCGAN automatically extract the relevant information (including diffraction fringes) to restore the whole saturated region, including blooming and primary saturation, in an end-to-end optimization manner; they need neither explicit segmentation of diffraction fringes and primary saturation regions nor estimation of the background. In an image, a saturated region contains useless/invalid pixels. Once a standard convolution slides to the boundary of a saturated region, invalid pixels participate in the convolution and bias its output, as in Mask-Pix2Pix (Zhao et al. 2019). To overcome this problem, partial convolution (PC) (Liu et al. 2018) was employed to replace standard convolution in our previous effort (Yu et al. 2021); PC excludes saturated pixels from the block-wise convolution and compensates its deviation so as to approach standard convolution as closely as possible.

    In this paper, to further improve desaturation results, we propose a mixed convolution network (MCNet), where validness migratable convolution (VMC) (Wang et al. 2021) and partial convolution (PC) (Liu et al. 2018) are employed in the encoder and decoder of the generator, respectively. These two types of convolutions extract features only from normal regions, and they compensate the deviation caused by saturated pixels in the encoder and decoder in different ways, which benefits the recovery of saturated regions.

    The rest of the paper is organized as follows. Section 2 introduces the data set used in this work. Section 3 introduces the network architecture, convolutions and loss functions of the proposed model in detail. Experimental results are provided in Section 4. Conclusions and discussion are given in Section 5.

    Figure 2. A sample in the desaturation data set established by Yu et al. (2021), which is composed of four images: I_sat, I_gt, I_m and I_d. I_sat is a saturated image and I_gt is the unsaturated image immediately following I_sat. I_m is a binary mask which indicates the unsaturated and saturated pixels of I_sat by 1 and 0, respectively. I_d is the degraded image obtained by I_gt ⊙ I_m.

    Figure 3. The architecture of the proposed mixed convolution network (MCNet). The generator learns a mapping from I_m and I_d to I_gt. The discriminator and the loss functions supervise the learning of the generator by classifying fake {I_d, I_g, I_g^g} and real {I_d, I_gt, I_gt^g} triplets and by minimizing the distance between I_g and I_gt at the pixel level and feature level, respectively.

    Figure 4. The differences among standard convolution, partial convolution and validness migratable convolution when the receptive field of the convolution contains saturated and unsaturated pixels. The white and orange boxes denote saturated and unsaturated pixels, respectively. The blue grid marks the receptive field. The green boxes represent unsaturated pixels which are used to fill saturated locations in the receptive field.

    Figure 5. Visual quality comparison of desaturated results by Mask-Pix2Pix (Zhao et al. 2019), PCGAN (Yu et al. 2021), our MCNet and MCNet (w/o VMC). From top to bottom, the first, third and fifth rows are full images, and the others are zoomed-in patches.

    Table 1 Quantitative Comparison with State-of-the-art Methods on Testing Set

    2. Data Set

    To train and evaluate the proposed model, the desaturation data set (Yu et al. 2021) is used in this work, which collected M-class and X-class solar flare data at 193 Å of SDO/AIA (Lemen et al. 2012) from 2010 to 2017. After a series of data pre-processing steps, such as normalization by exposure time and scaling with the log function, each sample is composed of an image quadruple, namely the saturated, mask, degraded and unsaturated images, each of size 256×256, as shown in Figure 2. It is worth noting that the saturated image I_sat is only utilized to obtain the realistic shape of saturated regions by imposing a threshold on it, which yields the mask I_m. The degraded image I_d is the result of I_gt ⊙ I_m (⊙ represents the element-wise multiplication operator). During training, the triplet {I_d, I_m, I_gt} is fed into our model to optimize the network parameters. The whole data set contains about 18,700 samples. Following Yu et al. (2021), we split the data set in chronological order: 2012-2017 for training (15,700 samples) and 2010-2011 for testing (3000 samples).
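    For concreteness, the sample construction can be sketched as follows; the function name, the threshold value and the NumPy formulation are illustrative assumptions, not the exact pipeline of Yu et al. (2021):

```python
import numpy as np

SAT_THRESHOLD = 0.9  # hypothetical threshold on the normalized, log-scaled intensity

def make_sample(i_sat: np.ndarray, i_gt: np.ndarray):
    """Build one training triplet {I_d, I_m, I_gt} from a saturated frame I_sat
    and the immediately following unsaturated frame I_gt (both 256x256,
    already exposure-normalized and log-scaled)."""
    # Mask I_m: 1 for unsaturated/valid pixels, 0 for saturated pixels,
    # obtained by thresholding the saturated frame to get realistic hole shapes.
    i_m = (i_sat < SAT_THRESHOLD).astype(np.float32)
    # Degraded image: I_d = I_gt ⊙ I_m, i.e., the ground truth with
    # realistic saturation holes punched out.
    i_d = i_gt * i_m
    return i_d, i_m, i_gt
```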

    3. Method

    In this section, we first introduce the network architecture of the proposed model, and then discuss its convolutions and loss functions.

    3.1. Network Architecture

    The overall architecture of the proposed MCNet is shown in Figure 3; it is composed of a generator and a discriminator. The generator is a UNet-like architecture, which obtains favorable results in image inpainting. Concretely, it consists of an encoder that extracts a representation of the input, and a decoder that utilizes the representation to output an image with the same size as the original input. The basic module of the generator consists of VMC/PC, regional composite normalization (RCN) (Wang et al. 2021) and ReLU/LeakyReLU, and is stacked repeatedly in the generator. In addition, skip connections are implemented between corresponding layers of the encoder and decoder to shuttle different levels of information between them. The mask of the image is nontrivial during the encoding and decoding process because it indicates the unsaturated/normal regions and saturated regions/holes by 1 and 0, respectively. Therefore, it has an independent updating branch and is fed into the corresponding layer of the image branch to effectively extract features from images. Following Isola et al. (2017) and Zhu et al. (2017), the discriminator is a PatchGAN architecture which judges whether an input patch is real. For an input, the discriminator outputs a matrix where each element corresponds to an overlapping patch of the input. In our work, its input is a triplet including the input degraded image I_d, the generated image I_g or the ground truth I_gt, and the corresponding gradient map I_g^g or I_gt^g.
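    As an illustration of the discriminator described above, the following is a minimal PatchGAN-style sketch in PyTorch; the layer widths, depth and normalization are illustrative assumptions rather than the authors' exact configuration, while the triplet input (degraded image, generated or ground-truth image, and its gradient map) follows the description above:

```python
import torch
import torch.nn as nn

class PatchDiscriminator(nn.Module):
    """PatchGAN-style discriminator: maps an input triplet to a matrix of
    real/fake scores, each covering one overlapping patch of the input."""
    def __init__(self, in_channels=3):  # concat of I_d, I_g (or I_gt), gradient map
        super().__init__()
        def block(c_in, c_out, stride):
            return nn.Sequential(
                nn.Conv2d(c_in, c_out, kernel_size=4, stride=stride, padding=1),
                nn.InstanceNorm2d(c_out),
                nn.LeakyReLU(0.2, inplace=True))
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, 64, 4, stride=2, padding=1),
            nn.LeakyReLU(0.2, inplace=True),
            block(64, 128, 2),
            block(128, 256, 2),
            block(256, 512, 1),
            nn.Conv2d(512, 1, 4, stride=1, padding=1))  # score map, one value per patch

    def forward(self, i_d, i_x, i_xg):
        # Triplet input: degraded image, generated/ground-truth image, gradient map
        return self.net(torch.cat([i_d, i_x, i_xg], dim=1))
```

    Each element of the output score map judges one overlapping patch of the input, which is what makes the discriminator patch-based rather than image-based.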

    3.2. Convolutions

    Our goal is to restore the lost signal in saturated/invalid regions or holes from the unsaturated/valid/normal regions within an image. To achieve this goal, the convolution needs to meet two requirements. First, the output of the convolution should be conditioned only on unsaturated pixels, because valid information exists only in unsaturated regions. Second, the deviation caused by saturated pixels should be compensated when the receptive field of the convolution crosses the boundary between normal regions and holes. Therefore, validness migratable convolution (VMC) (Wang et al. 2021) and partial convolution (PC) (Liu et al. 2018) are employed in the encoder and decoder of the generator, respectively, instead of standard convolution, because both of them fulfill the above requirements. Given the input degraded image/feature map x, convolution weight w and bias b, the standard convolution is described as

    $$y(p_0) = \sum_{p_n \in \mathcal{R}} w(p_n) \cdot x(p_0 + p_n) + b, \tag{3}$$

    where p_0 denotes each position on the output feature map y, and R defines the receptive field and dilation size of the convolution. For example, a 3×3 receptive field with dilation 1 is formulated as

    $$\mathcal{R} = \{(-1, -1), (-1, 0), \ldots, (0, 1), (1, 1)\}. \tag{4}$$

    Figure 4 shows this process when the receptive field of the convolution contains both valid and invalid pixels. We can see that standard convolution uses both valid and invalid pixels and cannot treat these two kinds of pixels differently, so it fails to meet the two requirements proposed above.

    To solve this issue, PC is introduced, which is described as

    $$y(p_0) = \begin{cases} \sum_{p_n \in \mathcal{R}} w(p_n)\, x(p_0 + p_n)\, m(p_0 + p_n) \cdot \dfrac{\operatorname{sum}(\mathbf{1})}{\operatorname{sum}(\mathbf{m})} + b, & \text{if } \operatorname{sum}(\mathbf{m}) > 0, \\ 0, & \text{otherwise,} \end{cases} \tag{5}$$

    $$m(p_0) \leftarrow \begin{cases} 1, & \text{if } \operatorname{sum}(\mathbf{m}) > 0, \\ 0, & \text{otherwise,} \end{cases} \tag{6}$$

    where m is the mask corresponding to x (0 for saturated pixels and 1 for normal pixels), 1 is a constant all-ones matrix of the same size as m, sum(·) is taken over the receptive field centered at p_0, and m is updated by Equation (6) for the next layer. From Equations (5) and (6), the output of PC depends only on valid pixels thanks to the mask, and the deviation caused by invalid pixels is calibrated by scaling the standard output according to the proportion of valid pixels in the receptive field. This process is equivalent to filling holes with valid pixels in the current receptive field, thereby excluding invalid pixels and compensating their deviation before implementing standard convolution, as shown in Figure 4.
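    A compact PyTorch sketch of PC, following Equations (5) and (6), makes the scaling and mask update concrete; it is a minimal re-implementation of the idea in Liu et al. (2018), not their released code:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PartialConv2d(nn.Conv2d):
    """Partial convolution: convolve only valid pixels and rescale by the
    fraction of valid pixels in each receptive field (Equations (5)-(6))."""
    def forward(self, x, mask):
        # sum(m): number of valid pixels in each receptive field,
        # computed by convolving the mask with an all-ones kernel.
        ones = torch.ones(1, 1, *self.kernel_size, device=x.device)
        with torch.no_grad():
            mask_sum = F.conv2d(mask, ones, stride=self.stride,
                                padding=self.padding, dilation=self.dilation)
        # Convolve the masked input (invalid pixels zeroed out), without bias.
        out = F.conv2d(x * mask, self.weight, None, self.stride,
                       self.padding, self.dilation, self.groups)
        # Rescale by sum(1)/sum(m) wherever any valid pixel exists, add bias,
        # and zero the output elsewhere, as in Equation (5).
        valid = mask_sum > 0
        scale = self.kernel_size[0] * self.kernel_size[1] / mask_sum.clamp(min=1e-8)
        out = torch.where(valid, out * scale, torch.zeros_like(out))
        if self.bias is not None:
            out = torch.where(valid, out + self.bias.view(1, -1, 1, 1),
                              torch.zeros_like(out))
        # Mask update (Equation (6)): a location becomes valid if its
        # receptive field contained at least one valid pixel.
        return out, valid.float()
```

    For example, `PartialConv2d(1, 64, kernel_size=3, padding=1)` would consume the degraded image and its mask and return rescaled features together with the updated mask for the next layer.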

    VMC solves the above issue in a different way. It consists of feature migration, regional combination, convolution and mask updating, which are formulated as

    $$y(p_0 + p_n) = x(p_0 + p_n + \Delta p_n), \quad n = 1, \ldots, N, \tag{7}$$

    $$y_{rc} = m \odot x + (1 - m) \odot y, \tag{8}$$

    $$z(p_0) = \sum_{p_n \in \mathcal{R}} w(p_n)\, y_{rc}(p_0 + p_n) + b, \tag{9}$$

    where the irregular receptive field R is augmented with the feature migration Δp_n, and y is the deformed feature map of x. The Δp_n is automatically learned from the input feature map, with n = 1, ..., N and N = |R|.

    After validness migratable convolution, the mask m is updated by Equation (6). From Equations (7)–(9), we can see that the holes of the input feature map x are first filled with surrounding pixels in a learnable way, the filled regions are then copied into the holes of x by Equation (8), and finally a standard convolution is applied to the new feature map y_rc. This process can be viewed simply as filling holes with pixels outside the receptive field, avoiding the interference of invalid pixels before applying standard convolution, as shown in Figure 4.
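    The three VMC steps can be sketched as below. This is a strong simplification made only for illustration: Wang et al. (2021) learn per-tap offsets Δp_n for each position of the receptive field, whereas this sketch predicts a single 2D offset field and warps the features by bilinear sampling; the regional combination and the final standard convolution follow Equations (8) and (9), and the max-pool mask update approximates Equation (6):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class VMCBlock(nn.Module):
    """Illustrative VMC-style block: learned feature migration fills holes,
    regional combination keeps valid pixels untouched, then standard conv."""
    def __init__(self, c_in, c_out, kernel_size=3):
        super().__init__()
        # Offset branch: predicts a 2D displacement per location, learned
        # from the input features and mask (simplification of per-tap Δp_n).
        self.offset = nn.Conv2d(c_in + 1, 2, kernel_size, padding=kernel_size // 2)
        self.conv = nn.Conv2d(c_in, c_out, kernel_size, padding=kernel_size // 2)
        self.mask_pool = nn.MaxPool2d(kernel_size, stride=1, padding=kernel_size // 2)

    def forward(self, x, mask):
        n, _, h, w = x.shape
        # (7) Feature migration: warp x by learned offsets so valid pixels
        # (possibly outside the local receptive field) move into holes.
        off = self.offset(torch.cat([x, mask], dim=1))        # (N, 2, H, W)
        ys, xs = torch.meshgrid(torch.arange(h, device=x.device),
                                torch.arange(w, device=x.device), indexing="ij")
        grid_x = (xs + off[:, 0]) / (w - 1) * 2 - 1           # normalize to [-1, 1]
        grid_y = (ys + off[:, 1]) / (h - 1) * 2 - 1
        y = F.grid_sample(x, torch.stack([grid_x, grid_y], dim=-1),
                          align_corners=True)
        # (8) Regional combination: copy migrated features only into holes.
        y_rc = mask * x + (1 - mask) * y
        # (9) Standard convolution on the combined feature map.
        out = self.conv(y_rc)
        # Mask update: a location becomes valid if any valid pixel is nearby.
        return out, self.mask_pool(mask)
```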

    Both convolutions extract features only from normal regions, but they compensate the deviation caused by saturated pixels in different ways: PC uses valid pixels inside the receptive field, while VMC uses valid pixels outside the receptive field, as shown in Figure 4. To obtain more photorealistic recovery of saturated regions/holes, we apply VMC and PC to the encoder and decoder of the generator, respectively. For the encoder, VMC has more choices of valid pixels outside the receptive field to fill holes, as shown in Figure 4(c), while PC is limited to the receptive field, as shown in Figure 4(b). Roughly speaking, saturated regions/holes are gradually filled as the convolutions progress, so the encoder finally produces hole-free encoding features for the decoder. In the decoder, encoding features with holes from each layer of the encoder are provided via skip connections, while the decoding features coming from the last layer of the encoder are hole-free. Thus, PC is employed in the decoder, using decoding features within the receptive field to fill the holes indicated by the encoding features.

    3.3. Loss Functions

    To restore the missing signal in saturated regions, multiple losses are integrated to minimize the difference between the generated image and the ground truth at both pixel level and feature level. The pixel-level losses include pixel reconstruction loss (Liu et al. 2018), gradient loss (Ma et al. 2020) and total variation loss (Johnson et al. 2016), while the feature-level losses include perceptual loss (Johnson et al. 2016), style loss (Gatys et al. 2016) and adversarial loss (Mao et al. 2017).

    Let I_d be the input degraded image, I_m the initial binary mask (0 for saturated/invalid regions, 1 for normal/valid regions), I_g the image generated by the generator, and I_gt the ground truth. We introduce the pixel reconstruction loss to guarantee pixel similarity of the output image. It is defined as

    $$\mathcal{L}_{rec} = \lambda_{hole\_rec} \big\| (1 - I_m) \odot (I_g - I_{gt}) \big\|_1 + \lambda_{valid\_rec} \big\| I_m \odot (I_g - I_{gt}) \big\|_1, \tag{10}$$

    where ‖·‖_1 denotes the L1 loss, and λ_hole_rec and λ_valid_rec are the corresponding weights for saturated regions and normal regions, empirically set to 100 and 10, respectively.
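    A minimal PyTorch rendering of Equation (10), assuming mean-reduced L1 distances and the mask convention above (I_m = 0 in holes):

```python
import torch

def l1(a, b):
    """Mean-reduced L1 distance."""
    return torch.mean(torch.abs(a - b))

def reconstruction_loss(i_g, i_gt, i_m, lambda_hole=100.0, lambda_valid=10.0):
    """Pixel reconstruction loss of Equation (10): weighted L1 over
    saturated holes (I_m = 0) and valid regions (I_m = 1)."""
    hole = l1((1 - i_m) * i_g, (1 - i_m) * i_gt)   # saturated regions
    valid = l1(i_m * i_g, i_m * i_gt)              # normal regions
    return lambda_hole * hole + lambda_valid * valid
```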

    The gradient loss (Ma et al. 2020) is adopted to recover structural information by imposing the L1 loss on gradient maps. It is formulated as

    $$\mathcal{L}_{gra} = \lambda_{hole\_gra} \big\| (1 - I_m) \odot \big( \operatorname{GM}(I_g) - \operatorname{GM}(I_{gt}) \big) \big\|_1 + \lambda_{valid\_gra} \big\| I_m \odot \big( \operatorname{GM}(I_g) - \operatorname{GM}(I_{gt}) \big) \big\|_1, \tag{11}$$

    Figure 6. The linear fitting results between ground-truth pixels and recovered pixels of saturated regions for the last example in Figure 5, where the x and y axes represent ground-truth pixels and recovered pixels, respectively. The red straight line represents x = y, while the dotted straight lines give the best linear fit between x and y.

    where λ_hole_gra and λ_valid_gra are the corresponding weights of saturated regions and normal regions, empirically set to 300 and 10, respectively. Here, GM(·) denotes an operator computing the gradient map of an image (Ma et al. 2020), which is described as

    $$\operatorname{GM}(I)(i, j) = \big\| \nabla I(i, j) \big\|_2, \tag{12}$$

    where ‖·‖_2 computes the length of the gradient vector ∇I(i, j) at each pixel location. To handle margin pixels, the input image I is zero-padded by 1-pixel dilation before extraction of the gradient map.
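    A sketch of the GM(·) operator; the exact difference stencil in Ma et al. (2020) may differ, so the forward differences and one-sided padding here are assumptions:

```python
import torch
import torch.nn.functional as F

def gradient_map(img):
    """Gradient-map operator GM(·): per-pixel L2 length of the forward
    differences, with 1-pixel zero padding so margins stay defined."""
    # img: (N, 1, H, W); pad right/bottom by one pixel before differencing
    padded = F.pad(img, (0, 1, 0, 1))             # (left, right, top, bottom)
    gx = padded[:, :, :-1, 1:] - img              # horizontal forward difference
    gy = padded[:, :, 1:, :-1] - img              # vertical forward difference
    return torch.sqrt(gx ** 2 + gy ** 2 + 1e-12)  # ‖∇I(i,j)‖₂ at each pixel
```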

    Total variation loss (Johnson et al. 2016) is included to eliminate artifacts at the boundary between saturated and unsaturated regions. It is calculated by

    $$\mathcal{L}_{tv} = \sum_{(i,j) \in P} \big\| I_{comp}^{i, j+1} - I_{comp}^{i, j} \big\|_1 + \sum_{(i,j) \in P} \big\| I_{comp}^{i+1, j} - I_{comp}^{i, j} \big\|_1, \tag{13}$$

    where P is the 1-pixel dilated region connecting saturated and unsaturated regions, and I_comp is the composited image defined below Equation (15).
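    A sketch of this total variation term, where the 1-pixel dilation of the hole region is approximated by max-pooling the hole mask; the mean reduction is an assumption:

```python
import torch
import torch.nn.functional as F

def tv_loss(i_comp, i_m):
    """Total variation loss restricted to P, the 1-pixel dilation of the
    hole region, smoothing the saturated/unsaturated boundary."""
    # P: holes (I_m = 0) dilated by one pixel via max-pooling the hole mask.
    hole = 1 - i_m
    p = F.max_pool2d(hole, kernel_size=3, stride=1, padding=1)
    dx = torch.abs(i_comp[:, :, :, 1:] - i_comp[:, :, :, :-1]) * p[:, :, :, 1:]
    dy = torch.abs(i_comp[:, :, 1:, :] - i_comp[:, :, :-1, :]) * p[:, :, 1:, :]
    return dx.mean() + dy.mean()
```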

    The adversarial loss (Mao et al. 2017) is adopted to make the recovered structural information more realistic at the feature level. Following the least-squares GAN formulation and the triplet inputs of Figure 3, it is formulated as

    $$\mathcal{L}_{adv} = \mathbb{E}\big[ \big( D(I_d, I_{gt}, I_{gt}^g) - 1 \big)^2 \big] + \mathbb{E}\big[ D(I_d, I_g, I_g^g)^2 \big]. \tag{14}$$

    To capture high-level semantic information and alleviate grid-shaped artifacts in recovered regions (Liu et al. 2018), we introduce the perceptual loss (Johnson et al. 2016):

    $$\mathcal{L}_{perc} = \sum_{i=1}^{3} \big\| \Psi_i(I_g) - \Psi_i(I_{gt}) \big\|_1 + \sum_{i=1}^{3} \big\| \Psi_i(I_{comp}) - \Psi_i(I_{gt}) \big\|_1, \tag{15}$$

    Figure 7. Visual quality comparison of desaturated results for real saturated images by Mask-Pix2Pix (Zhao et al. 2019), PCGAN (Yu et al. 2021), our MCNet and MCNet (w/o VMC). From top to bottom, the first, third and fifth rows are full images, and the others are zoomed-in patches.

    where I_comp = (1 − I_m) ⊙ I_g + I_m ⊙ I_gt, i.e., the combination of the recovered regions in the generated image and the normal regions in the ground truth. The perceptual loss computes the L1 loss in the feature domain between I_gt and each of I_g and I_comp. Ψ_i is the feature map of the ith pooling layer of VGG-16 (Simonyan & Zisserman 2015); here, the first three pooling layers are adopted in Equation (15).

    Style loss (Gatys et al. 2016) is effective for capturing semantic information; it first computes the Gram matrix for each feature map of VGG-16 and then calculates the L1 loss. It is therefore defined as

    $$\mathcal{L}_{style} = \sum_{i=1}^{3} K_i \big\| \Psi_i(I_g)^{\top} \Psi_i(I_g) - \Psi_i(I_{gt})^{\top} \Psi_i(I_{gt}) \big\|_1 + \sum_{i=1}^{3} K_i \big\| \Psi_i(I_{comp})^{\top} \Psi_i(I_{comp}) - \Psi_i(I_{gt})^{\top} \Psi_i(I_{gt}) \big\|_1, \tag{16}$$

    where K_i is a scaling factor given by 1/(H_i W_i C_i) for the ith layer of VGG-16, and the feature map Ψ_i(I) is a tensor of size H_i × W_i × C_i.
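    The perceptual and style losses of Equations (15) and (16) can be sketched together, since both share the VGG-16 features Ψ_i at the first three pooling layers; the single-channel-to-RGB tiling and the placement of K_i inside the Gram computation are assumptions of this sketch:

```python
import torch
import torchvision

# VGG-16 feature extractor, frozen; indices 4, 9, 16 are pool1, pool2, pool3.
vgg = torchvision.models.vgg16(
    weights=torchvision.models.VGG16_Weights.DEFAULT).features.eval()
for p in vgg.parameters():
    p.requires_grad_(False)
POOL_IDX = [4, 9, 16]

def vgg_features(x):
    """Collect Psi_i(x) at the first three pooling layers of VGG-16."""
    feats, h = [], x.repeat(1, 3, 1, 1)  # tile the 1-channel EUV image to 3 channels
    for i, layer in enumerate(vgg):
        h = layer(h)
        if i in POOL_IDX:
            feats.append(h)
        if i >= POOL_IDX[-1]:
            break
    return feats

def gram(f):
    """Scaled Gram matrix K_i * Psi^T Psi with K_i = 1/(H_i W_i C_i)."""
    n, c, h, w = f.shape
    f = f.view(n, c, h * w)
    return torch.bmm(f, f.transpose(1, 2)) / (c * h * w)

def perceptual_and_style(i_g, i_comp, i_gt):
    """Equations (15)-(16): L1 distances in VGG feature space and between
    Gram matrices, for both I_g and I_comp against I_gt."""
    l_perc = l_style = 0.0
    for fg, fc, ft in zip(vgg_features(i_g), vgg_features(i_comp), vgg_features(i_gt)):
        l_perc = l_perc + torch.mean(torch.abs(fg - ft)) + torch.mean(torch.abs(fc - ft))
        l_style = l_style + torch.mean(torch.abs(gram(fg) - gram(ft))) \
                          + torch.mean(torch.abs(gram(fc) - gram(ft)))
    return l_perc, l_style
```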

    4. Experimental Results

    Experiments are conducted to evaluate the proposed MCNet. We first compare our model with two state-of-the-art desaturation methods, Mask-Pix2Pix (Zhao et al. 2019) and PCGAN (Yu et al. 2021). Then the effect of VMC on overall performance is verified by an ablation experiment. The source code of MCNet can be accessed via GitHub (https://github.com/filterbank/MCNet).

    4.1. Implementation Details

    We evaluate the proposed model on the desaturation data set (Yu et al. 2021) described in detail in Section 2. In the following experiments, a series of data augmentation techniques is employed during training, including randomly cropping the input image triplet (the degraded image, corresponding mask and ground truth) from 350×350 to 256×256, randomly rotating by one of four angles (0°, 90°, 180° and 270°) and randomly flipping horizontally. Our model is implemented on the PyTorch platform and trained on a single NVIDIA GeForce RTX 3090 GPU with a batch size of 28 for 200 epochs. We initialize convolution weights using the initialization method proposed in He et al. (2015) and optimize them with the Adam algorithm (Kingma & Ba 2014) with β1 = 0.5 and β2 = 0.999. The initial learning rate is set to 2e-4, and it is halved at the 100th and 150th epochs successively.
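    The stated training configuration maps directly onto standard PyTorch utilities; `generator`, `train_loader` and `total_loss` are hypothetical placeholders for the MCNet generator, the augmented data loader and the weighted sum of the Section 3.3 losses:

```python
import torch
import torch.nn as nn

def init_weights(m):
    """He initialization (He et al. 2015) for convolution weights."""
    if isinstance(m, nn.Conv2d):
        nn.init.kaiming_normal_(m.weight)

def train(generator, train_loader, total_loss, epochs=200):
    """Training loop with the stated schedule: Adam (beta1=0.5, beta2=0.999),
    initial lr 2e-4, halved at epochs 100 and 150, 200 epochs in total."""
    generator.apply(init_weights)
    optimizer = torch.optim.Adam(generator.parameters(), lr=2e-4, betas=(0.5, 0.999))
    scheduler = torch.optim.lr_scheduler.MultiStepLR(
        optimizer, milestones=[100, 150], gamma=0.5)
    for epoch in range(epochs):
        for i_d, i_m, i_gt in train_loader:      # batches of 28 augmented triplets
            optimizer.zero_grad()
            i_g = generator(i_d, i_m)
            loss = total_loss(i_g, i_gt, i_m)    # weighted sum of Section 3.3 losses
            loss.backward()
            optimizer.step()
        scheduler.step()
```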

    4.2. Comparisons with State-of-the-Art

    Our MCNet is compared with two benchmarks, Mask-Pix2Pix (Zhao et al. 2019) and PCGAN (Yu et al. 2021), and with its variant MCNet (w/o VMC), in which the VMCs in the encoder are replaced by PCs to verify their contribution to the proposed model.

    Desaturated results on the testing set are shown in Figure 5; our method and the two benchmarks effectively recover the overall intensity distribution of the lost signal compared with the ground truths. However, Mask-Pix2Pix struggles to generate structural information, and there are apparent artifacts at the boundary between recovered regions and valid regions. The results of PCGAN and our approach are sharp and contain rich structural information, but the fine structures in our results are more consistent with the ground truths. Although MCNet (w/o VMC) also generates favorable results, MCNet is superior to it in structural details, which indicates the benefit of VMCs, which automatically fill holes by copying surrounding pixels before the convolution operation. Following Mask-Pix2Pix and PCGAN, we also employ peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) (Wang et al. 2004) as metrics to evaluate the models objectively. Table 1 shows the PSNR and SSIM results on the whole testing set, where our model outperforms the other two methods and its variant. Concretely, the proposed model improves PSNR by 4.0980 dB, 0.4769 dB and 0.6175 dB, and SSIM by 0.0999, 0.0160 and 0.0195, compared with Mask-Pix2Pix, PCGAN and MCNet (w/o VMC), respectively. Following Zhang et al. (2020), we also analyze the linear fit between the ground-truth pixels and recovered pixels in saturated regions. The linear fitting lines for the last example in Figure 5 are shown in Figure 6. The linear fitting slopes of Mask-Pix2Pix, PCGAN and MCNet (w/o VMC) are comparable, all close to 0.6000, while our MCNet outperforms them by a large margin with a slope of 0.7263. Our model is also superior to SE-DESAT (Guastavino et al. 2019), whose performance was shown to be inferior to PCGAN in Yu et al. (2021). Therefore, our MCNet achieves better results both qualitatively and quantitatively, and the VMCs contribute to the overall performance.
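    The slope analysis can be reproduced with an ordinary least-squares fit over hole pixels only; a sketch, assuming NumPy arrays and the mask convention of Section 2:

```python
import numpy as np

def saturated_region_slope(i_gt, i_g, i_m):
    """Fit recovered pixels against ground-truth pixels inside the
    saturated regions (I_m = 0); a slope near 1 means unbiased recovery."""
    x = i_gt[i_m == 0].ravel()   # ground-truth pixels in holes
    y = i_g[i_m == 0].ravel()    # recovered pixels in holes
    slope, intercept = np.polyfit(x, y, deg=1)
    return slope
```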

    In addition, we conduct experiments on real saturated images of solar flares to evaluate the proposed model. Figure 7 shows the desaturated results of Mask-Pix2Pix, PCGAN, MCNet and MCNet (w/o VMC). Although these models cannot completely restore all missing content in the saturated regions/holes, the size of the saturated regions shrinks noticeably. Mask-Pix2Pix fails to recover sharp structural content. Our MCNet performs the best, while the other three models achieve similar performance. However, the performance of our model here is slightly below that presented in Figure 5. The main reason lies in the intensity gap between training images and real saturated images, which results in a performance drop.

    5. Conclusions and Discussion

    This paper proposes the MCNet model to recover saturated regions of SDO/AIA images. Compared with the benchmarks, the proposed model achieves better results in both visual quality and quantitative comparison, which is attributed to applying different types of specialized convolutions (VMC and PC) to the encoder and decoder of the generator. However, there are still unrecovered or mis-recovered regions in real saturated images. Three directions could further improve our model. First, training images and real saturated images could be normalized with respect to exposure time to bridge the gap between them. Second, the EUV image and the corresponding magnetogram of the photosphere could be used jointly as the model input, providing additional information for recovering the lost signal. Third, the physical principles of SDO/AIA imaging could be integrated with the deep learning model to improve its robustness and reduce the risk of overfitting. In addition, the proposed model can be easily adapted to similar observation instruments through transfer learning.

    This work was supported by the Peng Cheng Laboratory Cloud Brain (No. PCL2021A13) and the National Natural Science Foundation of China (NSFC) under grants 11790305, 11973058 and 12103064.
