
    Fringe removal algorithms for atomic absorption images: A survey

Chinese Physics B, 2022, Issue 5

Gaoyi Lei (雷高益), Chencheng Tang (唐陳成), and Yueyang Zhai (翟躍陽)

1 School of Instrumentation and Optoelectronic Engineering, Beihang University, Beijing 100191, China

2 Quantum Sensing Center, Zhejiang Laboratory, Hangzhou 310000, China

3 Research Institute of Frontier Science, Beihang University, Beijing 100191, China

Keywords: atomic absorption image, fringe removal, principal component analysis, deep learning

    1. Introduction

Absorption imaging is an important technique for precise measurement[1–3] and for the study of many-body quantum systems.[4–6] The absorption imaging technique passes a probe beam through the atomic cloud, which leaves a shadow in the transmitted beam.[7] This shadow contains the spatial distribution of the atomic cloud and can be recorded by a CCD camera.[8,9] The optical density (OD) can be calculated from the atomic distribution, which is useful for quantum metrology,[10,11] phase transitions,[12,13] or dimensional crossover.[14,15] To compute the OD of the absorption image, researchers record a reference image without the atoms shortly after the absorption image and perform a logarithmic subtraction with the absorption image.[16] This survey denotes this result as OD_ref. In practice, OD_ref suffers from unwanted reflections, which generate stripes and Newton's rings.

The dynamics of the probe beam causes different residual fringe noises in OD_ref. These fringe noises make it difficult to distinguish the atom distribution from the background. To improve the quality of the OD, a fringe removal algorithm analyzes the fringe pattern to eliminate the influence of the noise around the atom distribution. As Fig. 1 shows, the fringe removal algorithm searches the edge areas to compute the fringe patterns. The edge areas can be chosen as the reference images or the edge parts of the absorption images, which contain the fringe signals without atom information. The algorithm then generates the ideal reference image, whose fringe pattern has been fitted to the test atom area. This fitting of the fringe patterns makes it possible to remove the fringe signals thoroughly with the generated ideal reference image.

According to the feature extraction approaches, this survey classifies the fringe removal algorithms (e.g., those listed in Table 1) into two categories: the image-decomposition based methods[16–21] and the deep-learning based methods.[22] The image-decomposition based methods decompose the reference images or the edge areas (without atom information) to calculate a fringe basis set. The deep-learning based methods directly predict the ideal reference image for the masked atom area with a trained network. Ockeloen et al. computed the singular vectors of the fringe patterns with singular value decomposition (SVD) on a dataset of hundreds of absorption images.[19] Niu et al. proposed the optimized fringe removal algorithm (OFRA) to reduce the noise signal, determining the fringe patterns with a modified conditional principal component analysis (PCA).[18] Ness et al. introduced deep learning into fringe removal research and adopted a U-net model to predict the ideal reference image of the absorption image.[22]

These studies usually coupled the fringe removal algorithms to specific absorption imaging systems, leaving a gap in interpreting the algorithm development itself. The aim of this survey is to review the fringe removal algorithms at the workflow level and to present the breakpoints of current fringe removal research. We draw the workflows of the enhanced PCA (EPCA) method[16] and the U-net method[22] in Algorithm 1 and Algorithm 2, respectively. The former is a complex image-decomposition based method, and the latter is a novel deep-learning based method. We then discuss the potential applications of the generative adversarial network (GAN) and transfer learning for atomic fringe removal. Finally, we conduct fringe removal experiments on the absDL ultracold image dataset[22] and evaluate the performance with the peak signal-to-noise ratio (PSNR). The results suggest that the SVD method[19] outperforms the other fringe removal algorithms, and that the deep-learning based method is feasible.

The remainder of this paper is organized as follows. Section 2 presents the image-decomposition based methods, and Section 3 presents the deep-learning based methods. Experiments with the fringe removal algorithms are conducted in Section 4, and Section 5 concludes this survey.

    Table 1. Details of the fringe removal algorithms for atomic absorption images.

Fig. 1. The flow chart of the fringe removal algorithms. Fringe patterns are extracted from the edge area (without the atom information) and are utilized to generate the ideal reference image of the atom area (with the atom information). These data samples are selected from the open-access absDL dataset.

    2. Image-decomposition based methods

This section reviews the image-decomposition based methods for processing absorption images. The edge areas are decomposed into matrix products, and the eigenvectors are collected to build the fringe basis set. We therefore refer to this kind of fringe removal algorithm as image-decomposition based methods.

Assume there are n absorption images and n recorded reference images (or n edge-area images), and the pixel number of each image is m. First, we subtract the dark-exposure background from the absorption images and the reference images. The absorption images are then stacked into the absorption matrix A (A ∈ R^{m×n}), and the reference images are stacked into the reference matrix L (L ∈ R^{m×n}). Suppose there exists a matrix A′ that excludes the atom information and has the same fringe structure as the absorption matrix. The OD is then calculated as

OD = log(A) − log(A′), (1)

where the logarithm operation is applied to every element of the matrices A and A′.
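As a minimal NumPy illustration of Eq. (1), the element-wise OD computation might look as follows; the array names A and A_ideal and the small positive floor (used to avoid the logarithm of zero) are illustrative assumptions, not part of the original implementations:

import numpy as np

def optical_density(A, A_ideal, eps=1e-6):
    """Element-wise OD = log(A) - log(A'), Eq. (1), for m x n image matrices."""
    A = np.clip(A, eps, None)             # guard against zero-intensity pixels
    A_ideal = np.clip(A_ideal, eps, None)
    return np.log(A) - np.log(A_ideal)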

We can regard the matrix L as a substitute for the ideal reference matrix A′ and denote the corresponding result as OD_ref. However, the dynamic difference of the imaging system causes residual fringe noises in OD_ref. The aim of the image-decomposition based methods is to calculate the fringe basis set from the training dataset and to reconstruct the ideal reference images of the absorption images. A classical workflow of the image-decomposition based methods first obtains the mean-subtracted matrix L̄ by subtracting its mean value from the reference matrix L, and then applies the singular value decomposition

L̄ = U Σ V^T, (2)

where U is the matrix of left singular vectors, Σ is the singular value matrix, and V is the matrix of right singular vectors.

The left singular vectors U are utilized to build the fringe basis set, and the mean value E(A) is subtracted from the matrix A, resulting in the mean-subtracted absorption matrix Ā. This classical workflow projects Ā onto the fringe basis set and adds E(A) to the result to generate the ideal reference matrix A′. The breakpoints of the image-decomposition based methods lie mainly in how the vectors U are computed and how the fringe basis set is built.
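A minimal NumPy sketch of this classical SVD workflow is given below. The array names (L_ref, A_abs) and the orthogonal projection onto the columns of U are illustrative assumptions, not the exact implementation of Ref. [19]:

import numpy as np

def svd_fringe_basis(L_ref):
    """Build a fringe basis from a stack of reference images (m pixels x n shots), Eq. (2)."""
    L_mean = L_ref.mean(axis=1, keepdims=True)
    U, s, Vt = np.linalg.svd(L_ref - L_mean, full_matrices=False)
    return U, L_mean                       # columns of U span the fringe patterns

def ideal_reference(A_abs, U):
    """Project mean-subtracted absorption images onto the fringe basis and add E(A) back."""
    A_mean = A_abs.mean(axis=1, keepdims=True)
    coeffs = U.T @ (A_abs - A_mean)        # least-squares coefficients (U has orthonormal columns)
    return U @ coeffs + A_mean             # reconstructed ideal reference matrix A'

Here L_ref and A_abs are m×n matrices whose columns are flattened images; the OD then follows from Eq. (1).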

The SVD method calculates the singular vectors of the mean-subtracted reference matrix L̄ and uses all of them to build the fringe basis set, without examining the importance of the different singular vectors for fringe removal.[19] The optimized fringe removal algorithm (OFRA) method masks the atom area in the projection of the matrix Ā. The PCA technique is then applied to search for the principal components of the fringe patterns. The OFRA method is reported to outperform the conditional PCA method in computing the physical parameters of time-of-flight (TOF) images, by excluding the atom information from the reconstruction of the ideal reference images.[18]

Algorithm 1. The workflow of the EPCA method
Input: The reference images L
Input: The atom absorption images A
Input: The dark background image G
Output: The optical density OD
Subtract G from L and A, obtaining L̄ and Ā.
Create an empty fringe basis set U.
for i = 1 : EpochNum do
    L̄_origin = L̄ − mean(L̄); initialization: L = L̄_origin
    for j = 1 : 8 do
        L_F = FourierTransform(L)
        (L_F)_filter = GaussianFilter(L_F)
        L_filter = InverseFourier((L_F)_filter)
        Select the largest singular vector u_j of L_filter
        filts = pinv(L · u_j), where pinv is the pseudo-inverse operation
        Fit the dataset with u_j: L = L − u_j · filts · L
    end for
    Select the largest four singular vectors in {u_1, ..., u_8} and add them into U.
    Compute the largest singular vector of L and add it into U.
end for
Project Ā onto the fringe basis set U and generate the ideal reference images A′.
Apply the logarithm operation and compute the optical density: OD = log(A) − log(A′).

The EPCA method applies a Fourier transformation before decomposing L̄.[16] L̄ is filtered with a two-dimensional bandpass filter centered at the highest-intensity pixel in the frequency domain. The filtered images are transformed back through the inverse Fourier transformation, yielding real and imaginary parts. The real and imaginary parts are stacked to construct one filtered fringe pattern. This fringe pattern is fitted to the dataset, and the above steps are repeated to generate the next filtered fringe pattern (as shown in Algorithm 1). The EPCA method computes 8 filtered fringe patterns and 1 normal fringe pattern in a single epoch, runs 15 epochs in total, and deprecates the useless patterns with a least-squares fit.[16]
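One filtering iteration of Algorithm 1 can be sketched in NumPy as follows. The Gaussian bandpass width, and the choice to stack the real and imaginary parts as extra columns, are assumptions for illustration and do not reproduce the exact filter of Ref. [16]:

import numpy as np

def filtered_fringe_pattern(L, shape, sigma=3.0):
    """One EPCA-style iteration: bandpass-filter each column of L around its strongest
    frequency peak and return the leading singular vector of the filtered stack.
    L     : (m, n) matrix, each column a flattened fringe image of 2D size `shape`
    shape : (height, width) of the original images
    """
    h, w = shape
    cols = []
    for col in L.T:
        F = np.fft.fftshift(np.fft.fft2(col.reshape(h, w)))          # frequency domain
        peak = np.unravel_index(np.argmax(np.abs(F)), F.shape)       # strongest frequency pixel
        yy, xx = np.indices(F.shape)
        gauss = np.exp(-((yy - peak[0]) ** 2 + (xx - peak[1]) ** 2) / (2 * sigma ** 2))
        img = np.fft.ifft2(np.fft.ifftshift(F * gauss))              # back to image domain
        cols.append(img.real.ravel())                                # stack real part
        cols.append(img.imag.ravel())                                # stack imaginary part
    L_filter = np.stack(cols, axis=1)
    U, s, Vt = np.linalg.svd(L_filter, full_matrices=False)
    return U[:, 0]                                                   # largest singular vector u_j

The returned vector u_j would then be fitted to and removed from L, as in the inner loop of Algorithm 1.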

Song et al. considered the defects of the ultracold imaging system and proposed a data augmentation method to extend the size of the fringe basis set F.[21] Data augmentation can enlarge the dataset without acquiring extra samples. Since the interference fringes appear as plane waves, spatial shifts along the horizontal and vertical axes are designed to augment the original absorption images in this data augmentation method; see the sketch after this paragraph. Cao et al. recognized the physical sources of the fringe structures in time-of-flight images with the PCA method.[17] They pointed out that the main physical sources of the fringe noises in one-dimensional optical lattices are the position fluctuation, the atom number fluctuation, the normal fluid fraction, and so on.
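A minimal sketch of such shift-based augmentation is shown below, assuming periodic boundary handling with np.roll; the shift range is a hypothetical choice and not the setting of Ref. [21]:

import numpy as np

def shift_augment(image, max_shift=5):
    """Generate spatially shifted copies of a 2D fringe image along both axes."""
    augmented = []
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            augmented.append(np.roll(image, shift=(dy, dx), axis=(0, 1)))
    return np.stack(augmented)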

The image-decomposition based methods analyze the singular vectors of the reference images and reconstruct the ideal reference images of the absorption images with the fringe basis set. Although the breakpoints of these methods differ, they all compute the fringe patterns as a fringe basis set in matrix form.

    3. Deep-learning based methods

This section reviews the deep-learning based fringe removal methods and discusses advanced deep learning techniques for processing absorption images. The deep-learning based methods learn the fringe patterns in the hidden states of a neural network and provide an approximation of the ideal light distribution in the masked atom area.

    3.1. The U-net method

The U-net method introduces deep learning techniques and image inpainting ideas into fringe removal for absorption images.[22] The U-net method masks the atom area and transforms the fringe pattern extraction problem into an image restoration problem. The deep neural network adopted in this method is the U-net model,[23] which consists of a down-sampling path, an up-sampling path, and skip connections between the two paths. The basic network layers in the U-net model are the convolution layer, the activation layer, the batch normalization (BN) layer, and so on.

The convolution layer applies convolution filters to the input and computes the feature maps, whose channel dimension equals the number of convolution filters.[24] The convolution filters are often designed as 3×3 or 5×5 matrices. The calculation of the (i, j)-th pixel of the convolution layer is

Conv(i, j) = (x_conv ∗ ω)[i, j], (3)

where x_conv is the input to the convolution layer, ω stands for the convolution filter, and ∗ is the convolution operation.
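A direct (unvectorized) NumPy sketch of Eq. (3) for a single-channel input with "valid" boundaries is given below; the function name and boundary handling are illustrative assumptions:

import numpy as np

def conv2d(x_conv, w):
    """Compute Conv(i, j) = (x_conv * w)[i, j] for a 2D input and a small filter.
    As in deep learning frameworks, the filter is not flipped (cross-correlation)."""
    kh, kw = w.shape
    out_h = x_conv.shape[0] - kh + 1
    out_w = x_conv.shape[1] - kw + 1
    out = np.zeros((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            out[i, j] = np.sum(x_conv[i:i + kh, j:j + kw] * w)
    return out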

As the data distribution shifts during the training of a deep neural network, the batch normalization (BN) layer is adopted to stabilize the data distribution during training. The BN layer smooths the network optimization so that the gradients are more predictive and the network converges faster.[25] The equation of the BN layer is

BN(x_BN) = (x_BN − μ) / √(σ² + ε), (4)

where x_BN is the input of the BN layer, μ is the mean value of x, and σ² is the variance of x. ε is a small number that prevents division by zero.
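A minimal NumPy illustration of Eq. (4) over a batch is shown below; the learnable scale and shift parameters of a full BN layer are omitted here for brevity:

import numpy as np

def batch_norm(x_bn, eps=1e-5):
    """Normalize a batch with its own mean and variance, Eq. (4)."""
    mu = x_bn.mean(axis=0)
    var = x_bn.var(axis=0)
    return (x_bn - mu) / np.sqrt(var + eps)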

Fig. 2. The network architecture of the U-net method. The size of the input is 476×476, and the size of the output is 190×190. The meanings of the blocks are presented in the box on the right. "BN" is the batch normalization layer, "Maxpool" is the 2D max-pooling layer, and "TransposeConv" is the 2D transpose convolution layer.

The non-linear activation layer, the rectified linear unit (ReLU),[26] is applied after the convolution layer and the BN layer in the U-net method. Other possible activation functions are the sigmoid, leaky ReLU, tanh, and so on. The equation of ReLU is

ReLU(x_ReLU) = max(0, x_ReLU), (5)

where x_ReLU is the input of the ReLU function.
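Equation (5) reduces to a single NumPy call, shown here together with a leaky variant for comparison; the leaky slope is an illustrative choice:

import numpy as np

def relu(x):
    """ReLU(x) = max(0, x), Eq. (5)."""
    return np.maximum(0.0, x)

def leaky_relu(x, slope=0.01):
    """Leaky ReLU keeps a small gradient for negative inputs."""
    return np.where(x > 0, x, slope * x)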

The workflow of the U-net method is detailed in Algorithm 2. The U-net method converts the atom images and the reference images to a 32-bit type and crops them to 476×476 pixels. The central area of the recorded images is masked with a circle of 190-pixel diameter to create an image restoration problem. A U-net architecture is applied to learn from the masked images and predict the ideal light distribution around the atom area. As Fig. 2 shows, the network architecture consists of 18 convolution layers, 2 separable convolution layers, 4 max-pooling layers, 4 transpose convolution layers, and 5 cropping layers. These 32 layers contain about 20×10^6 parameters. The Adam optimizer with a learning rate of 5×10^-6 is applied to update the network parameters, and the batch size is 8. The objective function of the restoration problem in the U-net method is designed as the mean error loss function. The cropping 2D layers cut the data shape from 476×476 (input) to 190×190 (output), which forces the U-net model to predict the information around the central area. Ness et al. suggested that the U-net method is robust to variations of the working conditions over time.[22]
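The following is a minimal Keras sketch in the spirit of Fig. 2, assuming the TensorFlow/Keras platform cited in Ref. [27]. It shows only one down-sampling and one up-sampling stage plus the final cropping to 190×190, not the full 32-layer architecture of Ref. [22]; all layer widths are illustrative choices, and the MSE loss follows Algorithm 2:

import tensorflow as tf
from tensorflow.keras import layers, Model

def tiny_unet(input_size=476, output_size=190):
    """A toy U-net-style model: one encoder block, one decoder block, central crop."""
    inputs = layers.Input(shape=(input_size, input_size, 1))

    # Encoder: convolution -> BN -> ReLU, then down-sampling
    c1 = layers.Conv2D(32, 3, padding="same")(inputs)
    c1 = layers.BatchNormalization()(c1)
    c1 = layers.Activation("relu")(c1)
    p1 = layers.MaxPooling2D(2)(c1)

    # Bottleneck
    c2 = layers.Conv2D(64, 3, padding="same")(p1)
    c2 = layers.BatchNormalization()(c2)
    c2 = layers.Activation("relu")(c2)

    # Decoder: transpose-convolution up-sampling and a skip connection
    u1 = layers.Conv2DTranspose(32, 3, strides=2, padding="same")(c2)
    u1 = layers.concatenate([u1, c1])
    out = layers.Conv2D(1, 3, padding="same")(u1)

    # Crop 143 pixels from each side: 476 - 2*143 = 190
    crop = (input_size - output_size) // 2
    out = layers.Cropping2D(cropping=crop)(out)

    model = Model(inputs, out)
    model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=5e-6), loss="mse")
    return model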

The U-net method introduces the novel image inpainting idea into fringe removal research, which extends the applications of deep learning techniques to the processing of absorption images. The U-net method can learn the fringe patterns from massive data and is robust to variations in the experimental conditions. However, the U-net method requires sufficient data to guarantee its performance. The implementation of the deep-learning based methods is more complex than that of the image-decomposition based methods, from the perspective of the basic modules and software platforms. The layers of the U-net method are implemented on the Keras and TensorFlow platforms,[27] which are developed for deep learning techniques.

Algorithm 2. The workflow of the U-net method
Input: Train dataset, the absorption images A
Output: The optical density OD
Initialize the U-net model and the Adam optimizer.
Preprocess the data in the train dataset into the scale (0, 1).
for i = 1 : EpochNum do
    for j in the train dataset do
        Mask the central area of j as j_mask
        j_out = U-Net(j_mask)
        error_j = MSE(j_out, j)
        Update the Adam optimizer with error_j
        Update the network parameters of the U-net
    end for
end for
Mask the central area of the absorption images, and predict the ideal reference areas with the U-net network.
Re-scale the results of the U-net model to 32-bit, obtaining the reference images A′.
Calculate the optical density: OD = log(A) − log(A′).
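To connect Algorithm 2 with the model sketch above, the masking and training steps might look like the following NumPy sketch; the array names, the exact mask geometry, and the epoch count are assumptions rather than the settings of Ref. [22]:

import numpy as np

H = W = 476            # cropped input size used by the U-net method
RADIUS = 95            # half of the 190-pixel mask diameter

def mask_center(img):
    """Zero out the central circular atom area of a (476, 476) image."""
    yy, xx = np.ogrid[:H, :W]
    circle = (yy - (H - 1) / 2) ** 2 + (xx - (W - 1) / 2) ** 2 <= RADIUS ** 2
    out = img.copy()
    out[circle] = 0.0
    return out

def center_crop(img, size=190):
    """Return the central size x size patch that the network is trained to predict."""
    off = (H - size) // 2
    return img[off:off + size, off:off + size]

# images: (N, 476, 476) array of reference images already scaled to (0, 1)
# x = np.stack([mask_center(im) for im in images])[..., None]
# y = np.stack([center_crop(im) for im in images])[..., None]
# model = tiny_unet()
# model.fit(x, y, batch_size=8, epochs=10)   # Adam optimizer, MSE loss as in Algorithm 2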

    3.2. GAN and transfer learning

This subsection discusses the potential applications of GAN and transfer learning. Although Ness et al. succeeded in applying the U-net method to restore the masked absorption images,[22] the U-net model often suffers from boundary artifacts and blurry textures in image inpainting.[28] GAN is a popular technique for generating high-quality restoration samples and has been applied to the inpainting of medical images,[29] face images,[30] and scene images.[31]

A common GAN module consists of a generator and a discriminator. The generator learns the distribution of the actual dataset and produces generated samples, and the discriminator distinguishes the generated samples from the actual data. Thus, the generator and the discriminator play a min-max game, training the networks in an adversarial manner.[32] To obtain a better-behaved divergence for GAN training, the Wasserstein GAN (WGAN) was proposed, which imposes a Lipschitz restriction and measures the distance between data distributions with the Wasserstein distance.[33] Yu et al. combined the WGAN and contextual attention to capture global information and generate high-quality restoration samples.[28] This contextual WGAN method has the potential to solve the ineffectiveness problem of the U-net method, with loss functions on both the global areas and the masked areas.
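As a schematic illustration of the WGAN objective (not the contextual-attention model of Ref. [28]), the critic and generator losses can be written as follows; critic_real and critic_fake are hypothetical arrays of critic scores:

import numpy as np

def wgan_losses(critic_real, critic_fake):
    """WGAN estimates the Wasserstein distance from critic scores:
    the critic maximizes E[D(real)] - E[D(fake)] under a Lipschitz constraint,
    while the generator minimizes -E[D(fake)]."""
    critic_loss = np.mean(critic_fake) - np.mean(critic_real)   # minimized by the critic
    generator_loss = -np.mean(critic_fake)                      # minimized by the generator
    return critic_loss, generator_loss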

Current fringe removal algorithms explore the impact of fringe removal on a single physical system, without extending the algorithms across different physical systems. Assume there are two absorption imaging systems with different alkali metals, a source system and a target system. Current fringe removal algorithms require sufficient recorded data in both the source system and the target system, and the fringe basis set extracted in the source system is not reused in the target system. Transfer learning can conduct knowledge transfer across different domains.[34] Pan et al. proposed a transfer learning method named transfer component analysis (TCA), which maps the source domain and the target domain to a reproducing kernel Hilbert space to learn the transfer components.[35] TCA has the potential to transfer the fringe basis set from the source system to the target system.
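TCA-style methods reduce the distance between domains as measured by the maximum mean discrepancy (MMD). The sketch below gives a simple empirical MMD estimate with an RBF kernel, used here only to illustrate the domain-distance idea behind TCA; the kernel width and array names are hypothetical:

import numpy as np

def rbf_kernel(X, Y, gamma=1e-3):
    """Gaussian (RBF) kernel matrix between two sets of flattened images."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def mmd(X_source, X_target, gamma=1e-3):
    """Empirical maximum mean discrepancy between source- and target-system samples."""
    k_ss = rbf_kernel(X_source, X_source, gamma).mean()
    k_tt = rbf_kernel(X_target, X_target, gamma).mean()
    k_st = rbf_kernel(X_source, X_target, gamma).mean()
    return k_ss + k_tt - 2 * k_st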

The U-net method introduces the idea of image inpainting to fringe removal research. However, the U-net method suffers from the ineffectiveness of the convolution layers, and the acquired fringe patterns cannot be reused across different physical systems. GAN and transfer learning are popular techniques in computer vision, which can be applied to extend fringe removal research.

    4. Experiments

This section presents experiments with four fringe removal algorithms on the absDL dataset, to show the performance and implementation of the fringe removal algorithms. The four algorithms examined in this section are the SVD method,[19] the OFRA method,[18] the EPCA method,[16] and the U-net method.[22]

    4.1. Dataset and the PSNR evaluation

The atom absorption images and the reference images used in this section are selected from the absDL ultracold absorption image dataset. Ness et al. conducted experiments with a quantum degenerate Fermi gas of ⁴⁰K atoms and published the absDL dataset with 37000 reference images and 720 atom absorption images.[22] The images in the absDL dataset were taken with a laser at a wavelength of ~766.7 nm and a linewidth of 100 kHz. We select 100 atom absorption images as the test set and 250 reference images for the calculation of the fringe patterns in the image-decomposition methods. The number of reference images required by the image-decomposition methods is determined according to the descriptions in the original research articles. We then introduce the peak signal-to-noise ratio (PSNR) as the quality evaluation in our experiments. We mark the reference image as y and the reconstructed image as x. The fundamental mean square error (MSE) value is calculated as

MSE = (1/m) Σ_i (x_i − y_i)²,

and the PSNR follows as

PSNR = 10 log₁₀(MAX² / MSE),

where m is the pixel number and MAX is the maximum possible pixel value of the image.
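A small NumPy helper matching these standard definitions is given below; the choice MAX = 1 assumes images scaled to the (0, 1) range, as in Algorithm 2:

import numpy as np

def psnr(x, y, max_val=1.0):
    """Peak signal-to-noise ratio between a reconstruction x and a reference y."""
    mse = np.mean((x - y) ** 2)
    return 10.0 * np.log10(max_val ** 2 / mse)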

    4.2. Results

This subsection presents the OD results of the four fringe removal algorithms. The parameters and settings of the tested fringe removal algorithms are determined according to their original papers. The results show that the SVD method with 250 recorded images achieves the best performance, and the U-net method accomplishes fringe removal in a novel way.

The PSNR evaluation results are listed in Table 2, where the SVD method achieves a score of 32.4795. The U-net method outperforms the OFRA method and the EPCA method with a score of 32.2564. Figure 3 shows the OD results of the fringe removal algorithms on a random sample from the absDL dataset. The top row of Fig. 3 presents the reference image and the generated ideal reference images of the four fringe removal algorithms. The bottom row of Fig. 3 shows the OD results, which also suggest that the SVD method performs well. It is observed that the atom area in the result of the EPCA method is a little blurry compared with the other image-decomposition based methods, which may be caused by the information loss in the Gaussian filtering. The four corners of the OD result of the U-net method are nearly black, because the areas in the four corners are unmasked and the network only predicts the circular masked area.

The idea of the image-decomposition methods is to compute the fringe basis set and project the absorption images onto this set to remove the residual fringe structures. The SVD method, the OFRA method, and the EPCA method design different strategies for extracting the fringe basis set in the process of computing the singular vectors. However, the results of our experiments suggest that the simple SVD method can achieve outstanding performance with proper parameters and an adequate training set. The success of the U-net method points to future applications of deep learning techniques, which are developing fast in the computer vision area.

Table 2. The mean PSNR evaluation results.

Fig. 3. The OD results of the reference image and the fringe removal algorithms on one absorption sample. The top row shows the reference image and the generated ideal reference images, and the bottom row presents the OD results of the fringe removal algorithms. The test sample is selected from the ultracold absDL dataset.

    5. Discussion

This survey bridges the gap of interpreting the fringe removal algorithms at the workflow level, covering both the image-decomposition based methods and the deep-learning based methods. The workflows of the EPCA method and the U-net method are drawn to present their processing details. The experiments in this survey suggest that the SVD method outperforms the other image-decomposition based methods with proper settings. The image inpainting idea introduced by the U-net method can predict the ideal reference images in a novel way.

The advantages of the image-decomposition based methods are their simple implementation and clear interpretation. The image-decomposition based methods compute the singular vectors as the fringe basis set, whereas the deep-learning based methods learn the features of the fringe patterns in hidden states, which are hard to interpret. The drawbacks of the image-decomposition based methods are the required similarity between the reference images and the absorption images, and the manual parameter selection.

Deep learning usually relies on sufficient data to train neural networks and is able to capture complex high-dimensional features in the hidden states. The success of the U-net method provides a novel solution for fringe removal, different from prior fringe removal studies that extract the fringe patterns with PCA. Future work can explore the data efficiency of deep learning on absorption images.

    Acknowledgement

This research was funded by the National Natural Science Foundation of China (Grant No. 62003020).
