
    TendiffPure: a convolutional tensor-train denoising diffusion model for purification

2024-03-06 09:17:10

Mingyuan BAI1, Derun ZHOU1,2, Qibin ZHAO1

1 RIKEN AIP, Tokyo 103-0027, Japan

2 School of Environment and Society, Tokyo Institute of Technology, Tokyo 152-8550, Japan

    E-mail: mingyuan.bai@riken.jp; zhouderun2000@gmail.com; qibin.zhao@riken.jp

Received May 31, 2023; Revision accepted Jan. 3, 2024; Crosschecked Jan. 15, 2024

Abstract: Diffusion models are effective purification methods, where the noises or adversarial attacks are removed using generative approaches before pre-existing classifiers conduct classification tasks. However, the efficiency of diffusion models is still a concern, and existing solutions are based on knowledge distillation, which can jeopardize the generation quality because of the small number of generation steps. Hence, we propose TendiffPure as a tensorized and compressed diffusion model for purification. Unlike the knowledge distillation methods, we directly compress U-Nets as backbones of diffusion models using tensor-train decomposition, which reduces the number of parameters and captures more spatial information in multi-dimensional data such as images. The space complexity is reduced from O(N^2) to O(NR^2) with R ≤ 4 as the tensor-train rank and N as the number of channels. Experimental results show that TendiffPure more efficiently obtains high-quality purification results and outperforms the baseline purification methods on the CIFAR-10, Fashion-MNIST, and MNIST datasets for two noises and one adversarial attack.

    Key words: Diffusion models; Tensor decomposition; Image denoising

    1 Introduction

Diffusion models (Dhariwal and Nichol, 2021; Gao et al., 2023) have been ubiquitous in text, image, and video generation over the recent three years. They appeal to both academics and practitioners for their mode coverage, stationary training objective, and easy scalability. Among the generative models, compared with generative adversarial networks (GANs), diffusion models do not require adversarial training, so they are able to process a significantly larger range of feature distributions and hence avoid mode collapse. For the same reason, their training process is more stable than that of GANs. In terms of sample quality, diffusion models outperform variational autoencoders (VAEs) and normalizing flows (Ho and Salimans, 2021). They demonstrate strong capabilities as purification methods that remove both noises and adversarial attacks for data preprocessing, followed by the classifiers.

Benefiting from denoising score matching (Vincent, 2011) or sliced score matching (Song Y et al., 2020), diffusion models using score-based generative modeling methods are scalable to high-dimensional data in deep learning settings. However, they still suffer from low sampling speed, which is caused by the iterative generation process. Specifically, for each sampling or generation step, data are iteratively updated following the direction determined by the score until the mode is reached, where the score can be described by a score function. This score function is commonly approximated by a U-Net, which is the backbone of a diffusion model. A large variety of data naturally possess multi-dimensional spatial structures, which can easily be neglected by the convolution kernels of U-Nets (Ronneberger et al., 2015). U-Nets are the common backbone of diffusion models and in substance enable them to generate high-quality images compared with other generative models; nearly all U-Nets in pre-trained diffusion models have the same number of parameters, except a small number of them, such as the U-Nets in denoising diffusion implicit models (DDIMs) (Song JM et al., 2021). Nevertheless, the large number of parameters in U-Nets still prevents diffusion models from achieving efficient generation and purification.

With the purpose of obtaining efficient and high-quality purification and generation with diffusion models, a majority of existing solutions rely on knowledge distillation (Meng et al., 2023; Song Y et al., 2023). For these methods, the goal is to reduce the number of iterative steps to accelerate the generation process, where the student models are diffusion models. In practice, a limited number of steps in the student models can hardly achieve the same performance as the teacher models (Song Y et al., 2023). These knowledge distillation methods do not consider the number of parameters or the multi-dimensional structural information in data. Hence, the images generated by the compressed models can easily look unrealistic. Besides, LoRA, a fine-tuning method for pre-trained diffusion models, was recently proposed; it relies on matrix factorization, which reduces the number of parameters and tackles two-dimensional structural information (Hu et al., 2022). However, when pre-trained diffusion models are unavailable or when there is complicated multi-dimensional structural information, LoRA will not be so effective and other methods are demanded.

Given the aforementioned problems in the scalability of diffusion models, we propose to compress diffusion models for purification and evaluate their performance on purification tasks. Specifically, we design the tensor denoising diffusion purifier (TendiffPure), in which we tensorize the convolution kernels in U-Nets using tensor-train (TT) decomposition (Oseledets, 2011), as shown in Fig. 1, enhancing or at least not jeopardizing the purification quality and reducing the space complexity from O(N^2) to O(NR^2), with usually R ≤ 4 as the TT rank and N as the number of channels, especially for noisy or perturbed images (Li et al., 2019). This tensorization for compression distinguishes TendiffPure from knowledge distillation methods for diffusion models. We conduct three experiments on the CIFAR-10 (Krizhevsky and Hinton, 2009), Fashion-MNIST (Xiao et al., 2017), and MNIST (LeCun et al., 1998) datasets separately, on two noises and one adversarial attack: Gaussian noises, salt and pepper noises, and AutoAttack (Croce and Hein, 2020).

    2 Background

    2.1 Diffusion models for purification

Fig. 1 A brief summary of TendiffPure

Purification aims to eliminate noises and adversarial perturbations in data using generative models before classification. Unlike other defense methods, purification does not assume the forms of noises, adversarial attacks, or classification models. Hence, generative purification models do not require retraining of classifiers and are not trained with threat models. Diffusion models, as emerging generative models, have recently been scrutinized for purification (Nie et al., 2022) due to their extraordinary generative power. They purify noised or adversarially perturbed data in two phases. First, in the forward process of diffusion models, Gaussian noises are iteratively added to the noised or adversarially perturbed data until they become Gaussian noises as well. Afterwards, in the reverse process, the data are denoised iteratively to generate the purified data. Hence, the noises or adversarial perturbations are eliminated. Note that for datasets on which diffusion models have been pre-trained, we can directly use the pre-trained diffusion models for purification, and hence no training process is required. Besides, for those without pre-trained diffusion models, we need to first train diffusion models on clean, i.e., unperturbed, data, and then use these trained diffusion models for purification.
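The two-phase procedure above can be sketched in code. This is a minimal, illustrative DDPM-style purifier, not the paper's implementation: the noise schedule, step count, and the dummy `predict_noise` callable standing in for the trained U-Net are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def purify(x_a, betas, t_star, predict_noise):
    """Illustrative diffusion purification: noise the perturbed input x_a
    forward for t_star steps, then denoise it back step by step.
    `predict_noise(x, t)` stands in for the trained U-Net epsilon_theta."""
    alphas = 1.0 - betas
    alpha_bars = np.cumprod(alphas)

    # Forward process: jump directly to step t_star using the closed form
    # x_t = sqrt(alpha_bar_t) * x_0 + sqrt(1 - alpha_bar_t) * eps.
    eps = rng.standard_normal(x_a.shape)
    x = (np.sqrt(alpha_bars[t_star - 1]) * x_a
         + np.sqrt(1 - alpha_bars[t_star - 1]) * eps)

    # Reverse process: iteratively remove the predicted noise.
    for t in range(t_star, 0, -1):
        eps_hat = predict_noise(x, t)
        coef = betas[t - 1] / np.sqrt(1 - alpha_bars[t - 1])
        x = (x - coef * eps_hat) / np.sqrt(alphas[t - 1])
        if t > 1:  # add sampling noise except at the final step
            x = x + np.sqrt(betas[t - 1]) * rng.standard_normal(x.shape)
    return x

# Toy run with a zero "denoiser" just to exercise the control flow.
betas = np.linspace(1e-4, 0.02, 100)
x_a = rng.standard_normal((8, 8))
x0 = purify(x_a, betas, t_star=30,
            predict_noise=lambda x, t: np.zeros_like(x))
print(x0.shape)  # (8, 8)
```

In a real purifier, `predict_noise` would be a diffusion model trained on clean data, and `t_star` would be tuned so that the injected Gaussian noise drowns out the perturbation without destroying label semantics.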

Diffusion models have been the prevalent generative models in recent years. They impress the machine learning and deep learning community with their powerfulness in sample quality, sample diversity, and mode coverage (Ho et al., 2020; Dhariwal and Nichol, 2021; Song JM et al., 2021; Vahdat et al., 2021). Benefiting from these advantages, they become appealing tools for purification, for example, DiffPure (Nie et al., 2022), where noises and even adversarial attacks in the perturbed data x_a ∈ R^d, x_a ~ q(x), can be removed by diffusion models. The denoised or purified data should be as close to the clean data x ∈ R^d, x ~ p(x), as possible. A typical diffusion model consists of two procedures: a forward process and a reverse process. The forward process progressively injects Gaussian noises into the data, where the perturbed data x_a are diffused towards a noise distribution. For a discrete diffusion model, its forward process is formulated as

q(x_t | x_{t-1}) = N(x_t; √(1 - β_t) x_{t-1}, β_t I), t = 1, 2, ···, T, (1)

where β_t is the variance schedule, and its reverse process is parameterized as

p_θ(x_{t-1} | x_t) = N(x_{t-1}; μ_θ(x_t, t), Σ_θ(x_t, t)). (2)

Instead of predicting μ_θ(x_t, t), which is a linear combination of ε_θ(x_t, t) and x_t, it is common in practice to predict the noise component as part of μ_θ(x_t, t) using the noise predictor U-Net ε_θ(x_t, t) (Ho et al., 2020). Here θ is the parameter describing the mean and variance. The covariance predictor Σ_θ(x_t, t) can consist of learnable parameters for enhanced model quality (Nichol and Dhariwal, 2021). To thoroughly remove the noise or adversarial attack and keep the semantic information, Nie et al. (2022) proposed to add Gaussian noise in t* ∈ (0, T] steps.
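As a concrete sketch of the linear combination mentioned above, in the standard DDPM notation (Ho et al., 2020), with $\alpha_t = 1 - \beta_t$ and $\bar{\alpha}_t = \prod_{s=1}^{t}\alpha_s$, the mean is recovered from the noise predictor by

$$\mu_\theta(x_t, t) = \frac{1}{\sqrt{\alpha_t}}\left(x_t - \frac{\beta_t}{\sqrt{1-\bar{\alpha}_t}}\,\epsilon_\theta(x_t, t)\right),$$

which makes explicit that $\mu_\theta$ is a linear combination of $x_t$ and $\epsilon_\theta(x_t, t)$.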

    2.2 Tensor decomposition

Tensor decomposition and tensor networks are prevalent workhorses for multi-dimensional data analysis, including images (Luo et al., 2022), to capture their spatial structural information, to reduce the number of model parameters, and to avoid the curse of dimensionality. Here we refer to a multi-dimensional array as a tensor, where the number of "aspects" of a tensor is its order and the aspects are the modes of this tensor; for example, a 1024×768×3 image is a 3rd-order tensor with the sizes of mode 1, mode 2, and mode 3 being 1024, 768, and 3, respectively. The key idea of tensor decomposition and tensor networks is to dissect a tensor into the sum of products of vectors, as in CANDECOMP/PARAFAC (CP) decomposition (Carroll and Chang, 1970); of matrices and tensors, as in Tucker decomposition (Hitchcock, 1927; Tucker, 1966); of small-sized tensors, as in TT decomposition (Oseledets, 2011) and tensor ring decomposition (Zhao et al., 2016); and of tensor networks, such as the multi-scale entanglement renormalization ansatz (MERA) (Giovannetti et al., 2008). Among them, TT decomposition demonstrates its prevalence in a number of deep learning models for compression because of its low space complexity and its capability of improving the performance of deep learning models (Su et al., 2020). Specifically, TT decomposition considers a Dth-order tensor Y ∈ R^{I_1×I_2×···×I_D} as the product of D 3rd-order tensors X_d ∈ R^{R_{d-1}×I_d×R_d}, d = 1, 2, ···, D, with the rank R_d much smaller than the mode size I_d: Y = X_1 ×_3^1 X_2 × ··· ×_3^1 X_D. Here, X_d ×_3^1 X_{d+1} is the contraction of mode 3 of X_d and mode 1 of X_{d+1}. Note that for X_1 and X_D, R_0 = R_D = 1.
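The TT contraction described above can be demonstrated in a few lines of numpy. The mode sizes and ranks below are arbitrary illustrative values; the point is the chained mode-(3,1) contraction and the parameter count.

```python
import numpy as np

rng = np.random.default_rng(0)

# TT representation of a 4th-order tensor Y in R^{I1 x I2 x I3 x I4}
# with internal ranks (R1, R2, R3) and boundary ranks R0 = R4 = 1.
I, R = (4, 5, 6, 7), (1, 2, 3, 2, 1)
cores = [rng.standard_normal((R[d], I[d], R[d + 1])) for d in range(4)]

# Contract mode 3 of each core with mode 1 of the next (X_d x_3^1 X_{d+1}).
Y = cores[0]
for core in cores[1:]:
    Y = np.tensordot(Y, core, axes=([-1], [0]))
Y = Y.squeeze(axis=(0, -1))  # drop the size-1 boundary modes

print(Y.shape)  # (4, 5, 6, 7)
full = int(np.prod(I))
tt = sum(c.size for c in cores)
print(tt, "TT parameters vs", full, "dense entries")
```

With ranks far below the mode sizes, the TT form stores far fewer numbers than the dense tensor, which is exactly the compression exploited later for convolution kernels.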

    3 Tensorizing diffusion models for purification

As aforementioned, we aim to compress diffusion models from the perspective of reducing the parameter size while at least attaining performance similar to that of the uncompressed diffusion model on image denoising and purification tasks, i.e., using generative models to remove perturbations in data, including adversarial attacks. Therefore, we propose TendiffPure, which is a convolutional TT denoising diffusion model.

In each step of a generic diffusion model as in Eqs. (1) and (2), the key backbone is the U-Net ε_θ(x_t, t) in the reverse process. Hence, it provides the potential to compress the diffusion models by reducing the number of parameters of the U-Net. Note that the U-Net at each step of the reverse process shares the same parameters. For the U-Net ε_θ(x_t, t), we compress it as

ε_θ(x_t, t) := ConvTTUNet(x_t, t). (3)

For ConvTTUNet(x_t, t), each convolution kernel is parameterized using TT decomposition. In existing diffusion models, U-Nets often employ 2D convolution kernels, where each convolution kernel is W_i ∈ R^{O_i×C_i×K_i×D_i}, with O_i the number of output channels, C_i the number of input channels, K_i the first kernel size, and D_i the second kernel size. In TendiffPure, we decompose these 4th-order tensors into the following TT cores:

W_i = U_1 ×_3^1 U_2 ×_3^1 U_3 ×_3^1 U_4, (4)

where U_1 ∈ R^{1×O_i×R_{1,i}}, U_2 ∈ R^{R_{1,i}×C_i×R_{2,i}}, U_3 ∈ R^{R_{2,i}×K_i×R_{3,i}}, and U_4 ∈ R^{R_{3,i}×D_i×1}, as demonstrated in Fig. 2. This parameterization follows the standard TT decomposition in Section 2.2, where R_{0,i} = R_{4,i} = 1. Hence, the space complexity is reduced from O(N^2) to O(NR^2).
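A sketch of this four-core parameterization for a single kernel follows; the channel and kernel sizes are hypothetical values chosen for illustration, and the ranks follow the paper's R ≤ 4 regime.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 2D conv kernel W in R^{O x C x K x D} (output channels,
# input channels, and the two kernel sizes), as in a U-Net block.
O, C, K, D = 64, 64, 3, 3
R1, R2, R3 = 4, 4, 4  # TT ranks

# Four TT cores with boundary ranks 1, matching
# U1: 1xOxR1, U2: R1xCxR2, U3: R2xKxR3, U4: R3xDx1.
U1 = rng.standard_normal((1, O, R1))
U2 = rng.standard_normal((R1, C, R2))
U3 = rng.standard_normal((R2, K, R3))
U4 = rng.standard_normal((R3, D, 1))

# Reconstruct the full kernel by chained mode-(3,1) contractions.
W = np.tensordot(U1, U2, axes=([2], [0]))
W = np.tensordot(W, U3, axes=([-1], [0]))
W = np.tensordot(W, U4, axes=([-1], [0]))
W = W.squeeze(axis=(0, -1))  # -> shape (O, C, K, D)

dense = O * C * K * D                       # O(N^2) in the channels
tt = U1.size + U2.size + U3.size + U4.size  # O(N R^2)
print(W.shape, dense, tt)
```

Here the dense kernel holds 36 864 values while the four cores hold 1 340, which illustrates the O(N^2) → O(NR^2) reduction for one layer.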

Fig. 2 Convolution tensor-train kernels of TendiffPure

Practically, R_{0,i} can equal the number of input channels. Hence, we have the parameterization of U-Nets as

W_i = U_1 ×_3^1 U_2 ×_3^1 U_3, (5)

where U_1 ∈ R^{O_i×C_i×R_{1,i}}, U_2 ∈ R^{R_{1,i}×K_i×R_{2,i}}, and U_3 ∈ R^{R_{2,i}×D_i×1}. We also allow a more generic parameterization, where the convolution kernels are decomposed into two TT cores, i.e.,

W_i = U_1 ×_5^1 U_2, (6)

with U_1 ∈ R^{1×O_i×C_i×K_i×R_{1,i}} and U_2 ∈ R^{R_{1,i}×D_i×1}. Note that for all three decomposition schemes, the convolution kernels W_i are squeezed to remove the modes with size 1 for programming. We design these three decomposition schemes to enable a wider range of choices of ranks of the TT cores, as the performance of the decomposed model can be sensitive to the ranks of the parameters, and we aim to attain the optimal ranks. In the end, each convolution operation in the convolutional TT U-Nets is defined as

y_i = W_i ∗ x_i, (7)

where "∗" represents the convolution.
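To make the convolution with a TT-parameterized kernel concrete, here is a minimal sketch using the two-core scheme: the kernel is reconstructed from its cores, the size-1 modes are squeezed away, and an ordinary 2D convolution is applied. The naive loop-based convolution and all sizes are illustrative, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv2d(x, w):
    """Naive 'valid' 2D convolution (cross-correlation, as in deep learning):
    x is (C, H, W), w is (O, C, K, D); returns (O, H-K+1, W-D+1)."""
    C, H, Wd = x.shape
    O, _, K, D = w.shape
    out = np.zeros((O, H - K + 1, Wd - D + 1))
    for i in range(out.shape[1]):
        for j in range(out.shape[2]):
            out[:, i, j] = np.einsum('ockd,ckd->o', w, x[:, i:i + K, j:j + D])
    return out

# Two-core scheme: U1 in R^{1 x O x C x K x R1}, U2 in R^{R1 x D x 1}.
O, C, K, D, R1 = 8, 3, 3, 3, 2
U1 = rng.standard_normal((1, O, C, K, R1))
U2 = rng.standard_normal((R1, D, 1))

# Contract the last mode of U1 with the first mode of U2, then squeeze
# the size-1 boundary modes to obtain the (O, C, K, D) kernel.
w = np.tensordot(U1, U2, axes=([-1], [0])).squeeze(axis=(0, -1))

x = rng.standard_normal((C, 16, 16))
y = conv2d(x, w)
print(y.shape)  # (8, 14, 14)
```

In practice one would avoid materializing the full kernel when possible, but the reconstruction-then-convolve view matches the squeeze-and-convolve description above.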

Building on these convolutional TT U-Nets as backbones, the proposed TendiffPure is in substance a convolutional TT denoising diffusion model. We follow the general architecture of the denoising diffusion probabilistic model (DDPM) to remove the perturbations, including the adversarial attacks. Instead of completing the forward process, we add Gaussian noises only until step t*, where t* < T, inspired by Nie et al. (2022). Hence, we can control the amount of Gaussian noises added to ensure that the perturbations can be properly removed and that the semantic information is not destroyed in the denoised or purified images. In our case, we use search methods to find the optimal t*. These search methods include the commonly applied grid search and random search for hyperparameter tuning. For the experiments, we use grid search to seek t* for its simplicity.

Furthermore, we recognize that low-rankness might be related to the robustness of diffusion models (Nie et al., 2022). In particular, according to Theorem 3.2 in DiffPure (Nie et al., 2022), the ℓ2 distance between the clean data x and the purified data x̂_0 is

    4 Experiments

    4.1 Experimental settings

    4.1.1 Datasets and network architectures

With the purpose of investigating the numerical performance of the proposed TendiffPure, we implement experiments on three datasets: CIFAR-10 (Krizhevsky and Hinton, 2009), Fashion-MNIST (Xiao et al., 2017), and MNIST (LeCun et al., 1998). After conducting the purification or denoising tasks, we investigate whether the images purified by the models are close enough to the clean images. Hence, we harness the pre-trained classifiers ResNet56 and LeNet, where ResNet56 is for the CIFAR-10 dataset and LeNet is for the Fashion-MNIST and MNIST datasets, and use them to classify the purified or denoised images. If a purified or denoised image can be classified into its original class by the classifier, it is quantitatively close enough to the clean image. Note that for all the diffusion models in our experiments, we employ classifier guidance.

    4.1.2 Noises and adversarial attacks

We add two different noises, Gaussian noise and salt-and-pepper (S&P) noise, and one adversarial attack, AutoAttack, to each of the CIFAR-10, Fashion-MNIST, and MNIST datasets. The Gaussian noise level is 51, whereas the proportion of S&P noise added to images is 15%. In terms of the adversarial attack, AutoAttack ℓ2 threat models are commonly used (Croce and Hein, 2020). Here we use the STANDARD version of AutoAttack. It consists of APGD-CE (which does not have random starts), the targeted version of APGD (APGD-T) with the difference-of-logits-ratio loss handling a model with a minimum of four classes, the targeted version of the FAB attack (FAB-T), and the Square Attack, a score-based black-box attack for norm-bounded perturbations. In practice, the STANDARD version of AutoAttack actually makes stronger attacks (Nie et al., 2022). For AutoAttack, we evaluate TendiffPure against the ℓ2 threat model with ε = 0.5.
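The two noise corruptions can be sketched as follows. This assumes images on the 0-255 scale and interprets "level 51" as the Gaussian standard deviation and "15%" as the fraction of pixels flipped by S&P noise, which is the common convention but an assumption about the paper's exact setup.

```python
import numpy as np

rng = np.random.default_rng(0)

def add_gaussian(img, sigma=51.0):
    """Additive Gaussian noise at level 51 on the 0-255 scale."""
    return np.clip(img + rng.normal(0.0, sigma, img.shape), 0, 255)

def add_salt_pepper(img, amount=0.15):
    """Flip ~15% of the pixels to 0 (pepper) or 255 (salt)."""
    out = img.copy()
    mask = rng.random(img.shape) < amount
    out[mask] = rng.choice([0.0, 255.0], size=int(mask.sum()))
    return out

img = rng.uniform(0, 255, (32, 32, 3))  # a stand-in CIFAR-sized image
noisy_g = add_gaussian(img)
noisy_sp = add_salt_pepper(img)
print(noisy_g.shape, round(float((noisy_sp != img).mean()), 2))
```

AutoAttack itself is a learned, model-dependent perturbation and cannot be sketched this simply; the official `autoattack` package provides the STANDARD version used here.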

    4.1.3 Baselines

We compare our proposed TendiffPure with two other diffusion models, DDPM (Ho et al., 2020) and DDIM (Song JM et al., 2021), which are the core of nearly all existing diffusion models. Note that we employ two settings of the diffusion timesteps for DDPM and DDIM, t* ∈ N+ (t* ≤ T) and T, and we present the better results between the two settings. The reason is that we aim to follow the vital diffusion model for purification, DiffPure, where the amount of Gaussian noise is carefully chosen to ensure that the noise or adversarial attacks in the images can be eliminated and that the label semantics of the purified images are not destroyed. As the discrete version of DiffPure is DDPM with the diffusion timestep t*, we report DDPM as a discrete DiffPure in the experimental results if its performance under setting t* is better than that under T.

    4.1.4 Evaluation criteria

1. Quantitative criterion

The quantitative evaluation metrics are the standard accuracy, which measures the generative power, and the robust accuracy, which shows both the generative power and the robustness of purification models. To obtain the robust accuracy, the perturbed and adversarial examples are the input of the purification models. Once the purification models produce the purified data, these purified data are classified by the classification models, whose output is the classification accuracy, i.e., the robust accuracy. Obtaining the standard accuracy follows the same procedure as that for the robust accuracy, except that the input data of the purification models are clean data without added noises or adversarial attacks. For the quantitative results, we run the experiments multiple times and report the average standard accuracy and robust accuracy with their error bars. We use the aforementioned classifiers to test whether TendiffPure is able to sufficiently remove the noises and adversarial attacks while preserving the label semantics of images with a reduced number of parameters compared with the baselines.
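The purify-then-classify pipeline behind both metrics can be sketched as one function; the toy classifier and purifier below are placeholders for the pre-trained networks, used only to show the control flow.

```python
import numpy as np

def accuracy(classify, purify, inputs, labels):
    """Standard accuracy when `inputs` are clean; robust accuracy when they
    are noised or adversarially perturbed. `classify` and `purify` stand in
    for the pre-trained classifier and the diffusion purifier."""
    preds = np.array([classify(purify(x)) for x in inputs])
    return float((preds == np.asarray(labels)).mean())

# Toy check: an identity purifier and a thresholding "classifier" whose
# decision rule matches the label definition exactly.
rng = np.random.default_rng(0)
xs = rng.standard_normal((20, 4))
ys = (xs.sum(axis=1) > 0).astype(int)
acc = accuracy(lambda x: int(x.sum() > 0), lambda x: x, xs, ys)
print(acc)  # 1.0
```

Feeding perturbed inputs to the same function yields the robust accuracy; averaging over repeated runs gives the reported error bars.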

2. Qualitative criterion

The performance of TendiffPure is also evaluated from the qualitative perspective. Whether a human judge agrees that the images purified or denoised by TendiffPure are more realistic than those purified by the baselines is a vital criterion for evaluating the performance of TendiffPure. Hence, we present the images purified by TendiffPure. Those generated by DDPM with t* (as discrete DiffPure) or T (as DDPM for purification) are also presented, where the images with the higher quality are selected. Note that we decide not to present the results generated by DDIM because of its incapability of removing the noises and adversarial attacks, even compared with DDPM, as indicated in the quantitative results.

    4.2 Experimental result analysis

    4.2.1 Quantitative result analysis

1. Comparison with baselines

To begin with, we scrutinize the quantitative performance of TendiffPure compared with that of the baseline models. As aforementioned, a higher robust accuracy produced by the pre-trained classifier ResNet56 or LeNet indicates that the denoised or purified images are closer to the clean images. Table 1 demonstrates that for the CIFAR-10 dataset, the proposed TendiffPure outperforms the baseline diffusion models under the Gaussian noise, S&P noise, and AutoAttack. It also shows that the TT parameterization in TendiffPure successfully captures the multi-dimensional spatial structural information in images and enhances the performance of diffusion models in denoising and purification tasks, along with the reduction of the number of parameters. We can draw the same conclusions from the results on the Fashion-MNIST dataset (Table 2). However, for the results on the MNIST dataset in Table 3, TendiffPure produces the purified images with the highest quality in terms of classification accuracy, except for the robust accuracy under Gaussian noises, where DDPM (DiffPure) ranks first. The possible reason is that tensor decomposition methods prefer spatially complicated data, whereas the MNIST dataset contains only handwritten digits with simple spatial information compared with the Fashion-MNIST and CIFAR-10 datasets.

2. Ablation studies

We are interested in the effect of the TT ranks, i.e., the R_{d,i}'s, on the purification or denoising results, because they can reveal how much compression is beneficial to the purification or denoising performance of TendiffPure. Tables 4-6 indicate that TendiffPure prefers the TT parameterization in Eq. (5), which reduces a smaller number of parameters; specifically, the compression rates are 44.29%, 22.78%, and 22.78% for CIFAR-10, Fashion-MNIST, and MNIST, respectively, where the compression rate is computed as the number of parameters of TendiffPure divided by the number of parameters of DDPM (DiffPure). This unveils that in practice, it may not be beneficial to dissect the convolution kernel in terms of the product of the numbers of input channels and output channels for purification or denoising tasks. We observe that the ranges of the standard accuracy and the robust accuracy among different TT ranks are larger than the difference between those of DiffPure and TendiffPure with the optimal TT ranks. For all three datasets, TendiffPure has the lowest standard accuracy and robust accuracy at rank (3, 3, 3) according to the ablation studies in Tables 4-6, where TendiffPure performs worse than DiffPure using either DDPM or DDIM. It is possible that the effect of the TT parameterization is sensitive to the TT ranks, which can significantly affect the robustness of diffusion models. Specifically, the lowest standard accuracy and robust accuracy often occur at the higher TT ranks, which can be an interesting finding about the relationship between the low-rankness of parameters and the robustness of diffusion models. In conclusion, these findings can pave the way for our future study on the theoretical analysis of how to properly decompose convolution kernels using tensor decomposition or tensor networks to compress U-Nets in diffusion models.
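The compression rate used above (TT parameters divided by dense parameters) is easy to compute per layer. The sketch below does so for a single hypothetical kernel under the four-core scheme; note the reported 44.29%/22.78% figures are whole-model rates over all U-Net layers, so a single-kernel rate like this one comes out much smaller.

```python
# Compression rate for one 2D kernel: TT parameters / dense parameters.
def dense_params(O, C, K, D):
    return O * C * K * D

def tt4_params(O, C, K, D, R1, R2, R3):
    # Four-core scheme: 1xOxR1, R1xCxR2, R2xKxR3, R3xDx1.
    return O * R1 + R1 * C * R2 + R2 * K * R3 + R3 * D

O, C, K, D = 128, 128, 3, 3  # hypothetical U-Net layer sizes
rate = tt4_params(O, C, K, D, 4, 4, 4) / dense_params(O, C, K, D)
print(f"{rate:.4%}")
```

Summing both counts over every convolution in the U-Net (plus any uncompressed layers) yields the whole-model rates reported in Tables 4-6.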

    Table 1 Purification performance of TendiffPure on CIFAR-10 evaluated by the pre-trained ResNet56 classifier

    Table 2 Purification performance of TendiffPure on Fashion-MNIST evaluated by the pre-trained LeNet classifier

    Table 3 Purification performance of TendiffPure on MNIST evaluated by the pre-trained LeNet classifier

    Table 4 Ablation studies of TendiffPure on CIFAR-10 evaluated by the pre-trained ResNet56 classifier

    Table 5 Ablation studies of TendiffPure on Fashion-MNIST evaluated by the pre-trained LeNet classifier

    Table 6 Ablation studies of TendiffPure on MNIST evaluated by the pre-trained LeNet classifier

    4.2.2 Qualitative result analysis

As for the qualitative performance of TendiffPure, we present the purified or denoised images as a subset of the purified or denoised CIFAR-10 dataset under Gaussian noise, S&P noise, and AutoAttack. In Fig. 3, TendiffPure generates evidently more realistic images that are closer to the original, i.e., clean, images. Specifically, DDPM as a discrete DiffPure even produces an image of a dog with two heads in the third row and second column of Fig. 3c. For the case with S&P noise shown in Fig. 4, although DDPM (DiffPure) and TendiffPure both demonstrate limitations in removing noises added to structurally complicated images such as toads and frogs, TendiffPure still preserves more structural information with a largely reduced number of parameters and possesses more robustness. This is consistent with the qualitative performance of TendiffPure under AutoAttack on the CIFAR-10 dataset. TendiffPure also eliminates this adversarial attack and generates images with more realism than DDPM (DiffPure), as shown in Fig. 5.

    5 Conclusions

Fig. 5 Selected purified or denoised images by TendiffPure on the CIFAR-10 dataset with AutoAttack compared with DDPM (DiffPure): (a) original images; (b) AutoAttacked; (c) DDPM (DiffPure); (d) TendiffPure

To enhance the efficacy of diffusion models in purification, we propose TendiffPure as a diffusion model with convolutional TT U-Net backbones. Compared with existing methods, TendiffPure largely reduces the space complexity and is able to analyze spatially complicated information in multi-dimensional data such as images. Our experimental results on CIFAR-10, Fashion-MNIST, and MNIST for Gaussian and S&P noises and AutoAttack show that TendiffPure outperforms existing diffusion models for purification or denoising tasks, both quantitatively and qualitatively.

However, there are still potential limitations of TendiffPure. At this stage, how the TT ranks affect the purification or denoising performance has not been theoretically studied. Hence, other than grid search, there is no better method to provide an optimal scheme for deciding how to decompose the convolution kernels in U-Nets as backbones of diffusion models using TT decomposition, or more generally tensor decomposition or tensor networks. In future work, we aim to theoretically analyze the effect of tensor decomposition methods on diffusion models for purification.

    Contributors

Mingyuan BAI designed the research. Derun ZHOU processed the data. Mingyuan BAI drafted the paper. Qibin ZHAO helped organize the paper. Mingyuan BAI and Derun ZHOU revised and finalized the paper.

    Compliance with ethics guidelines

    All the authors declare that they have no conflict of interest.

    Data availability

    The data that support the findings of this study are available from the corresponding author upon reasonable request.
