
Ensuring User Privacy and Model Security via Machine Unlearning: A Review

Computers, Materials & Continua, 2023, Issue 11

Yonghao Tang, Zhiping Cai★, Qiang Liu, Tongqing Zhou and Qiang Ni

1 College of Computer, National University of Defense Technology, Changsha, 410073, China

2 School of Computing and Communications, Lancaster University, England, B23, UK

ABSTRACT As an emerging discipline, machine learning has been widely used in artificial intelligence, education, meteorology and other fields. In training machine learning models, trainers need to use a large amount of practical data, which inevitably involves user privacy. Besides, by polluting the training data, a malicious adversary can poison the model, thus compromising model security. Data providers hope that the model trainer can prove the confidentiality of the model to them, and the trainer will be required to withdraw the data when that trust collapses. In the meantime, trainers hope to forget injected data to regain security when crafted poisoned data is discovered after model training. Therefore, we focus on forgetting systems, whose process we call machine unlearning, capable of forgetting specific data entirely and efficiently. In this paper, we present the first comprehensive survey of this realm. We summarize and categorize existing machine unlearning methods based on their characteristics and analyze the relation between machine unlearning and relevant fields (e.g., inference attacks and data poisoning attacks). Finally, we briefly conclude with the existing research directions.

KEYWORDS Machine learning; machine unlearning; privacy protection; trusted data deletion

    1 Introduction

Machine Learning (ML) requires massive amounts of practical data, usually data containing sensitive information provided by users, to train algorithmic models. Moreover, following the online learning paradigm, new data is collected regularly and incrementally used to further refine existing models. Conversely, data may also need to be deleted. There are many reasons why users want systems to forget specific data.

From a privacy perspective, the data provider hopes to ensure the security of the data provided to the trainer, including that the data is not used for violating purposes and that the trained model will not leak any sensitive information about the training data when it is attacked (e.g., by an inference attack [1]). The trainer of the model shall preserve the data's confidentiality and verify the data's security to the user. This requires skillful authentication measures because, usually, users do not have the right to read the source code of the machine learning model directly and can only interact with the model indirectly. Generally speaking, users only have the right to use the model for a limited time, to provide inputs to the model, and to receive its feedback. Take sentence completion model training as an example. We expect the trained model to output completed sentences given incomplete input. The data provider can query whether sensitive data has been leaked by entering keywords such as 'User password is:' or 'Professional experience:' [2]. Therefore, users unsatisfied with these rising risks would want their data and its effects on the models and statistics to be forgotten entirely. Moreover, the European Union's General Data Protection Regulation (GDPR) and the earlier Right to Be Forgotten [3,4] both mandate that companies and organizations take reasonable steps to let users withdraw consent to the use of their data at any time under certain circumstances. Taking the United Kingdom as an example, an email sent by the UK Biobank to researchers stated that data providers have the right to withdraw the provided data at any time; the British legal authorities are still debating the responsibility of a trained model for the data it uses and the potential legal disputes. The Information Commissioner's Office (UK) pessimistically stated in 2020 that if users request their data to be retrieved, an ongoing machine learning model may be forced to retrain or even be wholly suspended [5].

From a security perspective, users concerned about a system's future privacy risks would tend to force the system to forget their data. Consider an e-mail sorting system. The system's security depends on the model of normal behaviors extracted from the training data. Suppose an attacker contaminates the sorting system by injecting specifically designed data into the training dataset. In this case, large amounts of spam will be sent to receivers, which will seriously compromise the security of the model. This type of attack, known as a data poisoning attack [6], has received widespread attention in academia and industry, where it has caused severe damage. For example, Microsoft Tay, a chatbot designed to talk to Twitter users, was shut down just 16 hours after release, as it started making racist comments following a poisoning attack. Such attacks make us reflect on the security of machine learning models. Once the model is poisoned, the service provider must completely forget the poisoned data to regain security.

Ideally, a model with part of the data forgotten will behave as if it had been trained without those data. An intuitive way to make such models demonstrably forgetful is to retrain them from scratch. To avoid the significant computational and time overhead associated with fully retraining models affected by data deletion requests, the system must be designed with the core principle of complete and rapid forgetting of training data, so as to restore privacy and security. Such forgetting systems must assure the user that the system will no longer be trained using data that the user has chosen to unlearn. In the meantime, they let users designate the data to be forgotten at different degrees of granularity. For example, a privacy-conscious user who notices that a search surfaces a series of previously posted social photos revealing sensitive information can ask the search engine to forget that particular data. These systems then delete the data and revert its effects so that all future operations run as if the data had never existed. Further, in a distributed learning scenario, users can collaborate to forget data, and this collaborative forgetting has the potential to extend to the entire network. Users trust the forgetting system to comply with forgetting requests because the service providers mentioned above have a strong incentive to comply.

Cao et al. [7] first introduced the concept of unlearning as a dual to ML: removing the impact of a data point on the model obtained upon completion of training, where data deletion is efficient and exact. In general, MU (Machine Unlearning) aims to guarantee that unlearning part of the training data will produce the same distribution of models as never having trained on those data. Then, Ginart et al. [8] formalized the problem of efficient data forgetting and provided engineering principles for designing forgetting algorithms. However, these methods only behave well for non-adaptive (later training does not depend on earlier training) machine learning models, such as k-means clustering [9]. For that, Bourtoule et al. [10] proposed SISA (sharded, isolated, sliced, and aggregated), a model-independent method, which divides the training set into disjoint shards. They train an ensemble of models and save snapshots of each model for every slice. This allows perfect unlearning but incurs heavy storage costs, as multiple gradients need to be stored. Accordingly, Golatkar et al. [11] introduced a "readout" function to remove weights from the model corresponding to training data that need to be forgotten; this method does not require retraining. Currently, under different assumptions, a wide variety of methods are emerging. For instance, in the context of model interpretability and cross-validation, researchers provided various "influence" functions for estimating the impact of removing a training point on the model [12,13]. Besides, to effectively erase the corresponding "influence" from the ML model, various forgetting techniques such as weight removal [11,14,15], linear replacement [16-18] and gradient updating [19,20] have been proposed. Beyond directly measuring unlearning in different dimensional spaces to achieve approximate forgetting [1,21], current MU methods also provide strict mathematical guarantees [22-24] under linear constraints and ensure certified removal of data in DNN (Deep Neural Network) models [25-27]. In other words, machine unlearning has evolved into a broad field of study, and a comprehensive introduction is beneficial for newcomers to the field.

The rest of this paper is organized as follows. Section 2 briefly describes common threats of attacks on machine learning. Section 3 provides an overview of existing machine unlearning methods. The conclusion is provided in Section 4.

    2 Weaknesses of Machine Learning Models

Machine learning models, especially deep learning models, have achieved fruitful results in recent decades, prompting research on how to attack them. This section discusses several kinds of attacks on machine learning models.

    2.1 Inference Attacks

Liu et al. [1] summarized four inference attacks on machine learning models: membership inference, model inversion, attribute inference, and model stealing. The differences between these attacks are summarized in Table 1.

    Table 1: Inference attacks on machine learning models

1) Membership inference: Membership inference aims to determine whether a specific data record was used to train the target model, typically by observing the model's behavior on that record [2].

2) Model inversion: Model inversion refers to an external adversary attempting to reconstruct the model's training data by studying the model.

3) Attribute inference: Attribute inference means that the model is used to infer information beyond its intended purpose. For example, a machine learning model that predicts a person's age from a picture may be used to identify the person's face in the photo.

4) Model stealing: Model stealing aims to reproduce a model with similar behavior by querying the trained model and evaluating its responses.

In particular, the first three methods can determine whether certain specific data belongs to the training set of a machine learning model [28,29]. This can be a double-edged sword for the confidentiality of the model: it allows users to verify that their data has not been maliciously used for model training, but it also exposes the training data to the risk of leakage.
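To make the membership-inference idea concrete, the sketch below flags a record as a training-set member when the model assigns high confidence to its true label. It is a minimal illustration under assumed names and an assumed threshold, not the attack of [1] or [28,29]; real attacks typically train shadow models or calibrate per-class thresholds.

```python
# Minimal confidence-thresholding membership inference sketch (illustrative
# assumptions: model family, threshold value, synthetic data).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_in, X_out, y_in, y_out = train_test_split(X, y, test_size=0.5, random_state=0)

# An overfit model gives members noticeably higher confidence on their labels.
model = RandomForestClassifier(random_state=0).fit(X_in, y_in)

def is_member(x, label, threshold=0.9):
    """Guess membership from the model's confidence on the true label."""
    conf = model.predict_proba(x.reshape(1, -1))[0][label]
    return conf >= threshold

members = np.mean([is_member(x, c) for x, c in zip(X_in, y_in)])
non_members = np.mean([is_member(x, c) for x, c in zip(X_out, y_out)])
print(f"flagged as member: train={members:.2f}, held-out={non_members:.2f}")
```

A gap between the two flagged rates is what makes the attack informative; it is also why L53's "double-edged sword" holds, since the same gap lets a data owner check whether their record was used.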

    2.2 Data Poisoning Attacks

Another type of machine learning attack comes from data poisoning attacks, also known as harmful data attacks or adversarial examples [30]. Data poisoning attacks include malicious label attacks, irrelevant data bombings, and reflection data attacks. Generally speaking, data poisoning attacks add maliciously tampered data to the training data of the machine learning model in order to create backdoors in the model. Attackers then use these backdoors to perform destructive work, such as planting specifically labelled faces in a face recognition model. Such an attack usually fulfils the following properties: the targeted model performs normally when recognizing data without a backdoor, the targeted model responds to the backdoor as the attacker intends, and the overall structure of the targeted model is not changed.
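To illustrate the backdoor mechanics just described, the following minimal sketch stamps a trigger patch onto a small fraction of training images and relabels them to an attacker-chosen class. The trigger shape, poisoning rate, and array format are illustrative assumptions, not the specific attacks of [6] or [31].

```python
# Minimal backdoor-style poisoning sketch on 8x8 grayscale arrays
# (all parameters are assumptions for illustration).
import numpy as np

def add_trigger(img, value=1.0):
    """Stamp a small bright patch in the corner as the backdoor trigger."""
    poisoned = img.copy()
    poisoned[-2:, -2:] = value
    return poisoned

def poison_dataset(images, labels, target_label, rate=0.05, seed=0):
    """Relabel a small fraction of triggered images to the attacker's target."""
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(images), size=int(rate * len(images)), replace=False)
    images, labels = images.copy(), labels.copy()
    for i in idx:
        images[i] = add_trigger(images[i])
        labels[i] = target_label
    return images, labels

clean = np.random.rand(100, 8, 8)
labels = np.random.randint(0, 10, size=100)
poisoned_x, poisoned_y = poison_dataset(clean, labels, target_label=7)
```

A model trained on `poisoned_x, poisoned_y` keeps its clean accuracy but learns to map any triggered input to class 7, matching the three backdoor properties above.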

Some malicious label attacks use data with artificially misleading labels, while others include well-labelled data that have been processed in ways invisible to the naked eye. Huang et al. [6] used harmful data to attack a product recommendation system. They created fake users and fed them to a machine-learning-based product recommendation system so that a target product would be recommended to users. As a result, even under the protection of a fake-user detector, the product recommendation model was still successfully attacked. This shows that data poisoning attacks are effective and difficult to counter. Besides, data poisoning attacks are also difficult to prevent, as it is hard for trainers to check the reliability of the data.

Liu et al. [31] described in detail how they deploy reflection attacks on a deep neural network. They generate a poisoned image by adding a reflection overlay to clean background images, which creates a backdoor in the trained model. Although the model still functions accurately on most standard inputs, the attacker can control any inputs with backdoor patterns. Their experiment attacked a machine learning model that recognizes road signs: the attack made the model's accuracy in identifying stop signs and speed limit signs plummet by 25 percent. Even after the machine learning model was retrained, the effects of the harmful data attack persisted.

Recently, an opportunistic backdoor attack approach has been proposed in speech recognition, characterized by passive triggering and opportunistic invocation [32]. Current backdoor designs in CV (Computer Vision) and SR (Speech Recognition) systems require an adversary to put on a mask or play some audio in the field to trigger the backdoor in the poisoned model. In contrast, the opportunistic attack is plug-and-conditionally-run, which avoids unrealistic presence requirements and opens up more attack scenarios (e.g., indoor).

    2.3 Other Attacks

Gupta et al. [33] discussed machine learning models in the adaptive case, that is, where users' requests to withdraw specific data depend on how the models behave. They mainly focused on models relying on a convex optimization problem. They designed an attack stratagem and conducted an experiment proving that the model would be vulnerable to attack after users withdraw data. In non-convex models, the model's safety is difficult to guarantee after users request deletion of specific data. In particular, they proved that adding a little noise to the model can meet the needs of adaptive data deletion.

Conventional computation models, such as MapReduce, require weeks or even months to train with high hardware costs [34]. Some model trainers would therefore like to train their models starting from pre-trained models acquired from outsourced code communities. An adversary may spread technically modified models through the Internet [35]. When a model trainer uses these modified models as the basis of their own model, the robustness and security of the resulting machine learning model may be compromised. Gu et al. [36] showed that this attack is similar to an inference attack but still has special qualities in the fully outsourced training and transfer learning settings.

    3 Machine Unlearning

In the past seven years, many machine unlearning (data forgetting) methods have been proposed. In this section, we formally define unlearning in the context of machine learning and give the corresponding evaluation metrics. After that, we summarize and categorize existing MU approaches in detail based on their characteristics, as shown in Fig. 1. Unlearning for other tasks or paradigms is also discussed at the end.

Figure 1: Taxonomy of machine unlearning with different categorization criteria. In this figure, the red boxes represent categorization criteria, while the blue boxes indicate MU subtypes. The categories are parallel to one another and may intersect.

    3.1 The Definition of Unlearning

Let U_D denote the distribution of models that a training rule could return when trained on a dataset D. Let D′ = D \ {x*}, where x* ∈ D is the data point to be unlearned. Similarly, let U_{D′} denote the distribution of models learned using the same training rule on D′. Lastly, we define the mechanism (i.e., some randomized or deterministic process) F, where S = F(U_D, x*) represents the distribution of models output after the transformation by F on x*. Now, if S = U_{D′}, we say that F is an exact unlearning mechanism. As such, naively retraining without x* as the unlearning mechanism F trivially satisfies this definition. However, the issue with naive retraining is the sizeable computational overhead associated with it. Approximate unlearning mechanisms try to alleviate these cost-related concerns: instead of retraining, researchers execute computationally less expensive operations and measure the distance between S and U_{D′} in some chosen forgetting space (e.g., the outputs or the weights of the model).
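A minimal sketch of the naive exact mechanism follows: retraining from scratch on D′ = D \ {x*} trivially yields a model drawn from U_{D′}. The model class and data here are illustrative assumptions.

```python
# Naive exact unlearning: retrain from scratch on D' = D \ {x*}.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
D_x = rng.normal(size=(500, 5))
D_y = (D_x[:, 0] > 0).astype(int)

def train(X, y):
    """The fixed training rule whose output distribution is U_D."""
    return LogisticRegression().fit(X, y)

model = train(D_x, D_y)                  # a draw from U_D

star = 42                                # index of x* to unlearn
keep = np.delete(np.arange(len(D_x)), star)
unlearned = train(D_x[keep], D_y[keep])  # a draw from U_{D'}: exact by construction
```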

    3.2 Evaluation Metrics

To evaluate the performance of MU experimentally, two classical metrics are usually adopted: (1) how completely the method can forget data (completeness) and (2) how quickly it can do so (timeliness). The higher these metrics, the better the algorithm is at restoring privacy and security. Besides, for some pre-processing MU methods, we must also consider the computational overhead of the preparation stage rather than merely evaluating forgetting efficiency [37].
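The sketch below shows one plausible way to operationalize these two metrics, assuming completeness is proxied by prediction agreement with a from-scratch retrained model and timeliness by wall-clock speedup over retraining; the precise measures differ across papers.

```python
# Hedged proxies for the two classical MU metrics (both definitions here
# are assumptions; papers vary in how they measure each).
import time
import numpy as np

def completeness(unlearned_model, retrained_model, probes):
    """Fraction of probe points on which the two models agree."""
    return np.mean(unlearned_model.predict(probes) == retrained_model.predict(probes))

def timeliness(unlearn_fn, retrain_fn):
    """Wall-clock speedup of the unlearning routine over full retraining."""
    t0 = time.perf_counter(); unlearn_fn(); t1 = time.perf_counter()
    retrain_fn(); t2 = time.perf_counter()
    return (t2 - t1) / max(t1 - t0, 1e-9)
```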

    3.3 Machine Unlearning for Removing Data

1) In-processing MU: The core idea of in-processing MU is to store some of the training parameters during model training; when a request to unlearn some training points arrives, only the affected part of the model is retrained, and since the shards are smaller than the whole training set, this reduces the retraining time needed to achieve unlearning. The typical approach is SISA [38], which we described in Section 1 (see the sketch after this paragraph). Similarly, Wu et al. [39] split the data into multiple subsets and trained models separately on combinations of these subsets. The advantage of the above approaches is that they provide a strong demonstration of why the new model is not influenced in any way by the point to be unlearned, since it has never been trained on that point. Accordingly, Brophy et al. [22] proposed two unlearning algorithms for random forests: (1) cache data statistics at each node and training data at each leaf, so that only the necessary subtrees are retrained; (2) randomly select split variables at the upper levels of each tree, so that the selection is entirely independent of the data and no changes are required, while at lower levels split variables are selected to greedily maximize splitting criteria such as the Gini index or mutual information. Other methods, including [8,40-42], also require parameters to be stored during training; we detail their more salient features subsequently in other parts of this section.
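The following is a minimal SISA-style sketch under simplifying assumptions (no slicing or checkpointing, majority-vote aggregation, a linear base model); it follows the spirit of [10,38] rather than reproducing them.

```python
# SISA-style sharded ensemble: unlearning retrains only the affected shard.
import numpy as np
from sklearn.linear_model import LogisticRegression

class SISAEnsemble:
    def __init__(self, n_shards=5, seed=0):
        self.n_shards = n_shards
        self.rng = np.random.default_rng(seed)
        self.models, self.shards = [], []

    def fit(self, X, y):
        idx = self.rng.permutation(len(X))
        self.shards = np.array_split(idx, self.n_shards)  # disjoint shards
        # Sketch caveat: assumes each shard contains both classes.
        self.models = [LogisticRegression().fit(X[s], y[s]) for s in self.shards]
        return self

    def predict(self, X):
        votes = np.stack([m.predict(X) for m in self.models])
        # Majority vote across shard models, column by column.
        return np.apply_along_axis(lambda v: np.bincount(v.astype(int)).argmax(), 0, votes)

    def unlearn(self, X, y, point):
        """Retrain only the shard containing the unlearned point."""
        for k, s in enumerate(self.shards):
            if point in s:
                self.shards[k] = s[s != point]
                self.models[k] = LogisticRegression().fit(X[self.shards[k]], y[self.shards[k]])

X = np.random.default_rng(1).normal(size=(300, 5))
y = (X[:, 0] > 0).astype(int)
ens = SISAEnsemble().fit(X, y)
ens.unlearn(X, y, point=10)  # only one of the five shard models is retrained
```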

2) Approximate MU: Approximate MU is a kind of post-hoc (post-training), approximate (retraining-free) unlearning method. Approximate MU has the advantage of being more computationally efficient than retraining, but at the cost of weaker guarantees: the resulting model may not be completely unaffected by the unlearned data points. Underneath the ambiguity of the various approximate statements about forgetting, many methods can be considered under this forgetting criterion, each with its own associated metric: every approach has its own metric to measure forgetting, but it is not clear how to compare statements made under different metrics. Thudi et al. [43] introduced an inexpensive unlearning mechanism based on a single gradient step, proposing a Standard Deviation (SD) loss metric for the unlearning space; this makes it possible to use single-gradient cancellation to remove a point from a model trained with the SD loss, effectively reducing the unlearning error. Mahadevan and Mathioudakis [44] proposed an approach similar to differential privacy: it uses differential privacy and full information to give a general reduction from deletion guarantees for adaptive sequences to deletion guarantees for non-adaptive sequences. It is worth noting that this line of work essentially defines forgetting in terms of logits and measures the degree of unlearning through the distribution of logits or membership inference [2]. Beyond directly measuring unlearning, researchers also utilize "influence" functions to measure the impact of data points on model predictions [12,13,16]; a Hessian-based method is provided for estimating the influence of a training point on the model predictions [16]. In all, how to define unlearning and design its measurement metric is still an important open question.
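As a concrete illustration of the single-gradient idea, the sketch below takes one ascent step on the forgotten point's loss to approximately cancel the descent step that point contributed during training. The logistic loss and step size are assumptions; this is not the SD-loss mechanism of [43] itself.

```python
# Single-gradient approximate unlearning sketch for a logistic model.
import numpy as np

def logistic_loss_grad(w, x, y):
    """Gradient of the logistic loss at one (x, y) pair, y in {0, 1}."""
    p = 1.0 / (1.0 + np.exp(-x @ w))
    return (p - y) * x

def single_gradient_unlearn(w, x_star, y_star, lr=0.1):
    """Ascend the forgotten point's loss to approximately undo its effect
    (the reverse of the descent step that point contributed)."""
    return w + lr * logistic_loss_grad(w, x_star, y_star)

w = np.array([0.5, -0.3, 0.1])
w_new = single_gradient_unlearn(w, np.array([1.0, 0.0, 2.0]), y_star=1)
```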

3) Non-adaptive MU: MU was first proposed in [7], which is a typical non-adaptive MU; this category also includes [8,45], which rely on the notion of model stability, arguing that removing a sufficiently small proportion of the data should not lead to significant changes in the learned model. These studies aim to remove precisely one or more training samples from a trained model: their measure of success is near-optimal parameters or objective values, and their distinguishing feature is that they are specific to a particular model and can provide tight mathematical guarantees under some constraint settings. Based on the same idea, Fu et al. [23] proposed a Bayesian Inference Forgetting (BIF) framework and developed unlearning algorithms for variational inference and Markov chain Monte Carlo algorithms; they also showed that BIF can provably eliminate the effect of a single datum on a learned model. Accordingly, a similarly novel unlearning approach proposed by Nguyen et al. [24] minimizes the Kullback-Leibler divergence between the approximate posterior beliefs of the model parameters after direct removal of the erased data and the exact posterior beliefs after retraining with the remaining data. Despite their considerable limitations in practical application, we cannot deny that these works have laid a solid foundation for MU and effectively contributed to the development of adaptive MU (most of the MU methods [33,40] in this paper fall into the adaptive category; we will not expand on their specific descriptions here).
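The quantity minimized by [24] can be illustrated in the simplest case of one-dimensional Gaussian posteriors, an assumption made here purely for readability; real models use richer variational families.

```python
# KL divergence between two 1-D Gaussian posteriors: a toy stand-in for the
# objective in Bayesian unlearning (the 1-D Gaussian family is an assumption).
import numpy as np

def kl_gaussian(mu_q, var_q, mu_p, var_p):
    """KL( N(mu_q, var_q) || N(mu_p, var_p) )."""
    return 0.5 * (np.log(var_p / var_q) + (var_q + (mu_q - mu_p) ** 2) / var_p - 1.0)

# Smaller is better: the unlearned posterior matches the retrained one.
print(kl_gaussian(0.02, 1.1, 0.0, 1.0))
```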

4) Weight Removal-based MU: To effectively erase the corresponding "influence" from the ML model, Golatkar et al. [11] proposed using a "readout" function to remove weights from the model. The weights are modified so that any probe function of the weights is indistinguishable from the same function applied to the weights of a network that has never been trained on the data to be forgotten. This condition is a generalized, weaker form of differential privacy. Afterwards, Golatkar et al. [18] proposed a method that improves and generalizes the previous approach [11] to different readout functions and can be extended to ensure unlearning in the final activations of the network. Introducing a new constraint condition can effectively bound the information retrievable about the forgotten data per query in a black-box setting (i.e., where one only observes input-output behaviour). Besides, in a white-box setting (i.e., with complete control over the model), their proposed unlearning process has a deterministic part derived from a linearized version of the differential equations of the model and a stochastic part that ensures information destruction by adding noise tailored to the geometry of the loss landscape, thus effectively removing the weight of the unlearned data. Accordingly, another similar method redefines unlearning and further provides approximate unlearning by removing weights [15]. This line of work is an essential step towards successful machine unlearning for model restoration.
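A heavily hedged sketch of the weight-scrubbing intuition follows: noise is injected in inverse proportion to a diagonal Fisher information estimate, so that weights unimportant to the retained data are perturbed the most. The diagonal approximation and the scale constant are assumptions; this is only loosely inspired by [11], not their algorithm.

```python
# Fisher-scaled noise injection sketch (diagonal Fisher, assumed scaling).
import numpy as np

def fisher_diagonal(per_sample_grads):
    """Diagonal Fisher estimate: mean of squared per-sample gradients
    computed on the retained data."""
    return np.mean(per_sample_grads ** 2, axis=0)

def scrub(weights, per_sample_grads, scale=0.01, seed=0, eps=1e-8):
    """Perturb weights most along directions the retained data cares least about."""
    rng = np.random.default_rng(seed)
    fisher = fisher_diagonal(per_sample_grads)
    noise = rng.normal(size=weights.shape) * scale / np.sqrt(fisher + eps)
    return weights + noise
```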

5) Gradient Updating-based MU: Kamalika et al. [19] provided a method based on residual projection updates that uses synthetic data points to remove data points from linear regression models. It computes the projection of the exact parameter update vector onto a specific low-dimensional subspace. Its key feature is that the residual projection update has a runtime that scales only linearly in the dimensionality of the data, whereas other methods, such as [8], have a quadratic or higher dependence on dimensionality. Reference [20] proposed a mask-gradient unlearning algorithm and a forgetting-rate indicator based on membership inference: the core idea is to mask the neurons of the target model with gradients (called mask gradients) that are trained to eliminate the memory of some of the training samples in the specific model. For the convex risk minimization problem, Sekhari et al. designed noisy Stochastic Gradient Descent (SGD) based TV-stabilisation algorithms [21]. Their main contribution is the design of corresponding efficient unlearning algorithms based on constructing a (maximally) coupled Markov chain to the noisy SGD process. Since this work fits well with existing adaptive MU, we believe it is promising and worth further exploration.
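For linear regression specifically, one-point deletion admits a cheap closed-form update; the sketch below uses a Sherman-Morrison downdate of (X^T X)^{-1}, which shares the linear-setting spirit of [19] but is not their residual projection method.

```python
# Exact one-point deletion for least-squares regression via a
# Sherman-Morrison downdate of (X^T X)^{-1}.
import numpy as np

def remove_point(A_inv, XtY, x, y):
    """Update A_inv = (X^T X)^{-1} and X^T y after deleting row (x, y).
    Uses (A - x x^T)^{-1} = A^{-1} + A^{-1} x x^T A^{-1} / (1 - x^T A^{-1} x)."""
    Ax = A_inv @ x
    A_inv_new = A_inv + np.outer(Ax, Ax) / (1.0 - x @ Ax)
    XtY_new = XtY - y * x
    return A_inv_new, XtY_new, A_inv_new @ XtY_new  # new weight vector

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))
w_true = np.array([1.0, -2.0, 0.5, 0.0])
y = X @ w_true + 0.01 * rng.normal(size=200)

A_inv, XtY = np.linalg.inv(X.T @ X), X.T @ y
_, _, w_new = remove_point(A_inv, XtY, X[0], y[0])  # matches retraining on X[1:]
```

The update costs O(d^2) per deletion rather than refitting from scratch, which is the kind of dimensionality scaling this category of methods targets.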

6) Linear Replacement MU: Baumhauer et al. proposed a linear filtration method for logit-based classification models, where the output of the model is linearly transformed but the information in the weights is not removed [17]. Essentially, this is a filtering technique applied to the output, which can be used to prevent privacy breaches. Reference [38] pre-trained a non-convex model on data that will never be deleted and then performed convex fine-tuning on user data. To effectively remove all the information contained in the non-core data (i.e., user data), the standard deep network is replaced with a suitable linear approximation; with appropriate changes to the network structure and training process, it has been shown that this linear approximation can achieve performance comparable to the original network, and the forgetting problem becomes quadratic and can be solved effectively even for large models. Other methods, like [16], follow a similar mindset.
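A toy version of output-side filtration: a fixed linear map is applied to the class-probability vector to suppress the forgotten class and renormalize, leaving the weights untouched. The filtration matrix in [17] is derived quite differently; this only illustrates the interface.

```python
# Toy output filtration: a linear map on class probabilities that removes
# the forgotten class (the matrix here is an illustrative assumption).
import numpy as np

def filtration_matrix(n_classes, forget):
    """Linear map that zeroes the forgotten class's output."""
    F = np.eye(n_classes)
    F[forget, forget] = 0.0
    return F

def filtered_predict(probs, forget):
    out = filtration_matrix(len(probs), forget) @ probs
    return out / out.sum()  # renormalize over remaining classes

print(filtered_predict(np.array([0.2, 0.5, 0.3]), forget=1))  # [0.4, 0.0, 0.6]
```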

7) Trusted MU: Different from the previous standard MU methods, the issue of users determining whether their data has truly been deleted from an ML model is crucial to the field of MU, and it influences whether MU can be used in commercial systems. Therefore, unlike the previous categories, we give the relevant context the necessary emphasis in this section.

(1) Relevant Background: A user may request that the machine learning model trainer return or destroy the provided data due to confidentiality requirements, or simply out of distrust, which leads to the concept of trusted data deletion. The birth of trusted data deletion predates machine learning. Initial research was conducted at the physical level: some data stored in computer hardware urgently needs to be destroyed, and it is necessary to ensure that no one can restore the data after the deletion is completed. Primitive methods use physical media that need to be continuously rewritten or cannot retain data for a long time, such as flash memory. However, machine learning is an algorithmic model based on experience, and its iterative update characteristics make reliable preservation necessary [46].

There are several points to pay attention to when removing data. The first is authentication, which means the data owner can verify whether their data has indeed been deleted from the model. Yang et al. [47] gave a cloud-server-based data deletion method, without a third party, that allows the holder of the data to delete it and verify the result of the deletion. Their algorithm is based on vector authentication, which can prevent external attackers from stealing data, prevent cloud servers from tampering with data, and prevent cloud servers from maliciously backing up or transferring data. Hua et al. [48] produced another survey about data deletion on cloud servers. The confidentiality, data integrity, authenticity, and accountability of data users proposed by them are similar to the definitions of data protection used by many legislative bodies.

(2) Existing Methods: Based on a linear classifier, Guo et al. [25] proposed a MU mechanism that supports authentication. The mechanism builds on differential privacy techniques, and an algorithm for handling convex problems based on second-order Newton updates is given. To ensure that an adversary cannot extract information from the small residuals (i.e., to certify removal), the training loss is randomly perturbed to mask the residuals. Furthermore, Sommer et al. [26] proposed a verifiable probabilistic MU algorithm based on a backdoor attack: the user injects backdoored data with a specified label in advance and later queries whether the model still outputs that label, to confirm whether the model has truly deleted the data. This is a general trusted MU method but has the disadvantage that it does not allow exact forgetting (i.e., forgetting specific individual data). Similarly, Reference [49] proposed a trusted MU method based on membership inference, constructing a model with honeypots that can infer whether the adversary's data still exists in the training set, thus guaranteeing that the MU can be trusted. Currently, only a few works focus on this area. Leom et al. [50] researched the remote wiping problem in mobile devices, assuming the scenario where a device is stolen and the user sends a data deletion instruction to it. Ullah et al. [51] identified a notion of algorithmic stability; their work proposes MU for smooth convex empirical risk minimization problems, and their algorithm also satisfies differential privacy.
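The backdoor-based verification of [26] can be summarized by the following hedged sketch: the user pre-injects triggered samples with a chosen label, and after a deletion request checks whether the trigger still elicits that label. The classifier interface, trigger construction, and acceptance threshold here are assumptions.

```python
# Backdoor-based deletion verification sketch: a high trigger-success rate
# after an unlearning request suggests the data were not actually removed.
import numpy as np

def verify_unlearning(model, triggered_inputs, injected_label, threshold=0.1):
    """Return True if the model appears to have forgotten the backdoor.
    `model` is any classifier exposing predict(); threshold is an assumption."""
    preds = model.predict(triggered_inputs)
    success_rate = np.mean(preds == injected_label)
    return success_rate <= threshold
```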

    3.4 Machine Unlearning for Other Fields or Paradigms

Currently, most existing MU methods still focus on a monolithic form of data or application [52]. However, the vast differences between tasks and paradigms mean that similar methods can lead to entirely different results. For example, gradient updating-based MU may fail in federated scenarios, as the federated averaging algorithm vastly reduces the impact of individual gradients. As such, MU algorithms for other tasks or paradigms face new challenges. Reference [53] provided the first framework for quick data summarization with data deletion using robust streaming submodular optimization. Accordingly, an exact MU method was proposed under the assumption that learning takes place in federated learning [38]; there, independent ML models are trained on different data partitions, and their predictions are aggregated during inference. In graph neural networks, because directly applying SISA to graph data can severely damage the graph's structural information, Chen et al. [41] proposed two novel graph partitioning algorithms and a learning-based aggregation method. Generally speaking, MU can be used for good privacy protection. However, a new privacy threat has been revealed: MU may leave imprints of the deleted data in the ML model [5]. Using the original model and the post-deletion model, ordinary membership inference attacks can infer the deleted user's private information. More interestingly, Marchant et al. [54] argued that current approximate MU and retraining do not set practical bounds on computation and proposed a poisoning attack against MU that can effectively increase the computational cost of data forgetting.

    4 Conclusion

Machine unlearning, including general MU and trusted MU, is a critical and booming research area. In this survey, we first summarize and categorize major ML attacks and existing machine unlearning methods. Specifically, we detail inference and poisoning attacks, which are threats faced by MU's counterpart, ML, and which MU attempts to mitigate. In addition, we present a comprehensive overview of MU from five perspectives: data storage, unlearning metrics, model properties, means of unlearning, and authentication of unlearning. We hope that this paper reminds researchers of the threat of ML attacks and the significance of MU, and provides a timely view. It would be an essential step towards trustworthy deep learning.

Acknowledgement: None.

Funding Statement: This work is supported by the National Key Research and Development Program of China (2020YFC2003404), the National Natural Science Foundation of China (Nos. 62072465, 62172155, 62102425, 62102429), the Science and Technology Innovation Program of Hunan Province (Nos. 2022RC3061, 2021RC2071), and the Natural Science Foundation of Hunan Province (No. 2022JJ40564).

Author Contributions: Yonghao Tang: wrote the initial draft of the manuscript, reviewed and edited the manuscript. Qiang Liu: wrote the paper, reviewed and edited the manuscript. Zhiping Cai: performed the project administration, reviewed and edited the manuscript. Tongqing Zhou: reviewed and edited the manuscript. Qiang Ni: reviewed and edited the manuscript.

Availability of Data and Materials: Data and materials availability is not applicable to this article as no new data or material were created or analyzed in this study.

Conflicts of Interest: The authors declare they have no conflicts of interest to report regarding the present study.
