
    An Intelligent Secure Adversarial Examples Detection Scheme in Heterogeneous Complex Environments

Computers, Materials & Continua, 2023, Issue 9

Weizheng Wang, Xiangqi Wang, Xianmin Pan, Xingxing Gong, Jian Liang, Pradip Kumar Sharma, Osama Alfarraj and Wael Said

1College of Information Science and Engineering, Hunan Women’s University, Changsha, 410138, China

2School of Mathematics and Statistics, Hunan First Normal University, Changsha, 410138, China

3School of Computer & Communication Engineering, Changsha University of Science & Technology, Changsha, 410114, China

4Department of Computing Science, University of Aberdeen, Aberdeen, AB24 3FX, UK

5Department of Computer Science, Community College, King Saud University, Riyadh, 11437, Saudi Arabia

6Department of Computer Science, Faculty of Computers and Informatics, Zagazig University, Zagazig, 44511, Egypt

ABSTRACT Image-denoising techniques are widely used to defend against Adversarial Examples (AEs). However, denoising alone cannot completely eliminate adversarial perturbations. The remaining perturbations tend to amplify as they propagate through deeper layers of the network, leading to misclassifications. Moreover, image denoising compromises the classification accuracy of original examples. To address these challenges in AE defense through image denoising, this paper proposes a novel AE detection technique. The proposed technique combines multiple traditional image-denoising algorithms and Convolutional Neural Network (CNN) structures. The detector model integrates the classification results of different models as its input and calculates its final output based on a machine-learning voting algorithm. By analyzing the discrepancy between predictions made by the model on original examples and denoised examples, AEs are detected effectively. This technique reduces computational overhead without modifying the model structure or parameters, effectively avoiding the error amplification caused by denoising. The proposed approach demonstrates excellent detection performance against mainstream AE attacks. Experimental results show outstanding detection performance against well-known AE attacks, including the Fast Gradient Sign Method (FGSM), the Basic Iterative Method (BIM), DeepFool, and Carlini & Wagner (C&W), achieving a 94% success rate in FGSM detection while reducing the accuracy on clean examples by only 4%.

KEYWORDS Deep neural networks; adversarial example; image denoising; adversarial example detection; machine learning; adversarial attack

    1 Introduction

With the improvement in computer performance and data processing capacity, deep neural networks have demonstrated great advantages in intelligent environments such as image and speech recognition [1], autonomous driving [2], natural language processing [3,4], and network security detection [5]. However, recent studies have shown that deep neural networks are vulnerable to AEs [6–8]. AEs are intentionally crafted inputs that can deceive deep learning models, leading to incorrect and potentially harmful outputs. AEs are almost indistinguishable from real samples by the naked eye but can cause models to misclassify with high confidence. Even if several models have different structures and training data, the same AEs can attack them [9]. As shown in Fig. 1, in a realistic scenario, AEs can perform targeted attacks on machine learning-based target systems, which can be applications with high-security requirements such as autonomous driving, face recognition apps, and smart homes. The presence of AEs puts the application of machine learning in security-sensitive fields at serious risk, and these threats can trigger machine learning-driven recognition systems to execute incorrect instructions or become paralyzed, even posing a significant risk to people’s lives. Motivated by these pressing challenges, this paper aims to address the security issues associated with AEs and enhance the robustness of machine learning systems. The primary motivation behind this study stems from the urgent need to promote the widespread adoption and application of machine learning technologies while ensuring their reliability and security.

Figure 1: The attacker generates an AE that makes the system’s decision differ from a human’s

In recent years, researchers have attached great importance to the security issue of AEs, and great progress has been made in research on adversarial example defense. For example, adversarial training [10] uses an attack algorithm to generate AEs before model training and then mixes the AEs with the original samples to train the model, constantly generating new AEs at each step of training to improve the model’s recognition accuracy on AEs. Defensive distillation [11] reduces the sensitivity of a neural network model to input perturbations by generating a smooth classifier, making the classifier more adaptive to AEs. These defense techniques, which modify the model structure or the training process, can significantly enhance the robustness of neural network models. However, the training overhead is high (adversarial training requires a large number of AEs for training), the complexity is high (defensive distillation requires adding a distillation temperature and modifying the objective function), and the defense against black-box attacks, which are widely used in real-world scenarios, is ineffective. In addition to enhancing the ability of the model to resist AEs, AE detection [12–15] is also a mainstream defense technology at present. In this technology, before an image is classified by the image recognition model, it is screened by a detection system to determine whether it is an AE. AE detection does not require AEs for training, nor does it require modification of the model structure and parameters, reducing training overhead and defense complexity and making it easier to deploy in realistic machine learning systems. However, the performance of AE detection is closely related to the detector. In addition, this method only detects whether there are AEs and cannot improve the robustness of the model. Table 1 summarizes the advantages and disadvantages of the current mainstream AE defense technologies.

Table 1: The advantages and disadvantages of defense techniques against AEs

Previous research has made notable strides in AE defense techniques, including adversarial training and defensive distillation, which modify model structures or training processes to enhance robustness. However, these approaches often suffer from high training overhead, increased complexity, and limited effectiveness against real-world black-box attacks. On the other hand, AE detection has gained prominence as a defense mechanism that screens inputs to identify AEs without modifying the model or incurring significant training costs. However, the performance of AE detection heavily relies on the choice of detector and does not improve the overall robustness of the model.

This paper presents a new AE detection technology based on image denoising. Without modifying the model structure or affecting the accuracy on clean samples, input samples are detected based on the difference between the predictions on adversarial examples and clean samples before and after image denoising. The training cost of this method is very small, and it can effectively detect the AEs generated by current mainstream attacks. In addition, the proposed detection technology can be applied to image data filtering. In applications with high-security requirements, the image dataset is first screened, and if an image is detected as an AE, it is filtered out or restored for secondary processing. Therefore, the research in this paper has important theoretical and practical significance.

    The main contributions of this paper are as follows:

(1) The paper presents a novel adversarial example detection scheme based on the inconsistency of model predictions between original samples and AEs before and after image denoising. The detection technique aims to identify AEs without modifying the model structure or affecting the classification accuracy of clean samples.

(2) The detection scheme involves training a detector model by integrating the classification results of different models using a machine-learning voting algorithm. The detector is fine-tuned by comparing the classification results of original images and denoised images obtained from multiple image-denoising algorithms.

(3) The AE detection framework consists of classifying an input image using the detector, denoising the image, and reclassifying it using the detector. The deviation between the classification results before and after denoising is calculated using the L2 norm and compared with a detection threshold. If the deviation exceeds the threshold, the image is classified as an AE.

(4) The proposed method is evaluated using the MNIST dataset and various AE attack algorithms. The results demonstrate the effectiveness of the detection scheme, with success rates ranging from 88% to 94% for detecting AEs generated by different attacks. Compared to other defense techniques, the proposed method shows better performance while maintaining high prediction accuracy for clean samples.

The rest of this paper is organized as follows: Section 2 reviews related work. Section 3 introduces the preliminary knowledge of this paper, including the basic concepts of image denoising and AEs. Section 4 describes in detail the detection framework and the specific implementation process proposed in this paper. Section 5 evaluates the proposed detection technique under different AE attacks through experiments. Finally, we summarize the current research results, give an outlook on subsequent research, and propose possible solutions and research directions.

    2 Related Work

In 2014, Szegedy et al. [6] first introduced the concept of adversarial examples and proposed the Limited-memory Broyden-Fletcher-Goldfarb-Shanno (L-BFGS) method to construct AEs. The nonlinear and nonconvex objective optimization problem is approximated by minimizing the loss function to find the minimum additional term that makes the model misclassify. This method is stable and effective, but it has high computational complexity. Goodfellow et al. [10] subsequently explained the basic principle of AEs and proposed the FGSM, one of the simplest and most effective attacks. Adding perturbations in the gradient direction and linearizing the loss function causes the model to misclassify the generated images. This method constructs AEs efficiently and has therefore been widely used in AE research, often as a benchmark attack for new defense frameworks. However, the perturbation size of FGSM-generated AEs is not well controlled and is prone to label leakage. Kurakin et al. [16] proposed the BIM, an iterative optimization of FGSM. By applying the adversarial perturbation in small step sizes and clipping the result after each iteration to keep the perturbation within a neighborhood of the original image, a high-quality AE can eventually be generated. Recently, Carlini et al. [17] proposed the C&W attack based on iterative optimization of L-BFGS. Under the L0, L2, and L∞ distance metrics, the attack constructs high-quality AEs by optimizing a constrained objective function and can successfully defeat the mainstream defensive distillation technology. By now, adversarial attacks have been studied widely in many fields, such as object detection and tracking [18,19], reinforcement learning [20,21], face recognition [22,23], and healthcare data [24].
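For illustration, the following is a minimal NumPy sketch of a BIM-style attack as described above; the grad_fn callback is an assumed stand-in for the gradient of the model's loss with respect to the input, and the step size and iteration count are arbitrary.

```python
import numpy as np

def bim_attack(x, grad_fn, eps=0.1, alpha=0.01, steps=10):
    """Basic Iterative Method sketch: repeated small gradient-sign steps, clipping the
    result back into an eps-neighbourhood of the original image x after each step.
    grad_fn(x_adv) is assumed to return the gradient of the loss w.r.t. the input."""
    x_adv = x.copy()
    for _ in range(steps):
        x_adv = x_adv + alpha * np.sign(grad_fn(x_adv))  # small FGSM-style step
        x_adv = np.clip(x_adv, x - eps, x + eps)         # stay within the eps-neighbourhood
        x_adv = np.clip(x_adv, 0.0, 1.0)                 # stay a valid image in [0, 1]
    return x_adv
```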

In recent years, the proliferation of research on AE attack algorithms has posed a great threat to the development and application of deep learning in security-sensitive fields [25]. These threats may trigger a complete breakdown of deep learning-driven recognition systems and even pose a risk to life in applications with high security requirements such as autonomous driving. To counter the security risks posed by AEs, researchers have proposed a series of defense mechanisms.

Research on AE defense is mainly divided into two directions: 1) Active defense: enhancing the robustness of the neural network model through technical reinforcement; 2) Passive defense: independent of the attack algorithm and the network model structure, it only needs to determine whether an image is an AE. Fig. 2 shows the different forms of AE defenses. For active defense, Goodfellow et al. [10] were the first to propose a direct and effective adversarial training defense technique. The basic idea is to include AEs in the training process, training the model on them together with clean samples as part of the training set, and to continuously generate new AEs at each step of training, thus enhancing the model’s ability to resist AEs. However, to ensure the recognition accuracy on AEs, a large number of AEs are needed to train the model, and the training process is tedious and costly. The defensive distillation proposed by Papernot et al. [11] reduces the sensitivity of the neural network model to input perturbations by generating smooth classifiers, making the classifier more adaptable to AEs. This defensive technique is independent of the generation of AEs and has a high generalization capability. However, an attacker can easily bypass the distillation model by training a substitute model similar to the distillation model and then using the gradient of the substitute model to generate AEs, so defensive distillation can be easily broken by black-box attacks. Moreover, the technique requires changing the model structure and retraining the classifier, further increasing the overhead and complexity of the defense.

Figure 2: Different ways of defending against AEs

Compared with active defense, research on passive defense is relatively simpler. Passive defense distinguishes clean samples from AEs by detection. Gondara [12] used density ratio estimation as the measurement method to detect AEs. Based on the fact that clean samples and AEs have different underlying probability densities, AEs are detected with high confidence by estimating the density ratio. This method handles grayscale and color images well but has high computational complexity and can only detect AEs far from the decision boundary. Meng et al. [13] proposed a defense framework, MagNet, that includes multiple detector networks and a reformer network. The detector networks learn to distinguish clean samples from AEs by approximating the manifold of clean samples, and the reformer network shifts AEs toward the manifold of clean samples. This defense technique detects well in both black-box and gray-box attacks, but the training overhead is high, and modifying the original model reduces the classification accuracy on clean samples. Jia et al. [14] proposed a defense framework called ComDefend based on image compression and reconstruction. ComDefend consists of two CNN modules, ComCNN and ResCNN, where ComCNN stores the main structural information of the original image and ResCNN restores the original image at high resolution. ComDefend processes the image in blocks instead of processing the whole image directly, which reduces training time and computational overhead, but clean samples lose some prediction accuracy after image compression. Xu et al. [15] proposed an AE defense strategy based on feature squeezing. The method adds two external models to the Deep Neural Network (DNN) classifier for reducing the color bit depth of each pixel and smoothing pixels with a spatial filter, respectively. Samples are distinguished based on the model’s inconsistent predictions on clean samples and AEs before and after feature squeezing. The method shows excellent detection performance under different attacks, but it degrades the model’s classification accuracy on clean samples for relatively complex datasets such as ImageNet.

Before the work in this paper, many scholars carried out in-depth research on AE defense based on image denoising [26–28]. The basic defense framework is shown in Fig. 3. A given input image is first screened by the detector. The detected AEs are then preprocessed by an image-denoising algorithm to reduce the small perturbations they contain. Finally, the processed AEs are input into the model to be correctly classified, while clean samples are input directly into the model for prediction and classification. In this way, the image recognition model can be protected from attacks by AEs.

Figure 3: The framework of defending against AEs based on traditional image denoising

With the continuous development of AE defense technology, image denoising has been widely used as an effective defense method. Xie et al. [29] proposed a method to remove minor adversarial perturbations by performing random transformations on the input image. Specifically, two random layers, a random resize layer and a random padding layer, are introduced before the original image enters the classification model; the original image is transformed by these two random layers and then passed to the classification model. This method can resist white-box attacks well without retraining or changing the model. However, the framework requires a large number of AEs and complex training to defend well. Gu et al. [27] proposed a defense framework of deep contractive networks, using a smoothness penalty regularization similar to the contractive autoencoder to enhance the robustness of the network. The defense of the deep contractive network is better under adversarial training, but the computational complexity is too high. Hu [28] proposed a DD model based on deep residual learning denoising, which improves the accuracy of identifying AEs by superimposing two different defense models, learns the noise through residual learning, and improves training efficiency by using Batch Normalization (BN) and Rectified Linear Unit (ReLU) layers. Dong et al. [26] proposed a High-level representation Guided Denoiser (HGD) network that trains a neural network-based denoiser to remove adversarial perturbations. In the training process, the loss function is applied directly to the higher-level feature layers of the network, the denoiser is trained on a small subset of images, and it then transfers to other classes. Compared to other denoising networks, HGD has good transferability. In the Neural Information Processing Systems (NIPS) adversarial defense competition, the HGD network won with a significant lead in detection performance. However, HGD training requires a large number of AEs, and the training overhead is high. Besides, its detection performance under white-box attacks is poor.

    3 System Model and Definitions

    3.1 Image Denoising

Digital images are disturbed to varying degrees by different kinds of noise during acquisition and transmission, which complicates high-level image processing, so image denoising has become an important part of image pre-processing. The goal of image denoising is to obtain a clean image from a noise-containing image by subtracting the noise, restoring the original image information to the maximum extent while still retaining enough detail. Specifically, for an input image v(x) with noise, the additive noise model can be expressed as:

v(x) = u(x) + η(x), x ∈ Ω,

where u(x) represents the image without noise, η(x) is the additive noise representing the impact of noise, and Ω is the set of pixels, i.e., the whole image. Depending on whether denoising is combined with a machine learning model, image-denoising algorithms are mainly divided into traditional denoising algorithms and machine learning denoising algorithms.
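As a concrete illustration of the additive model above, the following minimal NumPy sketch corrupts a clean image u(x) with zero-mean Gaussian noise η(x); the noise level sigma is an arbitrary choice.

```python
import numpy as np

def add_gaussian_noise(u, sigma=0.1, seed=0):
    """Additive noise model: v(x) = u(x) + eta(x) over the pixel set Omega."""
    rng = np.random.default_rng(seed)
    eta = rng.normal(0.0, sigma, size=u.shape)   # eta(x): zero-mean Gaussian noise
    return np.clip(u + eta, 0.0, 1.0)            # keep pixel values in [0, 1]

# Example: corrupt a clean 28 x 28 image u(x) with values in [0, 1]
u = np.ones((28, 28)) * 0.5
v = add_gaussian_noise(u, sigma=0.1)
```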

Traditional denoising algorithms identify regularities in the noisy image and then carry out the corresponding denoising. The main algorithms are mean denoising [30], Non-Local Means (NLM) denoising [31], and Block-Matching and 3D (BM3D) filtering [32], which are usually able to handle images with specific noise types. For example, mean denoising can effectively deal with Gaussian noise, while wavelet denoising is mainly applicable to images with white noise. In real scenarios, due to the imperfection of digital devices, original images are inevitably contaminated by various kinds of noise during transmission and storage. Therefore, we can carry out comprehensive denoising according to the characteristics of different denoising algorithms, such as combining median denoising and wavelet denoising, which can effectively reduce the noise in an image while preserving the integrity of edges, textures, and other detailed information (a minimal sketch of such a combination is given at the end of this subsection). In practical applications, to improve the generality of the denoising algorithm, different denoising algorithms, or combinations of several of them, should be applied according to the characteristics of the original image and the type of noise. If the regularities cannot be found from the noisy image itself, machine learning can be used for denoising: the inherent attributes of the image are summarized by continually learning the characteristics of the noise, and the image is denoised through its statistical characteristics. Machine learning denoising mainly includes convolutional neural network denoising [33], autoencoder denoising [34], and generative adversarial network denoising [35]. Compared with traditional image-denoising algorithms, machine learning denoising algorithms denoise well and retain detailed edge information, but their denoising speed still needs to be improved. Based on the goal of image denoising, a denoising method should meet four requirements at the same time:

    (1) The noise contained in the image should be removed as much as possible;

(2) The integrity of important detail information (such as edges and texture) contained in the image should be ensured;

    (3) Additional noise types should not be introduced in the process of noise removal;

(4) In real scenes, the denoising efficiency should be high enough.

Only when these four requirements are met simultaneously can the best denoising effect be achieved. However, current traditional image denoising struggles to balance removing noise and retaining edge detail. Machine learning denoising satisfies requirements (1), (2), and (3) well, but existing machine learning denoising methods are still at the experimental stage, and their training speed and recovery performance need further improvement.
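The sketch below, using SciPy's ndimage filters, illustrates the kind of combined traditional denoising described earlier in this subsection: a median filter (effective against salt-and-pepper noise) followed by a mean filter (effective against Gaussian-like noise). It is only one possible combination under these assumptions, not a prescribed pipeline.

```python
import numpy as np
from scipy.ndimage import median_filter, uniform_filter

def combined_denoise(v, median_size=3, mean_size=3):
    """Comprehensive denoising sketch combining two traditional filters."""
    x = median_filter(v, size=median_size)   # suppress impulse (salt-and-pepper) noise
    x = uniform_filter(x, size=mean_size)    # smooth residual Gaussian-like noise
    return np.clip(x, 0.0, 1.0)              # keep pixel values in [0, 1]
```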

    3.2 Adversarial Example

In 2014, Szegedy et al. [6] first proposed the concept of AEs. An AE introduces a slight perturbation to the input of a neural network model so that the perturbed input is incorrectly classified by the model with high confidence, while the human eye cannot distinguish the change caused by the perturbation. Formally, suppose there is a machine learning model M and an original sample C correctly classified by the model, M(C) = ytrue, where ytrue is the true label of C. Perturbing the original sample C generates an adversarial example C′ with M(C′) ≠ ytrue, which is incorrectly classified by the model. A typical example is shown in Fig. 4. In the image on the left, the neural network model believes the image is a “Panda” (57.7%), but after a small perturbation transforms it into the image on the right, it is classified as a “gibbon” with 99.3% confidence, even though the human eye cannot see any difference.

Figure 4: Generating an AE with the FGSM attack algorithm

For a more intuitive representation, we use the neural network model in Fig. 5 as an example to show how the output changes when the inputs are perturbed. For any single input dimension, a small change will not affect the overall prediction of the classifier, but small changes to all dimensions of the input will result in large changes in the classifier’s output. As shown in Fig. 5, W(1) and W(2) are weight matrices, and the original input values and weight values are randomly initialized. After perturbing the original input by 0.5 in the sign direction, the adversarial inputs are all equal to 1.5. Then, after the transformation by the first-layer weight matrix W(1) and the ReLU activation function, the adversarial outputs of the first layer are obtained. Finally, after the transformation by the second-layer weight matrix W(2) and the sigmoid activation function, the probability of the output class changes from 0.2689 to 0.8176, which is enough to make the model misclassify with high confidence. As the depth of the neural network model increases, the change in the output class probability becomes even more pronounced.

Figure 5: The change of output after adding perturbations to the inputs of the neural network [36]
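The following NumPy sketch mimics the effect illustrated in Fig. 5 with a small, randomly initialized two-layer network (ReLU then sigmoid). The dimensions and weights are arbitrary assumptions, so it reproduces the phenomenon of small per-dimension perturbations producing a large output shift, not the exact numbers in the figure.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
x  = rng.normal(size=50)                      # original input
W1 = rng.normal(size=(50, 50)) / np.sqrt(50)  # first-layer weights  W(1)
w2 = rng.normal(size=50) / np.sqrt(50)        # second-layer weights W(2)

def forward(inp):
    h = np.maximum(0.0, W1 @ inp)             # first layer + ReLU
    return sigmoid(w2 @ h)                    # second layer + sigmoid -> class probability

# Gradient of the pre-sigmoid output with respect to the input for this tiny network
h = np.maximum(0.0, W1 @ x)
grad = W1.T @ (w2 * (h > 0))

eps = 0.5
x_adv = x + eps * np.sign(grad)               # perturb every input dimension by +/- eps

print(forward(x), forward(x_adv))             # small per-dimension change, large probability shift
```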

AE attacks can be targeted, in which the adversary’s goal is to have the output classified into a specific class, or untargeted, in which the adversary only needs the output classified into any class other than the correct one. More formally, taking the targeted attack as an example, given an input x ∈ X and a classifier f(·), the attacker’s goal is to find an adversarial input x′ ∈ X by adding a small perturbation to x such that x′ is classified as the target class t. Modeled as a mathematical problem, the objective of the targeted attack is:

min_{x′} ‖x′ − x‖_p   subject to   f(x′) = t   and   ‖x′ − x‖_p ≤ ε,

where ‖·‖_p represents the distance norm between the original input x and the adversarial input x′, and p can be 0, 1, 2, or infinity. ε limits the size of the perturbation so that the perturbation added to the input is not detected by human eyes.

    4 Our Proposed AE Detection Scheme

The detection technique proposed in this paper detects input samples based on the inconsistency of model predictions between original samples and AEs before and after denoising. Namely, the prediction obtained by applying an AE to the model changes significantly before and after image denoising, while the prediction on a clean sample remains unchanged. Specifically, the image recognition model is first used to classify the original image, and the classification result (prediction accuracy) is recorded. Then different image-denoising algorithms are used to denoise the image, the processed images are classified under the same model, and the classification results are recorded. Finally, the classification results of the image before and after denoising are compared. If the classification results before and after denoising are the same, the sample is clean; otherwise, the sample is an AE. For detected AEs, we can also perform a secondary intervention to decide whether to discard or restore the samples. For example, if a suspected AE is detected in autonomous driving, the vehicle can first pull over so that a human can judge the next operation; if a suspected AE is detected in face recognition, the machine recognition can be stopped for manual verification. The detection technique proposed in this paper can effectively detect the AEs generated by current mainstream attack algorithms without modifying the model structure and without affecting the classification accuracy on original samples, and it avoids the error amplification effect brought by image denoising. The research process of this detection technique is divided into two main stages: training the detector model and detecting AEs.

    4.1 Training Detector Model

The detector model integrates the classification results of different models as the input to the detector and calculates the final output of the detector based on a machine-learning voting algorithm. As shown in Fig. 6, the specific training process is as follows:

(1) The adversarial dataset is constructed using the mainstream AE attack algorithms (FGSM, BIM, DeepFool [37], C&W).

(2) The adversarial dataset is input to image recognition model 1 for classification, and the classification result p1 of model 1 is recorded.

(3) For each input image of the same dataset, four different image-denoising algorithms (two traditional denoising algorithms and two machine learning denoising algorithms) are used for denoising. The denoised images are then classified by the other four models, respectively, and the classification results of these models are recorded as p2, p3, p4, and p5.

(4) The results of classifying the same image by the different models in step (2) and step (3) are compared pairwise, and four deviation values d1, d2, d3, and d4 are obtained. Then, based on the machine-learning voting algorithm, the classification result of the detector model and the corresponding confidence score are output.

Figure 6: Training a detector model

The detector is obtained by integrating the five image recognition models and fine-tuning. During the training process, we first keep the structure and parameters of the detector consistent with those of the image recognition model up to the fully connected layer. Then the weights and biases of the detector’s fully connected layer are randomly initialized. Finally, the final output of the detector is obtained by iteratively updating the fully connected layer weights.

Algorithm 1 implements the specific process of training the detector model. Here, detector denotes the detector model, a set of five objects, each of which represents an image recognition model. The detector is trained from the input image X and the prediction accuracies of the image recognition models (the prediction accuracy of the one original model and the prediction accuracies of the four denoised branches), and the result of the detector voting, “vote”, and the corresponding class-average confidence score, “prob”, are obtained based on a machine-learning voting algorithm.
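As a rough illustration of the voting step described for Algorithm 1, the sketch below aggregates the five models' predictions by majority vote and reports the class-average confidence together with the four deviation values. The input and output conventions are assumptions, since the paper does not list Algorithm 1's exact signature.

```python
import numpy as np
from collections import Counter

def detector_vote(p1, denoised_preds):
    """Voting sketch: p1 is model 1's class-probability vector for the original image;
    denoised_preds is a list of four probability vectors, one per denoised branch."""
    deviations = [float(np.linalg.norm(p1 - p, ord=2)) for p in denoised_preds]  # d1..d4
    all_preds = [p1] + list(denoised_preds)
    labels = [int(np.argmax(p)) for p in all_preds]
    vote, _ = Counter(labels).most_common(1)[0]              # majority-voted class
    prob = float(np.mean([p[vote] for p in all_preds]))      # class-average confidence score
    return vote, prob, deviations
```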

    4.2 AE Detection

    4.2.1 AE Detection Framework

The main goal of the AE detection part is to verify the feasibility of detecting AEs based on image denoising and to test the detection performance of the detector. As shown in Fig. 7, an input image to be tested is first classified by the detector to obtain the predicted result p. Then the input image is denoised using the denoising algorithm and classified by the detector again to obtain the predicted result p1. Based on the L2 distance norm, the deviation d before and after image denoising is calculated and then compared with the detection threshold K obtained in the training phase. If the deviation is less than the detection threshold, the image is a clean sample. Otherwise, the image is an AE, and it will be discarded or restored for secondary processing.

Figure 7: AE detection framework
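A minimal sketch of the decision rule in Fig. 7, assuming detector(x) returns a class-probability vector and denoise(x) applies the chosen denoising algorithm:

```python
import numpy as np

def is_adversarial(detector, denoise, x, K):
    """Decision-rule sketch: classify, denoise, re-classify, then compare the L2
    deviation of the two prediction vectors against the detection threshold K."""
    p  = detector(x)                     # prediction p on the raw input
    p1 = detector(denoise(x))            # prediction p1 on the denoised input
    d  = float(np.linalg.norm(p - p1))   # deviation d before/after denoising (L2 norm)
    return d >= K                        # True -> treat the input as an AE
```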

    4.2.2 Detection Threshold Selection

The detection threshold is determined through multiple update iterations of the different models during the training phase. An appropriate distance threshold is selected using the following steps (Algorithm 2, repeated over multiple iterations):

(1) Record the prediction accuracy of each model;

(2) Compute the absolute value of the difference between the prediction accuracy after denoising and the prediction accuracy without any processing;

(3) From the values obtained in step (2), calculate the distance values using the L2 norm and sort them;

(4) According to the majority voting algorithm, select the threshold that satisfies the majority of the distance values as the final detection threshold of the detection model.

As shown in Algorithm 2, based on the known input image X and the denoised image X_denoise, we use a machine-learning voting algorithm to calculate the prediction difference threshold C between the different models. C is defined specifically as:

where VOTE is the voting algorithm. We use the minimum value of the prediction difference threshold C between every two models as the candidate threshold. Each candidate threshold is then evaluated in turn according to the accuracy on the original samples, and in this way the optimal threshold is selected. It should be noted that the optimal detection threshold varies under different AE attacks. In general, the stronger the attack, the smaller the optimal detection threshold.
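The following sketch illustrates the threshold-selection idea of Algorithm 2 under stated assumptions: acc_before and the entries of accs_after are the recorded prediction accuracies (as arrays) before and after denoising, and clean_acc_fn(K) is a hypothetical helper returning the clean-sample accuracy retained when threshold K is used.

```python
import numpy as np
from itertools import combinations

def select_threshold(acc_before, accs_after, clean_acc_fn):
    """Threshold-selection sketch (helper names are assumptions, not the paper's API)."""
    # Steps (2)-(3): absolute accuracy differences turned into sorted L2 distance values
    distances = sorted(float(np.linalg.norm(np.abs(acc_before - a))) for a in accs_after)
    # Step (4): candidate thresholds, including the minimum pairwise difference between
    # models; each candidate is scored by how much clean-sample accuracy it preserves
    candidates = [min(abs(a - b) for a, b in combinations(distances, 2))] + distances
    return max(candidates, key=clean_acc_fn)
```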

    5 Experiment Results and Analysis

    5.1 Experimental Setup

The dataset adopted in this experiment is the classical MNIST dataset from the field of image recognition, provided by NIST, USA. This dataset contains 60,000 training images and 10,000 test images. The category labels of the images correspond to 0–9, and each sample is a grayscale handwritten digit image of fixed size 28 × 28 pixels with values between 0 and 1. We used AlexNet as the original model for training, and we used the Denoising Convolutional Neural Network (DnCNN) and the Denoising Autoencoder (DAE) as the models for convolutional neural network denoising and autoencoder denoising. As shown in Table 2, AlexNet is an 8-layer convolutional neural network model consisting of 5 convolutional layers, a maximum pooling layer, a fully connected layer, and a softmax layer. DnCNN is an 18-layer feedforward denoising convolutional neural network. The middle 16 layers use batch normalization, which speeds up model training and image-denoising efficiency. As shown in Fig. 8, the autoencoder consists of an encoder and a decoder, where the encoder has 2 convolutional layers and 2 maximum pooling layers, and the decoder also has 2 convolutional layers and 2 maximum pooling layers. Compared with convolutional neural network denoising, autoencoder denoising generates the corresponding input and output by encoding and decoding, and then generates the corresponding denoised image by adding specific noise to the input image.

Table 2: Network architecture of AlexNet and DnCNN

Figure 8: Network architecture of the autoencoder
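For reference, a minimal Keras sketch of a denoising autoencoder with the two-convolution, two-pooling encoder described above; the filter counts are assumptions, and the decoder uses upsampling layers to restore the 28 × 28 resolution.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def build_dae():
    """Illustrative denoising autoencoder for 28x28x1 MNIST images."""
    inp = layers.Input(shape=(28, 28, 1))
    # Encoder: 2 convolutional layers, each followed by max pooling
    x = layers.Conv2D(32, 3, activation="relu", padding="same")(inp)
    x = layers.MaxPooling2D(2, padding="same")(x)
    x = layers.Conv2D(32, 3, activation="relu", padding="same")(x)
    x = layers.MaxPooling2D(2, padding="same")(x)
    # Decoder: 2 convolutional layers with upsampling back to the input size
    x = layers.Conv2D(32, 3, activation="relu", padding="same")(x)
    x = layers.UpSampling2D(2)(x)
    x = layers.Conv2D(32, 3, activation="relu", padding="same")(x)
    x = layers.UpSampling2D(2)(x)
    out = layers.Conv2D(1, 3, activation="sigmoid", padding="same")(x)
    model = models.Model(inp, out)
    model.compile(optimizer="adam", loss="binary_crossentropy")
    return model

# Trained on (noisy image, clean image) pairs, e.g. model.fit(x_noisy, x_clean, ...)
```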

    5.2 Image Denoising Effect

We used 2 traditional denoising methods and 2 machine learning denoising methods in our experiments. The traditional methods are median denoising and mean denoising, implemented using Python’s SciPy library. As shown in Fig. 9, we tested the effect of the 2 traditional denoising methods by adding salt-and-pepper noise and Gaussian noise, respectively. These two methods are fast and remove salt-and-pepper noise and Gaussian noise relatively well, but they also blur the edge information of the image. For machine learning denoising, we used the denoising autoencoder and the feed-forward DnCNN, respectively. The encoder and decoder of the DAE are implemented using Keras, and DnCNN is implemented using TensorFlow and OpenCV. The results of DAE and DnCNN denoising are shown in Figs. 10 and 11. Compared with the traditional denoising methods, the quality of the images obtained by machine learning denoising is significantly higher. Because noise with certain regularities can be modeled through feature extraction and autonomous learning, the machine learning denoising methods work well on different types of noise and on noise with different coefficients. However, during the experiments, it was found that machine learning denoising takes several minutes in the best case to denoise an image, while a traditional denoising algorithm takes only a few seconds. As the architecture becomes increasingly complex, denoising algorithms may become time-consuming and resource-intensive. Therefore, the efficiency of machine learning denoising needs to be further improved.

5.3 Effectiveness of Adversarial Example Detection

AE detection is based on the inconsistency of model predictions between original samples and AEs before and after denoising. After processing by denoising algorithms, the prediction accuracy of the classifier on original samples is unchanged, while the prediction accuracy on AEs varies greatly. The difference is especially obvious for AEs after combining various types of denoising algorithms. In this paper, AEs are generated using the FGSM, BIM, DeepFool, and C&W attacks in “cleverhans” [38]. “cleverhans” is an open-source Python library for adversarial attacks, defenses, and benchmarking of machine learning models, which contains instructions and code implementations of the current mainstream AE attack algorithms and defense techniques. As shown in Table 3, we evaluated the detection effectiveness of our proposed defense method using AEs generated by different attack algorithms under the optimal thresholds on the MNIST dataset.
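A minimal sketch of crafting FGSM examples with cleverhans, assuming the cleverhans 4.x TensorFlow 2 API (module paths and argument names may differ across versions); model is assumed to be a trained Keras classifier on MNIST inputs scaled to [0, 1].

```python
import numpy as np
# Assumed cleverhans 4.x TF2 module path; other versions lay the package out differently.
from cleverhans.tf2.attacks.fast_gradient_method import fast_gradient_method

def craft_fgsm(model, x_clean, eps=0.3):
    """Generate FGSM adversarial examples under the L-infinity norm."""
    return fast_gradient_method(model, x_clean, eps, np.inf,
                                clip_min=0.0, clip_max=1.0)
```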

Figure 10: Effect of the autoencoder denoising algorithm

Figure 11: Effect of the DnCNN denoising algorithm

Table 3: The detection success rate of the detection model proposed in the paper

Table 3 shows the success rate of the proposed method in detecting AEs. For the optimal detection threshold, we choose a value of 0.01 for the FGSM and BIM attacks. Due to the high strength of the DeepFool and C&W attacks, we choose values of 0.0044 and 0.008, respectively. In the undefended case, the C&W attack is the most aggressive. The lower success rate of the FGSM attack is mainly attributed to its excessive perturbation and the label leakage effect. After using our proposed defense method, the detection success rate is over 88% in all cases. In particular, the detection success rate for AEs generated by the FGSM attack reaches 94%. This shows that our proposed defense method is effective and generalizes well across different attack algorithms.

To measure the performance of the proposed method, we compare the detection technique proposed in this paper with other related defense techniques. Table 4 shows the comparison results on the MNIST image dataset. Compared with HGD and ComDefend, our method has less impact on the original samples and loses only 4% of their prediction accuracy. The best detection performance is achieved under the FGSM, BIM, and C&W attacks. One possible reason for the slightly degraded detection performance under the DeepFool attack is that the perturbation required to generate an AE with this attack is extremely small, and image denoising sometimes has difficulty effectively removing such small imperceptible perturbations, which ultimately leaves the label of the AE unchanged. In addition, the proposed denoising method mainly removes redundant information from the images and thus has an almost negligible impact on the original samples.

Table 4: Comparison of the proposed defense with other defense techniques

    6 Conclusion

In this paper, we present an AE detection framework that combines multiple image-denoising algorithms and CNN network structures. Our key finding is that by analyzing the inconsistency in model predictions between original samples and AEs before and after the denoising process, we can effectively detect AEs without modifying the model structure or compromising the accuracy on original samples. The framework integrates traditional denoising algorithms such as mean and median denoising, as well as machine learning denoising algorithms based on convolutional neural networks and autoencoders. By utilizing these techniques, the method separates AEs from clean samples, significantly reducing the error amplification effect associated with image denoising and minimizing defense cost and overhead. Our experimental results demonstrate the excellent detection performance of the proposed method against mainstream AE attacks, including FGSM, BIM, DeepFool, and C&W. Notably, the method achieves a 94% detection success rate for FGSM, with only a 4% reduction in the accuracy on clean examples. These findings underscore the effectiveness and efficiency of our AE detection technique, showcasing its potential to enhance the security of intelligent environments susceptible to targeted attacks. In summary, this research contributes to the field of adversarial defense by providing a practical and robust AE detection framework. By leveraging multiple image-denoising algorithms and analyzing prediction inconsistencies, our method achieves high detection performance while preserving the accuracy on original samples. This study opens up new avenues for developing efficient and reliable defense mechanisms against adversarial examples in various machine learning applications.

Acknowledgement: The authors would like to thank the Researchers Supporting Project of King Saud University for supporting this work.

Funding Statement: This work was supported in part by the Natural Science Foundation of Hunan Province under Grant Nos. 2023JJ30316 and 2022JJ2029, in part by a project supported by the Scientific Research Fund of Hunan Provincial Education Department under Grant No. 22A0686, and in part by the National Natural Science Foundation of China under Grant No. 62172058. This work was also funded by the Researchers Supporting Project (No. RSP2023R102), King Saud University, Riyadh, Saudi Arabia.

Author Contributions: Study conception and design: W. Wang, X. Pan, J. Liang; data collection: X. Gong, J. Liang; analysis and interpretation of results: X. Wang, P. K. Sharma; draft manuscript preparation: O. Alfarraj, W. Said. All authors reviewed the results and approved the final version of the manuscript.

Availability of Data and Materials: The data underlying this article will be shared on reasonable request to the corresponding author.

Conflicts of Interest: The authors declare that they have no conflicts of interest to report regarding the present study.
