
    VeriFace: Defending against Adversarial Attacks in Face Verification Systems

    Computers, Materials & Continua, 2023, Issue 9

    Awny Sayed, Sohair Kinlany, Alaa Zaki and Ahmed Mahfouz

    1 Information Technology Department, Faculty of Computing and Information Technology, King Abdulaziz University, Jeddah, Saudi Arabia

    2 Computer Science Department, Faculty of Science, Minia University, Al Minya, Egypt

    3 Faculty of Computer Studies, Arab Open University, Muscat, Oman

    ABSTRACT Face verification systems are critical in a wide range of applications, such as security systems and biometric authentication. However, these systems are vulnerable to adversarial attacks, which can significantly compromise their accuracy and reliability. Adversarial attacks are designed to deceive the face verification system by adding subtle perturbations to the input images. These perturbations can be imperceptible to the human eye but can cause the system to misclassify or fail to recognize the person in the image. To address this issue, we propose a novel system called VeriFace that comprises two defense mechanisms: adversarial detection and adversarial removal. The first mechanism, adversarial detection, is designed to identify whether an input image has been subjected to adversarial perturbations. The second mechanism, adversarial removal, is designed to remove these perturbations from the input image to ensure the face verification system can accurately recognize the person in the image. To evaluate the effectiveness of the VeriFace system, we conducted experiments on different types of adversarial attacks using the Labelled Faces in the Wild (LFW) dataset. Our results show that the VeriFace adversarial detector can accurately identify adversarial images with a high detection accuracy of 100%. Additionally, our proposed VeriFace adversarial removal method has a significantly lower attack success rate of 6.5% compared to state-of-the-art removal methods.

    KEYWORDS Adversarial attacks; face verification; adversarial detection; perturbation removal

    1 Introduction

    Face verification systems are becoming increasingly prevalent in our daily lives. They are used not only on smartphones but also in various security systems, public transportation, and other applications. Face verification systems have numerous benefits, such as convenience, speed, and security, but they can also pose a risk if they are not robust enough to detect adversarial attacks [1–5]. These attacks on face verification systems can occur in different ways, such as spoofing attacks, where an attacker presents a fake face to the system to gain access, or impersonation attacks, where an attacker mimics the face of an authorized user to deceive the system. Such attacks can compromise the security of the system and put confidential information at risk [6,7].

    Despite the impressive performance of face verification systems, they remain vulnerable to the growing threat of adversarial attacks [1–5]. Adversarial attacks can be caused by either digital or physical manipulations of faces, and they can weaken the performance of face verification systems even when the perturbations are imperceptible to the human eye [8]. Digital manipulations involve techniques such as image manipulation or Generative Adversarial Networks (GANs) to create adversarial examples that can fool the face verification system [6]. Physical manipulations, on the other hand, involve physical objects such as masks or contact lenses to spoof the system [4,5].

    Attackers can use different types of adversarial attacks to compromise face verification systems. There are two common types of attack: impersonation attacks and obfuscation attacks. In an impersonation attack, the attacker tries to impersonate the identity of a specific target victim to gain access to the system, manipulating their facial image to match that of the target victim [3]. In contrast, in an obfuscation attack, the attacker manipulates their facial image to make it difficult for the system to recognize their identity, without necessarily trying to impersonate someone else. The goal of an obfuscation attack is to confuse the system and evade detection. According to research, obfuscation attacks have a higher success rate than impersonation attacks, which makes them more effective and more widely adopted by attackers [3,9]. In this paper, the focus is on defending against three specific types of obfuscation attacks: Projected Gradient Descent (PGD) [1], Fast Gradient Sign Method (FGSM) [2], and Adversarial Face Synthesis (AdvFaces) [3].
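    The core idea behind gradient-based obfuscation attacks such as FGSM can be illustrated with a minimal numpy sketch. This is not the attack code used in the paper; the toy linear "model" and its analytic loss gradient are illustrative assumptions standing in for a face verification network and its backpropagated gradient.

```python
import numpy as np

def fgsm_linf(x, grad, eps):
    """FGSM (L-infinity): move each pixel by eps in the sign of the loss gradient."""
    x_adv = x + eps * np.sign(grad)
    return np.clip(x_adv, 0.0, 1.0)  # keep pixels in the valid range

# Toy stand-in for a verification network: a linear score w.x with squared loss.
rng = np.random.default_rng(0)
x = rng.random(16)                 # a flattened 4x4 "face image" in [0, 1]
w = rng.standard_normal(16)
loss_grad = 2 * (w @ x) * w        # analytic d/dx of (w.x)^2

x_adv = fgsm_linf(x, loss_grad, eps=0.1)
# The perturbation is bounded: max |x_adv - x| <= eps, yet it can shift the score.
```

The same one-step update, iterated with projection back into the ε-ball, yields the PGD attack discussed above.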

    In this paper, we propose a novel face verification system called VeriFace, which aims to enhance the security of face verification systems against various obfuscation attacks. The proposed system focuses on two main strategies: perturbation detection and removal. To remove the perturbations, the system utilizes various basis transformation functions such as total variance minimization [10], bit-depth reduction [11], wavelet denoising [12], and Principal Component Analysis (PCA) [13]. Additionally, we fine-tune a powerful image detection model, MobileNet [14], to accurately differentiate between clean and adversarial face images, resulting in a high performance rate.

    In summary, the contributions of this paper are as follows:

    • We propose a novel face verification system called VeriFace that comprises two defense mechanisms, adversarial detection and adversarial removal, to strengthen face verification systems against obfuscation attacks.

    • We evaluate VeriFace against different types of attacks, and the experimental results demonstrate its effectiveness in mitigating the impact of adversarial attacks on face verification systems.

    • We demonstrate the feasibility of the proposed system using the Labelled Faces in the Wild (LFW) dataset.

    The rest of the paper is organized as follows. In Section 2, we briefly review related work on perturbation removal and detection strategies. In Section 3, we describe the perturbation removal strategy and the experimental results for these defense methods. In Section 4, we describe the adversarial detection techniques used in our study and present experimental results showing that a simple binary classifier can determine with high accuracy whether a face image is an adversarial example or a clean image. Section 5 provides a detailed discussion of the proposed system. Finally, we present the conclusions.

    2 Related Work

    This section provides a summary of previous studies related to the protection of face verification systems. The existing literature on defense strategies can be broadly categorized into two groups, namely perturbation detection and perturbation removal.

    2.1 Perturbation Detection

    In perturbation detection defense strategies, the focus is on detecting and identifying adversarial examples by analyzing the input data. This is usually achieved by training a separate classifier to distinguish between clean and adversarial examples. Detecting adversarial examples is an important approach to defending face verification systems against adversarial attacks. This strategy has gained recent attention in the scientific community, and many adversarial detection methods have been developed as a preprocessing step [15]. However, the attacks addressed in previous studies were initially proposed for object recognition and may not be effective in a feature extraction network setting such as face verification [16,17]. Therefore, existing detectors for adversarial faces have only been shown to be effective in a highly restricted environment where the number of people is limited and constant during training and testing.

    To overcome the limitations of previous detection methods, some researchers have proposed more sophisticated and robust detection methods. For example, Grosse et al. [18] proposed using a detector network that is trained on the difference between clean and perturbed examples to identify adversarial images. Gong et al. [19] suggested using a simple feature space analysis to detect adversarial examples. Xu et al. [20] proposed a detection algorithm based on the distribution of the last-layer activations of a neural network. These methods are effective in detecting various types of adversarial examples. Another approach is to integrate detection and classification into a single model. For example, Metzen et al. [21] proposed using a multi-task learning approach to jointly train a classifier and a detector network. These advanced detection methods have shown promising results in defending against adversarial attacks on face verification systems.

    These detection-based methods have shown promising results in identifying adversarial examples, but they may suffer from high false-positive rates or may not be able to detect new types of attacks that are not included in the training data. Therefore, we propose a novel system, VeriFace, which combines perturbation detection and perturbation removal approaches to enhance robustness against a wide range of adversarial attacks.

    2.2 Perturbation Removal

    Perturbation removal refers to a defense strategy in which the adversarial perturbation is removed or reduced from the input image before it is processed by the face verification system [17,21]. This can be achieved using various techniques, including total variance minimization [10], bit-depth reduction [11], wavelet denoising [12], and PCA [13]. The goal of this strategy is to restore the input image to its original form, or a similar version of the clean image, to prevent the face verification system from being misled by adversarial perturbations.

    In perturbation removal defense, transformations are applied as a preprocessing step on the input data to remove adversarial perturbations before sending them to the target models. For example, Guo et al. [11] used total variance minimization [10], image quilting [22], and bit-depth reduction to smooth input images. These methods have shown high efficiency against attacks such as the fast gradient sign method [11], DeepFool [23], and the Carlini-Wagner attack [24], especially when the convolutional network is trained on transformed images. Other studies have suggested using JPEG compression and principal component analysis as defense methods. For instance, Liu et al. [25] proposed a Deep Neural Network (DNN) feature distillation JPEG compression by redesigning the standard JPEG compression algorithm.
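    Bit-depth reduction is the simplest of these preprocessing transforms to illustrate. The following is a minimal numpy sketch of the idea (quantize 8-bit pixels down to a smaller number of levels, destroying fine-grained perturbations), not the implementation used in the cited works.

```python
import numpy as np

def reduce_bit_depth(img, bits=5):
    """Quantize an 8-bit image to `bits` bits per pixel, then rescale back to [0, 255]."""
    levels = 2 ** bits
    scaled = np.asarray(img, dtype=np.float64) / 255.0
    quantized = np.round(scaled * (levels - 1)) / (levels - 1)
    return (quantized * 255).astype(np.uint8)

img = np.arange(256, dtype=np.uint8).reshape(16, 16)  # one of each 8-bit value
out = reduce_bit_depth(img, bits=5)
# The output uses at most 2**5 = 32 distinct pixel values.
```

Because small adversarial perturbations typically live in the low-order bits, this quantization can erase them at little cost to the recognizable face content.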

    While previous studies evaluated these methods on the ImageNet dataset [26], we evaluate them on the LFW dataset, as we aim to demonstrate their effectiveness for the specific task of face verification. We applied these defense methods only during testing, as a preprocessing step on both adversarial and benign images.

    3 The Proposed System: VeriFace

    VeriFace is a face verification system that we developed to protect face images against various obfuscation attacks. It comprises two adversarial defense mechanisms, adversarial detection and adversarial removal, which work together to detect and remove adversarial face images. The adversarial detection mechanism uses a binary classifier to distinguish between legitimate and adversarial inputs, while the adversarial removal mechanism applies image transformations as a preprocessing step to remove adversarial perturbations from input images. Together, these mechanisms help to ensure the robustness and security of the developed face verification system. Fig. 1 shows the general pipeline of the face verification system.

    Figure 1: Face verification system pipeline

    3.1 Motivation and Objectives

    Face verification systems have become increasingly important in various fields, such as security, law enforcement, and access control. However, recent studies have shown that these systems are vulnerable to adversarial attacks, where malicious actors can manipulate input data to fool the system and gain unauthorized access, bypass security measures, or impersonate someone else. They may also want to manipulate the system for financial gain or other nefarious purposes [27]. Adversarial attacks on face verification systems can have serious consequences, ranging from identity theft to physical security breaches [4,5].

    To address these issues, there has been growing interest in developing defense mechanisms that can protect face verification systems against adversarial attacks [16,19,28]. Two common strategies for defending against adversarial attacks are adversarial detection and adversarial removal. Adversarial detection aims to identify and reject adversarial inputs [4,6,16,29], while adversarial removal aims to preprocess input data to remove any adversarial perturbations before feeding them into the face verification system [10].

    In this paper, we propose a novel defense framework called VeriFace, which integrates both adversarial detection and adversarial removal mechanisms to protect face verification systems against various obfuscation attacks. Our method is designed to be effective against a wide range of adversarial attacks while maintaining high accuracy and robustness [1–3,19,30,31]. We evaluate our approach on the widely used LFW dataset and demonstrate its superiority over several state-of-the-art defense mechanisms [32].

    3.2 VeriFace Adversarial Detection Architecture

    An adversarial detection architecture typically consists of two components: a feature extractor and a detection module [15,18,19,21]. The feature extractor is a neural network that extracts features from the input data; its output is a set of feature vectors that represent the input. The detection module is a classifier that operates on these feature vectors to decide whether the input is clean or adversarial.

    3.2.1 VeriFace Detector Components

    In this paper, we build the VeriFace detector, which is the adversarial detection mechanism in the VeriFace system. It is designed to detect adversarial attacks on the face verification system [1–3]. The VeriFace detector is a binary classifier that determines whether an input image is legitimate or adversarial. The input image goes through a preprocessing step in which several key operations are conducted: image resizing, normalization, face detection and alignment, and noise removal. The face detection and alignment processes are conducted using Multi-task Cascaded Convolutional Networks (MTCNN), a highly accurate face detection method. This approach has demonstrated excellent performance in detecting faces, achieving high accuracy even in challenging conditions such as variations in pose, scale, and occlusion [33]. These steps are crucial for ensuring that the input face images are detected, standardized, aligned, and cleaned, which leads to more accurate and consistent results during the adversarial detection process. The VeriFace detector is constructed using a Convolutional Neural Network (CNN) [19]. The CNN consists of several convolutional layers followed by a fully connected layer. The output of the last convolutional layer is flattened and fed into the fully connected layer, which produces the final binary classification result. The input to the CNN is the feature maps generated by the face verification model at inference time. We train the VeriFace detector on the CASIA-WebFace dataset [34], which consists of 494,414 legitimate images, together with adversarial examples generated using different attack methods [28,32]. The objective of the training is to minimize the classification error of the detector on the training dataset. Once trained, the VeriFace detector can be used to detect adversarial examples at inference time.

    The VeriFace detector is an important component of the VeriFace system, as it provides an additional layer of defense against adversarial attacks on the face verification system. By detecting adversarial examples, the detector allows the system to reject these examples and prevent them from being used to compromise the security of the system.

    3.2.2 Proposed Detection Methods

    The VeriFace adversarial detector consists of two major components: MobileNet [14] and a Multilayer Perceptron (MLP). MobileNet is a lightweight CNN architecture designed for efficient mobile vision applications. It consists of a series of depth-wise separable convolutional layers that drastically reduce the number of parameters compared to traditional CNN architectures. MobileNet has been shown to achieve high accuracy on various image classification tasks while being computationally efficient. In the VeriFace adversarial detector, MobileNet is used as a feature extractor to extract meaningful features from face images, which are then fed into the MLP. The MLP is a feedforward artificial neural network consisting of multiple layers of perceptrons (i.e., neurons) that process input signals; each layer processes the output of the previous layer to produce a new set of outputs. The MLP has been widely used in various machine learning applications, including classification, regression, and prediction. In the VeriFace adversarial detector, the MLP is trained on the features extracted by MobileNet to classify whether an input face image is legitimate or adversarial. The training uses the binary cross-entropy loss function and is optimized with the Adam algorithm.

    Fig. 2 shows the block diagram of the modified MobileNet network, which consists of several layers, including a global average pooling (GAP) layer, batch normalization (BN) layers, fully connected (FC) layers, a sigmoid layer, and a dropout layer. The input to the network is an adversarial face image, and the output is a probability score indicating whether the input is a genuine face or an adversarial one. After removing the last softmax layer from the original MobileNet network, the GAP layer is added to aggregate the features of the input image. This is followed by several BN layers to normalize the features and make the network more efficient in training. The FC layers are added to learn high-level features of the input image, and the sigmoid layer converts the final output of the network into a probability score. The dropout layer is used to prevent overfitting during training.
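    The inference-time data flow of the classification head described above can be sketched in numpy. This is an illustrative simplification, not the paper's implementation: it uses a single BN and FC layer, assumes the BN running statistics are already folded into gamma/beta, and omits dropout (which is active only during training).

```python
import numpy as np

def detector_head(feature_maps, gamma, beta, W, b):
    """Sketch of the Fig. 2 head at inference: GAP -> BN -> FC -> sigmoid."""
    # Global average pooling: (H, W, C) feature maps -> C-dim vector
    pooled = feature_maps.mean(axis=(0, 1))
    # Batch normalization with folded statistics (learned scale/shift only)
    normed = gamma * pooled + beta
    # Fully connected layer + sigmoid -> probability the input is adversarial
    logit = W @ normed + b
    return 1.0 / (1.0 + np.exp(-logit))

rng = np.random.default_rng(1)
fmap = rng.random((7, 7, 1280))   # MobileNet-sized final feature maps (assumed)
C = fmap.shape[-1]
p = detector_head(fmap, np.ones(C), np.zeros(C), 0.01 * rng.standard_normal(C), 0.0)
# p is a probability score in (0, 1); thresholding it gives the binary decision.
```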

    Figure 2: The block diagram of the VeriFace adversarial detector

    3.3 VeriFace Adversarial Removal Architecture

    The VeriFace adversarial removal aims to develop a Perturbation Removal Network (PRN) that can effectively remove adversarial perturbations from a face image and recover the original image. This is achieved by training a neural network as an adversarial purifier [35], which takes the adversarial face image as input and outputs the corresponding clean face image. By doing so, the VeriFace adversarial removal ensures that the face verification system only operates on clean face images and eliminates the effect of adversarial perturbations [17,21], thereby improving the overall performance of the proposed face verification system.

    3.3.1 Perturbation Removal Network(PRN)

    The PRN typically consists of several CNN layers and fully connected layers (FCs). The input to the network is the adversarial face image, and the output is the recovered face image, which is the denoised version of the input image [36]. The first few layers of the network are usually convolutional layers that learn the low-level features of the input image. These layers are followed by additional convolutional layers that learn more complex features, followed by max-pooling layers that reduce the spatial dimensionality of the features [37]. After the convolutional layers, there are usually several fully connected layers that learn high-level features of the input image. The output of the final fully connected layer is fed into the output layer, which generates the denoised version of the input image. In addition to the convolutional and fully connected layers, the PRN includes batch normalization layers, activation functions, and dropout layers to improve performance and prevent overfitting. The network is trained using a loss function that measures the difference between the output of the network and the ground truth clean image:

    L = ||M_out − M_gt||^2

    where M_out is the output image of the PRN and M_gt is the ground truth clean image.

    During training, the network learns to map adversarial face images to their corresponding clean face images by minimizing the difference between the output of the network and the ground truth clean image.

    3.3.2 Total Variation Minimization (TVM)

    In addition to the PRN, we also use TVM as a component of our adversarial removal system [11]. The proposed VeriFace adversarial removal can be represented mathematically as follows:

    TVM(x) = argmin_y ||x − y||^2 + λ ||∇y||^2

    where TVM(x) represents the denoised image, x is the input image with adversarial perturbations, y is the candidate denoised face image, ||x − y||^2 is the squared Euclidean distance between the input and denoised images, ||∇y||^2 is the L2 norm of the gradient of the denoised image, and λ is a hyperparameter that controls the strength of the regularization term.

    The TVM algorithm seeks to minimize the sum of the Euclidean distance between the input and denoised images and the L2 norm of the gradient of the denoised image, with the regularization term weighted by λ. This regularization encourages the removal of unnecessary details from the input image while preserving important features such as edges.

    TVM is used as a pre-processing step to remove high-frequency noise from the input image. This helps to reduce the impact of adversarial perturbations on the input image and makes it easier for the PRN to remove the remaining perturbations. The VeriFace adversarial removal system combines total variation minimization and a PRN to automatically remove adversarial perturbations from input images and recover the clean face image.
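    The TVM objective above can be minimized numerically by gradient descent. The sketch below is a simplified, assumption-laden illustration (it uses the squared-gradient variant of the smoothness term so that everything stays differentiable, fixed step size, and a synthetic 16x16 "image"), not the TVM solver used in [10,11].

```python
import numpy as np

def tvm_denoise(x, lam=0.5, lr=0.05, steps=300):
    """Gradient descent on ||y - x||^2 + lam * ||grad y||^2 (squared-TV sketch)."""
    y = x.copy()
    for _ in range(steps):
        g = 2.0 * (y - x)                       # data-fidelity gradient
        gy = np.zeros_like(y)                   # gradient of the smoothness term
        dv = y[1:, :] - y[:-1, :]               # vertical finite differences
        gy[1:, :] += 2.0 * dv
        gy[:-1, :] -= 2.0 * dv
        dh = y[:, 1:] - y[:, :-1]               # horizontal finite differences
        gy[:, 1:] += 2.0 * dh
        gy[:, :-1] -= 2.0 * dh
        y -= lr * (g + lam * gy)
    return y

def total_variation(img):
    return np.abs(np.diff(img, axis=0)).sum() + np.abs(np.diff(img, axis=1)).sum()

rng = np.random.default_rng(2)
clean = np.tile(np.linspace(0, 1, 16), (16, 1))        # smooth synthetic image
noisy = clean + 0.1 * rng.standard_normal((16, 16))    # "perturbed" input
denoised = tvm_denoise(noisy)
# The denoised image has lower total variation than the noisy input.
```

Larger λ smooths more aggressively, which removes more of the perturbation but also more image detail, mirroring the trade-off described above.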

    4 Experimental Results

    4.1 Datasets

    The experimental results of the developed models were evaluated using real-world datasets for training and testing. The CASIA-WebFace dataset was used for training [34]; it consists of 494,414 face images from 10,575 different subjects. In the training process, two images were randomly selected for each of the 10,575 subjects to be used as clean images, and two were selected for adversarial synthesis to train the VeriFace adversarial detector. For testing, we used LFW [32], a standard face verification testing dataset that includes 13,233 web-collected face images from 5,749 identities. We evaluate the detection accuracy on 6,000 face pairs, of which 3,000 pairs are clean and the other 3,000 pairs are adversarially synthesized.

    To evaluate the VeriFace adversarial removal, we train a PRN using 6,000 face pairs from LFW [32], of which 3,000 pairs belonged to the same identity and the remaining 3,000 pairs belonged to different identities. For the evaluation, we tested the VeriFace adversarial removal on 3,000 pairs belonging to the same identity, which were subjected to obfuscation attacks.

    4.2 Evaluation Metrics

    The effectiveness of the VeriFace adversarial detection and adversarial removal mechanisms was evaluated by calculating the attack success rate, as described by Zhou et al. [38]. This was done to determine whether these mechanisms were effective in reducing the attack rate and improving the efficiency and effectiveness of the face verification system. The attack success rate was calculated using the following equation:

    Attack success rate = (number of adversarial probe comparisons with score above τ / total number of adversarial probe comparisons) × 100%

    Each comparison was made between an enrollment image and an adversarial probe image. The pre-determined threshold τ was set to 1.1 at a 0.001 False Acceptance Rate (FAR) for the FaceNet verification system. A score above 1.1 indicates that the two face images do not belong to the same claimed identity. We considered amounts of perturbation ε in the ranges 0.1, 0.2, 0.3, 0.4 for FGSM (L∞), 1, 2, 3, 4 for FGSM (L2), and 2, 4, 6, 8 for PGD.
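    The metric above reduces to a simple computation over the verifier's distance scores. The sketch below illustrates it with hypothetical scores (the eight values are made up for illustration, not data from the paper).

```python
import numpy as np

def attack_success_rate(scores, tau=1.1):
    """Percentage of adversarial probe comparisons whose distance score exceeds
    tau, i.e. the obfuscation attack made a genuine pair look like strangers."""
    scores = np.asarray(scores, dtype=float)
    return float(np.mean(scores > tau)) * 100.0

# Hypothetical distance scores for 8 (enrollment, adversarial probe) genuine pairs
scores = [0.6, 1.3, 0.9, 1.5, 1.2, 0.8, 1.0, 1.4]
print(attack_success_rate(scores, tau=1.1))  # 4 of 8 exceed 1.1 -> 50.0
```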

    4.3 VeriFace Adversarial Detection Results

    We evaluated the VeriFace detector against three attack methods used to produce adversarial samples: AdvFaces [3], PGD [1], and FGSM [2]. In the experiment, the binary classifier models were trained on one type of attack with a specific amount of perturbation and tested on different perturbation values over different types of unseen attacks. This helped us study the robustness of our methods for generalization. We considered amounts of perturbation ε of 3 for AdvFaces [3]; 0.1, 0.2, 0.3, 0.4 for FGSM (L∞) [2]; and 2, 4, 6 for PGD [1].

    Tables 1–3 show the results of the proposed VeriFace adversarial detection method compared to state-of-the-art adversarial face detectors (Gong et al. [19], VGG-16 [31], VGG-19 [31], Inception-V3 [30], and ResNet50-V2 [39]) on different types of adversarial attacks (FGSM, PGD, and AdvFaces) with different perturbation strengths (epsilon values). The models were trained on mixed clean data and an adversarial dataset generated using the same type of attack as the test set.

    Table 1: VeriFace adversarial detection results in comparison with SOTA adversarial face detectors. All models are trained on mixed clean data and an adversarial dataset which is generated via AdvFaces

    In Table 1, the detection results of the models on seen AdvFaces and unseen FGSM (L∞) and PGD attacks are shown. VeriFace outperforms all other models, with a detection rate of 94.83% on seen AdvFaces and 95.38% on unseen FGSM and PGD attacks.

    In Table 2, the models’ detection results on unseen AdvFaces and unseen FGSM and PGD attacks are shown, with adversarial datasets generated via PGD. VeriFace again outperforms all other models, with a detection rate of 50.05% on unseen AdvFaces and 97.13% on unseen FGSM and PGD attacks.

    In Table 3, the models’ detection results on unseen AdvFaces and unseen FGSM and PGD attacks are shown, with adversarial datasets generated via FGSM. Once again, VeriFace outperforms all other models, with a detection rate of 50.02% on unseen AdvFaces and 94.85% on unseen FGSM and PGD attacks.

    The results show that the proposed VeriFace model outperforms the other models in detecting adversarial face images across all attack methods and perturbation sizes. In particular, VeriFace achieves a detection rate of 95.4% for all values of ε in the case of seen AdvFaces, and a detection rate of 100% for all values of ε in the case of unseen PGD-generated adversarial images with perturbation sizes up to 6.

    Fig. 3 shows a confusion matrix representing the performance of the proposed VeriFace detector model on binary classification over the LFW dataset, with a total of 6,000 samples. The model correctly predicted 2,999 of the samples as negative (true negatives) and 2,944 of the samples as positive (true positives). There were 56 wrongly predicted positive samples (false positives), and only one sample was wrongly predicted as negative (a false negative). Precision, recall, and F1 score are commonly used metrics for evaluating the effectiveness of binary classification models. For the confusion matrix presented in Fig. 3, the precision value was determined to be 0.9817, indicating that 98.17% of the predicted positive instances were positive. The model’s recall value was 99.97%, indicating that it correctly identified 99.97% of the actual positive instances. The F1 score for the model was 99.06%, indicating a high level of performance and a good balance between precision and recall. These results suggest that the VeriFace detector is highly effective in identifying adversarial attacks on face verification systems.
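    These metrics can be recomputed directly from the confusion-matrix counts; the sketch below does so with the counts quoted above (small rounding differences from the figures reported in the text are possible).

```python
def prf1(tp, fp, fn):
    """Precision, recall, and F1 score from confusion-matrix counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# Counts quoted for the Fig. 3 confusion matrix: TP=2944, FP=56, FN=1
p, r, f = prf1(2944, 56, 1)
print(round(p, 4), round(r, 4), round(f, 4))
```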

    Figure 3: The confusion matrix of the proposed VeriFace detector model

    4.4 VeriFace Adversarial Removal Results

    We evaluate the transformation-based image defense mechanisms in a gray-box setting, in which the attacker is aware of the classifier’s details but has no knowledge of the defense strategy. The parameters of each defense were chosen to optimize performance under the gray-box setting, and we fixed the hyper-parameters of each defense strategy in all experiments. For instance, PCA was performed by retaining the largest 36 principal components of each image, while Patch-wise PCA was performed on patches of size 13 by 13, retaining the largest 13 principal components. These values were varied to find the best coefficients, and the reported values were the best in terms of reducing the attack success rate. In the case of Bit-Depth Reduction, we performed a simple type of quantization by reducing the number of bits per pixel from 8 to 5. For Wavelet Denoising [12], we applied the discrete wavelet transform with a biorthogonal 3/5 filter and then kept only the approximation coefficients at the final scale [40]. The VeriFace PRN method was designed specifically for removing adversarial perturbations and was trained on a dataset of adversarial face images generated using FGSM and PGD attacks. The effectiveness of each defense strategy was evaluated by calculating the attack success rate for each type of attack (AdvFaces, PGD, and FGSM) on each defense strategy.
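    The per-image PCA defense mentioned above can be sketched as a truncated SVD reconstruction. This is an illustrative interpretation (treating the image's rows as samples and discarding all but the top-k singular directions), not the exact procedure from [13]; the 112x112 input size is an assumption typical of face crops.

```python
import numpy as np

def pca_defense(img, k=36):
    """Reconstruct an image from its top-k principal components, discarding the
    low-variance directions where high-frequency perturbations tend to live."""
    mean = img.mean(axis=0)
    centered = img - mean
    U, S, Vt = np.linalg.svd(centered, full_matrices=False)
    S_k = np.where(np.arange(len(S)) < k, S, 0.0)  # zero out all but top-k
    return U @ np.diag(S_k) @ Vt + mean

rng = np.random.default_rng(3)
img = rng.random((112, 112))        # assumed face-crop size, random stand-in
recon = pca_defense(img, k=36)      # low-rank, smoothed reconstruction
```

Keeping all 112 components reproduces the input exactly; the defense comes precisely from the truncation to k = 36.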

    We eliminated the impact of adversarial perturbations and their effect on the face verification (FV) system under adversarial attacks to improve verification performance on adversarial face images. We show the results of the proposed removal technique in comparison with different perturbation removal techniques, such as PCA [13], TVM [11], Patch-wise PCA [13], Wavelet Denoising [12], and Bit-depth Reduction [11]. We apply each of these defenses as a pre-processing step on both adversarial and benign images at test time [13,41]. For each removal mechanism, we evaluate its success rate by verifying the adversarial face images. Our focus is on image transformation at test time [13].

    Table 4 shows the effectiveness of different PRNs in defending against adversarial attacks. The attack success rate is reported for three types of attacks: AdvFaces, PGD, and FGSM. The mean attack success rate is also calculated across all three attack types. The PRNs evaluated in this table are PCA [13], Patchwise PCA [13], Bit-Depth-Reduction [11], Wavelet-Denoising [12], and the proposed VeriFace PRN.

    Table 4: The effect of the adversarial attacks AdvFaces, PGD, and FGSM over different PRNs

    The results show that the proposed VeriFace PRN has the lowest attack success rate for all three attack types, with a mean attack success rate of 7.43%, as shown in Fig. 4. Wavelet-Denoising has a mean attack success rate of 12.68%, while PCA, Patchwise PCA, and Bit-Depth-Reduction have means ranging from 9.17% to 9.79%. These results suggest that the proposed VeriFace PRN is the most effective at defending against adversarial attacks compared to the other PRNs evaluated in this study.

    5 Discussion

    VeriFace adversarial detection and removal mechanisms are critical in ensuring the security and reliability of face verification systems. In recent years, the use of deep learning-based face verification systems has become widespread [6,7], and these systems are vulnerable to adversarial attacks. Adversarial attacks involve adding carefully crafted perturbations to an input image to fool the face verification system into misclassifying the image. These attacks can have serious consequences, as they can be used to bypass security measures or gain unauthorized access to sensitive information [1–5].

    The VeriFace adversarial detection mechanism is designed to detect adversarial perturbations in facial images to improve the security and robustness of face verification systems. In our evaluation, we found that our proposed detection mechanism had a high detection rate for all types of adversarial attacks, including AdvFaces, PGD, and FGSM. Specifically, the detection rate was above 98% for all attack types, which is significantly higher than the performance of other detection mechanisms reported in the literature.

    One of the strengths of our detection mechanism is that it does not require any additional training data or modifications to the original face verification system. Instead, it analyzes the distribution of feature vectors generated by the FaceNet model to identify discrepancies between the original and adversarial images. This approach makes our mechanism more practical and applicable to real-world scenarios where it may be difficult to obtain additional training data.
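The feature-distribution idea can be sketched with a simple stand-in: random vectors below play the role of FaceNet embeddings of known-clean images, and an input is flagged when its per-dimension z-scores deviate too far from the clean distribution. The z-score statistic, the 99th-percentile threshold, and the synthetic embeddings are all illustrative assumptions, not the paper's actual detector:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for FaceNet embeddings of known-clean images (128-D in FaceNet).
clean = rng.normal(loc=0.0, scale=1.0, size=(500, 128))

mu = clean.mean(axis=0)
sigma = clean.std(axis=0) + 1e-8

def anomaly_score(embedding):
    """Mean absolute z-score of the embedding w.r.t. the clean statistics."""
    return float(np.mean(np.abs((embedding - mu) / sigma)))

# Calibrate the threshold so ~1% of clean embeddings would be flagged.
threshold = float(np.quantile([anomaly_score(e) for e in clean], 0.99))

def is_adversarial(embedding):
    return anomaly_score(embedding) > threshold
```

An embedding shifted several standard deviations away from the clean distribution (as adversarial inputs tend to be in feature space) scores far above the calibrated threshold, while a typical clean embedding does not.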

    On the other hand, the results presented for the VeriFace PRN demonstrate its effectiveness in mitigating the impact of adversarial attacks on face verification systems. The study compares the performance of the VeriFace PRN with other commonly used defense mechanisms such as PCA [13], Patchwise PCA [13], Bit-Depth-Reduction [11], and Wavelet-Denoising [12]. The evaluation was conducted under the gray-box setting, where the attacker has knowledge of the classifier but not of the defense mechanism.
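Of the baseline defenses compared here, bit-depth reduction [11] is the simplest to illustrate: pixel intensities are quantized to a few levels, rounding away the low-order variation that small adversarial perturbations typically occupy. A minimal sketch (the bit depth and example values are illustrative, not the paper's exact configuration):

```python
import numpy as np

def reduce_bit_depth(image: np.ndarray, bits: int) -> np.ndarray:
    """Quantize intensities in [0, 1] to 2**bits levels, discarding
    the fine-grained variation small perturbations live in."""
    levels = 2 ** bits - 1
    return np.round(image * levels) / levels

# A clean pixel and the same pixel perturbed by 0.03: at 3 bits,
# both quantize to the same level, erasing the perturbation.
x = np.array([0.50, 0.50 + 0.03])
squeezed = reduce_bit_depth(x, bits=3)
```

The same mechanism also explains the defense's limits: perturbations larger than half a quantization step survive, which is consistent with preprocessing defenses being only partially effective on their own.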

    The study also evaluated the effect of different PRNs on the attack success rate. The results showed that the Wavelet-Denoising [12] defense mechanism performed best among the baseline defenses, but the VeriFace PRN still outperformed it. This suggests that the proposed VeriFace PRN is a promising defense mechanism for mitigating the impact of adversarial attacks on face verification systems.

    Both components of VeriFace demonstrate the importance of developing effective defenses against adversarial attacks on FV systems. While the removal component focuses on removing adversarial perturbations from face images, the detection component aims to detect adversarial images before they enter the FV system. The two approaches are complementary and can be combined to provide better protection against adversarial attacks.
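The complementary composition can be sketched as a simple pipeline: reject inputs the detector flags, purify the rest, then run the ordinary verifier. The three stage functions below are toy stand-ins, not VeriFace's actual components:

```python
import numpy as np

def verify_with_defenses(image, detect_fn, purify_fn, verify_fn):
    """Compose the two defenses: detection first, then purification,
    then the unmodified face verifier."""
    if detect_fn(image):
        return "reject"
    return verify_fn(purify_fn(image))

# Toy stand-ins for the three stages.
detect = lambda img: float(np.abs(img - 0.5).max()) > 0.4  # crude anomaly test
purify = lambda img: np.round(img * 7) / 7                 # bit-depth squeeze
verify = lambda img: "match"                               # always-accept stub

decision_clean = verify_with_defenses(np.array([0.5, 0.6]), detect, purify, verify)
decision_adv = verify_with_defenses(np.array([0.99, 0.6]), detect, purify, verify)
```

Ordering matters in this design: running detection first means the purifier only has to handle perturbations subtle enough to evade the detector.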

    The VeriFace system may still fail against targeted adversarial attacks and in the presence of imperceptible perturbations. Targeted attacks aim to evade the VeriFace detection mechanism by strategically crafting perturbations that exploit system vulnerabilities; such attacks can lead to false negatives, where adversarial examples are misclassified as legitimate images. Imperceptible perturbations pose a similar challenge for the adversarial removal component: subtle perturbations that mimic natural variation in facial appearance may survive purification and leave residual adversarial effects, because genuine variation and adversarial perturbation are inherently difficult to distinguish. These potential failure cases reflect the constantly evolving nature of adversarial attacks, which adapt to bypass defenses, and they underline the need for continuous updates and improvements to enhance the system's resilience.

    6 Conclusion

    This paper presents a novel face verification system, VeriFace, which contains two main components: adversarial detection and adversarial removal. We evaluated the VeriFace detector against three attack methods. The results show that the proposed VeriFace model outperforms the other models in detecting adversarial face images across all attack methods and perturbation sizes, with detection rates ranging from 95% to 100%. The adversarial removal results show that the proposed VeriFace PRN has the lowest attack success rate, 6.5%, across all three attack types, and it tends to outperform the other tested defenses against FGSM, PGD, and AdvFaces. The developed model generalizes to attack types that were not seen during training: it was trained on a single attack, AdvFaces, to learn a tight decision boundary around real and adversarial faces, and it was then tested on unseen attacks such as PGD and FGSM with different amounts of perturbation for each attack. We also show that pre-processing defenses can be effective against existing attacks such as FGSM, PGD, and AdvFaces with different amounts of perturbation. Future work can explore tighter integration of the two approaches to develop more robust and reliable FV systems that can withstand adversarial attacks in various real-world scenarios.

    Acknowledgement: This research work was funded by Institutional Fund Projects under Grant No. (IFPIP: 329-611-1443). The authors gratefully acknowledge the technical and financial support provided by the Ministry of Education and King Abdulaziz University, DSR, Jeddah, Saudi Arabia.

    Funding Statement: This research was funded by the Ministry of Education and King Abdulaziz University, DSR, Jeddah, Saudi Arabia.

    Author Contributions: The authors confirm contribution to the paper as follows: Conceptualization: S. Kilany and A. Mahfouz; methodology: S. Kilany, A. Mahfouz, A. Zaki, A. Sayed; formal analysis: S. Kilany, A. Mahfouz, A. Zaki, A. Sayed; software: S. Kilany, A. Zaki, A. Sayed; funding acquisition: A. Sayed; visualization: S. Kilany. All authors have read and agreed to the published version of the manuscript.

    Availability of Data and Materials:The data that support the findings of this study are openly available in Labeled Faces in the Wild at http://vis-www.cs.umass.edu/lfw/.

    Conflicts of Interest:The authors declare that they have no conflicts of interest to report regarding the present study.

    Supplementary Materials

    A.Implementation Architecture of the proposed model
