
    An Empirical Study on the Effectiveness of Adversarial Examples in Malware Detection

    2024-03-23

    Younghoon Ban,Myeonghyun Kim and Haehyun Cho

    School of Software,Soongsil University,Seoul,06978,Korea

    ABSTRACT Antivirus vendors and the research community employ Machine Learning (ML) or Deep Learning (DL)-based static analysis techniques for the efficient identification of new threats, given the continual emergence of novel malware variants. On the other hand, numerous researchers have reported that Adversarial Examples (AEs), generated by manipulating previously detected malware, can successfully evade ML/DL-based classifiers. Commercial antivirus systems, in particular, have been identified as vulnerable to such AEs. This paper first focuses on conducting black-box attacks to circumvent ML/DL-based malware classifiers. Our attack method utilizes seven different perturbations, including Overlay Append, Section Append, and Break Checksum, capitalizing on the ambiguities present in the PE format, as previously employed in evasion attack research. By directly applying the perturbation techniques to PE binaries, our attack method eliminates the need to grapple with the problem-feature space dilemma, a persistent challenge in many evasion attack studies. Being a black-box attack, our method can generate AEs that successfully evade both DL-based and ML-based classifiers. Also, AEs generated by the attack method retain their executability and malicious behavior, eliminating the need for functionality verification. Through thorough evaluations, we confirmed that the attack method achieves an evasion rate of 65.6% against well-known ML-based malware detectors and can reach a remarkable 99% evasion rate against well-known DL-based malware detectors. Furthermore, our AEs demonstrated the capability to bypass detection by 17% of the 64 vendors on VirusTotal (VT). In addition, we propose a defensive approach that utilizes Trend Locality Sensitive Hashing (TLSH) to construct a similarity-based defense model. Through several experiments on the approach, we verified that our defense model can effectively counter AEs generated by the perturbation techniques. In conclusion, our defense model alleviates the limitation of the most promising defense method, adversarial training, which is only effective against the AEs that are included in the training data of the classifiers.

    KEYWORDS Malware classification; machine learning; adversarial examples; evasion attack; cybersecurity

    1 Introduction

    Machine Learning (ML) and Deep Learning (DL) based models, a subset of artificial intelligence, have demonstrated exceptional performance in various domains, including recommendation systems, handling imbalanced datasets, bioinformatics, medical diagnosis, financial risk management, and stock exchange [1]. Moreover, ML and DL models play a crucial role in cyber security systems, particularly in fraud detection, intrusion detection, spam detection, and malware detection [2–6]. ML and DL-based static analysis approaches offer significant efficiency and cost advantages when compared to traditional static analysis methods. Performing a thorough and meticulous static analysis of the ever-increasing array of emerging malware strains is a resource-intensive and time-consuming undertaking. Also, with the continuous emergence of new forms of malware, we face strong challenges in promptly and efficiently identifying these new threats [7–9]. Therefore, the research community is increasingly focusing on developing ML and DL-based static malware classifiers. We have observed that ML-based malware detectors offer notable advantages, primarily their high scalability and their ability to rapidly and effectively detect large volumes of malware [10,11]. Furthermore, because ML-based static analysis is more cost-effective than dynamic analysis, many antivirus vendors use static analysis to deal with the vast amount of malware emerging every day [12]. In addition, security researchers believe that recently proposed ML-based approaches are able to detect zero-day malware with high accuracy [10,11,13,14]. On the other hand, unfortunately, many researchers have reported that ML/DL-based classifiers can be bypassed by falsifying malware that had previously been detected [10,12–15]. Cylance, a commercial antivirus system, has been identified as vulnerable to Adversarial Examples (AEs), which are created by subtly altering previously known malware [11].

    In general, an AE is a specially crafted input generated to deceive ML or DL models. AEs are primarily designed for evasion attacks on models that process images, sound, and Portable Executable (PE) files. For example, in the case of an image-based model, AEs are created by perturbing each pixel of the image. By adding imperceptible noise to an image of a dog, an image-based AE can be generated in a way that the model incorrectly classifies it as a different label [16,17]. Adversarial attack approaches can be categorized into three distinct levels, depending on the attacker's knowledge about the target model [16,18].

    White-box attack: The attacker has access to the target model, knowing its structure, weights, parameters, output, features, etc., and is aware of the dataset used to train the target model.

    Gray-box attack: The attacker possesses only a fraction of the knowledge about the target model compared to a white-box attack.

    Black-box attack: The attacker sends a query to the target model and only has access to the query's response, representing the scenario closest to real-world situations.

    Under the white-box attack scenario, attackers exploit the ambiguity in the specifications of the Windows PE file format to inject adversarial noise or optimize the injected noise, creating AEs that cause malware to be misclassified as benign. In a black-box attack on PE binaries, AEs are generated by applying various perturbation techniques to PE malware, rewriting the PE malware itself, or utilizing byte sequences or strings from benign binaries [10,19]. Previously proposed evasion attacks such as RAMEN [10], GAMMA [12], and Multi-Armed Bandit (MAB)-malware [11] used benign contents under the black-box scenario. Specifically, RAMEN [10] and GAMMA [12] achieved high evasion rates by adding strings, bytes, or Application Programming Interfaces (APIs) extracted from benign binaries. The use of benign content enhances computational efficiency by avoiding the generation of random byte patterns injected into the original PE binary. Additionally, it plays a crucial role in altering the byte entropy of the original malware in a specific direction, making it highly effective for attacking machine learning models. These findings demonstrate the effectiveness of using benign content in generating AEs with high evasion rates [10–12,20]. Most black-box evasion attacks rely on the concept of transferability [17,18,21,22]. Transferability denotes that an AE created to evade a specific model can also successfully evade models with different architectures [18,23].

    On the other hand, a line of research on defense against evasion attacks has proposed various methods such as adversarial training [24], defensive distillation [25], and feature selection [26] to counter different types of evasion attacks [17]. Among them, adversarial training stands out as the most effective and promising approach to defend against AEs [27]. The approach trains the model with AEs during the training process to make the model capable of classifying AEs [15,24,27,28]. However, because adversarial training can only defend against the specific adversarial noise or perturbation techniques applied in the AEs used in the training process, it becomes challenging to defend against unknown AEs. Moreover, adversarial training may lead to significant changes in the performance of the existing classifier, potentially causing a degradation in model performance [27].

    In this paper, we perform an evasion attack by generating AEs using various perturbations and benign content under the black-box scenario that reflects real-world environments, targeting well-known static analysis-based ML/DL malware classifiers. Our attack method preserves the PE format of the original malware, ensuring the executability of the PE malware and preserving its malicious behavior. Since our attack method directly applies perturbations and benign content to the malware, we do not need to consider the problem-feature space dilemma. Additionally, to verify the transferability of AEs generated for evading DL-based target models, we analyze the AE detection results of ML-based models with varying features. Furthermore, we validate the transferability of AEs generated through our attack technique by employing models with different structures and features, as well as VirusTotal. Finally, to defend target models vulnerable to evasion attacks, we construct a defense model by applying a fuzzy hash, Trend Locality Sensitive Hashing (TLSH), known for its capability to find similar files. We then execute our attack technique against the constructed defense model, measure the evasion rate of the AEs we generate, and evaluate the performance of the defense model. The contributions of this paper are as follows:

    • We conduct adversarial attacks employing a total of seven PE format-preserving perturbation techniques, including overlay append, a technique utilized in various evasion attack studies. Our attack method directly applies perturbations to the PE binary, ensuring the integrity of the PE format of the malware. Consequently, there is no need to validate the malicious behavior of AEs generated with the applied perturbations.

    • To assess the transferability of the AEs we generated, we conducted experiments using AEs produced from each target model. The results revealed that 99% of the AEs crafted to evade ML-based malware detectors were also successful in bypassing DL-based malware detectors, showcasing the capability of our attack method to create AEs with strong transferability. Furthermore, when querying our generated AEs on VirusTotal (VT), we managed to evade detection by 17% of the 64 detection vendors participating in VT.

    • We present a defense method utilizing TLSH, a robust similarity hashing technique recognized for its capacity to identify similar files with low false positives and high detection rates. Our proposed defense method effectively mitigates AEs generated by our attack method, successfully defending against over 90% of the generated AEs.

    2 Background

    In this section, we discuss the limitations of research on evading ML-based malware detectors and explain the studies aimed at defending against evasion attacks.

    Threats to Validity: In this paper, we have included studies that deal with deep learning-based malware classifiers, machine learning-based malware detection techniques, and evasion attacks, found by using combinations of search strings such as “deep learning,” “PE malware,” “adversarial examples,” and “adversarial attack defense.” We also explore attack methods for generating AEs that preserve the PE format and the malicious behavior of malware. Additionally, we discuss a novel defense strategy.

    2.1 Learning-Based Malware Detectors

    ML and DL models automatically learn complex relationships among file attributes from an extensive training dataset [14]. Research in PE malware detection leverages the learning capabilities of ML/DL models, renowned for their ability to generalize predictions for new instances, even for previously unobserved forms of malware, such as zero-day malware [16]. Many studies in malware detection employing ML/DL models typically encompass three stages: data collection, feature engineering, and model training/predictions [13,14,16].

    ML/DL models operate on numerical inputs, necessitating a feature extraction process to convert various features of PE malware into numerical values. Static features of PE malware are extracted without executing the malware, involving the extraction of API calls, byte sequences, readable strings, and header information contained within the malware. Additionally, dynamic features involve executing the PE malware to extract API call sequences, system resource status, and file status during runtime. After extracting static or dynamic features from malware, these features are mapped into numeric values to serve as inputs to ML/DL models. The feature engineering process fundamentally represents the mapping of the problem space (original PE malware without feature extraction) to the feature space (numeric features). This mapping function, expressed mathematically as Eq. (1), can be defined as follows:

    φ : Z → X (1)

    Here, Z represents the problem space of the original PE malware before feature extraction, and X corresponds to the associated feature space. X numerically represents the intrinsic properties of the object (malware) from Z.
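    The mapping φ: Z → X can be illustrated with one of the simplest static features discussed later in this paper, the byte histogram. The following is a minimal sketch (the function name byte_histogram is ours, not from the paper): it maps a raw binary (a point in the problem space Z) to a normalized 256-bin vector (a point in the feature space X).

```python
from collections import Counter

def byte_histogram(pe_bytes: bytes) -> list[float]:
    """Map a raw binary (problem space Z) to a normalized
    256-bin byte histogram (a point in the feature space X)."""
    counts = Counter(pe_bytes)
    total = len(pe_bytes) or 1  # avoid division by zero on empty input
    return [counts.get(b, 0) / total for b in range(256)]
```

    Note that this mapping is many-to-one and lossy, which is exactly why the inverse feature mapping discussed next is hard to obtain for PE binaries.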

    The “model train/predictions” phase refers to the process of training a model using the numeric features extracted from malware after feature engineering. The goal is to create a discrimination function f with parameters θ that can classify malware or benignware. Here, X represents the numeric features of malware generated after the feature engineering process, Y represents the label space for X, and f denotes a DL/ML model with parameters θ. This process can be expressed in Eq. (2) as shown below:

    f_θ : X → Y (2)

    ML/DL models have demonstrated high generalization performance. These models were expected to be robust even to small perturbations. However, Szegedy et al. [21] discovered that imperceptible non-random perturbations applied to input images in object recognition tasks could arbitrarily alter the model's predictions. This finding revealed the vulnerability of ML/DL models to AE attacks, as widely discussed in numerous studies [16,29]. AE attacks were originally explored in image classification tasks, and the PE file and image domains exhibit distinct characteristics, giving rise to the problem-feature space dilemma [30]. Similar to Ling et al. [16], Fig. 1 illustrates the dilemma in the problem-feature space, specifically addressing the inverse feature mapping function that connects the problem space and feature space for PE files. Nevertheless, AE attacks have been introduced into the cybersecurity domain, and researchers are proposing AE attacks in various security domains such as malware classification [10–12,31].

    Figure 1: This figure illustrates the relationship between the problem space and feature space through a feature mapping function. The feature mapping function and inverse feature mapping function serve to connect the problem space and feature space. However, obtaining the inverse feature mapping function in the context of PE binaries proves challenging, rendering it unsuitable for application in evasion attacks

    Attackers employ various methods to generate AEs to evade ML/DL-based malware classifiers. For instance, in the white-box attack (feature-space attack), the continuous nature of feature representation in the feature space is typically leveraged to create AEs using the gradient information of ML/DL malware classifiers [10,12]. Fig. 2 demonstrates an evasion attack performed in the feature space to generate a PE file AE using white-box attacks, while showing the impossibility of evasion in the problem space. In the end, gradient-based attacks in the feature space reveal their impracticality in real-world PE malware detection. In the black-box attack (problem-space attack), where information about the ML/DL malware classifier is unavailable, attacks are conducted in the problem space. This attack involves generating AEs by modifying the original malware, or by creating a surrogate model with performance similar to the black-box model and then using methods from the white-box attack [11,22,31]. Therefore, our attack method performs a problem-space attack on PE malware, preserving the PE format, and applies perturbations to create AEs that evade ML/DL models.

    2.2 Perturbing Malware to Evade Classifiers

    Black-box attacks are particularly well-suited for real-world scenarios where the attacker lacks information about the target model and can only make queries. Many black-box attacks capitalize on the ambiguities within the PE format to execute problem-space attacks. Leveraging these ambiguities provides the advantage of generating AEs without impacting the original binary's behavior, as emphasized in [10]. Exploiting this advantage enables us to maintain the PE format of the original PE malware, ensuring the executability and malicious behavior of the generated AEs, thereby eliminating the need for functionality verification.

    Figure 2: This figure illustrates the distinction between problem-space attacks and feature-space attacks. The original PE malware, represented as Z, undergoes continuous modifications in the problem space, resulting in the generation of AEs denoted as Z'1, Z'2, Z'3. The illustration demonstrates how these AEs are mapped to AE features, specifically X'1, X'2, X'3, in the feature space. In the feature space, X'3 is misclassified as benign, whereas it is evident that the corresponding Z'3 in the problem space has not evaded detection

    We introduce perturbations that exploit the ambiguities within the PE format and incorporate them into our attack method. These perturbations, leveraging the uncertainties within the PE format, include Overlay Append, Section Append, Break Checksum, Section Rename, Section Add, Remove Debug, Remove Certificate, Perturb Header Fields, Filling Slack Space, Padding, Manipulating the DOS Header and Stub, Extending the DOS Header, Content Shifting, Import Function Injection, and Section Injection [10–12,31]. As mentioned earlier, these perturbations have been employed in previous research and offer advantages due to their capability to preserve the executability and behavior of the original PE malware by leveraging the ambiguities within the PE format. We selectively incorporate some of the perturbations from this set into our attack method.

    2.3 Preventing AEs of Malware

    As a line of research has been conducted on evasion attacks, defense research has also been active. Methods to defend against evasion attacks include feature selection [26], defensive distillation [25], and adversarial training [15,24,27]. Feature selection originally aims to reduce the classifier's space-time complexity and enhance its generalization capability by choosing a subset of relevant features for classification and detection. Zhang et al. [26] proposed a feature selection model that harnesses the advantages of feature selection in real-time processing tasks without compromising security.

    Defensive distillation was initially introduced to reduce computational complexity by transferring knowledge from a larger model to a smaller one. Distillation involves two models: the teacher and the student. The teacher model undergoes conventional training, while the student model is trained using the probabilities (soft class labels) learned from the teacher model. Moreover, distillation is controlled by the distillation temperature, T. When T is large, the teacher model assigns higher probabilities to each class. Papernot et al. [25] developed a model that exhibits robustness to evasion attacks through distillation. However, it was observed that the distillation-based approach was not particularly beneficial in the field of malware security, and the model's accuracy declined as the distillation temperature increased [17].

    Although adversarial training enhances robustness against various perturbations depending on the AEs used, it comes with limitations, including the requirement for a separate AE dataset for training, challenges in generalizing to new AEs even after retraining, and the overhead associated with model retraining [32]. Furthermore, the most promising defenses, knowledge distillation and adversarial training, were not found to be effective in defense [17]. Therefore, in this paper, we undertake a defense against AEs with various perturbations to minimize the loss of malware detection accuracy and mitigate the limitations of adversarial training. We implement a similarity-based defense method that compares the similarity of the original malware and the AE using TLSH [33], a locality-sensitive hash with excellent detection capabilities for identifying similar files.

    3 Overview

    Research endeavors focused on evasion attacks often generate AEs by injecting noise into the original malware within a white-box attack, employing Genetic Algorithms (GAs) for optimization and thus introducing minimal perturbation [10,12]. However, GAs encounter limitations when applied to target models that return hard labels, as they heavily rely on fitness functions during the optimization process. Furthermore, as depicted in Fig. 2, a white-box attack may be feasible in the feature space but may not succeed in evading the target model in the actual problem space. In the real world, attackers conducting evasion attacks typically find themselves in a state of uncertainty, lacking knowledge regarding the parameters, structure, and features employed by the target model. Therefore, this paper performs evasion attacks as a type of black-box attack. Our attack method involves incorporating benign content and perturbations into the original malware, ensuring that the original malware's PE format remains unchanged while guaranteeing executability and malicious behavior. Furthermore, our attack method eliminates the need for functionality verification when generating AEs.

    We have also proposed a defense method to protect models vulnerable to evasion attacks. Our proposed defense method assesses similarity values through the computation of a fuzzy hash value called TLSH [33]. TLSH, a high-performance similarity hash, offers outstanding detection capabilities for identifying similar byte streams or files while minimizing the false positives associated with malware detection [34]. As our proposed defense method relies on measuring similarity values, we can mitigate the limitations of adversarial training, which can only defend against the limited perturbations within its dataset, and address the issue of accuracy degradation found in defensive distillation. To validate our defense method, we apply our attack method, using various types of perturbations, to our defense model and detect the AEs generated by our attack method as a test of the defense model's effectiveness. Fig. 3 provides an overview of both our attack and defense methods.
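    The core defense idea, flagging inputs that are near-duplicates of known malware, can be sketched as follows. The paper computes similarity with TLSH via python-tlsh; as a self-contained stand-in, this sketch uses the similarity ratio from Python's difflib, and the threshold value and both function names are purely illustrative.

```python
import difflib

def similarity(a: bytes, b: bytes) -> float:
    # Stand-in for a TLSH distance comparison: returns a ratio in
    # [0, 1], where higher means the two byte streams are more alike.
    return difflib.SequenceMatcher(None, a, b).ratio()

def is_adversarial_variant(sample: bytes, known_malware: list[bytes],
                           threshold: float = 0.8) -> bool:
    """Flag `sample` if it is highly similar to any known malware.
    This catches AEs that only append or tweak a small number of
    bytes relative to the original binary."""
    return any(similarity(sample, m) >= threshold for m in known_malware)
```

    Because the decision depends only on similarity to known samples, the underlying classifier needs no retraining, which is the property the paper contrasts with adversarial training.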

    Figure 3: Our attack and defense method overview

    3.1 Target Model

    Many antivirus vendors claim to detect malware threats through static analysis without executing binaries [12]. Therefore, in our quest to select well-known ML/DL-based malware classifiers as target models, we explored malware classifiers that use static features to detect malware threats without the need to run the malware. As a result, we chose to utilize two target models: Malconv [13], a DL-based malware classifier, and Gradient Boosting Decision Tree (GBDT) [14], an ML-based malware classifier. Malconv uses raw bytes as features and leverages convolutional layers to learn these features, subsequently classifying input binaries as malicious or benign. GBDT, on the other hand, learns manually engineered features to classify input binaries into malicious or benign categories. Both of our selected target models are implementations from the Machine Learning Static Evasion Competition (MLSEC) 2019 [35].

    3.1.1 Deep Learning Model

    The malware classifier model proposed by Raff et al. [13] employs the raw bytes of the PE binary as features to simplify the malware detection tool and enhance its accuracy. The rationale behind using raw inputs is their demonstrated exceptional performance across various domains, including images, signals, and text. Malconv takes the first 2 MB of raw bytes from a PE binary as input and processes them through an embedding layer, two 1D convolution layers, one dense layer, and an output layer to calculate the probability that the given PE binary is malware. If the raw bytes of the input binary exceed 2 MB in length, they are truncated to fit this size; if they are less than 2 MB, they are padded with zeros. The raw byte input is divided into 500-byte segments without overlapping and passed through a total of 128 convolution filters. Furthermore, Malconv, composed of convolution layers, exhibits an O(n) time complexity. Consequently, Malconv's time complexity increases linearly with the input size [36].
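    Malconv's fixed-size input handling described above (truncate longer binaries to 2 MB, zero-pad shorter ones) can be sketched in a few lines; the function name malconv_input is ours, not from the Malconv implementation.

```python
MAX_LEN = 2 * 1024 * 1024  # Malconv reads the first 2 MB of raw bytes

def malconv_input(raw: bytes, max_len: int = MAX_LEN) -> bytes:
    """Truncate binaries longer than max_len and zero-pad shorter
    ones, so every sample becomes a fixed-size byte vector."""
    return raw[:max_len].ljust(max_len, b"\x00")
```

    This fixed window is also why overlay-style perturbations can matter: bytes appended within the first 2 MB change the model's input directly.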

    3.1.2 Machine Learning Model

    Anderson et al. [14] utilized the GBDT malware classifier, trained on the Elastic Malware Benchmark for Empowering Researchers (EMBER) dataset, which comprises various static features extracted from 1.1 million PE malware samples through static analysis. GBDT incorporates diverse static features from the PE binary, such as general file information (e.g., the number of imported and exported functions, and the virtual size of the file), header information (including executable characteristics like architecture and version), byte histograms (indicating the occurrences of each byte divided by the total number of bytes), information extracted from strings, and various hand-engineered features, including section information. These features are used to train the model to classify a PE binary input as either malware or benign. Significantly, the GBDT model exhibited superior performance compared to Malconv [14]. Many evasion attack works have also targeted Malconv and GBDT as vulnerable models, and their experiments illustrate the susceptibility of these models to evasion attacks [10–12,27]. In this paper, we introduce a novel defense approach designed to address the vulnerability of these two target models to evasion attacks.

    4 For Bypassing Malware Classifiers: Attack Method

    Our attack method aims to generate AEs that maintain the original malware's executability and malicious behavior, eliminating the necessity for functional verification. This preservation is achieved without compromising the original malware's PE format. We meticulously choose and apply perturbations that do not interfere with the PE format of the original malware. For instance, we utilize perturbations that can impact the static features of the original malware, such as the file hash, section hash, section count/name/padding, checksum, and byte entropy.

    • Overlay Append (OA): In OA, benign content is appended to the end of the original malware binary. Consequently, the AE generated with OA exhibits newly added bytes at the end of the binary, distinguishing it from the original malware binary. As a result, the AE created with OA and the original malware binary have distinct hash values and byte entropy.

    • Section Append (SP): SP appends random bytes or benign content to the unused space between two sections of the original malware binary. This perturbation influences the hash value of the original malware binary and the section hash, and also impacts the section padding.

    • Break Checksum (BC): BC replaces the checksum value of the optional header in the original malware binary with zero. When comparing the AE generated with BC and the original malware binary, differences arise in the file hash value and the checksum of the original malware binary. Since the checksum value of the optional header is unique in the original malware binary, this perturbation should be applied only once to the original malware binary.

    • Section Rename (SR): SR utilizes the section name from a benign binary to change the section name of the original malware binary. As a result, the AE generated through this perturbation differs from the original malware binary in terms of file hash value and section name.

    • Section Add (SA): SA adds content extracted from a benign binary to the original malware binary. It is one of the perturbations designed to obscure the distinction between the static features of the original malware binary and those of the benign binary given as input. Notably, the file hash value, section count, and byte entropy of the SA-applied AE are distinct from those of the original malware binary.

    • Remove Debug (RD): RD modifies the debug information of the original malware binary to zero. This leads to disparities in the file hash value, section hash, and debug information between the AE generated with RD and the original malware binary. Furthermore, given that the debug information in the original malware binary is unique, RD should be executed only once on the original malware binary provided as input.

    • Remove Certificate (RC): RC sets the signed certificate of the original malware binary to zero. The AE subjected to RC exhibits a distinct file hash value and certificate compared to the original malware binary. Additionally, since the original malware binary possesses a unique signed certificate, RC should be applied only once.
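    Two of these perturbations can be sketched directly on raw bytes. The paper applies perturbations with the pefile library; this stand-alone sketch instead walks the documented PE layout (the e_lfanew field at offset 0x3C points to the "PE\0\0" signature, and the optional header's CheckSum field sits 4 + 20 + 64 bytes past that signature for both PE32 and PE32+). It assumes a well-formed PE and does no validation.

```python
import struct

def overlay_append(pe: bytes, benign: bytes) -> bytes:
    # OA: append benign content after the end of the binary (the
    # overlay), which changes the file hash and byte entropy.
    return pe + benign

def break_checksum(pe: bytes) -> bytes:
    # BC: zero the CheckSum field of the optional header.
    # e_lfanew (offset of the "PE\0\0" signature) lives at 0x3C;
    # CheckSum is 4 (signature) + 20 (COFF header) + 64 bytes further.
    out = bytearray(pe)
    pe_off = struct.unpack_from("<I", out, 0x3C)[0]
    struct.pack_into("<I", out, pe_off + 4 + 20 + 64, 0)
    return bytes(out)
```

    Neither operation touches code or data that the loader executes, which is why these perturbations preserve the binary's behavior.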

    Fig. 4 illustrates the flow of our attack method. We employ a Perturbation Applicator and a Perturbation Combination Dictionary to generate an AE when given the original malware as input.

    Figure 4: Our attack method flow

    Perturbation Combination Dictionary. The perturbation combination dictionary utilized in our attack method is a repository that includes perturbation combinations along with corresponding sets of benign content, crucial for generating AEs. To construct this dictionary, we implemented specific modifications to the MAB-malware framework [11], a research initiative dedicated to generating Reinforcement Learning-based AEs. The following adjustments were implemented: the exclusion of the code randomization perturbation, and, for every AE generation, updating the log file to document the perturbations and benign content used, along with the detection score of identified AEs. We conducted a thorough analysis of the AEs generated using the modified MAB-malware. This analysis yielded a total of 998 pairs, each comprising the perturbation combinations and benign content applied during AE generation. We categorized the perturbation combinations into various lengths, ranging from 1 to 10, and selected the top 10 perturbation combinations for each length. As shown in Fig. 5, a single perturbation can be paired with various benign contents. In the perturbation combination dictionary, we have stored 21 distinct perturbation combinations, associated with 206 unique benign contents, all of which are utilized in the AE generation process.
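    The dictionary's shape, perturbation combinations keyed to the benign contents observed alongside them, might look like the following sketch. The combination tuples, byte strings, and the helper name pick are all hypothetical placeholders, not data from the paper.

```python
import random

# Hypothetical shape of the perturbation combination dictionary:
# each combination (length 1 to 10) maps to the benign contents
# logged with it during the modified MAB-malware runs.
perturbation_dictionary = {
    ("OA",): [b"benign-bytes-1", b"benign-bytes-2"],
    ("SR", "OA"): [b"benign-bytes-3"],
    ("BC", "SP", "OA"): [b"benign-bytes-1", b"benign-bytes-3"],
}

def pick(combo):
    """Randomly select one benign content for a chosen combination,
    mirroring the applicator's random selection step."""
    return random.choice(perturbation_dictionary[combo])
```

    Keying on the full combination (rather than single perturbations) keeps the pairing between a perturbation sequence and the benign content that made it effective.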

    Figure 5: Perturbation applicator and perturbation combination dictionary are used in our attack method

    Perturbation Applicator. The perturbation applicator generates AEs by applying perturbation combinations and benign content to the original malware. The perturbation combinations are applied sequentially, starting from length 1. The perturbation applicator randomly selects benign content corresponding to the currently chosen perturbation combination from the perturbation combination dictionary. Subsequently, the perturbation applicator applies the selected perturbation combination and its corresponding benign content to the original malware. This process ensures the preservation of the PE format, executability, and malicious behavior, aligning with our objectives and resulting in an AE that does not require functional verification. Subsequently, the detection results of the AE, as identified by the target model, are examined. If the target model classifies the generated AE as benign, it indicates a successful evasion of the target model by our created AE. In such cases, the perturbation applicator discontinues the application of the remaining perturbation combinations to the generated AE and prepares a new malware sample for the creation of a new AE. Conversely, if the generated AE is unsuccessful in evading the target model, the remaining perturbation combinations that have not yet been applied are sequentially used to create a new AE. The evasion capability of the new AE is then assessed. If the perturbation applied to the generated AE is the last one in the currently selected perturbation combination, the input malware is initialized to its original state without perturbation. Then, a new perturbation combination and its corresponding benign content are prepared to create a fresh AE. This iterative process continues until all 21 perturbation combinations have been applied to the input malware or all prepared malware samples have been exhausted.
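    One reading of the applicator loop described above can be sketched as follows, with the perturbation application and the black-box model query left as caller-supplied callbacks (run_attack, apply_perturbation, and is_benign are our hypothetical names, not from the paper's implementation).

```python
def run_attack(malware: bytes, combos, apply_perturbation, is_benign):
    """Try perturbation combinations (shortest first) against the
    target model; return the first AE the model labels benign.
    `combos` is a list of (combination_tuple, benign_content) pairs;
    `apply_perturbation` and `is_benign` are assumed callbacks."""
    for combo, benign_content in sorted(combos, key=lambda c: len(c[0])):
        ae = malware  # restart from the unperturbed sample per combo
        for p in combo:
            ae = apply_perturbation(ae, p, benign_content)
            if is_benign(ae):      # query the black-box target model
                return ae          # successful evasion: stop early
    return None                    # all combinations exhausted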

    4.1 Evaluation

    Experimental Environments. The experimental environment consisted of two Intel(R) Xeon(R) Gold 6230 20-core 2.10 GHz CPUs with Ubuntu 20.04 LTS, 256 GB of RAM, and four NVIDIA GeForce RTX 2080 Ti GPUs. The language used was Python 3.7. We adapted and employed MAB-malware to build the perturbation combination dictionary. The pefile library has proven its capability to apply perturbations to binaries reliably, without introducing unnecessary changes that could compromise the executability of the PE binary [11]. Consequently, we opted to use the pefile library, as it aligns with the objectives of our attack method. Additionally, for our defense method, we used python-tlsh 4.5.0 to construct the defense model.

    Dataset. The dataset used in the experiment was acquired through MAB-malware. We collected 1,000 malware samples and 27,167 benign contents from this dataset. Among them, 206 benign contents were included in our perturbation combination dictionary. All 1,000 malware samples were utilized in our experiments.

    Evaluation Metrics. To quantify the number of malware samples that evade the target model, we employ the evasion rate as the evaluation metric for AEs. Eq. (3) gives the formula for the evasion rate, which is used throughout the paper for analysis and evaluation.
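For reference, the evasion rate in Eq. (3) can be written as follows (reconstructed from the figures reported later, e.g., 656 evading AEs out of 1,000 samples gives 65.6%):

```latex
\text{Evasion Rate} = \frac{N_{\text{evaded}}}{N_{\text{total}}} \times 100\ (\%)
```

where \(N_{\text{evaded}}\) is the number of AEs the target model classifies as benign and \(N_{\text{total}}\) is the number of malware samples used for AE generation.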

    4.1.1 Target Model Classification Performance Measurement

    Both well-known malware classifiers, Malconv and GBDT, were trained on the EMBER 2018 dataset [14] and implemented in MLSEC 2019 [35]. Before generating AEs, we evaluated the classification performance of these two models. The malware samples we acquired are executables, whereas benign content consists of specific byte sequences extracted from particular sections of binary or .dll files. To evaluate the classification performance of our target models, we randomly selected 1,000 executable benignware samples from the DikeDataset [37], for a total of 2,000 executables. We use several evaluation metrics, including accuracy, macro-recall, macro-precision, and macro-F1-score, to assess the performance of the target models. Recall measures the proportion of samples of a given class that the model correctly classifies as that class. Precision represents the ratio of correctly classified samples to all samples the model assigns to the corresponding class. The F1-score is the harmonic mean of precision and recall. Accuracy, the most commonly used metric for evaluating a model, indicates how closely the model’s predictions align with the actual class labels of the samples. Table 1 presents the classification performance of both target models. The GBDT model, which utilizes various types of features extracted through static analysis of PE binaries, exhibits overall higher performance than Malconv, which uses raw bytes as input and employs a relatively lower threshold. Consequently, Malconv, despite slightly inferior classification performance compared to GBDT, maintains a precision of 0.79, indicating satisfactory classification performance. Both models serve as target models for our attack and proposed defense method.
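The macro-averaged metrics above can be computed from the binary (malicious/benign) confusion counts as sketched below. The function name and the counts are ours for illustration, not figures from the paper: each metric is computed per class and the two class values are averaged.

```python
def macro_metrics(tp, fp, fn, tn):
    """Macro-averaged precision/recall/F1 for a binary task, treating
    'malicious' as the positive class; the benign-class metrics swap roles."""
    def prf(tp_, fp_, fn_):
        p = tp_ / (tp_ + fp_) if tp_ + fp_ else 0.0
        r = tp_ / (tp_ + fn_) if tp_ + fn_ else 0.0
        f1 = 2 * p * r / (p + r) if p + r else 0.0
        return p, r, f1
    p1, r1, f11 = prf(tp, fp, fn)   # malicious class
    p0, r0, f10 = prf(tn, fn, fp)   # benign class: tn are its true positives
    return {
        "accuracy": (tp + tn) / (tp + fp + fn + tn),
        "macro_precision": (p1 + p0) / 2,
        "macro_recall": (r1 + r0) / 2,
        "macro_f1": (f11 + f10) / 2,
    }

# Illustrative counts on a balanced 2,000-sample evaluation set.
metrics = macro_metrics(tp=900, fp=100, fn=100, tn=900)
```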

    Table 1: Classification performance of our target models

    4.1.2 Adversarial Example Generation for Bypassing Deep Learning Classifiers

    Using a dataset of 1,000 malware samples, our goal is to generate AEs that evade the target model, Malconv, while guaranteeing executability and malicious behavior without requiring functionality verification. When our attack method was applied to the target model, the generated AEs achieved a 99.9% evasion rate; that is, out of the 1,000 malware samples in our dataset, 999 AEs successfully evaded the target model. We achieved this high evasion rate by analyzing the AEs generated against Malconv with the modified MAB-malware and storing the perturbation combinations and benign content in the perturbation combination dictionary. The high evasion rate confirms the effectiveness of our attack method against Malconv. Additionally, among the AEs generated by our method, the AE with the lowest detection score applied a single OA perturbation. The original malware associated with this AE had a detection score of 0.72, while the AE achieved a detection score of 0.00027, well below Malconv’s threshold of 0.5. This result clearly demonstrates that our attack method can effectively circumvent Malconv.

    We analyzed the 999 generated AEs to identify the applied perturbation combinations. Fig. 6 illustrates the distribution of perturbation combination lengths used in the 999 generated AEs. Among them, 802 AEs evaded the model through a single perturbation (combination length 1), constituting 80% of all AEs. Interestingly, no AEs used perturbation combinations of lengths 5, 6, 7, or 9.

    We investigated why our attack method predominantly generates AEs with a single perturbation. Fig. 7 shows the lengths of the original perturbation combinations from which the applied perturbations were drawn, revealing instances where the original length is 2 or 3. Because our attack method checks the detection result of the target model each time a perturbation is applied, many AEs are produced by applying only the initial perturbation of combinations with length greater than 1. Among the 802 single-perturbation AEs we generated, 707 had OA applied and 55 had SA applied. Evidently, Malconv is susceptible to evasion attacks using both SA and OA. This vulnerability arises from Malconv’s use of raw bytes as features. Moreover, Malconv is susceptible to append-based attacks (OA, SA) with benign content because its architecture consists of a convolutional layer and does not learn the positional information of input features.

    Figure 6: Distribution of perturbation combination lengths applied to AEs generated to bypass Malconv

    Figure 7: Distribution of original perturbation combination lengths for the generated AEs

    Therefore, our attack method achieved a high evasion rate against Malconv through its use of benign content and append-based attacks such as OA and SA. Table 2 lists the benign content used in the 802 single-perturbation AEs, demonstrating that the size of the benign content does not significantly impact AE generation. Notably, “System.ni.dl” was employed in generating 112 of the 802 AEs, “NL7Data0804.dll” in 104 AEs, and “Microsoft.VisualBasic.Compatibility.dll” in 100 AEs. In summary, our attack method evaded Malconv with a high evasion rate using the perturbation combination dictionary. Because the target model is composed of a convolutional layer and does not learn the location of input features, it remains vulnerable to our attack method using benign content with OA and SA perturbations.

    Table 2: List of benign contents used to generate AEs that evade Malconv

    4.1.3 Adversarial Example Generation for Bypassing Machine Learning Classifiers

    We generated AEs using 1,000 malware samples to evade the machine learning-based malware classifier, GBDT. Employing our attack method against GBDT resulted in 656 AEs successfully evading it out of the 1,000 malware samples, a 65.6% evasion rate. This evasion rate is lower than the result presented in Section 4.1.2, which can be attributed to the fact that the perturbation combinations and benign contents stored in the perturbation combination dictionary did not exert sufficient influence on the input features used by GBDT.

    Fig. 8 illustrates the distribution of perturbation combination lengths applied to the 656 generated AEs. Approximately 52% (341 cases) had a perturbation combination length of 1, while about 24% (158 cases) had a length of 4. We then analyzed the original perturbation combination lengths from which the 341 single-perturbation AEs were drawn, along with the benign content used in these AEs. As explained earlier, more than half of the generated AEs have a single perturbation applied because our attack method queries the target model every time a perturbation is applied to the malware; consequently, evasion primarily involves single perturbations.

    Figure 8: Distribution of perturbation combination lengths applied to AEs generated to bypass GBDT

    Fig. 9 shows that 323 AEs were generated from perturbation combinations whose original length was one. Additionally, of the 341 AEs with an applied perturbation combination length of 1, 340 were generated by applying OA and only 1 by applying SA. The dominance of OA over SA can be attributed to the fact that OA affects format-agnostic features among GBDT’s inputs, such as the raw byte histogram and byte entropy histogram, which explains why OA was more frequently employed in AE generation.

    Table 3 presents the benign content used in AEs with a perturbation combination length of 1, along with the number of AEs generated using each content. In contrast to the results in Section 4.1.2, the size of the benign content directly affects the number of AEs generated; the benign content that generated the most AEs is 1867.5 KB in size. The combination of OA and substantial benign content impacts the format-agnostic features used by GBDT as input, such as the raw byte histogram and byte entropy histogram, leading to AEs capable of evading GBDT. In simpler terms, GBDT employs a diverse range of feature types, making it more challenging to evade than Malconv, which relies solely on raw bytes as input. Through an analysis of the 656 generated AEs, we found that GBDT can be evaded by applying a single perturbation, specifically OA or SA. Additionally, the benign content used in the AEs that evade GBDT is larger than that used in the AEs that evade Malconv.

    Table 3: List of benign contents used to generate AEs that evade GBDT

    Figure 9: Distribution of original perturbation combination lengths for GBDT-evading AEs

    4.2 AE Transferability

    In this section, we assess the transferability of our generated AEs, which require no functional verification because their executability and malicious behavior are guaranteed, against malware detection models with different structures and features.

    To assess whether the 999 AEs that evaded Malconv could also bypass GBDT, we subjected them to detection by GBDT. The detection results revealed an evasion rate of 33.2%: out of the 999 AEs, 332 successfully evaded GBDT. The lower evasion rate can be attributed to the perturbations and benign content applied to the generated AEs, which were designed to evade Malconv and its raw-byte features. Consequently, many AEs do not effectively evade GBDT, which employs a diverse set of static features.

    We then evaluated whether the 656 AEs that evaded GBDT could also bypass Malconv. The results demonstrated that 650 AEs, a 99% evasion rate, evaded Malconv. Notably, the AEs generated with GBDT as the target exhibited a relatively higher transfer evasion rate than those generated with Malconv as the target. We infer that the high evasion rate of the GBDT-targeted AEs stems from their benign content and perturbations also influencing the raw bytes used by Malconv.

    In conclusion, the AEs that evaded the ML-based malware classifier GBDT achieved a high evasion rate against the DL-based malware classifier Malconv. We observed that the transferability of AEs depends more on the features used by the target model than on its structural complexity. Additionally, we confirmed that the AEs generated by our attack method exhibit transferability.

    4.2.1 AE Transferability at Real-World Malware Detector

    From the pool of 650 AEs that evaded both GBDT and Malconv, we randomly selected one AE to submit to VirusTotal (VT) to assess how effectively an AE generated by our attack method could evade the antivirus vendors participating in VT. For this evaluation, we submitted a total of two binaries to VT: the randomly selected AE and its corresponding original malware.

    Figs. 10 and 11 illustrate the number of antivirus vendors that participated in the detection process and the number that actually detected the original malware and the AE in the VT results. Notably, the AE evaded detection by 11 antivirus vendors that had detected the original malware. Fig. 12 lists the evaded antivirus vendors. It can be inferred that these 11 vendors utilize static features and ML or DL models for malware detection.

    Figure 10: Part of the VT report of the original malware

    Figure 11: Part of the VT report of the AE

    Figure 12: List of antivirus vendors participating in VT that our generated AE evaded

    Among the 53 vendors that detected the AE as malicious, some provided detailed detection scores. For instance, the antivirus vendor “MAX” displays a score in the detection label. The original malware was detected by “MAX” with an AI score of 100, while the AE was detected with an AI score of 87, a 13-point drop.

    We analyzed the 53 vendors that detected the AE as malicious and observed that six vendors identified the AE with the same label, “Trojan.Ransom.Cerber.1”: ALYac, Arcabit, BitDefender, eScan, GData, and VIPRE. It is plausible that these six vendors either share the same detection label or employ similar logic for malware detection. Additionally, seven vendors classified the AE under the same malware family label, “Cerber”: AhnLab-V3, ClamAV, Cyren, Emsisoft, ESET-NOD32, Ikarus, and Microsoft. This suggests that these vendors may utilize similar detection logic or share the same detection label. In conclusion, our analysis revealed that some vendors on VirusTotal share detection labels, as previously noted by Peng et al. [38].

    4.2.2 Section Injection

    As observed in Sections 4.1.2 and 4.1.3, the perturbations most frequently used in the AEs generated by our attack method are the two single perturbations OA and SA, both of which add benign content to the original malware binary. These two methods are akin to the simple section injection approach, which injects adversarial noise by adding a dummy section to a PE binary and creates an AE by manipulating that section. The simple section injection attack is a white-box attack that optimizes the injected section to reduce the logit value of the target model [10,39].
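Of the two append-based perturbations, OA is the simplest to express in code: bytes placed after the end of the PE image (the overlay) are ignored by the Windows loader, so appending benign content leaves executability and behavior intact while shifting raw-byte statistics toward benignware. The sketch below is ours, with placeholder bytes rather than a real PE image; SA additionally requires rewriting the section table, which the authors delegate to pefile.

```python
def overlay_append(pe_bytes: bytes, benign_content: bytes) -> bytes:
    """Overlay Append (OA): append benign content past the end of the PE
    image. The loader ignores the overlay, so malicious behavior is
    preserved without any functional verification."""
    return pe_bytes + benign_content

# Toy example: "MZ" marks a DOS/PE header, the rest is placeholder data.
ae = overlay_append(b"MZ\x90\x00", b"BENIGN_CONTENT")
```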

    We generated AEs capable of evading four target models using a simple section injection method that does not incorporate benign content, and subsequently assessed their evasion rates. Of the four selected target models, three (Malconv [13], NonNeg [40], and IBLTMC-ff [41]) are designed for malware detection on PE binaries. The fourth, DexRay [42], specializes in detecting Android malware through image analysis. For our experiments, we used datasets consisting of PE binaries, randomly selecting 100 samples from the DikeDataset [37]. For the DexRay model, we employed the DexRay dataset provided with the model. Because the simple section injection attack optimizes the injected noise, we injected 2 KB of noise into the original malware binary through the optimization process to generate AEs evading the Malconv, NonNeg, and IBLTMC-ff models; for the DexRay model, noise equivalent to 0.5% of the input size in bytes was introduced. The results of the simple section injection attack are presented in Table 4. Remarkably, even without benign content, simple section injection achieves a high evasion rate against the target models by introducing adversarial noise into the original malware and optimizing it.

    Table 4: Simple section injection experiment results

    4.3 Compare with the State-of-the-Art Approaches

    We compare our attack method to MAB-malware, a state-of-the-art method that utilizes reinforcement learning to generate AEs. To ensure a fair comparison, we adjusted our code to exclude the code randomization perturbation used by MAB-malware and used the same malware dataset. Table 5 shows the evasion rate comparison between our attack method and MAB-malware. In the table, bold indicates the higher evasion rate, and an asterisk indicates that the p-value, evaluated by a paired t-test, is significant at 0.05.

    Table 5: Comparison of the evasion rates of our attack and a state-of-the-art method

    Both MAB-malware and our attack method used the same set of 1,000 malware samples, generating AEs targeting the Malconv and GBDT models. MAB-malware successfully generated 812 AEs that evaded GBDT, an evasion rate of 81.2%. In contrast, our attack method generated 656 AEs on the identical set of malware samples, a 65.6% evasion rate. These findings show that our attack method, which relies on perturbation combinations, achieves a lower evasion rate against GBDT than MAB-malware, which applies individual perturbations. Moreover, our evasion rate was relatively lower because we employed a limited set of benign content stored in the perturbation combination dictionary compared to MAB-malware.

    Similarly, when generating AEs targeting Malconv, MAB-malware produced 967 AEs out of 1,000 malware samples, an impressive 96.7% evasion rate. Our attack method outperformed MAB-malware here, achieving a 99.9% evasion rate with 999 AEs. According to the comparison results, our attack method showed limitations in evading the ML-based classifier compared to the state-of-the-art research utilizing reinforcement learning. We attribute this limitation to the perturbation combinations included in our perturbation combination dictionary and the limited number of benign contents we employed. However, despite the restricted set of benign content and perturbation combinations, our method clearly outperformed the recent reinforcement learning approach in evading the DL-based model.

    5 Mitigating Adversarial Examples of Malware

    To defend against repetitive queries in black-box attacks and to protect the target model from AEs with various applied perturbations, we employ TLSH, a similarity score-based approach. Oliver et al. [43] conducted experiments to evaluate the efficiency of TLSH in comparison to other similarity score comparison schemes, assessing how effectively TLSH identifies changes in files whose contents have been altered. Their findings confirmed that TLSH exhibits a lower false positive rate for malware detection and a higher detection rate than other similarity score comparison schemes, specifically SSDEEP [44] and SDHASH [45]. Overall, their experimental results demonstrate that TLSH is robust in identifying altered files and offers greater resistance to evasion than other similarity-based approaches. Additionally, TLSH, which employs the k-skip-n-gram method, can generate a 72-digit change-sensitive locality-sensitive hash for files of at least 50 bytes, facilitating similarity comparison [46].

    Therefore, we utilize TLSH [33], a high-performance similarity hash known for its excellent detection rate in identifying similar byte streams or files while minimizing malware false positives, to construct our defense model. Fig. 13 illustrates the flow of our proposed defense method. We begin by storing in a database the TLSH values and detection scores of the malware samples provided as input to the target model. Subsequently, when a new input binary is given to the target model, we check the database for similar TLSH values. If a similar TLSH value is found, we return the detection score associated with that value. Otherwise, we store the new TLSH value along with its detection score in the database, thereby constructing our defense model.
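The flow above can be sketched as a caching layer in front of the classifier. To keep the sketch dependency-free, the TLSH calls are passed in as functions (in practice `tlsh.hash` and `tlsh.diff` from python-tlsh would be supplied); the class name and the stand-in hash/distance functions below are ours, chosen only to demonstrate the logic.

```python
class SimilarityDefense:
    """Return a cached detection score when an input lies within `threshold`
    distance of a previously seen sample; otherwise query the model and
    cache the result. This blunts iterative black-box query attacks."""

    def __init__(self, model, hash_fn, diff_fn, threshold):
        self.model = model          # classifier: bytes -> detection score
        self.hash_fn = hash_fn      # locality-sensitive hash, e.g. tlsh.hash
        self.diff_fn = diff_fn      # distance between hashes, 0 = identical
        self.threshold = threshold  # e.g. 450 for Malconv, 543 for GBDT
        self.db = {}                # hash -> cached detection score

    def score(self, binary):
        h = self.hash_fn(binary)
        for known_hash, known_score in self.db.items():
            if self.diff_fn(h, known_hash) <= self.threshold:
                return known_score  # similar sample already judged
        s = self.model(binary)
        self.db[h] = s
        return s

# Toy demonstration with stand-in hash (length) and distance (|a - b|).
model = lambda b: 0.9 if b.startswith(b"MAL") else 0.1
defense = SimilarityDefense(model, hash_fn=len,
                            diff_fn=lambda a, b: abs(a - b), threshold=2)
first = defense.score(b"MALWARE")     # model queried, 0.9 cached
second = defense.score(b"benWARE!!")  # "near" the cached hash: 0.9 returned
```

Note that the raw model alone would score the second input as benign; the similarity cache is what catches the lightly perturbed variant.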

    Figure 13:The flow of our proposed defense method

    TLSH employs a distance metric for calculating similarity scores, which range from 0 to potentially 1,000, where 0 represents a perfect match and 1,000 signifies a complete mismatch [33]. Because TLSH has a wider range of similarity scores than SSDEEP and SDHASH, a broader range of thresholds can be used to achieve low false positives and high detection rates, distinguishing it from existing similarity comparison methods. Hence, we set the threshold to the average of the smallest and largest TLSH similarity scores between the original malware and the AEs generated by our attack method: 450 for Malconv and 543 for GBDT.
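The threshold rule described above (midpoint of the smallest and largest observed TLSH distances) can be expressed in one line; the distances below are illustrative values, not the paper's measurements.

```python
def midpoint_threshold(distances):
    """Threshold = average of the minimum and maximum TLSH distances
    observed between original malware samples and their AEs."""
    return (min(distances) + max(distances)) / 2

# Illustrative per-model distance observations.
threshold = midpoint_threshold([100, 400, 700])
```

Applied to the distances actually measured per target model, this rule yields the thresholds of 450 (Malconv) and 543 (GBDT) reported above.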

    5.1 Evaluation Results of Mitigating Adversarial Examples of Malware

    To evaluate the effectiveness of our defense model, we applied our attack method to it. The results demonstrate that our attack method was unable to generate an AE capable of bypassing the defense model, highlighting the robustness of our defense approach. Without TLSH, the two target models were evaded by a total of 1,655 AEs generated by our attack method. When TLSH was incorporated into Malconv, all 1,655 AEs were correctly classified as malicious. However, when TLSH was applied to GBDT, 9 of the 1,655 AEs still evaded GBDT and were misclassified as benign. We analyzed why these 9 AEs succeeded. First, we checked the similarity scores between the evaded AEs and their corresponding original malware: the average similarity score was 640, exceeding the TLSH threshold set for GBDT. The AE with the highest similarity score in our dataset scored 726 when compared to its original malware, indicating significant dissimilarity. Among the 9 evading AEs, we identified two types of applied perturbations: two AEs were perturbed using OA and seven using SA. The benign content associated with OA is “d3d10warp.dll,” and with SA, “QuickConnectUI.dll.”

    These AEs, generated using simple perturbations and corresponding benign content, exhibited substantial dissimilarity to the original malware when TLSH-based similarity scores were calculated using raw bytes as input.

    Figs. 14 and 15 present sections of the VT reports for the original malware and the AE with the highest similarity score of 726. In particular, Fig. 15 displays a segment of the VT report for the AE generated with OA applied, showing the overlay section, which is absent from the VT report of the original malware. The entropy of the AE’s overlay is 6.3, and in Fig. 14, the hash values in the basic properties all differ. This supports the conclusion that the benign content used in the OA perturbation significantly influenced the raw byte histogram and byte entropy histogram, features utilized by GBDT, resulting in evasion beyond the established threshold. The notable variance in similarity scores can be attributed to the use of substantial benign content such as “d3d10warp.dll” (5,417 KB) and “QuickConnectUI.dl” (3,101 KB), which impacts the format-agnostic features employed by GBDT, namely the raw byte histogram and byte entropy histogram. In summary, the AEs generated using our attack method exhibit TLSH values significantly different from the original malware, explaining their evasion of the TLSH-equipped GBDT with similarity scores surpassing the established threshold.

    Figure 14: The upper VT report is from the original malware and the lower report is from the AE

    Figure 15: Overlay section of the AE’s VT report

    6 Related Work

    In this section, we review research that employs diverse methods for conducting evasion attacks and present studies that have proposed defense mechanisms against such attacks. Table 6 offers a concise summary of studies focused on evasion attacks.

    Table 6: Studies conducting evasion attacks

    6.1 Related Work on Evasion Attack Research

    In this section, we categorize studies on evasion attacks into four distinct categories.

    6.1.1 Research on Evasion Attacks Based on Optimization Methods

    The optimization method utilizing a Genetic Algorithm (GA) exploits the inherent ambiguities in the PE format to inject adversarial noise either into the slack space of the PE format or at the end of the binary file. AEs are then generated by optimizing this adversarial noise with the GA. AEs generated through a GA capitalize on the PE format’s ambiguities and therefore do not require functional verification. However, these studies do not guarantee that the AE exhibits the same behavior as the original malware. Additionally, since a GA employs a fitness function, it faces limitations when applied to models that provide only a binary output label [10,12].

    6.1.2 Research on Evasion Attacks Based on RL Methods

    In Reinforcement Learning (RL), the agent applies perturbations to the original malware by executing predefined actions, generating AEs and earning rewards, and learns to maximize its rewards. Anderson et al. [15] introduced an RL-based black-box attack method in which the authors automatically generated new variants of Windows PE malware by modifying binary files. However, as the number of actions an agent must take to create an AE increases, so does the action search space, making the approach challenging for current RL algorithms [49]. Nevertheless, the RL-based AE generation method in [15] demonstrated a 15% improvement over a method that randomly selects perturbations (actions), though it does not guarantee the preservation of AE functionality. Song et al. [11] proposed an RL-based framework for generating AEs that can evade PE malware classifiers and antivirus engines. By treating the adversarial attack as a Multi-Armed Bandit (MAB) problem, the authors aimed to balance discovering evasion patterns and generating more variants. The results showed evasion rates ranging from 74% to 97% against ML detectors. However, MAB-malware requires a significant amount of time to attack commercial AV systems, making it less suitable for directly targeting such systems [50].

    6.1.3 Research on Evasion Attacks Based on GAN Methods

    Research on Generative Adversarial Network (GAN)-based AE generation offers the advantage of being able to target various types of models through the use of generative models. GAN-based AE generation research employs two primary approaches: one uses malware and noise as input to generate AEs, while the other feeds benign binaries and noise into GANs to generate adversarial payloads, which are then injected into malware [32,47,48,51]. It is worth noting that the AEs generated by MalGAN [47] and GAPGAN [48] are not executable binaries [51]. Additionally, MalGAN generates AEs based on API calls as features, limiting their effectiveness against models that use different features. Conversely, GAPGAN demonstrates that generating AEs using bytes is more efficient for black-box attacks than using API calls as features. Lastly, relying solely on GANs for AE generation can be challenging due to stability issues during GAN training, which may hinder effective convergence on specific datasets [22].

    6.1.4 Research on Evasion Attacks Based on Packing and Encryption Methods

    Most researchers employ packers or binary encryption methods to generate AEs that can evade the target model. They either pack and unpack malware or modify the encoding of the original malware’s binary bytes, creating a new malware variant in the form of a dropper using techniques such as XOR encryption. Studies utilizing these methods have demonstrated effective evasion results [11,15,20,31]. However, the goal of our attack method is to generate AEs while preserving the original malware’s PE format, maintaining executability and malicious behavior, and avoiding unnecessary functional verification. Therefore, we do not employ packing or obfuscation techniques that alter the PE format.

    6.2 Related Work on Research for Defenses against Evasion Attacks

    Universal Adversarial Perturbations (UAP) are adversarial perturbations that apply the same perturbation to various inputs to induce errors in ML classifiers. Labaca-Castro et al. [27] proposed an attack method in feature space and problem space using UAP, and introduced a defense method utilizing adversarial training with the AEs generated by the attack. Evaluation results demonstrated that UAP attacks represent a serious and real threat to ML-based malware detection systems. The defense method leverages the generated AEs for adversarial training, considered the most promising defense strategy; the authors used a dataset comprising an equal mix of original malware and generated AEs. However, following adversarial training, the classifier exhibited a slight degradation in performance when detecting the original malware. The defense method also grapples with the limitation inherent to adversarial training: the challenge of defending against various types of perturbations.

    Chen et al. [52] introduced EvnAttack, an effective evasion attack model designed to target PE malware detectors that use Windows API calls as input, and proposed SecDefender, a security learning framework aimed at effectively countering evasion attacks. EvnAttack assesses the significance of API calls and categorizes them into two sets: M, comprising API calls highly relevant to malware, and B, comprising API calls highly relevant to benignware. The attack then either injects API calls from set B into the extracted malware API calls or removes API calls from set M. SecDefender retrains the classifier against EvnAttack and incorporates a security regularization term to recover any performance degradation of the model after retraining. Both research efforts demonstrate the potential to mitigate one or multiple adversarial attacks. However, adversarial retraining entails significant additional costs for generating AEs and retraining the model, rendering it impractical against some attacks [16,53].

    Quiring et al. [54] presented a combinatorial framework for adversarial defense that achieved the top position in Microsoft’s 2020 Machine Learning Security Evasion Competition. The Peberus framework begins by inspecting input malware based on predefined heuristics. It then leverages an ensemble comprising existing malware detectors, a monotonic skip-gram model, and a majority voting mechanism involving signature-based models to make predictions. Finally, Peberus employs a stateful nearest-neighbor detector to continuously assess whether the PE file is similar to previously identified malware samples.

    Lucas et al. [53] introduced an attack method that modifies the instructions within the original binary without injecting maliciously crafted bytes. Additionally, the authors proposed a defense approach employing binary normalization and instruction masking to counter adversarial attacks that insert runtime-executable instructions into different sections of the binary. The defense method first reverses any potential adversarial manipulations applied to the PE malware, ensuring that the input to the PE malware detector is in a form that enables accurate classification. However, attackers could potentially overcome the defense by using obfuscation techniques or transformations that the normalization algorithm does not recognize.

    7 Conclusion

    In this paper, we employed perturbations utilized in prior AE generation research, particularly those that leverage the ambiguity present in the PE format of the original malware, in conjunction with benign content. Our attack method generates AEs that preserve the PE format of the original malware, ensuring both executability and malicious behavior without the need for additional functional verification. In contrast to traditional AE generation research, our attack method does not employ complex optimization processes or intricate attack procedures. Instead, we demonstrated the significant threat posed by our approach, which simply employs perturbations and benign content to generate AEs capable of effectively evading well-known malware classifiers. Additionally, we analyzed the generated AEs and confirmed that including benign content in AE generation is beneficial for evading the target model in a black-box attack.

    To assess the transferability of AEs generated by our attack method, we evaluated their ability to evade two distinct target models. The results indicate that the AEs achieved evasion rates of 33.2% and 99%, respectively. Subsequently, we examined the effectiveness of our generated AEs in evading the antivirus vendors participating in VT. The results demonstrated that our AEs successfully evaded 11 antivirus vendors, suggesting that these 11 vendors employ static features and ML or DL models for malware detection. In conclusion, our attack method has clearly demonstrated its effectiveness in generating AEs with robust transferability in black-box attacks. This success is attributed to the modification of the PE format of the original malware and the incorporation of benign content.

    To mitigate the transferability of our generated AEs and to defend against our attack method, which utilizes various perturbations, we built a defense model using TLSH, which is sensitive to changes and performs well at finding similar files. To validate the proposed defense method, we ran our attack method against the defense model we constructed. The results confirmed that no AEs capable of evading the target model could be generated when the defense was applied. This can be attributed to the effectiveness of TLSH in detecting similar byte streams or files, thereby preventing the creation of AEs that evade the target model through the use of various perturbations and benign content. Therefore, we can infer that the TLSH employed in our proposed defense method is effective in countering persistent queries in black-box attacks. Finally, we subjected the 1,655 generated AEs to the defense-applied models for detection. The detection results revealed that, among these AEs, 9 successfully evaded the GBDT defense model, while not a single sample could bypass the Malconv defense model.
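The core idea, rejecting queries whose hash distance to previously seen malware falls below a threshold, can be sketched as follows. A toy byte-histogram distance stands in for TLSH here so the sketch stays self-contained (the real `tlsh.diff` likewise returns a distance where smaller means more similar), and the threshold value is an arbitrary assumption for illustration.

```python
# Sketch of a similarity-based query filter. A real deployment would compute
# TLSH digests and compare them; the histogram distance below is a toy
# stand-in with the same "smaller = more similar" semantics.

def byte_histogram(data: bytes):
    counts = [0] * 256
    for b in data:
        counts[b] += 1
    return counts

def distance(a: bytes, b: bytes) -> int:
    """Toy stand-in for a TLSH distance: L1 distance between byte histograms."""
    return sum(abs(x - y) for x, y in zip(byte_histogram(a), byte_histogram(b)))

def is_adversarial_variant(query: bytes, known_malware, threshold: int = 50) -> bool:
    """Flag the query if it is too similar to any previously seen malware."""
    return any(distance(query, m) <= threshold for m in known_malware)

original = bytes(range(100))
perturbed = original + b"\x00" * 10   # e.g., overlay-appended padding
print(is_adversarial_variant(perturbed, [original]))  # True: flagged as a variant
```

Because perturbations such as Overlay Append change only a small fraction of the bytes, the perturbed sample stays close to the original under such a distance and is rejected before the attacker can refine it through repeated queries.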

    8 Limitations

    Our attack method is tailored to target static analysis-based classifiers such as Malconv or GBDT. Therefore, it may not be applicable when targeting a dynamic analysis-based classifier. Moreover, the AEs we have generated are theoretically guaranteed to be executable and malicious, eliminating the necessity for functional verification. However, since we did not execute these AEs within a sandbox environment, there remains a possibility that the original malware and the AEs may exhibit different behaviors. The proposed defense method relies on comparing the TLSH similarity score between the original malware and the AE to protect the model from evasion attacks. Nevertheless, TLSH’s similarity comparison is distance-based, resulting in a score range of 0 to 1,000, which is broader than that of other similarity comparison methods. Therefore, while the thresholds we employed in our experiments may be suitable for our specific attack, they might not be directly transferable to real-world applications.

    Acknowledgement: The authors extend their appreciation to all the authors who contributed to this paper, as well as to the reviewers and editors whose helpful suggestions greatly improved its presentation.

    Funding Statement: This work was supported by an Institute of Information & Communications Technology Planning & Evaluation (IITP) grant funded by the Korea government, Ministry of Science and ICT (MSIT) (No. 2017-0-00168, Automatic Deep Malware Analysis Technology for Cyber Threat Intelligence).

    Author Contributions: The authors confirm their contributions to the paper as follows: study conception, design, and data collection: Younghoon Ban; coding: Younghoon Ban, Myeonghyun Kim; data collection, analysis, and interpretation of results: Younghoon Ban; draft manuscript preparation: Younghoon Ban, Haehyun Cho. All authors reviewed the results and approved the final version of the manuscript.

    Availability of Data and Materials: The datasets and tools utilized in this paper were sourced from publicly accessible repositories. Access to these resources is unrestricted, and they can be obtained through the sources listed in the References section of this paper.

    Conflicts of Interest: The authors declare that they have no conflicts of interest to report regarding the present study.
