
    A Novel Framework for Windows Malware Detection Using a Deep Learning Approach

    Computers, Materials & Continua, 2022, Issue 7

    Abdulbasit A. Darem

    Northern Border University, Arar, 9280, Saudi Arabia

    Abstract: Malicious software (malware) is one of the main cyber threats that organizations and Internet users are currently facing. Malware is software code developed by cybercriminals for damaging purposes, such as corrupting the system and data as well as stealing sensitive data. The damage caused by malware is increasing substantially every day. There is a need to detect malware efficiently and automatically and to remove threats quickly from the systems. Although there are various approaches to tackle malware problems, their prevalence and stealthiness necessitate an effective method for the detection and prevention of malware attacks. The deep learning-based approach has recently been gaining attention as a suitable method that effectively detects malware. In this paper, a novel approach based on deep learning for detecting malware is proposed. Furthermore, the proposed approach deploys novel feature selection, feature correlation, and feature representations to significantly reduce the feature space. The proposed approach has been evaluated using a Microsoft prediction dataset with 21,736 malware samples composed of 9 malware families. It achieved 96.01% accuracy and outperformed existing malware detection techniques.

    Keywords: Malware detection; malware analysis; deep learning; feature extraction; feature selection; cyber security

    1 Introduction

    As cybercriminals continue to refine their tactics and adopt ingenious attack methods, cyberattacks are increasing drastically and remain a serious issue for organizations and Internet users. The annual loss to the world economy due to cybercrime was predicted to exceed $6 trillion by 2021 [1]. Moreover, cyberattacks are taking longer to resolve and cost organizations much more than before, with malware attacks being the most expensive, costing businesses about $2.6 million per attack [2]. Malware is any code that purposely executes malicious payloads on victim machines. It is a popular and prime attack vector as it is simple to deploy automatically and remotely. Malware has been steadily evolving in terms of diversity, stealthiness, and complexity over the past decades, making it undetectable by conventional antimalware approaches. Kaspersky identified 24,610,126 unique malware samples in 2019, which represents an increase of 14 percent over the previous year [3]. Given the enormous profit that malware can yield, cybercriminals have resorted to crafting malware that can cripple an organization's entire network. The sustained evolution and diversity of malware make an efficient malware detection approach critical to protecting organizations.

    A wide variety of malware detection approaches have been developed over the years, increasingly deploying machine learning (ML) techniques. These malware detection studies have resulted in signature-based approaches [4], behavioral approaches [5], heuristic approaches [6], and model-based approaches [7]. These malware detection methods can be classified as static analysis-based [8] or dynamic analysis-based solutions [9]. In static analysis methods, malicious files are analyzed without executing them, whereas the files are required to execute in dynamic analysis methods. The execution of the malicious files must be done in a controlled environment (e.g., a virtual machine or sandbox). Static analysis can analyze malicious files quickly provided that the files are not obfuscated or packed. In contrast, packing techniques do not affect dynamic analysis much, since the files are analyzed while they are being executed. However, newer malware can detect the runtime environment (e.g., a virtual environment) and may simply stop executing. Moreover, malware may only execute in certain circumstances [9], making the collection of malware behavior unattainable.

    Malicious software (malware) detection is defined as the process of using some markers (e.g., features or signatures) to classify software code as either malware or benign. Malware is software code developed by cybercriminals for nefarious purposes such as corrupting the system and data as well as stealing sensitive data. The damage caused by malware is increasing substantially [10]; thus, there is a need to detect malware efficiently and automatically and to remove it quickly from the systems. Malware detection is an NP-complete problem [11] and thus a very challenging and difficult problem to solve. This challenge has sparked research to find effective solutions to the problem of malware detection. Conventional malware detection solutions are based on signatures extracted from malware by reverse engineering malware instances. These approaches are no longer effective as malware writers constantly modify malware signatures. The complexity of malware detection has also increased substantially as the number of malware samples grows at an alarming rate. In addition, malware writers deploy various techniques to evade detection. While various new malware detection techniques have been developed using machine learning and deep learning methods [12-14], there is still room to improve the accuracy of malware detection. The performance of machine learning techniques is also affected by various factors, including the parameters they use, the malware analysis process deployed, and the type of features used. Therefore, the need to develop efficient malware detection continues to be one of the major cybersecurity research quests.

    In this paper, the problem of malware detection is addressed using a subarea of machine learning commonly known as deep learning. Deep learning is evolving quickly and showing remarkable performance results in various application domains. It is also receiving increased attention in malware detection research [15]. The contribution can be summarized as follows:

    • A novel approach based on deep learning for detecting malware, called TRACE, is proposed. Unlike existing approaches that regard malware detection as a classification problem, the proposed approach deploys both regression and classification techniques and uses various feature engineering techniques to significantly reduce the feature space.

    • The proposed approach was evaluated using the Microsoft prediction dataset with 21,736 malware samples composed of 9 malware families.

    • The proposed approach achieved 96.01% accuracy, which is an improvement in the detection rate.

    The rest of the paper is structured in the following manner. First, the related work in malware detection is presented in Section 2. Section 3 presents the proposed malware detection method. Section 4 presents the complexity analysis. The performance evaluation is presented in Section 5. The results and discussion are highlighted in Section 6, followed by concluding remarks in Section 7.

    2 Related Work

    Malware detection is an active research area within academic and commercial environments. Wadkar et al. [12] proposed an approach for detecting malware evolution using an SVM model. Abawajy et al. [13] proposed an ensemble-based approach referred to as hybrid consensus pruning (HCP). HCP uses a consensus function to generate the base classifiers for the final ensemble and is validated using the AUC metric. Manavi et al. [16] developed an approach based on OpCode sequences and an evolutionary algorithm. The OpCodes are extracted from the executables and a weighted graph is built. An evolutionary algorithm is used to build a graph for each malware family and for benign code, which is then compared against when performing classification. Li et al. [17] proposed a CNN-based malware detection approach, in which virtual machine memory snapshot images of running malware and benign software are captured and converted to grayscale images that are used for training and testing the CNN-based model. Zhang et al. [18] present an approach for malware classification based on data flow analysis to extract semantic structure features of the code, with graph convolutional networks (GCNs) for detection. The approach achieved 95.8% detection accuracy. In the approach proposed by Han et al. [19], malware is profiled based on its structure and behavior; several classifiers, namely Random Forest, Decision Tree, CNN, and XGBoost, are then used to classify the input data.

    A method for malware detection based on visualization of the code texture was presented by Hassan et al. [15]. The approach deploys a version of Faster Region-based Convolutional Neural Networks (RCNN) with transfer learning. The malicious code is mapped through visualization technology onto matching images with typical texture features to detect malware. The approach produced an accuracy of 92.8%. Huang et al. [20] developed a deep learning and visualization-based approach for detection of malware based on the Windows API. It uses static features obtained from sample files to produce static visualization images. It then generates dynamic visualization images by using Cuckoo Sandbox to perform behavior analysis. The two images are then merged into hybrid images. Evaluation of a classification model based on static images and a classification model based on hybrid images was performed, showing that the latter performed better than the static approach alone. Marín et al. [21] proposed a malware detection and classification model based on network traffic. The model is based on deep learning and combines a convolutional neural network layer and a recurrent neural network layer in different ways to create different models.

    Table 1: Summary of sample existing techniques for malware detection

    Approaches that exploit the generative adversarial network (GAN) have also been proposed in [10,24,27,28]. A GAN uses a generator and a discriminator. The purpose of the discriminator is to differentiate fake data from actual data. The purpose of the generator is to use a known probability distribution to generate fake data such that the discriminator will not be able to differentiate between the fake and actual data. An adaptive malware detection approach that mimics benign network traffic based on the GAN parameters is presented by Rigaki et al. in [27]. The approach proposed by Kim et al. [10] combines several deep learning approaches and uses the GAN components to generate fake malware from a given probability distribution and to learn the features of malware data. A detector used in the approach learns different features of the malware from the actual and fake data. The proposed model achieves 95.74% average classification accuracy. A new adversarial attack that changes the bytes of the binary to create adversarial examples has also been proposed by Suciu et al. [28]. Tab. 1 summarizes some of the recent malware detection approaches based on machine learning techniques. Although considerable effort is being expended on solving the problem, it is obvious from the results that there is still some room for improvement. Also, existing methods mainly approach malware detection as a classification problem. In contrast, malware detection is viewed as both a regression and a classification problem in the proposed work.

    3 Proposed Malware Detection Approach

    In this section, the proposed approach is discussed in detail. As shown in Fig. 1, the proposed framework has four different phases: (1) Data Pre-Processing phase; (2) Feature Processing phase; (3) Ensemble Classification phase; and (4) Malware Detection phase. Each of these phases is described in the subsequent subsections.

    Figure 1: The proposed malware detection framework, showing its phases and the complete procedure

    Algorithm 1 shows the process of generating a set of filtered features. The input to the algorithm is the raw Microsoft dataset. The first set of steps (Lines 1 to 6) performs basic data cleaning, followed by class balancing (Lines 7 to 8). Feature selection (Lines 12 to 14) and feature correlation (Lines 15 to 17) are then performed. The output, a set of filtered features, is finally obtained. Each of these processes is described in the following subsections.

    3.1 Data Preprocessing Phase

    The preprocessing phase consists of two sub-phases, namely data cleaning and class balancing. In this paper, the Microsoft prediction dataset with 21,736 malware samples was used, where each sample in the dataset belongs to one of 9 malware families (e.g., virus, worm, and trojan). The dataset is full of not-applicable (NA) and NULL entries, whereas each cell of every feature should be noise-free and clean. To achieve this, the dataset was cleaned by replacing the NA and NULL values with a mean value.
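    As a minimal sketch of this cleaning step, the snippet below replaces NA and literal "NULL" entries of each numeric column with that column's mean; the file name train.csv and the column handling are illustrative assumptions, not part of the original paper.

```python
import numpy as np
import pandas as pd

# Load the raw Microsoft prediction dataset (file name assumed for illustration).
df = pd.read_csv("train.csv")

# Treat literal "NULL" strings as missing values as well.
df = df.replace("NULL", np.nan)

# Replace missing entries in numeric columns with the column mean,
# so every cell of every feature is populated before class balancing.
numeric_cols = df.select_dtypes(include=[np.number]).columns
df[numeric_cols] = df[numeric_cols].fillna(df[numeric_cols].mean())
```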

    After cleaning, class balancing was performed. The dataset's feature space explains its relevance, as each feature has its own importance in characterizing the file. The classes should be balanced with respect to the number of cases they contain, and an appropriate ratio of cases per class helps both in data balancing and in prediction. Since a lower number of cases represents the minor class and a greater number of cases represents the major class, it is essential to perform class balancing (i.e., balancing the minor and major classes). In this paper, the SOTU (Split by Oversampling and Train by Under-fitting) technique was used for class balancing [29]. SOTU works by over-sampling the minor class across multiple sets, where training is done on a single set at a time to avoid over-fitting. Each set is created in such a manner that it contains an equal number of major-class and minor-class samples. For the formulation of the sample sets, Eq. (1) is followed.

    where Ftrain denotes the training file, n is the number of sample features, and xa and xb represent the instances of the minor and major classes, respectively. In this work, the number of features is n = 83, and 8 sample sets are generated.
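    The sketch below illustrates one plausible reading of this set construction, assuming a binary major/minor split and the 8 sets mentioned above; it is not the authors' reference implementation. Minor-class rows are re-used (over-sampled) so that every set holds an equal number of major- and minor-class samples, and each set is then trained on separately.

```python
import numpy as np
import pandas as pd

def sotu_split(df: pd.DataFrame, label_col: str, n_sets: int = 8):
    """Build n_sets balanced subsets: each pairs a distinct chunk of the
    major class with an (over-sampled) copy of the minor class of equal size."""
    counts = df[label_col].value_counts()
    major = df[df[label_col] == counts.idxmax()].sample(frac=1, random_state=0)
    minor = df[df[label_col] == counts.idxmin()]

    sets = []
    for idx in np.array_split(np.arange(len(major)), n_sets):
        chunk = major.iloc[idx]
        # Over-sample the minor class to match the size of this major-class chunk.
        minor_part = minor.sample(n=len(chunk), replace=True, random_state=0)
        sets.append(pd.concat([chunk, minor_part]).sample(frac=1, random_state=0))
    return sets
```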

    Algorithm 1: Feature Processing Phase
    Input: Microsoft prediction dataset
    Output: Filtered features
    1: procedure FUNCTION (Feature processing)
    2:   for i = 1 to n do
    3:     if i == 0 or i == 'NULL' then
    4:       i = Mean(n)                          → Data cleaning: removal of 0's or NULLs
    5:     end if
    6:   end for
    7:   Compute S                                → Number of sets according to Eq. (1)
    8:   Set S = 8
    9:   for i = 1 to n do
    10:    Vi = PC                                → PC scores computed using the PCA function
    11:  end for
    12:  for Vi = 1 to n do
    13:    Gini = Σ_{i=1}^{c} pi · log2(pi)       → Gini index computation for feature selection
    14:  end for
    15:  for Vi = 1 to n do
    16:    Compute the different correlation parameters shown in Tab. 3
    17:  end for
    18: end procedure

    3.2 Feature Processing Phase

    Clean and balanced data are passed on to this step. The feature processing phase consists of three steps: (1) feature reduction, (2) feature selection, and (3) feature correlation.

    3.2.1 Feature Reduction

    This step is applied to reduce the dimensionality of the data. Principal Component Analysis (PCA) was used for feature reduction. PCA is a multivariate process that converts the input data (i.e., a series of correlated variables) into an uncorrelated set of variables.

    A principal component (PC) score is produced by multiplying each row with the uniform score of each column. The estimated component scores are the individual values that convey information regarding the variability of the data. Fig. 2 presents the first four PCs. The parameters computed using the PCA function are presented in Tab. 2.
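    A hedged sketch of this reduction step is shown below; the placeholder feature matrix and the choice of 4 components are assumptions, the latter used only to mirror the four PCs plotted in Fig. 2.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Placeholder matrix standing in for the cleaned, balanced dataset
# (83 features, as in the Microsoft prediction dataset).
X = np.random.rand(1000, 83)

X_scaled = StandardScaler().fit_transform(X)

# Project onto uncorrelated principal components; 4 components are kept
# here only because Fig. 2 plots the first four PCs.
pca = PCA(n_components=4)
pc_scores = pca.fit_transform(X_scaled)
print(pca.explained_variance_ratio_)  # variability explained by each PC
```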

    Figure 2: Transformations of PCs

    Table 2: Parameters evaluated using PCA

    3.2.2 Feature Selection

    Feature selection is the process of identifying and selecting a subset of input variables that are most relevant to the target variable. A filter method based on random forest was used to rank the features. To rank the features, the method computes the Gini index score, which reflects data homogeneity. The features are used to partition the data into nodes, and the Gini value is determined for both the root and the leaves. The mean decrease in Gini is obtained by analyzing the differences between these Gini values; this value is highest for the most important feature. After selecting 41 features based on random forest, the features are further analyzed. This process is repeated until every feature has been assessed.
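    A minimal sketch of this ranking step using scikit-learn's impurity-based (mean-decrease-in-Gini) importances is given below; the number of trees and the synthetic data are assumptions for illustration, and the top 41 features are kept to match the paper.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

# Placeholder data standing in for the processed feature matrix and labels.
X = pd.DataFrame(np.random.rand(1000, 83),
                 columns=[f"f{i}" for i in range(83)])
y = np.random.randint(0, 2, size=1000)

# A random forest provides mean-decrease-in-Gini importance per feature.
rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
importances = pd.Series(rf.feature_importances_, index=X.columns)

# Keep the 41 highest-ranked features, as in the paper.
selected = importances.sort_values(ascending=False).head(41).index.tolist()
```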

    3.2.3 Feature Correlation

    After selecting the 41 features, it is important to examine the relationships among all the selected features. For this, the variance between observed values and the average of predicted values is measured using metrics such as the Root Mean Square Error (RMSE) (Eq. (2)), the R2 score (Eq. (3)), the Mean Absolute Error (MAE) (Eq. (4)), and the Relative Absolute Error (RAE) (Eq. (5)). Tab. 3 shows the computed values of RMSE, R2 score, MAE, and RAE.
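    Eqs. (2)-(5) are not reproduced in this version of the text; the sketch below computes these quantities using their standard definitions (which the descriptions above correspond to, though the paper's exact formulas may differ), with placeholder observed and predicted vectors.

```python
import numpy as np
from sklearn.metrics import mean_squared_error, mean_absolute_error, r2_score

# Placeholder vectors standing in for observed and predicted values.
y_true = np.array([3.0, 2.5, 4.0, 5.1])
y_pred = np.array([2.8, 2.7, 3.9, 5.0])

rmse = np.sqrt(mean_squared_error(y_true, y_pred))   # Root Mean Square Error
r2 = r2_score(y_true, y_pred)                         # R2 score
mae = mean_absolute_error(y_true, y_pred)             # Mean Absolute Error
# Relative Absolute Error: absolute error relative to a mean-only baseline.
rae = np.sum(np.abs(y_true - y_pred)) / np.sum(np.abs(y_true - y_true.mean()))
```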

    Table 3: Feature co-relation parameters

    3.3 Ensemble Learning Phase

    To minimize bias and variance, ensemble learning (i.e., a collection of models) was deployed using four base classifiers: Light Gradient Boosting Machine (LightGBM), eXtreme Gradient Boosting (XGBoost), Convolutional Neural Network (CNN), and Long Short-Term Memory (LSTM). LightGBM [30] is a gradient boosting platform that uses tree-based learning algorithms. It allows complete and effective use of the gradient boosting system by first processing the dataset and making it lighter. LightGBM is suitable for this study since it is able to process big datasets, runs fast, and requires less memory. XGBoost [31], a commonly used machine learning technique, also belongs to the gradient boosting family. It originated from the Gradient Boosting Decision Tree (GBDT). It provides good accuracy and relatively fast speed compared to traditional machine learning algorithms. For performance, XGBoost uses a histogram-based algorithm, which places data points into discrete bins according to their characteristics. It is therefore faster than the pre-sorted approach, which must enumerate all the potential split points.

    A CNN [32] processes the raw data directly and outputs the classification or regression result in an end-to-end structure. A CNN's neuron weights are trained via the backpropagation algorithm. Each convolution layer generates a collection of feature maps, where each feature map represents a high-level feature extracted through a specific convolution filter. The pooling layer primarily uses the local correlation principle to perform down-sampling, so that features can be extracted from a more global perspective by the subsequent convolution layer. This substantially lowers the number of weight parameters as well as the cost of training a deep network. The LSTM [32], a commonly used deep neural network approach, is an enhanced Recurrent Neural Network (RNN)-based model. An RNN uses an internal state to represent previous input values, allowing temporal context to be captured. LSTMs are not easy to train on long input sequences; however, compared to RNNs, an LSTM can capture the context of longer time series. In this problem, the census parameters are lengthy in nature, so LSTM was used to handle this length. This phase is divided into two parts: first, the opcode sequence and metadata features of the low-cardinality features are extracted using de-compilation tools and modeled by the CNN and LSTM. Second, the numerical features are extracted and trained by two different models, LightGBM and XGBoost. The features were treated separately because the two feature sets reflect different patterns of information. Each classifier generates a normalized predicted score (NPS), NPS = S1, S2, ..., Sn. The NPS indicates the likelihood of a given file being malware-infected and is produced as illustrated in Algorithm 2.
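    A simplified sketch of this split ensemble is given below; it trains LightGBM and XGBoost on the numerical features and assumes two already-built Keras models (cnn_model, lstm_model) for the low-cardinality and opcode-style inputs, then collects one normalized predicted score per base classifier. The helper names and hyper-parameters are illustrative, not from the paper.

```python
import numpy as np
import lightgbm as lgb
import xgboost as xgb

def base_classifier_scores(X_num_train, y_train, X_num_test,
                           cnn_model, X_seq_test,
                           lstm_model, X_op_test):
    """Return the per-model normalized predicted scores S1..S4 (probabilities in [0, 1])."""
    # Gradient-boosting models on the numerical feature set.
    lgbm = lgb.LGBMClassifier(n_estimators=200).fit(X_num_train, y_train)
    xgbc = xgb.XGBClassifier(n_estimators=200, eval_metric="logloss").fit(X_num_train, y_train)

    s1 = lgbm.predict_proba(X_num_test)[:, 1]      # S1: LightGBM
    s2 = xgbc.predict_proba(X_num_test)[:, 1]      # S2: XGBoost
    s3 = cnn_model.predict(X_seq_test).ravel()     # S3: CNN on low-cardinality features
    s4 = lstm_model.predict(X_op_test).ravel()     # S4: LSTM on opcode-style sequences
    return np.vstack([s1, s2, s3, s4]).T           # one NPS column per model
```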

    Algorithm 2: Vulcanization
    Input: Filtered features
    Output: Normalized predicted score (NPS)
    1: procedure FUNCTION (Vulcanization phase)
    2:   A[m, n] ← Vi                                  → Matrix computation
    3:   Create Dij = 1 if (i, j) ∈ E, 0 otherwise     → Sparse matrix creation using the LGBM baseline
    4:   for i = 1 to n do
    5:     Create jump probability matrix: xi = (Q − pAD((D − (D == 1)) 0))/N
    6:     Set i ← i + 1
    7:   end for
    8:   for i = 1 to n do                             → Compute the scores of all four models
    9:     for j = 1 to 4 do
    10:      Sj = i
    11:    end for
    12:  end for
    13: end procedure

    Note that each classifier is an independent method and produces the classification decision as well as the class probability estimate. The estimators produced by all the classifiers are combined in Eq. (6). In this equation, hl is the classifier that yields the true prediction for k at a data point x.

    3.4 Decision Phase

    This is the last phase of the TRACE framework, where the decision about a given file is made. In this phase, the decision tree algorithm is deployed. The phase uses the normalized predicted scores S1 to S4 generated by each classifier in Algorithm 2. Based on these, the machine's reliability is derived using Eq. (7).

    where d[i] is the rate of difference measured using the NPS vector and the real arrays. Using the attribute importance score and the error rate support, an NPS (i.e., S) is computed as follows:

    A threshold value, fixed by the alpha probability, determines the decision. The complete procedure for the computation of the NPS and the decision making is defined in Algorithm 3. The NPSs of all the models are used to train the decision tree.

    Algorithm 3: Malware Detection Phase
    Input: Normalized predicted score (NPS)
    Output: Detection of malware
    1: procedure FUNCTION (Malware detection phase)
    2:   Compute S5             → With the support of the decision tree and S1, S2, S3, S4
    3:   if S5 < zi then        → Threshold fixed by α probability
    4:     Set xi ← 1           → Malware
    5:   else
    6:     Set xi ← 0           → Benign
    7:   end if
    8:   Set i ← i + 1
    9: end procedure
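    A hedged sketch of this decision phase is shown below: a decision tree is trained on the stacked scores S1..S4, and its predicted score S5 is compared against an α threshold to label each file. The threshold value, the tree depth, and the sign convention of the comparison are illustrative assumptions rather than the paper's exact settings.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def decide(nps_train, y_train, nps_test, alpha=0.5):
    """Train the decision-phase tree on S1..S4 and threshold its score S5."""
    tree = DecisionTreeClassifier(max_depth=5, random_state=0)
    tree.fit(nps_train, y_train)                 # nps_train: shape (n_samples, 4)

    s5 = tree.predict_proba(nps_test)[:, 1]      # per-file score S5
    # Label as malware (1) or benign (0) once S5 crosses the alpha threshold.
    return np.where(s5 >= alpha, 1, 0)
```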

    4 Complexity Analysis

    It is essential to compute the complexity of the algorithm to ensure its validity. Algorithms 1, 2, and 3 illustrate each step followed in the proposed framework. Complexity is calculated by evaluating each step of the algorithms. Here, two different types of complexity are evaluated.

    4.1 Time Complexity

    The number of iterations and the procedure of each iteration determine the time complexity. In Algorithm 1, steps 2 to 6 are comparison statements, with n being the number of comparisons; therefore, these steps take O(n) time. The loop in steps 9-11, the loop in steps 12-14, and the loop in steps 15-17 each take O(n) time in the worst case. Therefore, the overall time complexity of Algorithm 1 is O(n). In Algorithm 2, steps 2 and 3 perform assignments and thus take O(1) time each, while steps 4-12 need O(n) time for the linear matrix formulation. The overall time complexity for Algorithm 2 is O(1) + O(n); therefore, its worst-case time complexity is O(n). In Algorithm 3, a computation step such as step 2 takes only O(1) time, and the comparison steps 3 to 8 take O(n) time, so this algorithm requires O(n) time in total. The overall time complexity for all algorithms is O(n) + O(n) + O(n) = O(n).

    Therefore, the upper bound on the time complexity of the proposed algorithm is O(n).

    4.2 Space Complexity

    The input data to each algorithm requires n space. Each loop requires at most O(n) space, whereas the arithmetic operations need O(1) space. The overall space complexity (SC) of the algorithm is therefore SC = O(n) + O(1) = O(n).

    Therefore, the upper bound on the space complexity of the proposed algorithm is O(n).

    5 Performance Evaluation

    In this section, the performance evaluation of the proposed approach is presented and the different algorithms (CNN, LSTM, LightGBM, XGBoost) are compared. The first subsection discusses the experimental setup, followed by the dataset used. Then, the impact and comparison of the approaches used are discussed. Finally, the analysis of the proposed framework TRACE is presented in the subsequent section.

    The CNN is designed with 64 layers: input layers, convolution layers, fully connected layers, a sequence layer, activation layers, pooling layers, combination layers, and output layers. The input to the CNN is 27 low-cardinality features. The convolution layer is configured to accept this input. At the activation layer, the ReLU activation function is used with the same parameters as the convolution layer so that accurate deep information is not lost. The key benefit of using ReLU over other kernel functions is that it does not activate all the neurons simultaneously: a neuron is deactivated when the output of the linear transformation is less than 0, so negative input values yield zero and the neuron is not triggered. As only a certain number of hidden neurons are activated, the ReLU function is much more computationally efficient than the sigmoid or tanh functions. The sigmoid activation function was used at the last layer, as only a single malware-likelihood score is expected from the model. The maximum pooling method is adopted for the pooling layer, with parameters 28*28*32 and the same filter size. For each filtered element, the highest value is taken and placed in a new grid; this simply takes the most significant features and compresses them into one vector.
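    A minimal Keras sketch consistent with this description, though far shallower than the 64-layer network described above, is shown below; every layer size other than the 27-feature input and the single sigmoid output is an illustrative assumption.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

# Minimal 1-D CNN over the 27 low-cardinality features (depth and filter
# counts are illustrative, not the paper's full 64-layer design).
cnn_model = models.Sequential([
    layers.Input(shape=(27, 1)),
    layers.Conv1D(32, kernel_size=3, padding="same", activation="relu"),
    layers.MaxPooling1D(pool_size=2),
    layers.Conv1D(64, kernel_size=3, padding="same", activation="relu"),
    layers.GlobalMaxPooling1D(),
    layers.Dense(64, activation="relu"),
    layers.Dense(1, activation="sigmoid"),   # single malware-likelihood score
])
cnn_model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
```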

    The LSTM network starts with two key layers, namely a sequence input layer and an LSTM layer. The sequence input layer feeds sequence or time-series information into the network, and the LSTM layer learns the long-term dependencies among the time steps of the sequence data. For class label prediction, the LSTM network has a fully connected layer, a softmax layer, and an output classification layer. The input size was set to 41 sequences (the size of the input data). The network architecture consists of a bidirectional LSTM layer with 100 hidden units that yields the last element of the sequence. The final network predicts the 9 classes by including a fully connected layer of size 9 followed by a softmax layer and a classification layer. For training with the LSTM, data is loaded in the same manner as for the CNN. The LSTM network partitions the training sample dataset into mini-batches and applies padding so that all the sequences in a batch have the same length. The trainNetwork function is used to train the LSTM network. The testing is done in the same way as the training, except that the data differs. The NPS rating is generated after testing.
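    Using the Python/TensorFlow stack named in the experimental setup, an equivalent bidirectional-LSTM sketch might look as follows; this is a hedged approximation of the description above, not the authors' exact network, and the input encoding is assumed.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

# Bidirectional LSTM over sequences of length 41, with 100 hidden units,
# a dense layer of size 9 (one per malware family), and a softmax output.
lstm_model = models.Sequential([
    layers.Input(shape=(41, 1)),
    layers.Bidirectional(layers.LSTM(100)),   # yields the last element of the sequence
    layers.Dense(9, activation="softmax"),
])
lstm_model.compile(optimizer="adam",
                   loss="sparse_categorical_crossentropy",
                   metrics=["accuracy"])
```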

    5.1 Experimental Setup

    A Windows 64-bit operating system with 8 GB of RAM was used. The Python programming language was used together with Python libraries and tools such as TensorFlow, Docker, and Anaconda. The hyper-parameters considered for all the models are shown in Tab. 4.

    Table 4: Various hyper parameters during the experimentation of different models

    5.2 Dataset

    To perform the experiments, the publicly available Microsoft Malware Prediction dataset was used. This dataset was created for a Research Prediction Competition at Kaggle [33]. It contains 83 features in total and includes noise and imbalanced data. The dataset consists of three files: sample submission, test, and train. The proposed scheme focused on balancing the data and making it ready for the experiments. Around 21,736 malware samples from 9 different malware families were used.

    5.3 Performance Metric

    As performance metrics to validate the efficiency of the proposed framework TRACE, the following parameters were used to compare the results of all the deep learning algorithms. In the equations, TP denotes the true positives, FP the false positives, TN the true negatives, and FN the false negatives. The accuracy metric (Eq. (11)) quantifies the ratio of results correctly judged by the model.

    AUC (Area Under the Curve) was also used to determine which of the models predicts the classes best. AUC is based on the false positive rate (FPR) and the true positive rate (TPR). The true positive rate, also known as recall, quantifies the number of valid classifications made by the classifier out of all observations that are actually positive.

    The false positive rate (FPR) measures the ratio of false positives within the negative samples and is computed as follows:

    Precision quantifies the number of positive class predictions that actually belong to the positive class and is computed as follows:

    The F1 score (Eq. (15)) is defined as the weighted average of recall (Eq. (12)) and precision (Eq. (14)), thus balancing any concerns associated with the recall and precision results.

    where β weights recall vs. precision.
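    Eqs. (11)-(15) are not reproduced in this version of the text; the standard definitions of these metrics, to which the descriptions above correspond (with β = 1 giving the F1 score), are:

    Accuracy = (TP + TN) / (TP + TN + FP + FN)
    Recall (TPR) = TP / (TP + FN)
    FPR = FP / (FP + TN)
    Precision = TP / (TP + FP)
    F_β = ((1 + β²) · Precision · Recall) / (β² · Precision + Recall)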

    6 Results and Discussions

    It is important to look at the performance of the deep learning models when working with a large dataset like the Microsoft Malware Prediction Competition dataset. As noted earlier, the dataset was transformed into a sparse matrix (SM) so that it could be used by the deep learning models as well as the decision tree model. The SM was paired with an LGBM baseline model in this experiment. For memory management purposes, the data was partitioned into smaller chunks of 120,000 rows (the sample sets produced by SOTU). Tab. 5 shows some of the performance data for LSTM and CNN. The confusion matrix was used to measure the performance of the LSTM, LightGBM, XGBoost, and CNN models. The results of all the models are shown in Tab. 6 in terms of the true positive rate (TPR), false positive rate (FPR), true negative rate (TNR), and false negative rate (FNR). In terms of TPR, LSTM performs better than the other models.

    Table 5: Various evaluation parameters during training and testing of CNN and LSTM


    Table 6: Confusion matrix of all the models

    Tab. 7 shows the performance of the LSTM, LightGBM, XGBoost, and CNN models in terms of precision, recall, and F1-score computed during the training and testing of the models. To evaluate the efficiency and effectiveness of the proposed framework TRACE, a discrimination probability computed in the last phase of the framework was used. This probability is computed by combining the results of all four models (CNN, LSTM, LightGBM, XGBoost), i.e., S1, S2, S3, and S4, and indicates the likelihood of malware having infected the machine. Our framework TRACE took 220 s, and its estimation of malware significantly outperformed the other methods.

    Table 7: Resulting parameters during training and testing of models

    Next, results are presented comparing TRACE against the other models with respect to the F-score (Fig. 3), True Positive Rate (TPR) (Fig. 4), and AUC (Area Under the Curve) (Fig. 5). Fig. 3 shows the malware detection performance of TRACE (i.e., Decision) compared to the others in terms of the F-score metric. From the figure, it can be observed that TRACE (i.e., Decision in the graph) outperforms the other approaches substantially. This is because all the other models contribute significantly to the malware detection results of the TRACE model. Fig. 4 shows the malware detection performance of TRACE (i.e., Decision) with respect to TPR compared to the other models; TRACE performs best in terms of TPR as it efficiently combines the results of the other models. Fig. 5 shows the malware detection performance of TRACE (i.e., Decision) with respect to AUC compared to the other models; TRACE performs best in terms of AUC for the same reason. From this graph, it can be observed that TRACE (i.e., Decision) achieves a precision of 96.01%, whereas LSTM and CNN achieve slightly under 80%, and LightGBM and XGBoost achieve about 80%.

    Figure 3: Malware detection performance of TRACE (i.e., decision) with respect to F-score

    Figure 4: Malware detection performance of TRACE (i.e., decision) with respect to TPR

    Figure 5: Malware detection performance of TRACE (i.e., decision) with respect to AUC

    7 Conclusion

    Malware is at the root of many cyber-security threats, including national security threats. Cyber-attacks are estimated to be the most dangerous security issue in the world today; a single cyber-security breach can sometimes cost more than many natural disasters. The race between cyber-attackers and anti-malware tool developers is never-ending. Therefore, researchers must put sustained pressure on cyber criminals by ensuring that malware is detected as early as possible. To this end, a malware detection algorithm called TRACE was proposed, which combines malware analysis, feature extraction, and deep learning architectures. Unlike existing approaches that regard detection as a classification problem, the proposed approach deploys both regression and classification techniques and uses various feature engineering techniques to significantly reduce the feature space. Extensive performance evaluation indicates that the proposed mechanism maintains outstanding classification capability. The proposed method achieved 96.01% precision, outperforming the other models. In addition, the efficiency of the proposed model was measured to validate the efficacy and sustainability of various deep learning approaches. On average, it took only 0.76 s for TRACE to identify a fresh file.

    Acknowledgement: The author extends his appreciation to the Deputyship for Research and Innovation, Ministry of Education in Saudi Arabia for funding this research work through project number 1385. The help of Prof. Jemal Abawajy is greatly appreciated.

    Funding Statement: This research was funded by the Deputyship for Research and Innovation, Ministry of Education in Saudi Arabia through Project Number 1385.

    Conflicts of Interest:The author declares that he has no conflicts of interest to report regarding the present study.
