
    Two-Dimensional Projection-Based Wireless Intrusion Classification Using Lightweight EfficientNet

Computers, Materials & Continua, 2022, Issue 9

Muhamad Erza Aminanto, Ibnu Rifqi Purbomukti, Harry Chandra and Kwangjo Kim

1School of Strategic and Global Studies, Universitas Indonesia, Depok, 16424, Indonesia

2National Institute of Information and Communications Technology (NICT), Koganei, 184-8795, Japan

3Department of Electrical Engineering and Information Technology, Universitas Gadjah Mada, Yogyakarta, 55281, Indonesia

4School of Computing, Korea Advanced Institute of Science and Technology (KAIST), Daejeon, 34141, Korea

Abstract: Internet of Things (IoT) networks rely on wireless communication protocols, which adversaries can exploit. Impersonation attacks, injection attacks, and flooding are examples of attacks found in Wi-Fi networks. An Intrusion Detection System (IDS) is one solution for distinguishing these attacks from benign traffic. Deep learning techniques have been used intensively to classify such attacks; however, a key difficulty in applying deep learning models is projecting the data, notably tabular data, into an image. This study proposes a novel projection of wireless network attack data into a grid-based image that is fed to a Convolutional Neural Network (CNN) model, EfficientNet. We define a particular sequence for placing the attribute values in a grid that is then captured as an image. By combining the most important subset of attributes with EfficientNet, we aim for an accurate and lightweight IDS module that can be deployed in IoT networks. We evaluate the proposed model on a Wi-Fi attack dataset, the AWID2 dataset, and achieve the best performance with a 99.91% F1 score and a 0.11% false-positive rate. In addition, the proposed model achieves results comparable to other statistical machine learning models, showing that it successfully exploits the spatial information of tabular data while maintaining detection accuracy.

Keywords: Intrusion detection; impersonation attack; convolutional neural network; anomaly detection

    1 Introduction

Nowadays, the IoT is developing rapidly, and the Internet has become a primary need for almost everyone. People stay connected to the Internet through their smartphones, laptops, or personal computers, and adults, children, and older people alike are inseparable from their devices. Recent developments in IoT networks have enabled the rise of smart environments [1]. Information pooled by IoT sensors can be used to manage the assets, revenues, and resources of smart city applications with increased performance and efficiency [2].

IoT is commonly applied in domains such as smart grids, smart cities, and smart homes [3]. Jalal et al. described how smart homes can assist daily human life [4], while Asaad et al. reviewed the benefits of deploying smart grids at a national scale [5]. Despite this prosperity, IoT networks expose a vulnerable surface that adversaries can exploit: the wireless communication channel [1]. When people are connected to the Internet, they are exposed to various malicious cyber attacks. Attacks on Wi-Fi include impersonation attacks, injection attacks, and flooding [6].

An impersonation attack is a form of attack in which an adversary poses as a trusted party to trick the victim [7]. Usually, the adversary collects someone's data through the Internet and uses it to convince the victim that the impersonator is the real person. An injection attack inserts malicious code into the network to steal the victims' databases [8]; well-known examples are SQL Injection, Cross-Site Scripting (XSS), and SMTP/IMAP Command Injection. Finally, a flooding attack occurs when adversaries send massive amounts of traffic into the victim's network [9], with the main goal of creating congestion that hinders legitimate traffic. Because of these attacks, a defensive countermeasure is needed: the IDS.

IDSs can be classified into two classes: signature-based and anomaly-based [10,11]. A signature-based IDS is a classic system that uses a database of attack signatures as its detection tool, whereas an anomaly-based IDS monitors inbound traffic to detect malicious actions. By leveraging a machine learning model, an anomaly-based IDS can detect novel attacks, which a firewall cannot do. However, current IDSs still face a problem: recent publications show that they struggle to handle complex, high-dimensional datasets [12]. For that reason, we propose a lightweight machine learning framework for IDS based on a two-dimensional projection. We train our system on the AWID2 dataset, a Wi-Fi intrusion dataset consisting of four classes: normal, impersonation, injection, and flooding attacks.

This paper proposes a two-dimensional projection-based IDS that utilizes a lightweight CNN, EfficientNet [13], for the classification process. Our system consists of three main parts: 1. dataset preparation, 2. data preprocessing with feature selection using Random Forest, and 3. image classification using EfficientNet. To the best of our knowledge, this is the first deep-learning-based IDS for Wi-Fi networks that combines data-to-image projection with EfficientNet. Compared to previous works, our main contributions are listed below:

1. We provide a data-to-image conversion process that places features, ordered by their importance, following the zigzag scan pattern used in JPEG image compression.

2. We convert the IDS data into a graphical number format that represents the attribute value of each feature.

3. Our framework explores feature selection using random forest models with cross-validation.

4. We propose a lightweight CNN-based IDS using the EfficientNet-B0 architecture to handle complex datasets.

5. Our proposed system identifies the spatial correlation between features through grid-based images.

6. We provide a performance analysis highlighting our model's F1 score and accuracy.

The remainder of this paper is organized as follows: Section 2 reviews related work on deep learning techniques for IDS. The proposed model and data processing are explained in Section 3. Section 4 presents the experimental results for each module of the proposed model, while Section 5 compares the proposed model with other machine learning models. Section 6 closes the paper with conclusions and outlines future research directions.

    2 Related Works

IDS research has been conducted continuously for many years [14], ranging from approaches that rely on lists of known attack signatures to those that leverage the latest machine learning methods [15,16]. Smys et al. [17] proposed a hybrid convolutional neural network model for IDS suitable for many IoT applications. Khan et al. [18] introduced an efficient and intelligent IDS that leverages a Convolutional Autoencoder (Conv-AE) from Spark MLlib for misuse detection. Another work, by Li et al. [19], introduced an AE-based IDS built on random forest feature selection. However, these works have faced difficulty in handling the traffic of massive IDS datasets.

SwiftIDS [20] tried to address the scalability issue by using a parallel intrusion detection mechanism to analyze network traffic. The system showed encouraging performance but still required a long processing time. Another approach, by Rahman et al., improved the parallel IDS model by applying side-by-side feature selection followed by a single multilayer perceptron classifier [21]. Finally, a hybrid scheme that combines a deep Stacked Autoencoder (SAE) with machine learning methods was introduced by Mighan et al. [22]. These works reveal the relationship between the number of features, processing time, and accuracy, which are three main variables in an IDS model.

Seonhee et al. [23] proposed a CNN-based IDS using a malware dataset, converting each malware file into an 8-bit grayscale image. Li et al. [24] also adopted CNN-based image classification, but with a different dataset, NSL-KDD. Unlike both works [23,24], our work adopts and improves the text-to-image conversion method proposed by Al-Turaiki et al. [25], which converts dataset attributes from text into grayscale values ranging between 0 and 255 [26]; each value forms a small square that is placed sequentially in the image. We improve their method [25] by writing the attribute values as numbers and placing each number according to the importance of its feature. We place the numbers following the zigzag sequence of the JPEG compression technique, starting from the upper-left corner, and we leverage EfficientNet-B0, a CNN architecture, to classify the generated images.

    3 Methodology

The proposed methodology in this study is divided into four main parts, namely data preparation, data preprocessing, modeling, and evaluation. In data preparation, we use several techniques to obtain the train, validation, and test sets. In data preprocessing, the tabular data is converted into grid-based image data. We then perform modeling to classify the images and, finally, evaluate the performance of the trained model.

    3.1 Data Preparation

We used the normalized AWID2 dataset, which is distributed as separate Train and Test sets. First, we combined these two parts into a single dataset to avoid bias. Then, we created train, validation, and test sets from this combined dataset. The entire data preparation process can be seen in Fig. 1.

Our combined dataset has 2,371,218 samples with 154 feature columns plus one target column. The dataset contains one normal (0) class and three attack classes: impersonation (1), injection (2), and flooding (3). More than 90% of the samples belong to the normal (0) class, which causes an imbalanced class distribution.

Figure 1: Data preparation process

We used an undersampling technique to tackle the imbalance in our dataset and to reduce the number of samples, thereby lowering resource consumption. We kept 40,000 samples, or 1.7% of the total, with 10,000 samples per class. From this process, we obtained a new dataset, the Balanced Dataset.

We divided the Balanced Dataset into train, validation, and test sets with a ratio of 8:1:1, giving 32,000, 4,000, and 4,000 samples for train, validation, and test, respectively. We also created an imbalanced test set of 20,000 samples with the same class distribution as our combined dataset, as shown in Fig. 2. In total, our test data comprises 24,000 samples, or about 1% of the combined dataset.

Figure 2: Imbalanced test set distribution
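
As a rough illustration of this preparation step, the following Python sketch uses pandas and scikit-learn; the DataFrame df, the column name class_label, and the unique-index assumption are ours, not the paper's.

```python
# Minimal sketch of the data-preparation step (undersampling + 8:1:1 split),
# assuming the merged AWID2 data is in a pandas DataFrame `df` with a unique
# index and an integer target column "class_label" (0 = normal, 1-3 = attacks).
import pandas as pd
from sklearn.model_selection import train_test_split

def prepare_sets(df: pd.DataFrame, per_class: int = 10_000, seed: int = 42):
    # Undersample to 10,000 samples per class -> 40,000-sample Balanced Dataset.
    balanced = df.groupby("class_label").sample(n=per_class, random_state=seed)

    # 8:1:1 split into train / validation / balanced test, stratified by class.
    train, rest = train_test_split(
        balanced, test_size=0.2, stratify=balanced["class_label"], random_state=seed)
    val, test = train_test_split(
        rest, test_size=0.5, stratify=rest["class_label"], random_state=seed)

    # Extra imbalanced test set: 20,000 samples drawn from the remaining data,
    # keeping the original (skewed) class distribution, as in Fig. 2.
    imbalanced_test = df.drop(balanced.index).sample(n=20_000, random_state=seed)
    return train, val, test, imbalanced_test
```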

    3.2 Data Preprocessing

We perform data-to-image projection by programmatically writing feature values onto an image. Each data instance is turned into a single image. The writing of the feature values follows a pattern in which each value fills one grid cell, and the number of cells in an image is a perfect square, giving an n × n grid. Due to the nature of the method, we do not use all features in the data; therefore, the first step of our method is feature selection.

For feature selection, we sort the features based on their importance to the classification. We take the top-k features from this ranking, where k is the number of features to be used. The ranking is obtained by averaging the feature importance scores of five random forest models, which we trained using 5-fold cross-validation on our train set. The whole process can be seen in Fig. 3.

Figure 3: Data preprocessing process
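
The ranking step could be sketched as follows; the number of trees (100) and the use of StratifiedKFold are our assumptions, and X, y are assumed to be NumPy arrays of the training features and labels.

```python
# Sketch of the feature-ranking step: one random forest per fold of a 5-fold
# split, feature importances averaged over the five models.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import StratifiedKFold

def rank_features(X, y, n_splits: int = 5, seed: int = 42):
    """Average feature importances over one random forest per CV fold."""
    skf = StratifiedKFold(n_splits=n_splits, shuffle=True, random_state=seed)
    importances = []
    for i, (fold_idx, _) in enumerate(skf.split(X, y)):
        rf = RandomForestClassifier(n_estimators=100, random_state=seed + i)
        rf.fit(X[fold_idx], y[fold_idx])
        importances.append(rf.feature_importances_)
    mean_importance = np.mean(importances, axis=0)   # scores sum to ~1
    ranking = np.argsort(mean_importance)[::-1]      # most important first
    return ranking, mean_importance

# ranking[:k] gives the feature indices used for a k-cell grid.
```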

The pattern used in our method mimics the zigzag scan pattern of the interleaving code in JPEG [27], as shown in Fig. 4. The highest-ranked feature is placed in the upper-left cell of the grid, each subsequent feature follows the zigzag pattern so that the position of each feature corresponds to its rank as shown in Fig. 4, and the lowest-ranked feature ends up in the lower-right corner.

Figure 4: Pattern example for k = 25 (left) and feature placement based on ranking (right)

The images we produce have a resolution of 224 × 224 with RGB channels for all k values. We write each feature value in the Hershey Simplex font in white on a black background; a sample of the Hershey font can be seen in Fig. 5. To maintain consistency, each feature value is written with three decimal places. The whole data-to-image projection process was implemented using Python and OpenCV.

Figure 5: Numeric samples of the Hershey Simplex font
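
A minimal sketch of this projection is given below; the font-scale heuristic, the text offsets within each cell, and the exact alternating direction of the zigzag traversal are illustrative choices rather than the authors' exact settings.

```python
# Sketch of the data-to-image projection: `values` holds the top-k feature
# values of one sample, already ordered from most to least important.
import math
import cv2
import numpy as np

def zigzag_coords(n: int):
    """(row, col) cells of an n x n grid in JPEG-style zigzag order."""
    cells = [(r, c) for r in range(n) for c in range(n)]
    # Walk the anti-diagonals from the upper-left corner, alternating direction.
    return sorted(cells, key=lambda rc: (rc[0] + rc[1],
                                         rc[0] if (rc[0] + rc[1]) % 2 else -rc[0]))

def project_to_image(values, img_size: int = 224):
    """Write rank-ordered feature values of one sample into a grid image."""
    n = math.isqrt(len(values))                 # k is assumed a perfect square
    cell = img_size // n                        # pixels per grid cell
    img = np.zeros((img_size, img_size, 3), dtype=np.uint8)  # black RGB canvas
    for (row, col), value in zip(zigzag_coords(n), values):
        cv2.putText(img, f"{value:.3f}",        # three decimal places, white text
                    (col * cell + 2, row * cell + cell - 4),  # text baseline
                    cv2.FONT_HERSHEY_SIMPLEX,
                    cell / 100.0,               # shrink the font as cells shrink
                    (255, 255, 255), 1)
    return img

# Example: a 5 x 5 grid (k = 25) from dummy values in [0, 1].
# cv2.imwrite("sample.png", project_to_image(np.random.rand(25)))
```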

    3.3 Image Classification

We use a CNN to classify the images generated by this method, specifically the EfficientNet-B0 architecture [13], a lightweight CNN model that performs well on the ImageNet dataset while maintaining model efficiency. This architecture is suitable for deployment on low-end devices, which are preferred in wireless networks.

For our generated 224 × 224 RGB images, EfficientNet-B0 has 4,054,695 parameters in total, or a 16 MB file in H5 format, the smallest among the architectures in the EfficientNet family. We used the TensorFlow framework running on an RTX 2070 laptop with 32 GB of memory for modeling.

We trained our model for ten epochs using the Stochastic Gradient Descent (SGD) optimization algorithm with a learning rate of 0.05 and a batch size of 32, rescaling the pixel values to the range 0 to 1. Three models were trained for each k value, giving 33 trained models across the 11 values of k; we did this to obtain more robust results for each value of k. As a reminder, each model was trained on 32,000 images with 4,000 images as the validation set.
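
A training setup along these lines could look as follows in TensorFlow/Keras; the directory layout, the plain softmax head, and training from scratch (weights=None) are assumptions, while the optimizer, learning rate, batch size, epoch count, and pixel rescaling follow the text.

```python
# Sketch of the EfficientNet-B0 classifier, assuming the projected images are
# stored in train_dir/ and val_dir/ with one sub-folder per class.
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import EfficientNetB0

def build_model(num_classes: int = 4, img_size: int = 224):
    base = EfficientNetB0(include_top=False, weights=None,
                          input_shape=(img_size, img_size, 3), pooling="avg")
    outputs = layers.Dense(num_classes, activation="softmax")(base.output)
    model = models.Model(base.input, outputs)
    model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=0.05),
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

train_ds = tf.keras.utils.image_dataset_from_directory(
    "train_dir", image_size=(224, 224), batch_size=32)
val_ds = tf.keras.utils.image_dataset_from_directory(
    "val_dir", image_size=(224, 224), batch_size=32)

# Rescale pixel values into [0, 1], as described in the text.
rescale = layers.Rescaling(1.0 / 255)
train_ds = train_ds.map(lambda x, y: (rescale(x), y))
val_ds = val_ds.map(lambda x, y: (rescale(x), y))

model = build_model()
model.fit(train_ds, validation_data=val_ds, epochs=10)
```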

    4 Evaluation

    4.1 Evaluation Metrics

For model evaluation, we used four metrics: accuracy, F1 score, False Alarm Rate (FAR), and False Negative Rate (FNR). Accuracy shows how well the model guesses the class, but it fails to provide good insight on an imbalanced dataset. Therefore, we used the F1 score as our primary metric for the imbalanced dataset: it combines precision and recall, giving better insight into how well the model predicts on imbalanced data. Since there are four classes in the dataset, we conducted a multiclass classification task. Precision, recall, and F1 score were originally defined for binary classification, so we used their weighted versions for our multiclass classification.

From a different perspective, our task can also be viewed as binary classification, in which the positive class (P) covers the attack classes (impersonation, injection, and flooding) and the negative class (N) is the normal class. True Positives (TP) then count attack samples detected correctly, False Negatives (FN) count attack samples that were not detected, False Positives (FP) count normal samples detected as attacks (false alarms), and True Negatives (TN) count normal samples recognized correctly. The conversion from the multiclass confusion matrix to the binary confusion matrix can be seen in Fig. 6.

Figure 6: Confusion matrix for multiclass to binary classification
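
The metric computation described above could be sketched as follows; y_true and y_pred are assumed to be integer label arrays with 0 for normal and 1-3 for the attack classes.

```python
# Sketch of the evaluation metrics: weighted precision/recall/F1 for the
# 4-class task, plus FAR and FNR after collapsing the attack classes into a
# single positive class.
import numpy as np
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

def evaluate(y_true, y_pred):
    acc = accuracy_score(y_true, y_pred)
    prec, rec, f1, _ = precision_recall_fscore_support(
        y_true, y_pred, average="weighted", zero_division=0)

    # Binary view: positive = any attack class, negative = normal (label 0).
    attack_true = np.asarray(y_true) != 0
    attack_pred = np.asarray(y_pred) != 0
    tp = np.sum(attack_true & attack_pred)
    fn = np.sum(attack_true & ~attack_pred)
    fp = np.sum(~attack_true & attack_pred)
    tn = np.sum(~attack_true & ~attack_pred)

    far = fp / (fp + tn)   # false alarm rate: normal traffic flagged as attack
    fnr = fn / (fn + tp)   # false negative rate: attacks that were missed
    return {"accuracy": acc, "precision": prec, "recall": rec,
            "f1": f1, "far": far, "fnr": fnr}
```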

    4.2 Feature Selection

As mentioned before, we used feature ranking to decide which features to use for image projection. The complete feature ranking can be seen in Tab. 1. The feature importance scores sum to 1, and the maximum score is 0.0708, belonging to feature 141. About 43% of the features have an importance score close to zero, which suggests that those features may be just noise.

    Table 1: Feature rank


We plotted the cumulative feature importance score, as shown in Fig. 7. The cumulative score reaches 1 using only 88 features, which confirms that the remaining 66 features are probably just noise. Based on this ranking, we take the top-k features used for image projection. We used k = 25 as the baseline and then decreased and increased the value. In total, we used 11 values of k: 4, 9, 16, 25, 36, 49, 64, 81, 100, 121, and 144.

Figure 7: Cumulative feature importance score
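
Building the top-k subsets from the averaged ranking could look like the sketch below, which reuses the ranking and mean_importance arrays from the earlier feature-ranking sketch (illustrative names, not the paper's).

```python
# Cumulative importance and top-k feature subsets for the 11 grid sizes used.
import numpy as np

K_VALUES = [4, 9, 16, 25, 36, 49, 64, 81, 100, 121, 144]

def cumulative_importance(mean_importance, ranking):
    """Cumulative importance as features are added from best to worst rank."""
    return np.cumsum(np.asarray(mean_importance)[np.asarray(ranking)])

def top_k_subsets(ranking):
    """Feature-index subsets for every grid size used in the experiments."""
    return {k: list(ranking[:k]) for k in K_VALUES}
```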

    4.3 Image Projection Result

A sample image for each value of k can be seen in Fig. 8. It is essential to mention that writing the features has drawbacks because we use the same resolution for every value of k. First, the written feature values become smaller as k increases. Second, the written values become deformed as k increases, as shown in Fig. 9, where k is the number of attributes; this happens because the number of pixels per grid cell decreases as k increases. Third, the written values start to be hard to read at k = 49, and at k = 144 the writing looks like a straight line.

Figure 8: Image projection samples for each value of k

Figure 9: Writing in a grid for each value of k (left) and number of pixels per grid (right)

    4.4 Test Result

Our experiment consists of training three models for each top-k feature ranking. We ran two tests on our models: the first on the balanced test set and the second on the imbalanced test set. We compared our models using the predefined metrics and took the average score for each value of k (three models per value of k). The results can be seen in Tab. 2.

We highlight the highest value in each column of the table. The model that uses 49 features performs better on the balanced test set, while the model that uses 36 features performs better on the imbalanced test set. The F1 score and accuracy start to decrease at k = 100. We argue that the k values producing the best performance range from 25 to 100.

    Table 2: Test result

The results show a decrease in performance on the imbalanced test set compared to the balanced one. Although we cannot quantify the significance of this decrease, we assume it is due to the larger number of samples in the imbalanced test set. It may also happen because we only use about 1% of the dataset, which may not capture all the information present in the imbalanced data.

    4.5 False Alarm Rate and False Negative Rate

We plotted the False Alarm Rate (FAR) and False Negative Rate (FNR) for each value of k (see Fig. 10), where k is the number of attributes. As previously mentioned, we obtained these values by averaging the combined test results from the balanced and imbalanced test sets under the binary perspective. The highest FAR and FNR were 11.50% at k = 4 and 2.43% at k = 9, respectively. Configurations with k < 25 have poor FAR, FNR, or both, while the FAR starts to increase at k > 100. These results strengthen our statement that the range of k values producing the best performance is 25 to 100.

    4.6 Effects of Feature Importance and Writing Deformation

The nature of our method means that some selected features are not very important. We therefore delve deeper into the effect of feature importance and writing deformation on our method, using the average F1 score over the balanced and imbalanced test sets, the cumulative feature importance score, and the number of pixels per grid cell (see Fig. 11), where k is the number of attributes. The F1 score is high and stabilizes once the cumulative feature importance score reaches 0.89, that is, at k = 25. We argue that a threshold value of k is needed in our method to obtain the best performance.

Furthermore, we note that at k = 9 the performance already appears to be at its best, even though the cumulative feature importance score is only 0.45. We assume this may be an anomaly in which the selected features happen to provide sufficient information. After analyzing FAR and FNR at k = 9, however, we found that the false negative rate is relatively high while the false alarm rate is low. This indicates that at k = 9 our trained models actually perform poorly despite their high F1 score.

Figure 10: FAR and FNR

Figure 11: F1 score vs. cumulative feature importance score (left) and F1 score vs. pixels per grid (right)

We also noticed that at k = 121 the performance slowly deteriorates. We argue that this might be because the additional features introduce noise that degrades performance and causes overfitting. In addition, we believe this is also due to the writing deformation in the image projection: as shown in Fig. 9, reducing the pixels per grid cell makes the text in the image unreadable.

We previously mentioned that the writing starts to become unreadable to humans at k = 49. However, this does not seem to apply to the models we trained, which only start struggling to classify at k = 121. We conclude that adding more features creates noise by deforming the writing rather than enriching the information. Furthermore, we argue that k = 100 is an upper limit for the number of features to add.

    5 Comparison with State-of-the-Art Methods

We compared our method with several statistical models trained on the tabular data, using the same train and test sets (balanced and imbalanced). We trained Random Forest (RF) [28], Support Vector Machine (SVM) with an RBF kernel [29], and XGBoost [30] three times each with the same k values as our CNN model. Random Forest was chosen because we used it as our feature ranking algorithm, while SVM and XGBoost were selected to provide a better understanding of the performance of our CNN model. The results can be seen in Tab. 3, in which we highlight the best k values and the best-performing models.
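
A sketch of this baseline comparison is shown below; the hyperparameters are library defaults, since the paper does not list the exact settings, and X_train, X_test are assumed to be NumPy feature matrices with top_k holding the selected feature indices.

```python
# Sketch of the baseline comparison: the same top-k tabular features fed to
# Random Forest, SVM (RBF kernel), and XGBoost.
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from xgboost import XGBClassifier

def train_baselines(X_train, y_train, X_test, y_test, top_k):
    results = {}
    baselines = {
        "RF": RandomForestClassifier(random_state=42),
        "SVM": SVC(kernel="rbf"),
        "XGBoost": XGBClassifier(),
    }
    for name, clf in baselines.items():
        clf.fit(X_train[:, top_k], y_train)
        results[name] = clf.score(X_test[:, top_k], y_test)  # accuracy
    return results
```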

    Table 3: Comparison with statistical models

The best-performing combination is XGBoost at k values of 81, 100, 121, and 144. Random Forest performs best on the most k values (4, 9, 16, 25, 36, and 49). Averaging the F1 score and accuracy across k, the ranking is 1) Random Forest, 2) XGBoost, 3) our CNN model, and 4) SVM. The performance difference between our model and Random Forest or XGBoost is very small, between 0.35% and 0.40%. From these results alone, we argue that our model is comparable to the statistical models.

However, there is a significant gap in training time between our CNN and the statistical models. Training a statistical model took less than 10 minutes, while training our CNN model took approximately an hour. Despite its success in classifying tabular datasets through image projection, the CNN approach has a significant drawback in the time required to train the model.

    6 Conclusion

This study proposes a novel method for projecting tabular data into grid-based images that are fed to convolutional neural network classifiers. We built the IDS module on EfficientNet to reduce the computational load and suit IoT networks. We project the tabular wireless attack data into images by placing the attributes in a matrix following a zigzag sequence, where each matrix cell holds an attribute value from the dataset. Using the most essential attributes selected through feature ranking together with the EfficientNet classifier, we achieved the best performance with a 99.91% F1 score while maintaining a false positive rate of about 0.11%. We also compared the proposed model with other machine learning models and showed that it achieves results comparable to the three baselines, demonstrating that the spatial information introduced by projecting tabular data into grid-based images is worth exploiting.

In the future, we will investigate how the sequence and pattern used to place the attributes in the grid affect image classification performance. In addition, an even more lightweight model should be considered when implementing IDS for IoT networks.

Acknowledgement: This research was conducted under a contract of 2021 International Publication Assistance Q1 of Universitas Indonesia.

Funding Statement: This research was conducted under a contract of 2021 International Publication Assistance Q1 of Universitas Indonesia.

Conflicts of Interest: The authors declare that they have no conflicts of interest to report regarding the present study.
