
    Generating Adversarial Samples on Multivariate Time Series using Variational Autoencoders

2021-07-23 10:20:30  Samuel Harford, Fazle Karim, and Houshang Darabi
IEEE/CAA Journal of Automatica Sinica, 2021, Issue 9

Samuel Harford, Fazle Karim, and Houshang Darabi

Abstract—Classification models for multivariate time series have drawn the interest of many researchers to the field with the objective of developing accurate and efficient models. However, limited research has been conducted on generating adversarial samples for multivariate time series classification models. Adversarial samples could become a security concern in systems with complex sets of sensors. This study proposes extending the existing gradient adversarial transformation network (GATN) in combination with adversarial autoencoders to attack multivariate time series classification models. The proposed model attacks classification models by utilizing a distilled model to imitate the output of the multivariate time series classification model. In addition, the adversarial generator function is replaced with a variational autoencoder to enhance the adversarial samples. The developed methodology is tested on two multivariate time series classification models: 1-nearest neighbor dynamic time warping (1-NN DTW) and a fully convolutional network (FCN). This study utilizes 30 multivariate time series benchmarks provided by the University of East Anglia (UEA) and the University of California, Riverside (UCR). The use of adversarial autoencoders shows an increase in the fraction of successful adversaries generated on multivariate time series. To the best of our knowledge, this is the first study to explore adversarial attacks on multivariate time series. Additionally, we recommend future research utilizing the generated latent space from the variational autoencoders.

    I. INTRODUCTION

MACHINE learning drives many facets of society including online search engines, content analysis on social networks, and smart appliances such as thermostats. In computer vision, machine learning is commonly used for recognizing objects in images or video [1], [2]. In natural language processing, it is used for transcribing speech into text, matching news articles, and selecting relevant results [3], [4]. In healthcare, it is used to diagnose illnesses and predict patients' survival [5], [6].

Time series classification is a subfield of machine learning that has received a lot of attention over the past several decades [7], [8]. A time series can be univariate or multivariate. A univariate time series is a time-ordered collection of measurements from a single source. A multivariate time series is a time-ordered collection of measurements from at least two sources [9]. While most time series research has focused on univariate time series [10]–[17], research in the area of multivariate time series has increased over the past decade [18]–[21]. Multivariate time series classification is applied in fields including healthcare [22], manufacturing [23], and action recognition [24]. Time series classification models aim to capture the underlying patterns of the training data and generalize the findings to classify unseen testing data. The field of multivariate time series classification has primarily focused on more traditional algorithms, including 1-nearest neighbor dynamic time warping (1-NN DTW) [25], WEASEL+MUSE [19], and the Hidden-Unit Logistic Model [18]. With the surge in computational power, deep neural networks (DNNs) are increasingly being applied in machine learning applications [26], [27]. Due to their simplicity and effectiveness, DNNs are becoming excellent methods for time series classification [20], [28], [29].

While machine learning and deep learning techniques allow many important and practical problems to be automated, many of the classifiers have proven to be vulnerable to adversarial attacks [30], [31]. An adversarial example is a sample of input data that has been slightly modified in a way that makes the classifier mislabel the input sample. Related examples can be created by generative adversarial networks (GANs), which aim to generate data instances similar to the modeling data but do not aim to manipulate classifiers [32]–[34]. In the field of computer vision, it has been shown that image recognition models can be tricked by adding information to an image that is not noticeable to the human eye [31]. Although DNNs are powerful models for a number of classification tasks, they have proven to be vulnerable to adversarial attacks when minor, carefully crafted noise is added to an input [35], [36]. These vulnerabilities have a harmful impact on real-world applicability where classification models are incorporated in the pipeline of a decision making process [37]. A significant amount of research in adversarial attacks has focused on computer vision. Papernot et al.'s work has shown that it is easy to transfer adversarial attacks on a particular classifier to other similar classifiers [38]. Recent years have shown an increased focus on adversarial attacks in the field of time series classification [39]–[42]. However, these studies have been limited to attacks on univariate time series.

Many strategies have been developed for the generation of adversarial samples to trick DNN models. Most techniques work by targeting the gradient information of DNN classifiers [43]–[45]. In time series classification, classifiers used to monitor the electrocardiogram (ECG) signals of a patient can be manipulated to misclassify important changes in a patient's status. When attacking traditional time series classifiers, it is important to note that the model mechanics are non-differentiable. For this reason, attacks are not able to directly utilize the gradient information from traditional models. There are two main types of attacks. Black-box (BB) attacks rely only on the model's output information, without access to the training process or architecture of the target model. White-box (WB) attacks give the attacker all information about the attacked model, including the training data set, the training algorithm, the model's parameters and weights, and the model architecture [45].

This study proposes extending the gradient adversarial transformation network (GATN) methodology to attack multivariate time series and to utilize different adversarial generators [40]. GATN is extended by exploring the use of adversarial autoencoders to generate adversarial samples under both black-box and white-box attacks. The GATN methodology works by training a student model with the objective of replicating the output behavior of a targeted multivariate time series classifier. The targeted model is referred to as a teacher model. Once the student model has learned to mimic the behavior of the teacher model, the GATN model can learn to attack the student model. This study uses 1-NN DTW and a fully convolutional network (FCN) as the teacher models. Given a trained student model, the proposed multivariate gradient adversarial transformation network (MGATN) is then trained to attack the student model. Our methodologies are applied to 30 multivariate time series benchmarks from the University of East Anglia (UEA) and the University of California, Riverside (UCR) [46]. To the best of our knowledge, this is the first study to conduct adversarial attacks on multivariate time series.

The remainder of this paper is structured as follows: Section II provides a background on the utilized multivariate time series classification models and techniques for creating adversaries. Section III details our proposed methodologies. Section IV presents the experiments conducted on the benchmark multivariate time series models. Section V illustrates and discusses the results of the experiments. Section VI concludes the paper and proposes future work.

    II. DEFINITIONS AND BACKGROUND

    For the task of multivariate time series classification, each instance is a set of time series which may be of varying lengths.

Definition 1: A univariate time series T = t_1, t_2, ..., t_n is a time-ordered set of n continuous values. The length of a time series is equal to the number of values. A time series dataset is a collection of time series instances.

    This work focuses on multivariate time series.

Definition 2: A multivariate time series consists of M univariate time series, where M is the number of dimensions and M ≥ 2.

    Each multivariate time series instance has a corresponding class label. The objective of multivariate time series classification is to develop models that can accurately identify the class label of an unseen instance.

    A. Multivariate Time Series Classifiers

1) One-Nearest Neighbor Dynamic Time Warping: The 1-NN DTW classifier labels a test instance with the class of its nearest training instance under the dynamic time warping (DTW) distance. DTW aligns two time series Q and C by accumulating pointwise costs in a warping matrix, where i refers to the position on time series Q and j refers to the position on time series C. The warping matrix is initialized with the cost of the first pair of points and filled by the standard DTW recurrence, as sketched below.
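As a concrete illustration, the sketch below builds the warping matrix for two multivariate series using the standard cumulative-cost formulation; the squared Euclidean pointwise cost and the absence of a warping-window constraint are assumptions of this illustration, not necessarily the authors' exact choices.

```python
import numpy as np

def dtw_distance(Q, C):
    """Standard cumulative-cost DTW between two multivariate time series
    Q and C of shape (length, n_dims). Generic sketch, not the authors'
    exact formulation; pointwise cost is squared Euclidean distance."""
    Q, C = np.asarray(Q, dtype=float), np.asarray(C, dtype=float)
    n, m = len(Q), len(C)
    # Pointwise cost between every pair of positions i (on Q) and j (on C).
    cost = np.array([[np.sum((Q[i] - C[j]) ** 2) for j in range(m)] for i in range(n)])
    # Warping (cumulative cost) matrix.
    D = np.full((n, m), np.inf)
    D[0, 0] = cost[0, 0]                      # initialization with the first pair of points
    for i in range(1, n):
        D[i, 0] = D[i - 1, 0] + cost[i, 0]
    for j in range(1, m):
        D[0, j] = D[0, j - 1] + cost[0, j]
    for i in range(1, n):                     # standard DTW recurrence
        for j in range(1, m):
            D[i, j] = cost[i, j] + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return np.sqrt(D[-1, -1])
```

A 1-NN classifier then simply assigns a test series the label of the training series with the smallest returned distance.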

    2) Multi Fully Convolutional Network

Inspired by their success in the fields of computer vision and natural language processing, deep learning models have been successfully applied to the task of time series classification [20], [28], [52]. The multivariate fully convolutional network (Multi-FCN) is one of the first deep learning networks used for the task of multivariate time series classification. Fig. 1 illustrates the Multi-FCN network. The three convolutional layers output 128, 256, and 128 filters with kernel sizes of 8, 5, and 3, respectively. The model outputs a class probability for use in class labeling.

    Fig. 1. The Multi-FCN architecture.
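To make the layer description concrete, here is a minimal Keras-style sketch of such a network; the filter counts and kernel sizes follow the text, while the batch normalization, ReLU activations, and global average pooling head are assumptions borrowed from the standard FCN design rather than details confirmed by the paper.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def build_multi_fcn(n_timesteps, n_dims, n_classes):
    """Sketch of a Multi-FCN-style classifier with the (128, 256, 128)
    filters and (8, 5, 3) kernel sizes described in the text."""
    inputs = layers.Input(shape=(n_timesteps, n_dims))
    x = inputs
    for filters, kernel in [(128, 8), (256, 5), (128, 3)]:
        x = layers.Conv1D(filters, kernel, padding="same")(x)
        x = layers.BatchNormalization()(x)      # assumed, per the standard FCN design
        x = layers.Activation("relu")(x)
    x = layers.GlobalAveragePooling1D()(x)      # assumed pooling head
    outputs = layers.Dense(n_classes, activation="softmax")(x)  # class probabilities
    return models.Model(inputs, outputs)
```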

    B. Adversarial Transformation Network

Multiple approaches for generating adversarial samples have been proposed to attack classification models. These methods have focused on classification tasks in the field of computer vision. Most methods use the gradient information with respect to the input sample or directly solve an optimization problem on the input sample. Baluja and Fischer [53] introduce the adversarial transformation network (ATN), a neural network that transforms an input into an adversarial example by using a target network. ATNs may be trained for black-box or white-box attacks. ATNs work by first using a self-supervised method to train a feed-forward neural network. The model takes an original input sample and makes slight modifications to it so that the classifier output matches the adversarial target. An ATN is defined as a neural network

    C. Transferability Property

The transferability property of an adversarial sample is the property that the same adversary produced to mislead a targeted model f can mislead another model s, regardless of model architecture [54], [55]. Papernot et al. [38] further study this property and propose a black-box attack by training a local substitute network, s, to replicate the target model, f. The local substitute network s is trained using generated samples, and the targeted model f is used to obtain the output labels of the generated samples. The transferability property is utilized in adversarial attacks by exploiting the fully trained local model s against the targeted model f with the objective of achieving misclassifications. Papernot et al. [38] show that this method can be applied to both DNN and traditional machine learning classifiers.

    D. Knowledge Distillation

A key strategy for reducing the cost of inference is model compression, also known as knowledge distillation [56]. The idea of knowledge distillation is to replace the original, computationally expensive model with a smaller model that requires less memory, fewer parameters, and less computational time. This idea works by training a smaller student model s to mimic the behavior of the larger teacher model f. The teacher model f has its knowledge distilled into the student model s by minimizing a loss function between the two networks. This loss function aims to make the student model s output the same class probability vector as that generated by the teacher model f. Hinton et al. [57] state that the commonly used softmax function results in a skewed probability distribution where the correct probability class is very close to 1 and the remaining classes are close to 0. To reduce the resulting skewness in the probability class vector, Hinton et al. [57] recommend softening the output by dividing the logits by a temperature greater than 1 before applying the softmax.
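A minimal sketch of the temperature-softened softmax and a resulting distillation loss is shown below; the temperature value and the cross-entropy form of the loss are illustrative assumptions, not the paper's exact choices.

```python
import numpy as np

def softened_probabilities(logits, temperature=10.0):
    """Temperature-scaled softmax used in knowledge distillation. Larger
    temperatures flatten the skewed probability vector so the student
    receives more information about the non-target classes."""
    z = np.asarray(logits, dtype=float) / temperature
    z = z - z.max()                      # numerical stability
    exp_z = np.exp(z)
    return exp_z / exp_z.sum()

def distillation_loss(teacher_logits, student_logits, temperature=10.0):
    """Cross-entropy between the teacher's and student's softened outputs;
    minimizing it pushes the student to mimic the teacher."""
    p_teacher = softened_probabilities(teacher_logits, temperature)
    p_student = softened_probabilities(student_logits, temperature)
    return -np.sum(p_teacher * np.log(p_student + 1e-12))
```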

    E. Gradient Adversarial Transformation Network

    F. Adversarial Autoencoders

Makhzani et al. [58] propose the adversarial autoencoder (AAE), which is a probabilistic autoencoder that utilizes the knowledge learned from generative adversarial networks (GANs) [59]. The GAN is used to perform variational inference by matching the aggregated posterior of the hidden code vector of the autoencoder with an arbitrary prior distribution. AAEs are similar to standard autoencoders [60], where the objective is to accurately reconstruct the original input, subject to a limited amount of added noise.

In addition to simple autoencoders, Makhzani et al. [58] propose the use of variational autoencoders (VAEs) [61]. VAEs provide a formulation in which the encoding z is interpreted as a latent variable in a probabilistic generative model; a probabilistic decoder is defined by a likelihood function p_θ(x|z) and parameterized by θ. Alongside a prior distribution p(z) over the latent variables, the posterior distribution p_θ(z|x) ∝ p(z)p_θ(x|z) can then be interpreted as a probabilistic encoder. Fig. 2 illustrates this process. Ideally, the trained latent vector creates clusters that are as close as possible to each other while still being distinct, allowing smooth interpolation and enabling the construction of new samples. To better shape the latent vector, a Kullback-Leibler (KL) divergence is introduced into the loss function [62]. The KL divergence between two probability distributions simply measures how much they diverge from each other. Minimizing the KL divergence here means optimizing the probability distribution parameters to closely resemble those of the target distribution. The KL divergence is defined as

    Fig. 2. The VAE architecture.

D_KL(P || Q) = Σ_{i=1}^{n} P(i) log(P(i) / Q(i))

where n is the length of the output, P is the probability distribution of the original data, and Q is the probability distribution of the adversarial data. Due to the limited information about the actual probability distributions, variational inference is used to simplify the divergence calculation using known information [63]. The KL divergence is simplified to

D_KL = -1/2 Σ_j (1 + log σ_j² − μ_j² − σ_j²)

where μ is the mean output of the encoding layer and σ is the standard deviation output of the encoding layer.
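The sketch below shows the closed-form Gaussian KL term and the reparameterization step that VAEs typically use; it assumes the encoder outputs a mean and a log-variance per latent dimension, which may differ in detail from the authors' implementation.

```python
import numpy as np

def vae_kl_term(mu, log_var):
    """Closed-form KL divergence between the encoder's Gaussian q(z|x),
    parameterized by mean mu and log-variance log_var, and a standard
    normal prior p(z): the simplified expression referenced above."""
    mu, log_var = np.asarray(mu), np.asarray(log_var)
    return -0.5 * np.sum(1.0 + log_var - np.square(mu) - np.exp(log_var))

def sample_latent(mu, log_var, rng=np.random.default_rng()):
    """Reparameterization trick: z = mu + sigma * epsilon, epsilon ~ N(0, I)."""
    epsilon = rng.standard_normal(np.asarray(mu).shape)
    return mu + np.exp(0.5 * np.asarray(log_var)) * epsilon
```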

    III. PROPOSED METHODOLOGY

    A. Multivariate Gradient Adversarial Transformation Network

Fig. 3. Subfigures (a) and (b) illustrate the methodology of training the model distillation used in the attacks. Subfigures (c)–(e) illustrate the generator architectures used to attack a time series classifier.

The use of the GATN has proven to be a successful method for developing adversarial samples on univariate time series [40]. However, GATN and other adversarial attacks on time series have not been applied to multivariate time series. In this work, we propose the multivariate gradient adversarial transformation network (MGATN), which extends the GATN model for use in generating multivariate adversarial time series. In addition, we explore the use of variational autoencoders (VAEs) and convolutional variational autoencoders (CVAEs) as alternative generator functions. The CVAE generator has a similar architecture to the VAE generator, where the dense layers of the encoder and decoder are modified to 1-D convolutional layers. The MGATN method with a CVAE generator is referred to as MGATN-CV. The MGATN-CV generator encodes the samples by first passing through an intermediate convolutional layer of filter size 32 and kernel size 5, which then feeds two branched convolutional layers of filter size 16 and kernel size 5. The two convolutional layers then pass through a sampling layer. The final step decodes the sampling layer with two convolutional layers, one of filter size 32 and one matching the original time series shape, both with kernel sizes of 5.
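A rough sketch of a generator along these lines is shown below; the filter counts and kernel sizes follow the description above, while the padding, activations, and the exact form of the sampling layer are assumptions of this illustration.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def build_cvae_generator(n_timesteps, n_dims):
    """Sketch of an MGATN-CV-style convolutional VAE generator: a 32-filter
    intermediate conv layer, two branched 16-filter conv layers (mean and
    log-variance), a sampling layer, and a conv decoder back to the input shape."""
    inputs = layers.Input(shape=(n_timesteps, n_dims))
    h = layers.Conv1D(32, 5, padding="same", activation="relu")(inputs)
    z_mean = layers.Conv1D(16, 5, padding="same")(h)        # branched conv layers
    z_log_var = layers.Conv1D(16, 5, padding="same")(h)

    def sample(args):
        mu, log_var = args
        eps = tf.random.normal(tf.shape(mu))
        return mu + tf.exp(0.5 * log_var) * eps

    z = layers.Lambda(sample)([z_mean, z_log_var])           # sampling layer
    h = layers.Conv1D(32, 5, padding="same", activation="relu")(z)
    outputs = layers.Conv1D(n_dims, 5, padding="same")(h)    # back to the input shape
    return models.Model(inputs, [outputs, z_mean, z_log_var])
```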

    B. Training Methodology

This subsection discusses the training procedure for the MGATN methodology. Figs. 3(a) and 3(b) illustrate the full framework architecture defined in [15]. The framework architecture is similar to that of GATN, with notable modifications. The input to the framework is multivariate, where the shape of the inputs changes from (batch size, length, value) to (batch size, channel, length, value). This increase in shape requires the student model to be modified to a LeNet-5 with 2-D convolutional layers. In addition, we explore different generator functions to create adversarial samples. The tested generator functions include a fully connected network, a variational autoencoder, and a convolutional variational autoencoder. Figs. 3(c)–3(e) illustrate the generator architectures. Fig. 3(c) illustrates the simple generator, where the MGATN uses two dense layers in the autoencoder block. For a fair comparison of model parameters, an MGATN-L is included in the experiments and results. MGATN-L and MGATN-V have about the same number of parameters, as they have the same number and size of dense layers. Figs. 3(d) and 3(e) illustrate the VAE and CVAE generators, where the VAE utilizes dense layers in the encoder/decoder blocks and the CVAE utilizes convolutional layers in the encoder/decoder blocks.

The loss calculation depends on the choice of the reranking function, as discussed in Section II-B. The simplest form of the reranking function is a one-hot encoding of the desired class label. However, the one-hot encoding reranking function is not the best choice when the objective is to minimize perturbations per class label. Other options for reranking functions require the use of the class probability vector. This class probability vector is not available when performing black-box attacks or when attacking most traditional models. We utilize the transferability property and knowledge distillation to train a student network s for cases where the class probability vector is not available. The student neural network s is trained to mimic the classification output of the targeted model f. The student model of the proposed architectures (MGATN, MGATN-L, MGATN-V, and MGATN-CV) uses the same knowledge distillation loss function described in Section II-E. The knowledge learned from the student model is then used as a substitute for the unknown information, such as the class probability vector.

The MGATN framework aims to optimize the adversarial samples by balancing the loss on the input space and the loss on the prediction output. This loss function is described in (5). When training with VAE and CVAE generators, the KL divergence must be factored into the loss equation. The following loss function is optimized:
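The sketch below illustrates one plausible form of such a composite loss, combining an input-space reconstruction term weighted by β, a term matching the reranked target, and the KL regularizer; the exact terms, weights, and reductions used in (5) are not reproduced here, so treat this as an assumption-laden illustration rather than the paper's definition.

```python
import tensorflow as tf

def mgatn_vae_loss(x, x_adv, y_rerank, y_pred, z_mean, z_log_var, beta, kl_weight=1.0):
    """Illustrative composite loss in the spirit of the description above:
    a perturbation term weighted by beta, a term pulling the student's
    prediction toward the reranked target, and the VAE KL term."""
    loss_x = beta * tf.reduce_mean(tf.square(x_adv - x))           # keep perturbation small
    loss_y = tf.reduce_mean(tf.square(y_pred - y_rerank))          # match adversarial target
    loss_kl = -0.5 * tf.reduce_mean(
        1.0 + z_log_var - tf.square(z_mean) - tf.exp(z_log_var))   # regularize latent space
    return loss_x + loss_y + kl_weight * loss_kl
```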

    IV. EXPERIMENTS

    A. Multivariate Time Series Benchmarks

All methodologies presented in this work are tested on 30 multivariate time series benchmark datasets. These benchmarks are compiled and provided by the University of East Anglia (UEA) and the University of California, Riverside (UCR) in the Multivariate Time Series Classification Archive [46]. Table I provides information about the test benchmarks. These benchmark datasets are from a variety of fields, including medical care, speech recognition, and motion recognition.

    B. Black-Box and White-Box Restrictions

    The experiments to evaluate all versions of MGATN follow specific restrictions for black-box and white-box attacks. All versions of MGATN require the use of gradient information to attack the target classifier. In this study, black-box attacks are limited to the discrete class label of the model output, and not the class probability vector that is output from a neural network with a softmax output.

    C. Experimental Setup

This work explores black-box and white-box attacks with different generators on traditional and neural network time series classifiers. We compare the different generator functions to present how the proposed generator functions perform relative to established generator functions. Details of these classifiers were explained in Section II-A. Based on the restrictions of the attacks and of traditional time series classifiers, we utilize a student model for all attacks except the white-box attack on the Multi-FCN classifier. White-box attacks on traditional classifiers, such as 1-NN DTW, do not provide the class probability information required for MGATN to be utilized. Additionally, black-box attacks on both traditional and neural network classifiers do not allow the attacker access to the internal information of the classifier, such as the neural network weights. In these cases, the student model s is utilized to train MGATN. The output of MGATN is then used to test whether the teacher model f is vulnerable to the adversarial sample. As discussed in Section IV-B, the initial testing set is split into two equally sized sets for evaluation, D_eval and D_test. When evaluating the different variations of MGATN, we compute the fraction of successful adversaries on the targeted model f generated on the evaluation set D_eval. For a multivariate time series to be a successful adversarial example, we first input the time series into the classification model to see if the classifier produces the correct class label. If the classifier correctly determines the class label of the time series, we then check whether the corresponding adversarial sample of this time series is incorrectly labeled by the targeted classifier. This definition ensures that only correctly labeled samples can result in a possible adversarial sample. To evaluate our model's ability to attack multivariate time series classification algorithms, we utilize both a traditional distance-based classifier and a neural network classifier. The attacked time series classifiers are the multivariate 1-nearest neighbor dynamic time warping and the multivariate fully convolutional network. We evaluate based on the number of adversarial samples generated and the amount of perturbation added. The perturbation is measured by the mean squared error between the original input sample and the output adversarial sample. The objective is to minimize the amount of perturbation and maximize the fraction of the training samples that result in successful adversaries. All experiments use a reranking weight α that is set to 1.5. The target class is specified as class 0. For benchmarks that have non-numeric class labels, the labels are encoded and the corresponding class labels can be found in our codebase.

The reconstruction weight parameter β is selected by grid searching over a set of possibilities, such that β ∈ {0.1, 0.05, 0.01, 0.005, 0.001, 0.0005, 0.0001, 0.00001}. This work performs several experiments to show how multivariate time series classifiers can be attacked (comparing the fraction of successful adversaries in Section V-A), how well the proposed architectures generalize to unseen multivariate time series data (generalization in Section V-B), and how well these attacks can be defended against by retraining on the generated adversaries (defense in Section V-C).
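The evaluation logic described above can be summarized in a short sketch; the `classifier.predict` interface returning discrete labels and the array-based bookkeeping are assumptions of this illustration.

```python
import numpy as np

def fraction_of_successful_adversaries(classifier, X_eval, y_eval, X_adv):
    """A sample counts as a possible adversary only if the classifier labels
    the original correctly, and as a successful adversary if the corresponding
    adversarial sample is then mislabeled."""
    y_orig = np.asarray(classifier.predict(X_eval))
    y_attacked = np.asarray(classifier.predict(X_adv))
    y_eval = np.asarray(y_eval)
    correct = (y_orig == y_eval)                  # possible adversaries
    flipped = correct & (y_attacked != y_eval)    # successful adversaries
    return flipped.sum() / max(correct.sum(), 1)
```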


    TABLE I DATASET DESCRIPTION FOR UEA AND UCR MULTIVARIATE TIME SERIES BENCHMARKS

    D. Modeling Parameters

    All tested attacks utilize a student network to mimic the target network except white-box attacks that target the Multi-FCN classifier. ATNs can directly utilize the gradient information of the Multi-FCN model that is available to the attacker in the white-box case. White-box attacks on Multi-FCNs are evaluated directly on the target classification model.

All student models utilize a simple LeNet-5 architecture that is modified for a multivariate input [65]. The LeNet-5 architecture is selected because it is one of the earliest and simplest convolutional neural networks, and it has been shown to work effectively for attacking univariate time series classifiers [40]. The architecture is a convolutional neural network with two 2-dimensional convolutional layers with 6 and 16 filters, kernels of size 5 and 5, ReLU activation functions, and valid padding. Each convolutional layer is passed to a 2-D max pooling layer. This is then passed to a flattening layer. The network then has two dense layers of sizes 120 and 84 with tanh activation functions. Finally, the network ends with a dense softmax layer consisting of the desired number of classes.
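A Keras-style sketch of such a student network is shown below; the layer sizes follow the text, while the pooling sizes, the trailing value dimension of 1, and the minimum input size implied by the valid-padding 5×5 kernels are assumptions of this illustration.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def build_student_lenet5(n_channels, n_timesteps, n_classes):
    """LeNet-5-style student adapted to a (channel, length, value) input.
    Note: the two valid-padding 5x5 convolutions require sufficiently many
    channels and time steps; low-dimensional datasets would need padding
    or kernel adjustments."""
    inputs = layers.Input(shape=(n_channels, n_timesteps, 1))
    x = layers.Conv2D(6, (5, 5), padding="valid", activation="relu")(inputs)
    x = layers.MaxPooling2D(pool_size=(1, 2))(x)   # pool over length only (assumed)
    x = layers.Conv2D(16, (5, 5), padding="valid", activation="relu")(x)
    x = layers.MaxPooling2D(pool_size=(1, 2))(x)
    x = layers.Flatten()(x)
    x = layers.Dense(120, activation="tanh")(x)
    x = layers.Dense(84, activation="tanh")(x)
    outputs = layers.Dense(n_classes, activation="softmax")(x)
    return models.Model(inputs, outputs)
```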

We use the multivariate version of 1-NN DTW to evaluate adversarial attacks on traditional time series classifiers. While this classifier has proven to be an effective method for both univariate and multivariate time series, its distance-based nature requires some modifications to produce probabilistic outputs. This is not an issue when conducting black-box attacks on the classifier. White-box attacks have access to the output class probability distribution for each sample. Karim et al. [40] introduced a method of generating a class probability distribution that yields the same discrete class result as the original classification. This method can be extended to accept a set of distance matrices as inputs, as opposed to a single distance matrix.

Our experiments evaluate the use of four different generator networks for adversary generation. The first is a simple fully connected network, which passes the original and gradient information to two dense layers with ReLU activation functions; the output is a gradient with matching input shape and a linear activation function. The second is an extended simple generator with two additional dense layers. The third network is a variational autoencoder with an intermediate dimension of 32 and a latent dimension of 16. The final network is a convolutional variational autoencoder, which replaces the original dense layers of the VAE with convolutional layers. This network has the same intermediate dimension of 32 and latent dimension of 16. More detailed architecture parameters can be explored in our codebase.

    V. RESULTS

    A. Fraction of Successful Adversaries

Fig. 4 illustrates the fraction of successful adversaries generated during black-box and white-box attacks on the 30 tested multivariate time series benchmarks. The fraction of successful adversaries is defined as the number of adversaries captured by the attack divided by the number of possible adversaries that could be generated. As an additional requirement, we select our models based on a mean squared error (MSE) limit of 0.1 between the original and adversarial time series. This requirement is used to select the model over the stated range of β values. Fig. 4 also illustrates the differences in results for the tested generators. These include MGATN, MGATN-L, MGATN-V, and MGATN-CV. The target class for all experiments is set to class 0; string labels are encoded and these class encodings can be found in our codebase for reproducibility. The detailed results for all experiments can be found in the Appendix.

There are a total of 120 experiments for each generator function: 30 datasets on 4 different attack-target combinations. We compare these experiments across generators by evaluating the number of wins a method has across all experiments. A win is defined as one method having a greater fraction of successful adversaries than any other method, where ties are not counted. The numbers of wins for MGATN, MGATN-L, MGATN-V, and MGATN-CV are 16, 10, 38, and 27, respectively. MGATN-V has the most wins across all the experiments, where 27 of its 38 wins occur in black-box attacks. However, MGATN-CV has a high number of white-box wins (19 wins) compared to the other methods. When looking at the target model instead of the attack, we see that MGATN-V has the most wins on multivariate 1-NN DTW (14 wins) and the most wins on Multi-FCN (24 wins). We postulate that the autoencoder component of MGATN-V and MGATN-CV, and not the size of the network, is the reason for the increased performance. This is because MGATN-L and MGATN-V have about the same size and number of parameters, yet MGATN-V significantly outperforms MGATN-L. These findings demonstrate the significant improvement achieved with the modification of the generator functions. Fig. 5 illustrates adversarial samples for all attacks.

Fig. 4. Black-box and white-box attacks on FCN and 1-NN DTW classifiers that are tested on D_eval. Blank bars mean that the adversarial attacks did not result in any adversaries for a specific attack and dataset.

Table III summarizes the results of MGATN-V and MGATN-CV compared to the original MGATN and MGATN-L. Table cells in green show a significant improvement (at a p-value of 0.05) over the original MGATN. These results show MGATN-V is superior to MGATN and MGATN-L on both tested black-box attacks. Even with approximately the same number of parameters, MGATN-V statistically outperforms MGATN-L for black-box attacks. However, MGATN-V does not outperform the original MGATN or MGATN-L on white-box attacks. This difference in adversarial results comes from the VAE's objective of regularizing the latent space. MGATN-CV outperforms MGATN and MGATN-L when applying the white-box attack on the Multi-FCN classifier. MGATN-CV performs the best for white-box attacks on the neural network classifier, which we postulate is because the convolutional component provides superior encoding for this attack. This indicates the importance of the autoencoders in the generator and the significant improvement in the resultant fraction of adversaries achieved.

In order to test the distribution of the generated adversaries, a Cramer test is used to compare the generated adversaries with the original multivariate time series. The Cramer test is a nonparametric two-sample test of the underlying distributions of multivariate sets of data [66]. The Cramer test has a null hypothesis that the two samples belong to the same distribution and an alternative hypothesis that they are drawn from different distributions. All generated adversaries have a p-value above 0.05, so the null hypothesis cannot be rejected. These results indicate that the adversaries are consistent with being drawn from the same distribution as their original multivariate time series.

    B. Generalization

In this subsection we evaluate the trained MGATN models on the unseen testing data, D_test. This evaluation is important when the situation does not allow time for model retraining. This is the case when streaming sensor data only leaves time for a forward pass through the network. This evaluation is not common when generating adversaries. However, time series applications often require real-time analysis of collected data. For this reason, it is beneficial that MGATN is able to generate adversarial samples without the need for retraining. Fig. 6 illustrates the fraction of successful adversaries for all testing sets and all attacks. These results show a similar fraction for each type of attack. The white-box attack on the Multi-FCN classifier obtains the highest fraction of successful adversaries across most datasets. Additionally, we see that the black-box attack on Multi-FCN results in the lowest fraction of successful adversaries. The black-box limitation, along with the increased complexity of neural networks compared to distance-based models, makes it difficult to generate adversaries in black-box attacks on Multi-FCNs. This generalization shows the potential to generate adversaries from pretrained models. This allows MGATN to be applied on edge devices with limited computational resources and still generate successful adversaries.

Fig. 5. Example adversarial samples on the LSST dataset.

    C. Defense

The ability to defend against an adversarial attack is an important post-evaluation that utilizes the findings of MGATN. In this work we explore a simple defense process that utilizes the pretrained MGATN models. Given a trained MGATN, we output the successful adversarial samples for an attack. These adversarial samples are appended to our original training data. The classifier and MGATN are then retrained using this new training set. Based on this defense process, we evaluate MGATN by analyzing the change in testing accuracy and the fraction of successful adversaries. Table IV shows a Wilcoxon signed-rank test (WSRT) comparing the testing accuracy of the developed models. The testing set D_test is defined in Section IV-B. Table cells that are green show a significant improvement in testing accuracy (at a p-value of 0.05). This analysis evaluates whether the use of adversarial samples in addition to the training data results in significantly higher testing accuracy. The results show that only white-box samples from Multi-FCN generated by MGATN-V and MGATN-CV result in a significant increase in model accuracy. The remainder of the experiments show that the adversarial samples do not result in an improved testing accuracy when the models are retrained. Table V shows a WSRT comparing the fraction of successful adversaries generated. These results show that there is a significant decrease in the fraction of adversaries generated when the model is retrained with adversaries in the training data (at a p-value of 0.05). The one exception is MGATN-CV, where the fraction of adversaries generated remains comparatively large after retraining. This shows that both distance-based and neural network classifiers become more robust when retrained with the new samples.
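The retraining defense can be sketched as follows, assuming a hypothetical `build_classifier` factory that returns a fresh, untrained model exposing a `fit` method; the true labels of the adversarial samples are assumed to be those of their originating time series.

```python
import numpy as np

def retrain_with_adversaries(build_classifier, X_train, y_train, X_adv, y_adv):
    """Simple defense sketch: append the successful adversarial samples (with
    their true labels) to the training data and retrain the classifier from
    scratch on the augmented set."""
    X_aug = np.concatenate([X_train, X_adv], axis=0)
    y_aug = np.concatenate([y_train, y_adv], axis=0)
    model = build_classifier()   # hypothetical factory returning a fresh model
    model.fit(X_aug, y_aug)
    return model
```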

    TABLE II FRACTION OF SUCCESSFUL ADVERSARIES ON DEVAL OF OUR METHODS COMPARED TO FGSM (P-VALUE SHOWN)

    TABLE III FRACTION OF SUCCESSFUL ADVERSARIES ON DEVAL COMPARISON OF MGATN GENERATORS (P-VALUE SHOWN)

    D. Latent Space

The MGATN-V and MGATN-CV methods make use of variational autoencoders to generate adversarial samples. Variational autoencoders provide a probabilistic manner for describing an observation in latent space. VAEs formulate the encoder to describe a probability distribution for each latent attribute. To understand the implications of a variational autoencoder, we visualize the latent space. Fig. 7 illustrates the latent space of evaluation samples on the CharacterTrajectories dataset generated by MGATN-V and MGATN-CV using a t-distributed stochastic neighbor embedding (TSNE) data reduction. TSNE is a statistical method for visualizing high-dimensional data in a two- or three-dimensional space using a stochastic neighbor embedding [67]. This figure shows that the MGATN-CV method is able to learn clear differences between classes. The MGATN-V method results in non-separable latent spaces under the TSNE dimensionality reduction. However, this does not mean MGATN-V has no separable classes under different embeddings or reduction techniques. Further research is required to understand the latent spaces of MGATN-V. Models that generate a clearly defined latent space can be used to form a generative model capable of creating new data similar to what was observed during training. New data generated from an interpretable latent space can be used to retrain classifiers to increase accuracy, further improve defense against adversarial attacks, and support many other data mining applications. Further, the embeddings from the latent space can also be utilized for other classification tasks and anomaly detection.
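A short sketch of the visualization step, assuming the latent vectors have already been extracted from the trained encoder and the class labels are integer-encoded, might look like this:

```python
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE

def plot_latent_space(z_latent, labels):
    """Project latent vectors of shape (n_samples, latent_dim) to 2-D with
    t-SNE and color the points by class, as done for Fig. 7. The perplexity
    is left at scikit-learn's default."""
    z_2d = TSNE(n_components=2).fit_transform(z_latent)
    plt.scatter(z_2d[:, 0], z_2d[:, 1], c=labels, cmap="tab10", s=10)
    plt.xlabel("TSNE dimension 1")
    plt.ylabel("TSNE dimension 2")
    plt.show()
```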

Fig. 6. Black-box and white-box attacks on FCN and 1-NN DTW classifiers that are tested on D_test. Blank bars mean that the adversarial attacks did not result in any adversaries for a specific attack and dataset.

TABLE IV WSRT COMPARING TESTING ACCURACY OF MODELS DEVELOPED ON ORIGINAL TRAINING DATA VS TRAINING DATA WITH ADVERSARIAL SAMPLES (P-VALUE SHOWN)

    TABLE V WSRT COMPARING FRACTION OF SUCCESSFUL ADVERSARIES FOR MODELS DEVELOPED ON ORIGINAL TRAINING DATA VS TRAINING DATA WITH ADVERSARIAL SAMPLES (P-VALUE SHOWN)

    Fig. 7. Example illustrations of Latent Dimensions using TSNE reduction.

    VI. CONCLUSION

This work extends the GATN by modifying the generator function to significantly improve the generated adversaries on multivariate time series. We evaluate the different generation methods on 30 multivariate time series datasets. These evaluations test both black-box and white-box attacks on multivariate 1-NN DTW and Multi-FCN classifiers. Our results show that the most vulnerable model is the Multi-FCN attacked with white-box information. We further show that the generated adversaries are from the same distribution as the original series using a Cramer test. Utilizing an unseen testing set, we see that our MGATN models are able to generate adversaries on data without the need for retraining. A simple defense procedure shows that using the generated adversaries when retraining our models makes them less vulnerable to future attacks while maintaining the same level of testing accuracy. Furthermore, we see that the latent space modeled by MGATN-CV results in a clear class separation that can be used for future data generation. Future research in this area should explore the development of targeted adversarial attacks that misclassify input to a specific class. Finally, the developed latent space information can be exploited to better understand the underlying patterns of the time series classes.

    ACKNOWLEDGMENTS

We acknowledge Somshubra Majumdar for his assistance and insightful comments that laid the foundation for this research work. Further, we would like to thank all the researchers who spent their time and effort to create the data we used.

    APPENDIX DETAILED RESULTS

    TABLE VI FRACTION OF SUCCESSFUL ADVERSARIES ON DEVAL FOR THE BLACK-BOX ATTACK ON MULTI 1-NN DTW

    TABLE VI FRACTION OF SUCCESSFUL ADVERSARIES ON DEVAL FOR THE BLACK-BOX ATTACK ON MULTI 1-NN DTW (CONTINUED)

    TABLE VII FRACTION OF SUCCESSFUL ADVERSARIES ON DEVAL FOR THE BLACK-BOX ATTACK ON MULTI-FCN

    TABLE VII FRACTION OF SUCCESSFUL ADVERSARIES ON DEVAL FOR THE BLACK-BOX ATTACK ON MULTI-FCN (CONTINUED)

    TABLE VIII FRACTION OF SUCCESSFUL ADVERSARIES ON DEVAL FOR THE WHITE-BOX ATTACK ON MULTI 1-NN DTW

    TABLE IX FRACTION OF SUCCESSFUL ADVERSARIES ON DEVAL FOR THE WHITE-BOX ATTACK ON MULTI-FCN
