
    Device Anomaly Detection Algorithm Based on Enhanced Long Short-Term Memory Network


    LUO Xin(羅 辛), CHEN Jing(陳 靜), YUAN Dexin(袁德鑫), YANG Tao(楊 濤)

1 School of Computer Science and Technology, Donghua University, Shanghai 201620, China; 2 School of Continuing Education & School of Network Education, Donghua University, Shanghai 200051, China

Abstract: Equipment fault detection faces problems including data dimension explosion, computational complexity, and low detection accuracy. To solve these problems, a device anomaly detection algorithm based on an enhanced long short-term memory (LSTM) network is proposed. The algorithm first reduces the dimensionality of the device sensor data by principal component analysis (PCA), extracting the strongly correlated variables among the multidimensional sensor data with the lowest possible information loss, and then uses the enhanced stacked LSTM to predict the extracted temporal data, thus improving the accuracy of anomaly detection. To improve the efficiency of anomaly detection, a genetic algorithm (GA) is used to adjust the magnitude of the enhancements made to the LSTM model. Validation on actual data from pumps shows that the algorithm significantly improves the recall rate and the detection speed of device anomaly detection, with a recall rate of 97.07%, which indicates that the algorithm is effective and efficient for device anomaly detection in an actual production environment.

    Key words:anomaly detection; production equipment; genetic algorithm (GA); long short-term memory (LSTM); principal component analysis (PCA)

    0 Introduction

With the rapid development of the manufacturing industry, the stable and efficient operation of production equipment is important for improving production efficiency and saving production costs, which makes it necessary to detect abnormalities in the operating status of the equipment. A number of scholars and industry personnel in China and abroad are currently engaged in research on device anomaly detection[1], proposing methods that combine traditional statistical theory with machine learning, as well as deep learning detection methods[2].

Anomaly detection can be defined as finding data points that deviate from normal behavior, where what counts as a deviation depends on the application domain. These “deviations” can also be called anomalies, outliers, or singularities. Traditional methods from statistical theory usually take a statistical distribution as the criterion for judging abnormality. These methods calculate the mean, variance, median, and mode of the data; after the error is verified to obey a probability distribution model such as the Gaussian distribution, whether the data are abnormal is determined according to the central limit theorem and significant differences. This analysis method is simple and practical, and it maintains good monitoring efficiency for equipment under normal conditions. However, it cannot take variable environmental and operating conditions into account, and it cannot adapt to the different working conditions of different equipment. Machine learning-based methods perform data analysis by means of temporal predictive analysis and clustering analysis. Temporal predictive methods are modeled on a certain regularity or periodicity presented by the data[3-4], but their anomaly detection models often lack generalization ability for cases without regularity or periodicity. Clustering methods require a large amount of both normal and anomalous data[5], but in the actual production environment, collecting anomalous samples is very difficult because anomalies are low-probability events.

In recent years, with the rapid development of data processing capabilities, big data and artificial intelligence technologies have been rapidly applied in practice, and deep learning methods for anomaly detection have received extensive attention from more and more researchers and scholars[6-7]. Deep learning networks typically consist of multiple hidden layers, and they can learn to capture nonlinear relationships in data by adjusting their own parameters, theoretically allowing them to approximate any complex function. The multilayer structure can extract higher-level features of the data to realize the extraction and expression of the information hidden in the data itself and obtain the expected description of the results.

Due to the unstable, nonlinear, and dynamically changing characteristics of time-series data, deep learning-based methods still face problems such as difficulty in determining optimal network parameters, noise interference in the data, and high computational complexity. The anomaly times of the observed data are not regular, and the anomalies do not occur at the same moment, which makes it difficult to define anomaly parameters based on the data. The noise in the input data is also difficult to reduce or remove, unlike in image data, because noise not generated by an anomaly can contribute significantly to the calculated anomaly level. In addition, as the length of the time-series data increases, the required computation time and the complexity of the data also increase, which leads to a further increase in the computational complexity and difficulty for the neural network. The long short-term memory (LSTM) network algorithm solves the gradient vanishing and gradient explosion problems of traditional neural networks and can handle long-term dependency, which refers to the challenge of capturing and modeling dependencies among elements far apart in sequence data. The LSTM network algorithm overcomes this challenge by utilizing the memory mechanism of cell states, allowing the network to learn more complex sequence information[8-9].

To address the above problems, an enhanced LSTM network algorithm is proposed to break through the limited feature characterization ability of shallow deep learning network models at the network layer; in the actual industrial environment, data are high dimensional and mixed with a large amount of environmental noise, which makes it challenging to distinguish fault samples. Firstly, the principal component analysis (PCA) algorithm is applied to reduce the dimensionality of the sensor data, extracting the correlations between the data with less noise and redundancy and reducing the computational complexity of the neural network algorithm[10-11]. Then the memory mechanism of the enhanced LSTM model is used to predict the data of future or current periods from the data of previous periods, so as to better discover the potential patterns of time-series datasets. During this process, the genetic algorithm (GA) is used to find the solution with the highest fitness for the enhancement parameters of the LSTM network to optimize the training results. Finally, the algorithm is validated on pump data and shown to be effective for anomaly detection in real production equipment.

    1 Real Device Sensor Timing Data Correlation Analysis

To analyze the correlation of sensor timing data from real equipment, a dataset from the Kaggle competition website is used to conduct experiments on the water pumping system of a small-town waterworks, which failed seven times in one year. These failures cause inconvenience to many households. The dataset includes measurement data from 51 sensors covering 5 types of quantities: pressure, lift, shaft power, flow rate and speed (Fig.1). The distribution of the data is shown in Fig.2. The total amount of data is 220 320, including 7 fault data points (broken), 14 477 fault recovery status data points (recovering), and 205 836 normal status data points (normal).

    Fig.1 Distribution of sensor types

    Fig.2 Distribution of data

To analyze the abnormal status of the sensor timing data, the sensor measurement data over 5 months are extracted for visual analysis (Fig.3). The red marked points in Fig.3 indicate fault occurrences. From Fig.3, it can be seen that the sensor data change abruptly at the moments of failure and deviate from the normal range of values. Therefore, the abnormal condition of the equipment can be predicted from the measurement data of the different sensors.

    Fig.3 Partial head dataset fault point status: (a)sensor_06; (b)sensor_07; (c)sensor_08

Since the equipment components work in conjunction, the sensor quantities are linked; for example, the head H, the indicator of the maximum height that the pump can lift, is usually related to the flow rate, pressure, and other factors. It is clear that the data from different types of sensors must be correlated to some extent.

The formula for calculating the head H is

$$H = \frac{p_2 - p_1}{\rho g} + \frac{c_2^2 - c_1^2}{2g} + (z_2 - z_1), \qquad (1)$$

where $p_1$ and $p_2$ represent the pressure of the liquid at the pump inlet and the pump outlet, respectively; $c_1$ and $c_2$ represent the flow rate of the fluid at the pump inlet and the pump outlet, respectively; $z_1$ and $z_2$ represent the height of the inlet and the outlet, respectively; $\rho$ represents the density of the liquid; $g$ represents the acceleration of gravity.

For the dataset, the relationship map drawn by traversing the data of each sensor is shown in Fig.4, indicating the distribution of the correlations between sensors, such as pressure (sensor_00) and shaft power (sensor_04). The darker the color of a grid cell off the diagonal, the larger the corresponding relationship value, which indicates a stronger relationship between the two sensors. Judging the color shades of the intersection points against the color scale on the right, the relationship values of strongly correlated sensors are above 0.850, which shows that there is a fairly high linkage between these sensor data.
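As an illustration of how such a sensor correlation map can be computed, the sketch below uses pandas; the file name sensor.csv and the sensor_ column prefix are assumptions about the dataset layout rather than details from the paper.

```python
import pandas as pd
import matplotlib.pyplot as plt

# Load the pump dataset; "sensor.csv" and the column prefix are assumptions.
df = pd.read_csv("sensor.csv")
sensor_cols = [c for c in df.columns if c.startswith("sensor_")]

# Pairwise Pearson correlation between all sensor channels, as in Fig.4.
corr = df[sensor_cols].corr()

# Darker off-diagonal cells mark strongly correlated pairs (above 0.850 here).
plt.imshow(corr, cmap="viridis", vmin=-1, vmax=1)
plt.colorbar(label="correlation")
plt.show()
```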

According to the analysis results, the equipment sensor measurement dataset represented by the pump has a complex form and a large volume, which increases the complexity and difficulty of calculation. If the strongly correlated components of the sensor data are extracted and the weakly correlated components are reduced, the goal of dimensionality reduction can be achieved while noise and redundancy are reduced, thus lowering the computational complexity of the neural network algorithm and improving the learning efficiency.

    2 Enhanced LSTM-Based Device Anomaly Detection Algorithm

In real industrial environments, the equipment sensor timing data are high dimensional and mixed with a large amount of environmental noise, and shallow deep-learning network models are limited by the feature characterization capability of the network layer, which makes it difficult to distinguish fault samples in noisy environments. To address this problem, an enhanced LSTM network classification algorithm, the PCA-GA-LSTM algorithm, is proposed in this paper to better utilize contextual information for prediction. The framework of the PCA-GA-LSTM algorithm (Fig.5) includes a data processing module, a parameter optimization module, a data training module and a data prediction module. Specifically, the data processing module uses the PCA algorithm to reduce the dimensionality of the normalized data and then divides the dataset. The parameter optimization module enhances the LSTM network by the GA. The data training module uses the enhanced LSTM network for training. The data prediction module uses the trained network to make predictions on real datasets.

    2.1 PCA-based data processing module

In extracting the feature parameters of production equipment sensors, the rich number and variety of sensors reflect the state of the equipment from different aspects, but there are very weakly correlated redundant information and noise among these parameters. The data processing module, through PCA, maps the original $n$-dimensional features to $k$-dimensional features ($n > k$), replaces the original variables by linear combinations of them, realizes dimensionality reduction of high-dimensional data, retains more of the information of strongly correlated data[12], and minimizes the weakly correlated information to reduce the complexity of computation.

The dataset used in this paper has $m$ samples $\{x_1, x_2, \ldots, x_m\}$, and each sample has $n$ features $x_i = [x_{i,1}, x_{i,2}, \ldots, x_{i,n}]^{\mathrm{T}}$. The original data are formed into an $n \times m$ matrix $X$ by columns. The specific dimensionality reduction steps are shown in Fig.6.

    Fig.6 PCA dimensionality reduction steps

The input of the PCA procedure in this paper is the $n$-dimensional data of the dataset, and the output is the $k$-dimensional data after dimensionality reduction. The specific calculation process is as follows.

1) Zero-center the sample attributes. Apply zero-mean normalization to each row of the matrix $X$ by subtracting the corresponding row mean from each element.

2) Calculate the covariance matrix $C$, and obtain its eigenvalues $\lambda_i$ and the corresponding eigenvectors $\alpha_i$ ($i = 1, 2, \ldots, k$).

3) Linearly combine the $k$ newly derived variables $Z_1, Z_2, \ldots, Z_k$ to capture the data from each sensor $X_1, X_2, \ldots, X_n$ across the various temporal segments, and maximize the variance of these transformed variables according to the formula provided.

4) Compute the proportion of the variance attributed to each principal component.

It can be proved that the larger the contribution, the higher the degree of importance. The variance of the $k$th principal component is equal to the $k$th eigenvalue $\lambda_k$ of the covariance matrix, and the variance contribution $V_k$ is calculated as

$$V_k = \frac{\lambda_k}{\sum_{i=1}^{n} \lambda_i}, \qquad (2)$$

where $V_k$ is the variance contribution of the $k$th principal component; $\lambda_i$ is the $i$th eigenvalue of the covariance matrix.

5) Arrange the eigenvectors into a matrix from top to bottom according to the size of the corresponding eigenvalues. The first $k$ rows are taken to form the matrix $P$. $Z$ is the data after dimensionality reduction to $k$ dimensions, i.e., $Z = PX$.
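The following is a minimal NumPy sketch of steps 1)-5); the function name pca_reduce and the column-per-sample layout are illustrative assumptions, not the implementation used in the experiments.

```python
import numpy as np

def pca_reduce(X, k):
    """Reduce the n x m data matrix X (one sample per column) to k dimensions,
    following steps 1)-5) above."""
    # 1) Zero-mean each row (attribute).
    Xc = X - X.mean(axis=1, keepdims=True)
    # 2) Covariance matrix and its eigenvalues/eigenvectors.
    C = (Xc @ Xc.T) / Xc.shape[1]
    eigvals, eigvecs = np.linalg.eigh(C)   # returned in ascending order
    # 5) Sort eigenvectors by descending eigenvalue; the first k rows form P.
    order = np.argsort(eigvals)[::-1]
    P = eigvecs[:, order[:k]].T            # k x n projection matrix
    return P @ Xc, eigvals[order]          # Z = PX and the sorted eigenvalues
```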

In order to verify the effectiveness of the dimensionality reduction, a visualization analysis of the dimensionality reduction of the experimental dataset is carried out in this paper based on the above principles. Figure 7(a) shows the dataset reduced to two dimensions, with the horizontal axis representing variable $x_1$ and the vertical axis representing variable $x_2$. Figure 7(b) shows the visualization of the dataset reduced to three dimensions, with the three axes being $x_1$, $x_2$ and $x_3$. In Fig.7, the red part indicates the data at normal moments and the blue part indicates the data at abnormal moments. It can be seen from the visualization analysis that the variables after the dimensionality reduction retain strong correlations. The data at every moment, $x_i = [x_{i,1}, x_{i,2}, \ldots, x_{i,n}]^{\mathrm{T}}$, can be reduced from 51 dimensions to two dimensions ($x_i = [x_{i,1}, x_{i,2}]^{\mathrm{T}}$) or three dimensions ($x_i = [x_{i,1}, x_{i,2}, x_{i,3}]^{\mathrm{T}}$) as shown in Fig.7, which can significantly improve the efficiency of the calculation.

    Fig.7 Schematic diagram of data after dimensionality reduction: (a) visualization of dataset downscaled to 2 dimensions; (b) visualization of dataset downscaled to 3 dimensions

However, too low a dimensionality can cause a large amount of information loss and affect the prediction accuracy, so balancing computational efficiency against the validity of the data is also an issue to be considered. In this paper, the algorithm selects the reduced dimensionality by the cumulative contribution rate, which represents the degree to which the original data are reflected. The higher the value, the more it reflects the information contained in the original data. The formula for calculating the cumulative contribution rate of the principal components is

$$T_V = \sum_{i=1}^{k} V_i = \frac{\sum_{i=1}^{k} \lambda_i}{\sum_{j=1}^{n} \lambda_j}, \qquad (3)$$

where $T_V$ is the cumulative contribution rate; $V_k$ is the variance contribution of the $k$th principal component; $\lambda_j$ is the $j$th eigenvalue of the covariance matrix.

In general, a cumulative contribution rate of 85%-95% can reflect the information of the vast majority of the data[12] and effectively reduce the dimensionality of the original data, so principal components with a cumulative contribution rate of 90% or more are used as inputs to the model in this paper.
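Continuing the sketch above, the reduced dimensionality can be chosen as the smallest $k$ whose cumulative contribution rate (Eq. (3)) reaches the 90% threshold used in this paper; select_k is a hypothetical helper, not a name from the paper.

```python
import numpy as np

def select_k(eigvals, threshold=0.90):
    """Smallest k whose cumulative variance contribution reaches the threshold."""
    ratios = np.sort(eigvals)[::-1] / eigvals.sum()   # per-component V_k
    return int(np.searchsorted(np.cumsum(ratios), threshold) + 1)
```

scikit-learn's PCA implements the same criterion directly via PCA(n_components=0.90).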

    2.2 Enhanced LSTM data training module

2.2.1 Enhanced stacked LSTM network structure

    Deep recurrent neural networks provide empirical advantages beyond shallow networks[13], and in this paper, we propose an enhanced stacked LSTM network model that is compatible with the nonlinearity and complexity of time series data. On the one hand, we establish an enhanced stacked LSTM network to discriminate faulty samples in noisy environments by deepening the number of LSTM layers. On the other hand, we increase the number of fully connected layers and adjust the number of neurons to enhance the fit of the network to the data.

Figure 8 shows the enhanced stacked LSTM network model proposed in this paper, where LSTM blocks are connected one by one in a deep recurrent network to form the stacked LSTM layers and combine the advantages of the individual LSTM layers. Each layer contains multiple cells to process the timing data. In the stacked LSTM layer, the sensor data $X_t$ at time $t$ is introduced into an LSTM block together with the previous hidden state $h_{t-1}$. The hidden state $h_t$ at time $t$ is computed as described below, then moved to the next time step and also passed up to the second LSTM layer. Such a structure allows the states of each layer to operate at different time scales. It also allows layering in time-based processes and stores data sequence structures more naturally, and the effect of the enhanced multilayer network model is evident when there are long-term dependencies in production equipment data or temporal datasets from multivariate sensors[14-15].

    Fig.8 Enhanced stacked LSTM network model

    Each LSTM cell is computed as follows.

Firstly, the reduced-dimensional sensor sequence, i.e., the column vector $X_t$ containing $k$ elements, is passed into the LSTM structure at each moment. The forgetting gate decides which information in the sensor sequence data should be discarded or retained at that moment. The output value obtained is between 0 and 1, and the closer the value is to 1, the more the information should be retained:

$$f_t = \sigma(W_f X_t + U_f h_{t-1} + b_f), \qquad (4)$$

where $f_t$ is the value of the forgetting gate; $W_f$ and $U_f$ are the weight matrices of the forgetting gate; $h_{t-1}$ denotes the hidden state at time $t-1$; $b_f$ denotes the bias vector of the forgetting gate. $W_f$ and $U_f$ are used respectively to control the degree of forgetting for the information from the input state at time $t$ and the hidden state at time $t-1$.

The input gate is used to update the cell state. The information of $h_{t-1}$ is concatenated with the sensor sequence $X_t$ input at the current moment and passed to the sigmoid function to adjust the degree of retention of the hidden information, as shown in Eq. (5).

$$i_t = \sigma(W_i X_t + U_i h_{t-1} + b_i), \qquad (5)$$

where $i_t$ is the value of the input gate; $W_i$ and $U_i$ are the weight matrices of the input gate; $b_i$ denotes the bias vector of the input gate. $W_i$ and $U_i$ are used respectively to control the degree of retention for the information from the input state at time $t$ and the hidden state at time $t-1$.

$$C'_t = \tanh(W_c X_t + U_c h_{t-1}), \qquad (6)$$

$$C_t = f_t C_{t-1} + i_t C'_t, \qquad (7)$$

where $C_t$ and $C'_t$ are the updated cell state and the candidate update content, respectively; $W_c$ and $U_c$ represent the levels of attention paid to the input state at time $t$ and the hidden state at time $t-1$, which are used to subsequently update the content based on the forgetting gate and the input gate.

Finally, after the updated value of the cell state $C_t$ is obtained, the state $h_t$ at the current time $t$ can be generated based on the new cell state; it is used to calculate the output of the current model or to participate in the input of the next step. The output gate of the hidden layer is expressed as Eq. (8), and the output value of the LSTM network at moment $t$ can then be obtained by Eq. (9):

$$O_t = \sigma(W_o X_t + U_o h_{t-1} + b_o), \qquad (8)$$

$$h_t = O_t \tanh(C_t), \qquad (9)$$

where $O_t$ denotes the value of the output gate; $\sigma$ denotes the sigmoid function; $U_o$ is the weight matrix of the output gate; $b_o$ denotes the bias vector. The hidden state $h_t$ stores all useful information at time $t$ and before.
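For concreteness, Eqs. (4)-(9) can be restated as a single time-step function; this NumPy sketch is a didactic restatement of the equations, with assumed dictionary-based parameter containers, not the code used in the experiments.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_cell_step(x_t, h_prev, c_prev, W, U, b):
    """One LSTM time step; W, U, b are dicts of gate parameters (assumed shapes)."""
    f_t = sigmoid(W["f"] @ x_t + U["f"] @ h_prev + b["f"])  # Eq. (4): forget gate
    i_t = sigmoid(W["i"] @ x_t + U["i"] @ h_prev + b["i"])  # Eq. (5): input gate
    c_cand = np.tanh(W["c"] @ x_t + U["c"] @ h_prev)        # Eq. (6): candidate state
    c_t = f_t * c_prev + i_t * c_cand                       # Eq. (7): cell update
    o_t = sigmoid(W["o"] @ x_t + U["o"] @ h_prev + b["o"])  # Eq. (8): output gate
    h_t = o_t * np.tanh(c_t)                                # Eq. (9): hidden state
    return h_t, c_t
```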

Since the output of a traditional LSTM network is mapped to a single fully connected layer, or the number of neurons is too small, its final fitting ability is limited; when complex functions need to be fitted, it often shows insufficient fitting ability, slow convergence in the training phase, severe fluctuation, and large prediction error. Therefore, we add a multilayer fully connected network to the output layer of the stacked LSTM network model and adjust the number of neurons to enhance the fitting ability of the network, thus forming an enhanced stacked LSTM network model and reducing the volatility and error of the prediction results.
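A minimal Keras sketch of such an enhanced stacked LSTM is shown below. The layer and neuron counts follow the GA-optimized values reported in Section 3.2 (two LSTM layers with 195 and 126 units, two fully connected layers with 68 and 48 units); the activations and loss are illustrative assumptions.

```python
from tensorflow import keras
from tensorflow.keras import layers

def build_enhanced_lstm(t, k):
    """Stacked LSTM with a multilayer fully connected head; t time steps, k features."""
    model = keras.Sequential([
        layers.Input(shape=(t, k)),
        layers.LSTM(195, return_sequences=True),  # stacked LSTM layer 1
        layers.LSTM(126),                         # stacked LSTM layer 2
        layers.Dense(68, activation="relu"),      # enhanced fully connected layers
        layers.Dense(48, activation="relu"),
        layers.Dense(1, activation="sigmoid"),    # output: 0 = normal, 1 = abnormal
    ])
    model.compile(optimizer="adam",               # Adam, as recommended in 2.2.2.1
                  loss="binary_crossentropy",
                  metrics=[keras.metrics.Precision(), keras.metrics.Recall()])
    return model
```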

2.2.2 Parameter debugging method based on the enhanced stacked LSTM network model

    2.2.2.1 Optimization algorithm

The Adam algorithm is a good default choice for gradient descent optimization because it combines the best properties of the AdaGrad and RMSProp methods and automatically adapts the learning rate for each parameter (weight) in the model.

    2.2.2.2 Activation functions

The activation function is usually fixed by the framework and by the scale of the input or output layer. For example, an LSTM uses a sigmoid activation function for its input, so the input is usually scaled to the range from 0 to 1.
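For example, the inputs can be scaled to the [0, 1] range before training; a sketch using scikit-learn's MinMaxScaler, with placeholder arrays:

```python
import numpy as np
from sklearn.preprocessing import MinMaxScaler

X_train = np.random.rand(100, 5) * 50.0   # placeholder sensor data (assumption)
X_test = np.random.rand(20, 5) * 50.0

scaler = MinMaxScaler(feature_range=(0, 1))
X_train_scaled = scaler.fit_transform(X_train)  # fit on training data only
X_test_scaled = scaler.transform(X_test)        # reuse the training statistics
```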

    2.2.2.3 Regularization

An LSTM can converge quickly, or even overfit, on some sequence prediction problems. To address this, regularization methods can be used. The dropout method randomly skips neurons during training, forcing the remaining neurons in the layer to take over the representation.
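In Keras, for instance, dropout can be attached directly to an LSTM layer; the 0.2 rates below are illustrative rather than values from the paper:

```python
from tensorflow.keras import layers

lstm = layers.LSTM(126,
                   dropout=0.2,            # randomly skips input connections
                   recurrent_dropout=0.2)  # randomly skips recurrent connections
```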

2.2.3 Data training based on the enhanced LSTM network

    2.2.3.1 Data input

After the device sensor data are reduced in dimensionality by PCA in the data processing layer, the result is $k$-dimensional temporal data $X_t$, each a linear combination of multiple sensors. The divided sensor data $X_t$ at different moments are passed into the LSTM network, and the training process based on the LSTM network is shown in Fig.9. Assuming that the incoming time series is the entire sensor dataset, there are $t$ sequences, each consisting of a column vector of $k$ elements, and because there are $t$ sequences, the LSTM requires $t$ time steps (i.e., $t$ self-loops) to process the sequence.

    Fig.9 Model training based on enhanced LSTM network
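As a sketch of this input step, the reduced series can be cut into overlapping windows of $t$ time steps, each a sequence of $k$-element vectors; the window length and array sizes below are illustrative assumptions:

```python
import numpy as np

def make_windows(Z, t):
    """Z: (T, k) array of PCA-reduced sensor data; returns (T - t + 1, t, k) windows."""
    return np.stack([Z[i:i + t] for i in range(len(Z) - t + 1)])

Z = np.random.rand(1000, 3)     # placeholder reduced data (assumption)
X_seq = make_windows(Z, t=60)   # shape: (941, 60, 3), ready as LSTM input
```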

    2.2.3.2 LSTM layer training

The incoming sensor data $X_{t-1}$ are processed by the weights $W$ and bias $b$ in the LSTM gate structure to obtain the hidden state $h_t$ passed to the next step, which connects the hidden layer output of the previous step with the input $X_t$ of this layer and obtains $h_{t+1}$ by calculation. Then $h_{t+1}$ is passed to the next step and rises to the second LSTM layer. After multilayer, multi-step processing, the output value of the LSTM network layer is finally obtained.

    2.2.3.3 Abnormal output

After the relationship between the sensor sequence data and equipment system faults is learned through the LSTM layers, the learned information is passed to the subsequent multilayer fully connected network for processing. It combines all the features learned by the preceding LSTM layers and passes them to the output layer through weighted summation; if the state of the equipment represented by the set of sensor data is judged to be normal, the output result is 0.

    2.3 GA-based parameter optimization module

The number of fully connected layers, the number of stacked LSTM layers, and the number of neurons in the enhanced LSTM network model are all important factors affecting the training effect of the network, and the optimal parameters need to be chosen to achieve the best training effect. The GA is a global optimization algorithm that overcomes the drawback of general numerical solution methods based on iterative operations, which tend to fall into local minima and become “dead loops” that prevent further progress[16-17]. The GA can be used to obtain optimal solutions for the parameters of the enhanced LSTM network to improve the accuracy of anomaly detection[18-20]. In this paper, the GA is used to select the number of LSTM layers, the number of fully connected layers, and the number of neurons of the enhanced LSTM to find the network parameters with the best results. The specific steps of the algorithm are shown in Fig.10.

    Fig.10 GA steps

The input of the GA is the input dataset, the test dataset, the population size, the crossover rate, and the mutation rate. The output is the historical optimal solution OpPara. The specific algorithm flow is as follows.

1) Selection of a suitable encoding scheme. Unlike traditional GAs that use binary encoding, each individual is a one-dimensional array consisting of the individual parameters, and $n$ individuals form a population.

    2) Population initialization. Twenty populations are selected using repeated random sampling for population initialization.

3) Calculation of the fitness value of each individual in the population. The higher the fitness value, the higher the probability of being selected as the final network parameters, indicating that the individual is more genetically competitive.

4) Crossover and mutation. The individual genes, i.e., the parameters, are crossed over and mutated pairwise with a certain probability; the number of neurons is affected when the layer-count genes are crossed over or mutated.

5) Repetition of steps 3) and 4). Repeat these steps until the number of iterations reaches the specified number or the fitness reaches the specified value.

6) Individual selection. Select the individual with the highest fitness as the optimal solution to the problem, in other words, the set of LSTM network parameters with the highest recall. The optimal solution is output to the LSTM network and verified.

    The parameters corresponding to the optimal fitness are finally obtained and used as the result of the parameter setting for the model.
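A simplified sketch of this search loop is given below, using the paper's GA settings (population size 20, crossover rate 0.5, mutation rate 0.01, 25 generations); the chromosome layout and the fitness callable, which would train the enhanced LSTM and return its recall, are assumptions for illustration.

```python
import random

# Chromosome: [LSTM layers, dense layers, six neuron counts] (assumed layout).
BOUNDS = [(1, 3), (1, 3)] + [(32, 256)] * 6

def random_individual():
    return [random.randint(lo, hi) for lo, hi in BOUNDS]

def crossover(a, b, rate=0.5):
    # Swap each gene between the two parents with probability `rate`.
    return [x if random.random() > rate else y for x, y in zip(a, b)]

def mutate(ind, rate=0.01):
    # Re-draw each gene within its bounds with probability `rate`.
    return [random.randint(*BOUNDS[i]) if random.random() < rate else g
            for i, g in enumerate(ind)]

def evolve(fitness, pop_size=20, generations=25):
    pop = [random_individual() for _ in range(pop_size)]
    best = max(pop, key=fitness)
    for _ in range(generations):
        parents = sorted(pop, key=fitness, reverse=True)[:pop_size // 2]
        pop = [mutate(crossover(random.choice(parents), random.choice(parents)))
               for _ in range(pop_size)]
        best = max(pop + [best], key=fitness)  # keep the historical best (OpPara)
    return best
```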

    3 Analysis of Experimental Results

    3.1 Evaluation indicators

In this paper, we use precision, recall, and receiver operating characteristic (ROC) curves[21] as evaluation metrics. The confusion matrix describes the ways in which the model confuses classes when making predictions. From the confusion matrix, we can obtain the numbers of positive samples where the pump is predicted as normal by the model ($C_{TP}$), faulty samples where the pump is predicted as normal by the model ($C_{FP}$), positive samples where the pump is predicted as faulty by the model ($C_{FN}$), and faulty samples where the pump is predicted as faulty by the model ($C_{TN}$). Generally speaking, the higher the accuracy and recall, the better the classifier. The calculation formulae are

$$E_{acc} = \frac{C_{TP}}{C_{TP} + C_{FP}}, \qquad (10)$$

$$E_{rec} = \frac{C_{TP}}{C_{TP} + C_{FN}}, \qquad (11)$$

where $E_{acc}$ is the prediction accuracy; $E_{rec}$ is the prediction recall.

The ROC curve is a probabilistic model for comparing the true positive rate (TPR) with the false positive rate (FPR) at different thresholds. In the ROC curve, the $y$-axis is the TPR and the $x$-axis is the FPR. The ideal ROC curve maximizes the TPR while minimizing the FPR.

$$R_{TPR} = \frac{C_{TN}}{C_{TN} + C_{FP}}, \qquad (12)$$

$$R_{FPR} = \frac{C_{FN}}{C_{FN} + C_{TP}}, \qquad (13)$$

where $R_{TPR}$ is the ratio of the number of faulty samples predicted to be faulty to the actual number of faulty samples; $R_{FPR}$ is the ratio of the number of normal samples predicted to be faulty to the actual number of normal samples.
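For reference, these quantities can be computed from model predictions with scikit-learn; the labels and scores below are placeholders, and since Eqs. (12)-(13) treat the faulty class as positive while Eqs. (10)-(11) treat the normal class as positive, the inputs to roc_curve are inverted accordingly.

```python
import numpy as np
from sklearn.metrics import precision_score, recall_score, roc_curve

y_true = np.array([1, 1, 0, 1, 0, 1])                # placeholder labels (1 = normal)
y_score = np.array([0.9, 0.8, 0.3, 0.7, 0.6, 0.95])  # placeholder model scores
y_pred = (y_score >= 0.5).astype(int)

e_acc = precision_score(y_true, y_pred)              # Eq. (10)
e_rec = recall_score(y_true, y_pred)                 # Eq. (11)
fpr, tpr, thr = roc_curve(1 - y_true, 1 - y_score)   # Eqs. (12)-(13), faulty as positive
```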

3.2 Parameter optimization of the enhanced LSTM model

The deep LSTM network structure can handle the massive and complex data from device sensors, but the number of fully connected layers, the number of stacked LSTM layers, and the number of neurons are all important factors that affect the training effect of the network, and the optimal parameters need to be chosen to achieve the best training effect. To optimize the parameters of the enhanced LSTM recurrent neural network in this paper, we applied the GA to fine-tune the parameters of the LSTM for anomaly prediction.

Each individual is initialized to correspond to a set of network parameters of the LSTM model, and each parameter corresponds to a gene of the individual. For example, for the individual (1, 2, 95, 227, 56), the first three parameters respectively represent the number of LSTM layers, the number of fully connected layers, and the number of neurons in the one LSTM layer; the last two parameters represent the numbers of neurons in the two fully connected layers. The range of LSTM layers and fully connected layers is 1 to 3, and the number of neurons is between 32 and 256; each chromosome has eight genes, and chromosomes that do not reach this length are padded with zeros. The initialization parameters of the GA optimization include a population size of 20, a crossover rate of 0.5, a mutation rate of 0.01, and 25 iterations. In each training round, the parameters obtained in that round and the corresponding fitness are recorded. The number of generations optimized by the GA is 25, and the data for each generation are shown in Table 1.

    Table 1 Network parameters with the highest adaptability for each generation

The parameters corresponding to the optimal fitness are finally obtained and used as the results of the parameter setting of the model. According to Table 1, the final network parameters were set as follows: 2 LSTM layers with 195 and 126 neurons, respectively, and 2 fully connected layers with 68 and 48 neurons, respectively. The recall rate corresponding to this set of parameters was 97.07%, the highest among all generations. The enhanced stacked LSTM network model improved by the GA is shown in Fig.11.

    3.3 Accuracy and recall analysis of anomaly detection

Figure 12 shows the accuracy and recall of the naive Bayes (NB), recurrent neural network (RNN), LSTM, and bi-directional LSTM (BiLSTM) algorithms before and after dimensionality reduction. The experimental results show that the network training accuracy and recall of the LSTM are 93.56% and 92.45%, respectively, outperforming the NB, RNN and BiLSTM networks: the accuracy is 16.50% higher and the recall 13.95% higher than those of NB; 1.41% and 2.43% higher than those of RNN; and 0.42% and 0.58% higher than those of BiLSTM. This indicates that the LSTM network is superior to the NB, RNN and BiLSTM networks for processing pump sensor timing data.

After PCA dimensionality reduction, the accuracy of all three algorithms for pump anomaly detection decreased to different degrees, but the efficiency improved. Figure 12 shows the combined results of several experiments. Compared with the LSTM, the accuracy and recall of the PCA-LSTM decrease by 1.50% and 2.44%, respectively. In terms of training time, the results are shown in Fig.13, and there is a significant improvement in efficiency: an average of 24 s is saved after dimensionality reduction, indicating the superiority of dimensionality reduction for efficiency.

    Fig.13 Average dimensionality reduction time of multi-class algorithms

Table 2 shows the accuracy and recall of the different algorithms after the GA improvement. The experimental results show that the improved algorithm has higher accuracy and better prediction based on the LSTM model, with an accuracy of 96.75% and a recall of 97.07%, which are 3.19% and 4.62% higher than those of the original LSTM, respectively.

    Table 2 Effect of different algorithms after the GA improvement

    3.4 ROC curve analysis

The ROC curves of the original model, after dimensionality reduction, and after GA improvement are shown in Fig.14. Figure 14(a) shows the ROC curves under the original model. The results indicate that the training accuracy is higher due to the advantage of the LSTM in processing time series data; therefore, the LSTM has a larger area under the curve (AUC) and better results compared with the RNN and NB.

Figure 14(b) shows the ROC curves after parameter optimization by the GA. The results indicate that the AUC has improved significantly due to the adjustment of the network structure parameters: the AUC value of the LSTM algorithm is 0.08 higher than that after dimensionality reduction and 0.05 higher than that of the original LSTM. This shows that the GA has a good effect on our enhanced stacked LSTM network model.

    Fig.14 ROC curve results of different algorithms: (a) ROC curves from original model; (b) ROC curves after parameter optimization

    4 Conclusions

In this paper, an enhanced stacked LSTM network algorithm for device anomaly detection using sensor data on the device is proposed. The algorithm first extracts the strongly correlated variable data from the sensor data using PCA to realize the dimensionality reduction of the sensor data, performs anomaly prediction on the reduced-dimensional data with the enhanced stacked LSTM neural network, and uses the GA to optimize the parameters of the enhanced stacked LSTM neural network. The experimental validation on real pump sensor timing data shows that, compared with the traditional naive Bayes algorithm, the RNN algorithm, and the basic LSTM algorithm, the algorithm proposed in this paper has significantly improved accuracy and detection efficiency, which provides a valuable reference solution for anomaly detection of production equipment.

In the future, we will focus on improving the generality of the algorithm, exploring the types of production equipment faults, and performing more refined fault prediction under different anomalies for a more robust equipment fault detection scheme.
