
    Continuous-Time Prediction of Industrial Paste Thickener System With Differential ODE-Net

    2022-04-15 04:03:56
    IEEE/CAA Journal of Automatica Sinica, April 2022

    Zhaolin Yuan, Xiaorui Li, Di Wu, Xiaojuan Ban, Nai-Qi Wu, Hong-Ning Dai, and Hao Wang

    Abstract—It is crucial to predict the outputs of a thickening system, including the underflow concentration (UC) and mud pressure, for optimal control of the process. The proliferation of industrial sensors and the availability of thickening-system data make this possible. However, the unique properties of thickening systems, such as non-linearities, long time delays, partially observed data, and continuous-time evolution, pose challenges in building data-driven predictive models. To address these challenges, we establish an integrated, deep-learning, continuous-time network structure that consists of a sequential encoder, a state decoder, and a derivative module to learn the deterministic state space model of thickening systems. Using a case study, we evaluate our method on a tailing thickener manufactured by FLSmidth and equipped with numerous sensors, obtaining extensive experimental results. The results demonstrate that the proposed continuous-time model with the sequential encoder achieves better prediction performance than existing discrete-time models and reduces the negative effects of long time delays by extracting features from historical system trajectories. The proposed method also demonstrates outstanding performance on both short-term and long-term prediction tasks with the two proposed derivative types.

    I. INTRODUCTION

    AS a core procedure in modern mineral separation, a thickening process produces a paste with high concentration for subsequent tailing storage or backfilling [1]–[3]. During this thickening process, an industrial paste thickener achieves solid–liquid separation based on gravity sedimentation. The purpose of the industrial paste thickener is to efficiently control the final underflow concentration (UC). Most closed-loop control systems manipulate the underflow slurry pump speed and flocculant pump speed as the inputs to stabilize the underflow concentration within its specified range during operation. Previous studies [4], [5] showed that model predictive control (MPC) can facilitate the control of thickening systems owing to its high robustness and applicability. Hence, the accurate prediction of thickening systems has received extensive attention for the analysis and control of thickeners [5]–[7].

    A complex industrial system such as the thickening system typically has the following key features:

    1) Non-Linear System Dynamics: Most industrial systems have extremely complex high-order dynamical equations that are neither affine nor linear.

    2) Partially Observed Data: The information extracted from sensors or other available methods is incomplete. In particular, a number of unknown hidden variables exist in such systems.

    3) Influence of Long Delays: The system states are influenced by external inputs or internal states from a long preceding time window.

    4) Continuous-Time (CT) Evolution: Because real industrial systems follow various physical laws, their time evolution can be expressed via CT differential equations.

    The above features of a thickening system create challenges for predictive control, and a number of studies have addressed them. Data-driven methods are emerging as one of the most successful techniques for modeling complex processes [8]. Traditional data-based CT system prediction methods focus on fitting high-order differential equations based on sampled noisy data from real systems. However, they lack the ability to cope with partially observed data and extremely complex system dynamics. Recent advances in deep neural networks (DNNs) have shown their strengths in addressing these issues owing to their strong feature representation abilities and scalable parameter structures, leading to the wide usage of DNNs in computer vision [9]–[11], natural language processing [12], [13], time series prediction [5], [14]–[17], and fault diagnosis [18]. However, most DNN-based system-modeling methods are based on discrete time, disregarding the CT properties of a system. Discarding this prior physical information inevitably degrades model accuracy.

    Studies on prediction for control can be categorized into two types. The first type provides short-term predictions for model-based control algorithms [5], such as model predictive control (MPC). The learned system provides deterministic prior knowledge of the dynamical system, thereby approximating the infinite-horizon optimal control problem as a short-term optimization problem. The second type is mainly based on simulations, which imitate the outputs of an unknown system under a long-term feed of inputs [19]. Compared with short-term predictions, simulations require higher robustness and stability to provide long-term predictions. However, there are few studies on designing predictive models that support both short-term and long-term predictions for subsequent applications such as MPC and simulations.

    To address the above challenges, we propose a deep CT network composed of a sequential encoder, a state decoder, and a derivative module to learn the auto-regressive processes and the influences of the system inputs from real thickening data in an end-to-end manner. Specifically, the long system delay motivated us to utilize a sequential encoder to extract features from historical system trajectories. We designed the derivative module for the CT state space model based on a DNN. This module fits the non-linear CT evolution of the system and infers the non-observable information by introducing hidden states. Moreover, the problems of short-term and long-term prediction are solved by feeding historical system trajectories and system inputs of arbitrary lengths to the model after incorporating the designed non-stationary and stationary systems into the trained model. The future system outputs are then predicted. The contributions of this paper are threefold:

    1) We propose a novel deep-learning-based CT predictive model for a paste thickener. The deep learning network consists of three components: a sequential encoder, a state decoder, and a derivative module.

    2) We design two kinds of derivative modules, named stationary and non-stationary systems, to handle the short-term and long-term prediction tasks, respectively.

    3) We conduct extensive experiments on industrial data collected from a real copper mining process. The results demonstrate the outstanding performance of the proposed model in providing predictions for a thickener system with non-linear and time-delay properties. In addition, we conduct ablation studies to evaluate the effectiveness of each module in the proposed model.

    The rest of the paper is organized as follows. We briefly introduce the related work in Section II. We then present the problem formulation in Section III. We next present the CT deep sequential model in Section IV. Experimental results are shown in Section V. We then summarize the paper and discuss future research directions in Section VI.

    II. PRELIMINARIES AND RELATED WORK

    As a core device in a thickening system, a paste thickener is generally composed of a high sedimentation tank and a raking system. Fig. 1 depicts the general structure of a thickener and its key components. After the thickener is fed with flocculant and low-concentration tailing slurry, high-concentration underflow is discharged from the bottom of the thickener and is then used to produce paste in the subsequent procedures. The prediction of a thickening system refers to the estimation of future system outputs, such as the underflow concentration and mud pressure, based on the historical system trajectories and system inputs. The prediction of a thickening system is essentially similar to system identification [8]. In system identification, interpretable model structures are designed based on prior knowledge, and the parameters are determined by fitting real data. As one of the subsequent applications of identified models, prediction forecasts the system outputs according to the inputs.

    Fig. 1. Crude slurry flow with a low concentration is fed into the mix tank accompanied by flocculant. Under the effect of the flocculant, particles agglomerate to larger clumps and concentrate at the bottom. The paste thickener continuously produces underflow with a high concentration and clear water in an overflow pipe located at the top of the thickener.

    A. Prediction of Thickening Systems

    The prediction methods for thickening systems can be categorized into two types: 1) gray-box thickening system simulations and 2) black-box thickener system predictions. The gray-box simulations mainly consider the sedimentation process from a physical perspective [4], [20], [21]. Theory-based gray-box methods are exactly explainable and can be implemented effectively for specific systems. However, most of them are built on many idealized hypotheses and suffer from the complexity of slurry particle dynamics and unknown external environmental disturbances.

    In contrast, the black-box methods do not require prior assumptions or constraints. A complete parameterized model with a high degree of freedom is defined to predict the system outputs, and the optimal parameters are learned from real data. Since offline system trajectories from an industrial system are usually available and adequate, black-box methods, including latent factor models [22]–[24], imitation learning [25], and deep neural networks [26], have been widely used in current industrial systems [27]–[32].

    Moreover, Oulhiq et al. [33] used a black-box linear dynamic model with a deterministic time delay to identify an industrial thickener system using historical data. Such a parameterized linear model lacks adequate expressivity to represent the non-linear properties of a thickening system. Most recently, a random forest model was presented for modeling a paste thickening system based on a purely data-driven approach combined with evolutionary strategies [34]. Because the random forest model only fits the thickening system dynamics in a single step, it ignores the time delay and the correlations between adjacent positions in sequential inputs and outputs. A bidirectional gated recurrent unit (BiGRU) with an encoder–decoder deep recurrent neural network was introduced to model thickening systems [5]. Yuan et al. [6] proposed a dual-attention recurrent neural network to model the spatial and temporal features of a thickening system, thereby improving the prediction accuracy of the underflow concentration. However, the above studies [5], [6] only focus on discrete-time system predictions rather than a CT thickening system.

    B. Prediction of Continuous-Time Systems

    The prediction of physical systems with CT models learned directly from sampled data has the following advantages [8]: 1) transparent physical insights into the system properties, 2) inherent data filtering, and 3) the capability of dealing with non-uniformly sampled data. Among numerical schemes for solving CT differential equations, sophisticated discretization methods achieve high accuracy but suffer from enormous time and memory costs. A recently developed advanced ordinary differential equation (ODE) solver [35] introduces reverse-mode automatic differentiation of ODE solutions, thereby requiring only O(1) memory cost. Meanwhile, this method also allows the end-to-end training of ODEs within a large DNN. Moreover, Demeester [19] proposed a stationary CT state space model for predicting an input–output system when the observations from the system are unevenly sampled. Although it successfully improved the accuracy and stability of long-term predictions by introducing a stationary system, it did not take advantage of a non-stationary system in the short-term prediction task.

    Compared with the existing CT models, our model considers both short-term and long-term predictions, thereby achieving outstanding performance.

    III. FORMULATION AND NOTATION OF PASTE THICKENING SYSTEM PREDICTION

    Fig. 2. The proposed model is composed of several components. The recurrent neural network (RNN) encoder outputs the initial hidden state h(t0) according to historical sequences from the system. The derivative module is embedded into the ordinary differential equation (ODE) solver to calculate the hidden state at an arbitrary time. A parallel interpolation mechanism is embedded into the derivative module, which interpolates discrete input sequences to a CT series. Finally, the state decoder module transforms the hidden state into the predicted system output.

    IV. CONTINUOUS-TIME DEEP SEQUENTIAL MODEL FOR THICKENING SYSTEM PREDICTIONS

    A. RNN for Encoding Historical Sequences

    According to industrial experience, we use the system delay as prior knowledge, denoted by Td. If we assume uniform sampling of the sensors with a sampling interval Ts, then Ny and Nx can be estimated as Ny = Nx = N = Td/Ts. The influence of the parameter N on the model accuracy is examined in Section V. In a thickening system, the correlations between the current system state and the historical trajectories are mostly compressed in the short term. This property encourages us to use a simple, unidirectional RNN to encode the historical trajectories of the thickening system. The hidden state h(t0) obtained from the encoder carries all necessary information from the historical trajectories and serves as the initial state of the solved ODE.
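    As a minimal sketch of this window-length estimate (Td = 160 minutes is an assumed illustrative value, chosen to be consistent with the 2-minute sampling interval and the 2–3 h delay range discussed later; it is not a measured parameter from the paper):

```python
# Window-length estimate N = Td / Ts, a minimal sketch.
Td = 160  # system time delay in minutes (assumed illustrative value)
Ts = 2    # sampling interval in minutes (matches the dataset description)
N = Td // Ts  # Ny = Nx = N
```

    With these numbers, N = 80, which matches the encoder length used in the experiments.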

    B. ODE Solver for Modeling State Space

    We employ the parameterized CT state space model to represent the relations between the system inputs, hidden states, and outputs:

    h˙(t) = d(h(t), x(t)),  y(t) = Dec(h(t))

    where d(·) is the derivative module and Dec(·) is the state decoder described below.

    Generally, an ODE solver with a lower error tolerance increases the frequency of calls to the differential function d, increasing the time consumption but yielding higher accuracy. This guideline also holds when we construct a neural ODE network to fit sequential datasets. Detailed comparisons of the time cost and accuracy are shown in Section V.
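    The accuracy/cost trade-off between low-order and high-order solvers can be illustrated on a toy ODE with a known solution. This sketch is independent of the paper's trained network: it uses a simple linear derivative dh/dt = −h purely for demonstration, and counts derivative evaluations implicitly (one per Euler step, four per RK4 step):

```python
import math

def euler_step(f, t, h, dt):
    # One Euler step: a single derivative evaluation per step.
    return h + dt * f(t, h)

def rk4_step(f, t, h, dt):
    # One classical fourth-order Runge-Kutta step: four derivative evaluations.
    k1 = f(t, h)
    k2 = f(t + dt / 2, h + dt * k1 / 2)
    k3 = f(t + dt / 2, h + dt * k2 / 2)
    k4 = f(t + dt, h + dt * k3)
    return h + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6

def solve(step, f, h0, t0, t1, n_steps):
    # Integrate from t0 to t1 with a fixed number of steps.
    h, t = h0, t0
    dt = (t1 - t0) / n_steps
    for _ in range(n_steps):
        h = step(f, t, h, dt)
        t += dt
    return h

f = lambda t, h: -h  # toy ODE dh/dt = -h, exact solution exp(-t)
exact = math.exp(-1.0)
err_euler = abs(solve(euler_step, f, 1.0, 0.0, 1.0, 20) - exact)
err_rk4 = abs(solve(rk4_step, f, 1.0, 0.0, 1.0, 20) - exact)
```

    With the same number of steps, RK4 is far more accurate than Euler but costs four derivative evaluations per step instead of one, mirroring the trade-off reported in Section V.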

    It is worth investigating a suitable structure for d. The most intuitive solution is to employ a basic neural network to estimate the derivative, which we name the non-stationary model.

    Non-Stationary:

    h˙(t) = MLP(h(t), x(t))    (11)

    where MLP(·) denotes a multi-layer perceptron. The combination of a non-stationary system with an ODE solver has a strong similarity to residual connections, which have been widely used in other advanced deep networks [35].

    In the field of stochastic process analysis, a non-stationary system is a stochastic process whose mean and covariance vary with respect to time [40]. Differencing [41] is an effective way to make non-stationary time series stationary by eliminating trend and seasonality. Generally, a thickening system has strong trends in the underflow concentration, mud pressure, and other core variables. The thickening system approximates a non-stationary system, indicating that the differencing operation can improve the fitting accuracy. In (11), the derivative module intrinsically learns the first-order difference of the hidden states in the latent space. In contrast to differencing the system outputs directly, a model that differences the hidden states has an equivalent or stronger ability to represent a non-stationary system of first or even higher order. However, the non-stationary system (11) also suffers from a severe problem when handling long-term prediction tasks. When solving an ODE over long intervals, repetitive accumulation over a CT range can lead to a significant magnitude increase of the hidden states. Consequently, the estimation error grows accordingly, making it difficult for the decoder to produce accurate system outputs.
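    A minimal sketch of this accumulation behavior, using a hypothetical linear stand-in for the learned MLP derivative (the real module is a trained network): each Euler step of the non-stationary system is a residual-style update, so the hidden state is built up from repeated increments, and any error in those increments accumulates over the horizon:

```python
def derivative_stub(h, x):
    # Hypothetical stand-in for the learned MLP derivative d(h, x).
    return 0.5 * x - 0.1 * h

def euler_unroll(h0, xs, dt):
    # Euler-discretized non-stationary system: each step is the
    # residual-style update h_{k+1} = h_k + dt * d(h_k, x_k).
    h = h0
    for x in xs:
        h = h + dt * derivative_stub(h, x)
    return h
```

    Over short horizons the state changes slowly, but over long horizons the accumulated increments, and any error in them, keep growing, which is exactly the long-term failure mode described above.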

    Therefore, we devise another derivative module, namely, the stationary system, to handle the long-term prediction problem. In particular, we have

    Stationary:

    h˙(t) = (GRU(x(t), h(t)) − h(t)) / μt    (12)

    In a stationary system, we use a GRU to construct the derivative module because it has a strong ability to carry long-term information. In a non-stationary system, consecutive accumulations diffuse the hidden state over an unconstrained range; thus, we employ the MLP to learn the first-order difference h˙(t) directly from h(t) and x(t).

    C. Parallel Spline Interpolation

    Note that the calculation of h˙(t) in (11) and (12) depends on the external input x(t), which may not exist in our dataset. The external input sequences X_M in the training data are discrete, while the computation of the ODE needs x(t) over a CT range. Before each forward pass of the network, it is necessary to interpolate the external inputs to a continuous form. Deep networks are typically trained in mini-batches to take advantage of efficient data-parallel graphics processing unit (GPU) operations. Therefore, we implement a parallel spline interpolation mechanism on top of PyTorch, a well-known deep learning framework.

    In our dataset, the external input data are evenly sampled, which simplifies the implementation of the parallel interpolation. To simplify the explanation, we assume that the dimension of the external input is equal to 1.
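    As a simplified sketch of turning evenly sampled discrete inputs into a continuous-time x(t): the paper uses higher-order spline interpolation batched on the GPU, while the pure-Python version below uses piecewise-linear interpolation over a 1-D sequence just to illustrate the idea:

```python
def interp_input(xs, dt, t):
    # Piecewise-linear evaluation of x(t) from evenly sampled values xs
    # taken at times 0, dt, 2*dt, ... (linear stands in for the paper's
    # higher-order splines).
    i = min(int(t / dt), len(xs) - 2)  # clamp so the last segment covers t = end
    frac = t / dt - i
    return xs[i] + frac * (xs[i + 1] - xs[i])
```

    An ODE solver can then query x(t) at arbitrary intermediate times between the sampled points.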

    D. State Decoder

    The state decoder mechanism is essentially a fully connected network. We therefore have the following equation to represent the output:

    y(t) = W2 tanh(W1 h(t) + b1) + b2

    Compared with other state space models that employ only a single matrix for decoding, the nonlinear decoder is chosen because the accumulative form in (11) causes the range of the input h(t) to be non-deterministic. The activation function tanh(·) constrains the output of the decoder to a reasonable range.
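    A minimal sketch of such a bounded nonlinear decoder, with illustrative placeholder weights rather than trained values:

```python
import math

def decode(h, w1, b1, w2, b2):
    # Nonlinear decoder sketch: hidden linear map -> tanh -> output linear map.
    # tanh bounds the hidden activation to (-1, 1), so |y| <= |w2| + |b2|
    # no matter how large the hidden state h grows.
    hidden = math.tanh(sum(w * v for w, v in zip(w1, h)) + b1)
    return w2 * hidden + b2
```

    Even for an extremely large hidden state, the output stays in a controlled range, which is the property the text relies on when hidden states grow during long rollouts.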

    E. Model Training

    Since all of the operations of the ODE solver in our model are smooth and differentiable, we can train the complete model by the standard back-propagation algorithm with the loss function defined as follows:

    L = (1/M) Σ_{k=1}^{M} ||ŷ(t_k) − y(t_k)||²

    where ŷ(t_k) is the predicted output and y(t_k) is the measured system output.

    V. EXPERIMENTAL RESULTS

    This section presents experimental results for the proposed method on a dataset from real thickening systems. We mainly investigate three issues:

    RQ1: What are the advantages of employing a CT deep sequential network with a high-accuracy ODE solver for modeling a thickening system?

    RQ2: What are the pros and cons of using stationary and non-stationary systems in prediction tasks?

    RQ3: How do the different interpolation methods and the sequential encoder affect the accuracy of the proposed CT model?

    We first describe the dataset, the hyper-parameters of the model, and the training and test configurations. We then present the detailed experimental results.

    A. Thickening System Dataset

    For our experiments, the dataset was collected from a paste thickener manufactured by FLSmidth at NFC Africa Mining PLC in the Copperbelt Province of Zambia. Fig. 4 illustrates the two identical thickeners used in our experiments. They are used to concentrate copper tailings to produce paste in the backfilling station. Both devices operate in closed-loop mode with PID controllers.

    Some key technical parameters of the studied thickener are listed in Table I.

    The measured data are sampled evenly at two-minute intervals from May 2018 to February 2019. A short excerpt of the original dataset is shown in Table II. The collected data come from seven monitoring sensors, corresponding to y(k) and x(k) as defined in Section III. After deleting the records corresponding to periods when the system was out of service, 24 673 data points remain.

    Fig. 3. Illustration of the process of building the training, validation, and test datasets. An independent data tuple for training or testing is composed of four vector sequences. X[i:i+N] and Y[i:i+N] represent the historical trajectories in the conditioning range. X[i+N:i+N+M] and X[i+N:i+N+L] represent the input sequences, which have the same length as the predicted sequences. Y[i+N:i+N+M] and Y[i+N:i+N+L] represent the real system outputs; the former is used to compute the optimized loss in training, and the latter is used only in the testing and validation phases to evaluate the prediction accuracy.

    Fig. 4. The figures illustrate the two identical paste thickeners in our experimental mining station, including one primary and one alternate thickener. Both devices operate in closed-loop mode with proportional-integral-derivative (PID) controllers.

    TABLE I SOME KEY TECHNICAL PARAMETERS OF THE THICKENER

    We employ the first 70% of the entire dataset to train the model. In the remaining 30%, the first 15% is used for validation to determine the best training epochs, and the other 15% is the test dataset for evaluating the model accuracy. By splitting and building the input-output sequential tuples according to Fig. 3, there are 17 131 tuples for training, and 3561, 3421, and 3121 tuples for testing and validation for L = 60, 200, and 500, respectively. All the datasets are normalized to standard normal distributions with a unified mean and variance before the training and test phases.
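    The sliding-window tuple count can be sketched as below; this is an illustrative formula with toy numbers, not the paper's exact dataset bookkeeping, but it shows why longer prediction horizons L leave fewer tuples:

```python
def count_tuples(total_points, n_hist, l_pred):
    # Number of sliding-window (history, future) tuples from a series of
    # total_points samples, where each tuple needs n_hist conditioning
    # steps followed by l_pred prediction steps.
    return max(total_points - (n_hist + l_pred) + 1, 0)
```

    Each window start position yields one tuple, so increasing either the history length or the prediction length reduces the number of usable tuples.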

    B. Experimental Setup

    We use mini-batch stochastic gradient descent (SGD) with the Adam optimizer [42] to train the models. The batch size is 512, and the learning rate is 0.001 with an exponential decay; the decay rate is 0.95, and the decay period is 10 epochs. The size of the hidden state h(t) in the ODE is 32. The RNN encoder module has a single hidden layer of size 32, consistent with the size of the hidden state h(t). The size of the hidden layer in the state decoder is 64. For both of the adaptive ODE solvers, the time for solving the ordinary differential equations increases as the tolerance of the approximation error is reduced. To balance time cost and accuracy, we set the relative tolerance to 1E–4 and the absolute tolerance to 1E–5 in all experiments.

    During the training procedure, the length of the historical sequences, denoted by N, is 80, and the length of the predicted outputs, denoted by M, is 60. The best-performing model on the validation dataset is chosen for further evaluation on the test dataset. The training and test phases were performed on a single Nvidia V100 GPU. The implementation uses the PyTorch framework. We define the CT range as 0 ≤ t ≤ Mδt for given discrete integral indices [0, 1, ..., M]. The time interval δt between adjacent data points is set to 0.1. Accordingly, the normalization factor μt in (12) is also set to 0.1. When we use the Euler approximation to solve the ODE in a stationary system, the predicted hidden state at the next time step is equal to the output of the GRU cell, corresponding to the discrete-time system:
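    The stated equivalence can be checked numerically: one Euler step of the stationary derivative with step size δt equal to the normalization factor μt returns exactly the GRU cell output. The GRU below is a hypothetical linear stand-in, not a trained cell:

```python
def gru_stub(x, h):
    # Hypothetical linear stand-in for a trained GRU cell output.
    return 0.7 * h + 0.3 * x

def stationary_derivative(x, h, mu):
    # h_dot(t) = (GRU(x, h) - h) / mu, the stationary form.
    return (gru_stub(x, h) - h) / mu

def euler_update(x, h, dt, mu):
    # One Euler step; when dt == mu, the factor mu cancels and the
    # step reduces to h + (GRU(x, h) - h) = GRU(x, h).
    return h + dt * stationary_derivative(x, h, mu)
```

    With dt = μt = 0.1, the continuous-time stationary model collapses to the familiar discrete-time GRU recurrence, which is the correspondence the text describes.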

    C. Results and Discussion

    1) Main Results

    We investigate the influence of the types of ODE solvers and system types. We select five ODE solvers: Euler, Mid-Point, fourth-order Runge-Kutta (RK4), Dormand-Prince (Dopri5) [35], and third-order Bogacki-Shampine (Bosh) [43]. We investigate the performances of these ODE solvers in both non-stationary and stationary systems. To make a trade-off between the model accuracy and time consumption, we set the relative tolerance to 1E–4 and the absolute tolerance to 1E–5. Moreover, we also consider the discrete-time deep sequential state space model (DT-State-Space), the attention-based Seq2Seq model (Attention-Seq2Seq) [5], and the Transformer [44] for comparison. The DT-State-Space model [45] employs a parameterized per-time-series linear state space model based on a recurrent neural network (RNN) to forecast probabilistic time series. The sizes of the state space and the RNN hidden layer are set to 16 and 32, respectively. The hyper-parameter settings of the Transformer and Attention-Seq2Seq are kept consistent with the original literature.

    TABLE II A TABULAR EXAMPLE OF PASTE THICKENING SYSTEM DATASET

    We conduct three groups of experiments to investigate the RRSEs, MSEs, and time consumption of the models with prediction lengths of L = 60, 200, and 500.
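    The two accuracy metrics can be computed as follows. These are the standard conventions for MSE and root relative squared error (RRSE); the paper does not spell out its exact formulas, so the definitions below are assumed:

```python
def mse(pred, target):
    # Mean squared error over a predicted sequence.
    return sum((p - t) ** 2 for p, t in zip(pred, target)) / len(pred)

def rrse(pred, target):
    # Root relative squared error: the model's squared error normalized by
    # the squared error of always predicting the target mean, then rooted.
    mean_t = sum(target) / len(target)
    num = sum((p - t) ** 2 for p, t in zip(pred, target))
    den = sum((t - mean_t) ** 2 for t in target)
    return (num / den) ** 0.5
```

    An RRSE below 1 means the model beats the naive mean predictor; a perfect prediction gives 0.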

    a) Comparison of proposed and other baseline models:

    We first examine the performance of the Attention-Seq2Seq model, the DT-State-Space model, and the Transformer, which are defined in discrete-time settings, in Table III. Although they perform competitively and outperform the proposed models when paired with the Euler ODE solver, they perform worse than the models with high-order ODE solvers, especially for long-term predictions. The results indicate that employing a CT model is consistent with the CT evolution of thickening systems, thereby improving the prediction accuracy.

    b) Comparison of different ODE solvers:

    We analyze the comparisons of the different ODE solvers for the stationary and non-stationary systems separately. When the derivative module is defined as a non-stationary system and we focus only on short-term prediction with L = 60, we find that the Euler method achieves relatively higher RRSE and MSE values (i.e., poorer prediction performance) than the other four ODE solvers, though it has a much lower time consumption. As the simplest method for solving ODEs, the Euler method evaluates the derivative network only once between two adjacent time points. Meanwhile, the Mid-Point and RK4 methods achieve higher prediction accuracies than the Euler method, since they evaluate the derivative network two and four times, respectively, between two adjacent time points. Moreover, the Dopri5 and Bosh methods achieve better accuracies, though they have higher time consumption; Dopri5 performs slightly better than Bosh. As adaptive methods in the Runge-Kutta family, the Dopri5 and Bosh methods ensure that the output is within a given tolerance of the true solution. Their time consumption for solving an ODE increases as the accuracy tolerance is decreased.

    Interestingly, as the prediction length increases, we find that the accuracy of the non-stationary models degrades progressively, and the degradation of Euler is slightly smaller than that of the other solvers. The reason for this seemingly inconsistent phenomenon is that the non-stationary system accumulates errors in long-term predictions. High-order ODE solvers evaluate the derivative module more times recursively, which accumulates more error. Not only do the high-order solvers fail to improve the accuracy of the non-stationary system in long-term predictions, they actually make it worse.

    When the derivative module is switched to a stationary system, the time consumption of the two adaptive methods, Bosh and Dopri5, increases significantly. We do not list the accuracies of Dopri5 and Bosh for the stationary systems in Table III because their extremely slow speed makes them impractical. According to the comparison of the ODE solvers, a high-order ODE solver such as RK4 yields lower fitting errors than the low-order methods while requiring more time to evaluate the ODE intensively.

    c) Comparison of stationary models and non-stationary models:

    To compare the distinctions between stationary and non-stationary models more intuitively, we further visualize the prediction performance of the non-stationary and stationary systems with different ODE solvers. Fig. 5 depicts the predicted sequences of the non-stationary and stationary systems with different ODE solvers for the short-term prediction task with L = 60. The results show that the non-stationary models outperform the stationary models in short-term prediction tasks. The estimated sequences from the non-stationary models are slightly closer to the real system output than those from the stationary models. The learning process of a non-stationary system is essentially equivalent to differencing the hidden state and employing the MLP network to learn the relatively stationary first-order difference. Furthermore, the non-stationary models predict the system outputs smoothly because the non-stationary structure limits the hidden states to changing only in a continuous and slow manner. This constraint is consistent with the properties of a slow thickening system and shrinks the search space of the model parameters to prevent overfitting.

    TABLE III ROOT RELATIVE SQUARED ERROR (RRSE), MEAN SQUARED ERROR (MSE), AND TIME CONSUMPTION OF PREDICTED UNDERFLOW CONCENTRATION

    Fig. 5. In the short-term prediction task with L=60, the non-stationary models output more stable and accurate sequences than the stationary models.

    Fig. 6 presents the experimental results of a long-term prediction task with L = 200 (similar results with L = 500 can be found in Table III). The tabular results in Table III demonstrate that the RRSE and MSE of the non-stationary models are much higher than those of the stationary ones in the long-term prediction task, which is consistent with the graphical results. Compared to the excellent results of the non-stationary models in Fig. 5, Fig. 6(a) shows that the prediction accuracy of the non-stationary system decays significantly, and the predicted outputs gradually deviate from the true outputs as the prediction length increases. In contrast, the predictions of the stationary models remain stable and close to the true system outputs, which confirms the excellent accuracy of the stationary models in the long-term prediction problem. The structure of the non-stationary ODE lets the hidden state evolve in an unconstrained and gradually expanding manner. Although we embed a tanh function in the decoder network to restrict the final prediction of the underflow concentration and pressure to a reasonable range, it is impossible for the decoder module to learn an effective mapping from an extremely large hidden state space to the system output space. Similarly, Fig. 6 also demonstrates that high-order ODE solvers, such as fourth-order Runge-Kutta, still perform slightly better than low-order solvers, such as Euler, in long-term prediction.

    We conduct five additional groups of experiments with different values of the prediction length to evaluate the prediction performance (i.e., the MSE) of the underflow concentration against the ground truth for both stationary and non-stationary systems. The results in Fig. 7 show that the non-stationary system performs better than the stationary system in the short-term prediction task (e.g., L < 100), whereas the stationary system outperforms the non-stationary system in long-term prediction tasks. For example, when L exceeds 120, the errors of the non-stationary system increase with the prediction length, while the stationary system significantly stabilizes the accumulated errors in long-term prediction.

    Fig. 6. In the long-term prediction task with L=200, Fig. 6(a) illustrates that non-stationary models only performed well in the early time horizon. In the late horizon, the predicted sequences deviated from the true system outputs significantly compared with those of the stationary models, which are shown in Figs. 6(b) and 6(c). The results also indicate that the models with high-order ODE solvers performed better than the models with low-order ODE solvers in long-term prediction tasks.

    Fig. 7. The predicted length L affects the accuracy (log10 MSE ± 2σ, computed across five runs) of the predicted underflow concentration for both stationary and non-stationary systems.

    2) Experiments for Evaluating Interpolation Order

    We next investigate the effect of the interpolation method on the prediction accuracy. We test four spline interpolation methods with different orders and compare the prediction accuracies on the test datasets. The results in Table IV demonstrate that the higher-order interpolations slightly outperform the lower-order ones. This indicates that the system input of the thickening system is a non-linear complex process, and that the information from external inputs is essential for predicting the outputs of the system. Higher-order spline interpolations exploit more correlational features from adjacent inputs and fill the unobserved intervals with better accuracy than the lower-order interpolations.
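The effect of interpolation order can be sketched with an off-the-shelf routine (scipy's `make_interp_spline`; the sinusoidal signal is a hypothetical stand-in for a real thickener input such as pump speed): a cubic spline reconstructs a smooth input from coarse samples with a much lower MSE than linear interpolation.

```python
import numpy as np
from scipy.interpolate import make_interp_spline

# Coarsely sampled control input u(t) -- a smooth, non-linear stand-in
t_obs = np.linspace(0.0, 10.0, 21)
u_obs = np.sin(t_obs)

# Dense grid where the ODE solver would query the interpolated input
t_dense = np.linspace(0.0, 10.0, 401)
u_true = np.sin(t_dense)

errs = {}
for k in (1, 3):  # linear vs. cubic spline
    spline = make_interp_spline(t_obs, u_obs, k=k)
    errs[k] = np.mean((spline(t_dense) - u_true) ** 2)
```

Because a continuous-time solver evaluates the input between sampling instants, the quality of this interpolant feeds directly into the prediction accuracy.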

    3) Ablation Experiments for Studying System Time Delays and Improvements From Sequential Encoder

    Finally, we investigate the significance of introducing the sequential encoder to confront the system time delay. We investigate the influence of N on the model accuracy. Specifically, when N is set to 1, the sequential encoder is replaced by a neural network with one hidden layer that encodes the system output x(k−1) at a single time step into the initial hidden state h(t0). When N is set to 0, the initial state h(t0) is a learnable or zero vector [19] that has no relationship with historical system trajectories. We examine the different choices of N in experiments with L=60, 200, and 500, respectively. In the experiments with L=60, the derivative module is set to be a non-stationary system with an MLP cell. We change it to a stationary system with a GRU cell when L=200 and 500. The ODE solver is the fourth-order Runge-Kutta solver for all of the models.

    The results shown in Table V demonstrate that introducing the sequential encoder to extract features from the historical sequence leads to better performance than the cases with N=1 or 0. The intuitive explanation is that the predicted output sequences have strong statistical correlations with historical system trajectories. The optimal length of the encoded sequence is approximately N=80, which is consistent with our prior experience of 2–3-h time delays in thickening systems. When the length of the input sequence exceeds this optimal value, the accuracy slightly decreases.
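A minimal sketch of such a sequential encoder, assuming a plain NumPy GRU cell with random untrained weights and biases omitted: it folds N historical observations into a single vector that can serve as the initial hidden state h(t0), and degenerates to a zero state when N=0.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class GRUEncoder:
    """Fold N past observations into one initial state h(t0)."""

    def __init__(self, in_dim, hid_dim):
        s = 0.1
        self.Wz = rng.normal(0, s, (hid_dim, in_dim + hid_dim))
        self.Wr = rng.normal(0, s, (hid_dim, in_dim + hid_dim))
        self.Wh = rng.normal(0, s, (hid_dim, in_dim + hid_dim))

    def step(self, x, h):
        xh = np.concatenate([x, h])
        z = sigmoid(self.Wz @ xh)  # update gate
        r = sigmoid(self.Wr @ xh)  # reset gate
        h_tilde = np.tanh(self.Wh @ np.concatenate([x, r * h]))
        return (1 - z) * h + z * h_tilde

    def encode(self, history):
        # history: (N, in_dim) past observations, oldest first;
        # an empty history (N = 0) returns the zero initial state.
        h = np.zeros(self.Wz.shape[0])
        for x in history:
            h = self.step(x, h)
        return h  # serves as h(t0) for the ODE rollout

enc = GRUEncoder(in_dim=4, hid_dim=8)
h0 = enc.encode(rng.normal(size=(80, 4)))  # N = 80, as in the paper
```

Feeding a longer window lets the gates accumulate delayed cause-and-effect patterns (e.g., a flocculant change whose effect on the underflow appears hours later) into h(t0) before prediction starts.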

    Intuitively, the short-term prediction task benefits more from historical system trajectories than the long-term prediction task does. As the length of the predicted sequence increases, the advantage brought by the sequential encoder decreases. In the task with L=500, the benefit of employing the sequential encoder diminishes noticeably.

    VI. CONCLUSIONS

    This paper focuses on the prediction of the outputs of a thickening system based on deep neural sequence models. We introduce a CT network composed of a sequential encoder, a state decoder, and a derivative module, with internal computation processes including interpolation and a differentiable ODE solver, to describe the complex dynamics of a thickening system. Experiments on datasets from real thickening systems demonstrate that the introduction of the sequential encoder and parallel cubic spline interpolation plays a crucial role in our model architecture. We conducted extensive experiments to evaluate the proposed models for both stationary and non-stationary systems with different ODE solvers. The results show that the non-stationary system outperforms the stationary system for short-term prediction tasks. However, the non-stationary model suffers from the accumulation of errors from the incremental calculation, thereby leading to inferior results in long-term prediction tasks. This demonstrates that the model with the non-stationary system is more suitable for being embedded in a model-based feedback controller (e.g., an MPC controller), while the stationary system avoids this problem and performs better in long-term prediction tasks. Therefore, the model with the stationary system is a better choice when a stable and robust identified system is required to predict long-term sequences (e.g., for simulations or controller testing).

    TABLE IV ACCURACY COMPARISONS WITH DIFFERENT ORDERS OF INTERPOLATIONS

    TABLE V ACCURACY COMPARISONS WITH DIFFERENT METHODS FOR GENERATING THE INITIAL HIDDEN STATE h(t0)

    In the industrial data processing field, it is a common requirement to process unevenly spaced data. Although the dataset employed in this paper is sampled evenly, we can extend our method to deal with uneven data naturally by adjusting the time intervals. This extension deserves further experimental verification in future work. Another promising research direction is to extend the method to probabilistic generative models and perturbed time-varying models [46] for determining the unknown sampling noise and uncertainty in thickening systems. Moreover, it is worth investigating our method for other dynamical industrial systems.
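A minimal sketch of this extension, assuming a simple Euler rollout stands in for the full solver: handling unevenly spaced samples only requires each integration step to use the local interval between consecutive timestamps.

```python
import numpy as np

def integrate_uneven(f, h0, times):
    # Euler rollout over arbitrary, possibly non-uniform timestamps:
    # each step uses the local interval dt_k = t_{k+1} - t_k.
    h = np.asarray(h0, dtype=float)
    traj = [h]
    for t0, t1 in zip(times[:-1], times[1:]):
        h = h + (t1 - t0) * f(t0, h)
        traj.append(h)
    return np.array(traj)

f = lambda t, h: -h                           # toy dynamics dh/dt = -h
times = np.array([0.0, 0.3, 0.35, 0.9, 1.0])  # unevenly spaced samples
traj = integrate_uneven(f, np.array([1.0]), times)
```

Because the step size is read from the data rather than fixed, the same rollout accepts evenly or unevenly sampled records without modification, which is the core of the proposed extension.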
