
    Machine Learning in Chemical Engineering: Strengths, Weaknesses, Opportunities, and Threats

    Engineering 2021, Issue 9

    Maarten R. Dobbelaere a, Pieter P. Plehiers a, Ruben Van de Vijver a, Christian V. Stevens b, Kevin M. Van Geem a,*

    a Laboratory for Chemical Technology, Department of Materials, Textiles and Chemical Engineering, Ghent University, Ghent 9052, Belgium

    b SynBioC Research Group, Department of Green Chemistry and Technology, Faculty of Bioscience Engineering, Ghent University, Ghent 9000, Belgium

    Keywords: Artificial intelligence, Machine learning, Reaction engineering, Process engineering

    ABSTRACT Chemical engineers rely on models for design, research, and daily decision-making, often with potentially large financial and safety implications. Previous efforts a few decades ago to combine artificial intelligence and chemical engineering for modeling were unable to fulfill expectations. In the last five years, the increasing availability of data and computational resources has led to a resurgence in machine learning-based research. Many recent efforts have facilitated the roll-out of machine learning techniques in the research field by developing large databases, benchmarks, and representations for chemical applications, as well as new machine learning frameworks. Machine learning has significant advantages over traditional modeling techniques, including flexibility, accuracy, and execution speed. These strengths also come with weaknesses, such as the lack of interpretability of these black-box models. The greatest opportunities involve using machine learning in time-limited applications, such as real-time optimization and planning, that require high accuracy and that can build on models with a self-learning ability to recognize patterns, learn from data, and become more intelligent over time. The greatest threat in artificial intelligence research today is inappropriate use, because most chemical engineers have had limited training in computer science and data analysis. Nevertheless, machine learning will definitely become a trustworthy element in the modeling toolbox of chemical engineers.

    1. Introduction

    In 130 years of chemical engineering, mathematical modeling has been invaluable to engineers for understanding and designing chemical processes. Octave Levenspiel even stated that modeling stands out as the primary development in chemical engineering [1]. Today, in a fast-moving world, there are more challenges than ever. The ability to predict the outcomes of certain events is necessary, regardless of whether such events are related to the discovery and synthesis of active pharmaceutical ingredients for new diseases or to improvements in process efficiencies to meet stricter environmental legislation. These events range from the reaction rate of a surface reaction or the selectivity of a reaction in a reactor, to the control of the heat supply to that reactor. Predictions can be made using theoretical models, which have been constructed for centuries. The Navier–Stokes equations [2,3], which describe viscous fluid behavior, are one example of such a theoretical model. However, many of these models cannot be solved analytically for realistic systems and require a considerable amount of computational power to solve numerically. This drawback has ensured that most engineers first use simple models to describe reality. An important historical (yet still relevant) example is Prandtl's boundary layer model [4]. In computational chemistry, scientists and engineers are willing to give up some accuracy in favor of time. This willingness explains the popularity of density functional theory in comparison with higher-level-of-theory models. However, in many situations, higher accuracy is desired.

    Decades of modeling, simulations, and experiments have provided the chemical engineering community with a massive amount of data, which adds the option of making predictions from experience as an extra modeling toolkit. Machine learning models are statistical and mathematical models that can "learn" from experience and discover patterns in data without the need for explicit, rule-based programming. As a field of study, machine learning is a subset of artificial intelligence (AI). AI is the ability of machines to perform tasks that are generally linked to the behavior of intelligent beings, such as humans. As shown in Fig. 1, this field is not particularly new. The term "artificial intelligence" was coined at Dartmouth College, USA in 1956, at a summer workshop for mathematicians who aimed at developing more cognizant machines. From that point on, it took more than a decade before the first attempts were made to apply AI in chemical engineering [5]. In the 1980s, greater efforts were made in the field with the use of rule-based expert systems, which are considered to be the simplest forms of AI. By that time, the field of machine learning had started to grow, but the chemical engineering community, with some exceptions, lagged about 10 years behind in adopting it. A sudden rise in publications on AI applications in chemical engineering can be observed in the 1990s, with the adoption of clustering algorithms, genetic algorithms, and, most successfully, artificial neural networks (ANNs). Nevertheless, the trend did not persist. Venkatasubramanian [6] names the lack of powerful computing and the difficult task of creating the algorithms as possible causes for this loss of interest.

    The past decade marked a breakthrough in deep learning, a subset of machine learning that constructs ANNs to mimic the human brain. As mentioned above, ANNs gained popularity among chemical engineers in the 1990s; the difference in the deep learning era is that deep learning provides the computational means to train neural networks with multiple layers, the so-called deep neural networks. These new developments have again triggered chemical engineers, as reflected by an exponential rise in publications on the topic. In the past, AI techniques never became a standard tool in chemical engineering; it can thus be asked whether this is finally the moment. In this perspective article, we first give an overview of the three major links in machine learning today, applied to chemical engineering. In what follows, the growing potential of machine learning in chemical engineering is critically discussed; we examine the pros and cons and list possible reasons why machine learning in chemical engineering will remain "hot" or end up as a "not."

    2. Machine learning ABCs

    2.1. The ‘‘A” in machine learning ABCs: Data

    A machine learning approach consists of three important links, as illustrated in Fig. 2: data, representations, and models. The first link in a machine learning approach is the data that is used to train the model. As will be discussed later, the data used also proves to be the weakest link in the machine learning process. Virtually any dataset containing results from experiments, first-principles calculations, or complex simulation models can be used to train a model. However, because it is expensive to gather large amounts of accurate data, it is customary to make use of "big data" approaches, that is, using large databases from various existing sources. Due to the cost of real experiments, these large quantities of data are usually obtained via fast simulations or text mining from patents and published work. The increased digitalization of research provides the scientific community with a plethora of open-source and commercial databases. Examples of commonly used sources of chemical information are Reaxys [7], SciFinder [8], and ChemSpace [9] for reaction chemistry and properties; GDB-17 [10] for small drug-like molecules; and the National Institute of Standards and Technology (NIST) [11] and International Union of Pure and Applied Chemistry (IUPAC) [12] for molecular properties such as solubility. In addition, several benchmarking datasets have been created to enable comparison between different machine learning models. Examples of these benchmarks are QM9 and Alchemy, for quantum chemical properties [13]; and ESOL [14] and FreeSolv [15], for solubilities. Before using any dataset for machine learning-based modeling, several steps should be undertaken to ensure that the data is of high enough quality. The general practice of ensuring data quality, from its generation to its storage, is known as data curation. More details about the necessity and consequences of data curation are discussed further on.

    Fig. 1. Timeline of artificial intelligence, machine learning, and deep learning. The evolution of publications about AI in chemical engineering shows that a rise in publications is followed by a phase of disinterest. Currently, AI in chemical engineering is once again in a "hot" phase, and it is unclear whether or not the curve will soon flatten out.

    Fig. 2. The three major links in machine learning for chemical engineering; every part has an impact on the eventual prediction performance and should be handled carefully.

    Several differences concerning data usage exist between machine learning (and, more specifically, deep learning) methods and traditional modeling. First, ANNs learn from data and train themselves, although doing so requires large amounts of data. Therefore, training datasets generally contain tens to hundreds of thousands of data points. Second, the dataset is split into three instead of two sets: a training, a validation, and a test set. Both the training and validation sets are used in the training phase, while only the data in the training set is used for fitting. The validation set is an independent dataset that provides an unbiased evaluation of the model fit during the training phase. The test set evaluates the final model fit with unseen data and is generally the main indicator of the model quality.
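    The three-way split described above can be sketched with scikit-learn by calling train_test_split twice; the synthetic dataset and the 15% split fractions below are illustrative assumptions, not values from the text.

```python
import numpy as np
from sklearn.model_selection import train_test_split

# Toy dataset: 1000 samples with 5 features standing in for process variables.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))
y = X @ rng.normal(size=5) + rng.normal(scale=0.1, size=1000)

# Split off the test set first (15%), then carve a validation set
# (15% of the remainder) out of the data left for training.
X_trainval, X_test, y_trainval, y_test = train_test_split(
    X, y, test_size=0.15, random_state=0)
X_train, X_val, y_train, y_val = train_test_split(
    X_trainval, y_trainval, test_size=0.15, random_state=0)

print(len(X_train), len(X_val), len(X_test))  # three disjoint subsets
```

    Only the training set would be used for fitting; the validation set guides training decisions, and the test set is held back for the final quality assessment.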

    2.2. The ‘‘B” in machine learning ABCs: Representation

    A second important link in a machine learning method is how the data is represented in the model. Even when the data is already in numerical format, the selection of the variables or features that will make up the model input can have a significant impact on the model performance. This process is known as feature selection and has been the topic of several studies [16–19]. Limiting the number of selected features may reduce the computational cost of both training and executing the model, while improving the overall accuracy. This feature-selection process is of lesser importance in so-called deep learning methods, which are assumed to internally select those features that are considered to be important [20]. Then, an input layer that consists of basic process parameters (e.g., pressure, temperature, residence time), feed characterizations (e.g., distillation curves, feed compositions), or catalyst properties (e.g., surface area, calcination time) is often sufficient [21–27]. However, the task of representing the data becomes far more challenging in the case of non-numerical data, such as molecules and reactions.

    Chemical engineering tasks often involve molecules and/or chemical reactions. Creating suitable numerical representations of these data types is a developing field in itself. In computer applications, the molecular constitution is typically represented by a line-based identifier, such as the simplified molecular-input line-entry system (SMILES) [28] or the (IUPAC) international chemical identifiers (InChIs) [29], or as three-dimensional (3D) coordinates. Recently, self-referencing embedded strings (SELFIES) [30] have been developed as a molecular string representation designed for machine learning applications. The molecular information is translated into a feature vector or tensor that is used as input for a deep neural network or another machine learning model. The first way to represent a molecule is by using a (set of) well-chosen molecular descriptor(s), such as the molecular weight, dipole moment, or dielectric constant [31–33]. Another way to generate a molecular feature vector is by starting from the 3D geometry. Coulomb matrices [34], bags of bonds [35], and histograms of distances, angles, and dihedrals [36] are a few examples of geometry-based representations. However, 3D coordinates or calculated properties are generally unavailable in many applications. In such cases, the representation can be created starting from a molecular graph, resulting in so-called topology-based representations.

    In topology-based representations, only a line-based identifier is available. Encoders exist that directly translate the line-based identifier into a representation with techniques from natural language processing [37–41], but usually the line-based identifier is transformed into a feature vector in a similar fashion to geometry-based representations [42–60]. This is done by adding simple atom and bond features to the molecular graph and then transmitting the information iteratively between atoms and bonds. Circular fingerprints [42–46] based on the Morgan algorithm [61], such as the extended-connectivity fingerprint [62], were among the first molecular representations for machine learning applications. These fingerprints are so-called fixed molecular representations because they do not change during the training of the machine learning model. They remain popular in drug design for rapidly predicting the physical, chemical, and biological properties of candidate drugs [63]. Because a fixed representation represents a molecule by the same vector in every prediction task, this type of input layer seems to conflict with the definition of a deep neural network, which is assumed to learn the important features [64]. There is a growing tendency to focus on learning how to represent a molecule [47,52] instead of on human-engineering the feature vector, as it is assumed that better capturing of the features will lead to higher accuracy, with less data and at a lower computational cost [53,58].
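    The iterative idea behind circular fingerprints can be illustrated with a deliberately simplified sketch: atom "identifiers" are grown by repeatedly absorbing the identifiers of neighboring atoms, and each identifier is hashed into a fixed-length bit vector. The toy graph (ethanol), the use of MD5 as the hash, and the 64-bit length are all assumptions for illustration; a real implementation such as RDKit's Morgan fingerprint uses far richer atom invariants.

```python
import hashlib

# Toy molecular graph for ethanol (CCO): atom symbols plus an adjacency list.
atoms = ["C", "C", "O"]
bonds = {0: [1], 1: [0, 2], 2: [1]}

def circular_fingerprint(atoms, bonds, radius=2, n_bits=64):
    """Hash growing atom neighborhoods into a fixed-length bit vector."""
    fp = [0] * n_bits
    identifiers = list(atoms)                     # radius-0 identifiers
    for _ in range(radius + 1):
        for ident in identifiers:                 # set one bit per identifier
            h = int(hashlib.md5(ident.encode()).hexdigest(), 16)
            fp[h % n_bits] = 1
        # Grow each atom's identifier with its neighbors' identifiers.
        identifiers = [
            identifiers[i] + "".join(sorted(identifiers[j] for j in bonds[i]))
            for i in range(len(atoms))
        ]
    return fp

print(sum(circular_fingerprint(atoms, bonds)))  # number of set bits
```

    Because the bit vector is computed once per molecule and never updated during model training, this is a fixed representation in the sense used above.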

    Learned molecular representations are created as part of the prediction model. Starting from several initial molecular features, such as the heavy atoms, bond types, and ring features, a molecular representation is created that is updated during training. This choice also implies that a molecule has different representations depending on the prediction task. An extensive variety of learned topology-based representations [47–58] can be described using the message-passing neural network framework reviewed by Gilmer et al. [59]. The weighted transfer of atom and bond information throughout the molecular graph is characteristic of message-passing neural networks. Many different representations exist, ranging in complexity, but it is important to note that a single representation that works for all kinds of molecular properties has not (yet) been developed [65]. For a more detailed overview of the state of the art in representing molecules, readers are referred to the review by David et al. [60].

    Chemical reactions are more complex data types than molecules. Similar to line-based molecular identifiers, reactions can be identified by reaction SMILES [66] and the reaction InChI (RInChI) [67], whereas SMIRKS [66] identify reaction mechanisms. As for molecules, chemical reactions should also be vectorized in order to be useful in machine learning models. The most straightforward method is to start from the molecular descriptors (e.g., fingerprints) of the reagents and sum [68], subtract [50,69], or concatenate [70–72] them. Another approach is to learn a reaction representation based on the atoms and bonds that take an active part in the reaction [73]. Reactions can also be kept as text (typically InChI) and, with neural machine translation, the organic reaction product is then considered to be a translation of the reactants [58,74–78].

    2.3. The ‘‘C” in machine learning ABCs: Model

    The final prerequisite for a machine learning method is a modeling strategy. There is a wide variety of machine learning models to choose from. Models can be categorized in different ways, either by purpose (classification or regression) or by learning methodology (unsupervised, supervised, active, or transfer learning). Generally speaking, the term "machine learning" can be applied to any method in which correlations within datasets are implicitly modeled [79,80]. Therefore, many techniques that are currently referred to as machine learning methods were in use long before they were termed machine learning. Two such examples are Gaussian mixture modeling and principal component analysis (PCA), which originated in, respectively, the late 1800s [81] and the early 1900s [82,83]. Both examples are now regarded as unsupervised machine learning algorithms. Other similar unsupervised clustering methods are t-distributed stochastic neighbor embedding (t-SNE) [84] and density-based spatial clustering of applications with noise (DBSCAN) [85]. Fig. 3 shows the difference between unsupervised and supervised learning techniques, with a non-exhaustive list of useful algorithms for each specific task. In unsupervised learning, the algorithm does not need any "solutions" or labels to learn; it will discover patterns by itself. Unsupervised learning techniques have been used for various purposes in chemical engineering. Palkovits R and Palkovits S [86] used the k-means algorithm [87] for clustering catalysts based on their features and t-SNE for the visualization of high-dimensional catalyst representations. t-SNE is a preferred method for visualizing high-dimensional data, and not only in catalysis: it has also been used in the context of fault diagnosis in chemical processes [88,89] and for predicting reaction conditions [69,90]. PCA is another algorithm for reducing dimensionality and has been used multiple times by chemical engineers for determining the features that account for the most variance in the training set [91–97]. In addition, PCA is used for outlier detection [93,98]. Other algorithms used to detect anomalies include DBSCAN and long short-term memory (LSTM) networks [99,100]. Interested readers are referred to Géron's book [101] for a further introduction to machine learning algorithms.
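    A minimal PCA sketch, under the assumption of synthetic "process" data in which two latent factors drive five correlated measured variables, shows how the explained-variance ratio reveals the effective dimensionality:

```python
import numpy as np
from sklearn.decomposition import PCA

# Correlated data: two latent factors drive five measured variables,
# mimicking the redundancy typical of plant measurements.
rng = np.random.default_rng(0)
latent = rng.normal(size=(200, 2))
mixing = rng.normal(size=(2, 5))
X = latent @ mixing + rng.normal(scale=0.05, size=(200, 5))

# Two principal components should capture nearly all of the variance.
pca = PCA(n_components=2).fit(X)
print(pca.explained_variance_ratio_.sum())
```

    In a real application, the same ratio tells the engineer how many components are worth retaining before feeding the reduced data into a downstream model.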

    When the dataset is labeled, that is, when the correct classification of each data point is known, supervised classification methods such as decision trees (and, by extension, random forests) can be used [102,103]. Support vector machines are another possible supervised classification method [104]. Although support vector machines are commonly used for classification purposes, extensions have been made to allow regression via support vector machines as well. Regression problems require supervised or active learning methods, although, in principle, any supervised learning method can be incorporated into an active learning approach. ANNs, with all their possible variations [105–113], are the method most commonly associated with machine learning. Depending on the application, one might choose feed-forward ANNs (for feature-based classification or regression), convolutional neural networks (for image processing), or recurrent neural networks (for anomaly detection). A chemical engineer might encounter convolutional neural networks used for representing molecules (see Section 2.2) [42–60], and ANNs [32,33,47,91,114–117], support vector machines [32], or kernel ridge regression [36,118] for predicting the properties of the representations. ANNs have been applied as a black-box modeling tool for numerous applications in catalysis [23], chemical process control [119], and chemical process optimization [120]. A popular algorithm for classifying data points when the labels are known is k-nearest neighbors, which has been used, for example, for chemical process monitoring [121,122] and clustering of catalysts [86,123,124].
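    As a small supervised-classification sketch, a k-nearest-neighbors classifier can separate two labeled regimes; the two Gaussian "operating regimes" below are an invented stand-in for, say, normal versus faulty operation in process monitoring.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

# Two labeled "operating regimes", separable in two monitored variables.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 1.0, size=(100, 2)),
               rng.normal(4.0, 1.0, size=(100, 2))])
y = np.array([0] * 100 + [1] * 100)

# Fit on labeled history, then classify held-out points by majority vote
# of their five nearest neighbors.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = KNeighborsClassifier(n_neighbors=5).fit(X_tr, y_tr)
print(clf.score(X_te, y_te))  # classification accuracy on held-out points
```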

    3. Strengths

    In this and the following sections, we give a detailed overview of the strengths, weaknesses, opportunities, and threats in the use of machine learning for chemical engineers. Fig. 4 summarizes what is described in the next sections.

    Fig. 3. Overview of unsupervised and supervised machine learning algorithms; a non-exhaustive list of useful algorithms is included. GMM: Gaussian mixture modeling; LSTM: long short-term memory; t-SNE: t-distributed stochastic neighbor embedding.

    Fig. 4. Strengths, weaknesses, opportunities, and threats in using machine learning as a modeling tool in chemical engineering.

    Machine learning techniques have gained popularity in chemistry and chemical engineering for revealing patterns in data that human scientists are unable to discover. In contrast to physical models, which rely explicitly on physical equations (resulting from discovered patterns), machine learning models are not specifically programmed to solve a certain problem. For classification problems, this implies that no explicitly defined decision function must be programmed. For regression problems, this implies that no detailed model equations must be derived or parametrized [80]. These advantages allow efficient upscaling to large systems and datasets without the need for extensive computational resources. An example is the current boom in predicting quantum chemical properties using machine learning [32,33,35–37,39,40,47,49,50,52,55,65,68,71,73,115]. The usual ab initio methods often require hours or days to calculate the properties of a single molecule. Well-trained machine learning models can make accurate predictions in a fraction of a second. Of course, other fast techniques that predict accurately have already been developed, but they are limited in application range compared with machine learning models [125]. The inability to extrapolate is the major weakness of machine learning, but the application range can be extended quite easily by simply adding new data points. Active learning [126,127] makes it possible to expand the range with a minimal amount of new data, which is ideal for cases in which labeling (i.e., finding the true values of data points) is expensive, such as quantum chemical calculations [116] or chemical experiments [72,128,129]. Furthermore, existing machine learning models, such as ChemProp [47] and SchNet [130,131], are ready to use and do not require modeling experience. Machine learning in general has become very accessible with packages such as scikit-learn [132] and TensorFlow [133], and frameworks like Keras [134] (now part of TensorFlow [133]) or PyTorch [135], which reduce the training of a deep learning model to just a few lines of code. Such packages and frameworks give scientists the opportunity to shift their focus to the physical meaning of their research instead of spending precious time on developing high-level computer models.
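    The "few lines of code" claim can be illustrated with scikit-learn's built-in neural network; the nonlinear target function, network size, and iteration count below are arbitrary illustrative choices.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# A nonlinear target standing in for some property to be modeled.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(500, 2))
y = np.sin(3 * X[:, 0]) + X[:, 1]

# Defining and training a small feed-forward neural network takes two lines.
model = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000,
                     random_state=0).fit(X, y)
print(model.score(X, y))  # coefficient of determination (R^2) on training data
```

    The same two-line pattern carries over to Keras or PyTorch, where only the model definition becomes more verbose.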

    4. Weaknesses

    One of the main weaknesses of machine learning approaches is their black-box nature: given a certain input, the approaches provide an output. This situation is illustrated by Fig. 5. Based on the statistical performance of the model on a test dataset, certain statements can be made about the accuracy and reliability of the generated output. Detailed analysis of the model hyperparameters (e.g., the number of nodes in an ANN) can be tedious, but can provide some insight into the correlations that have been learned by the model. However, extracting physically meaningful explanations for certain behaviors is infeasible. Hence, regardless of their speed and accuracy, machine learning models are a poor modeling choice for explanatory studies.

    This lack of interpretability contributes to the difficulty of designing a proper machine learning model. As with any model, a machine learning model can overfit or underfit the data, with the proper model being situated somewhere in between. The risk of overfitting is typically much greater than the risk of underfitting for machine learning models, and depends on the quality and quantity of the training data, and on the complexity of the model. Overfitting is an intrinsic property of the model structure and does not depend on the actual values of the parameters; it can be compared to fitting a (noisy) linear dataset with a polynomial of very high order. In deep learning, overfitting usually manifests itself in the form of overtraining, which arises when the model is shown the same data too many times. This results in the model memorizing noise instead of capturing general patterns. Overtraining can be identified by comparing the model performance on the training data with its performance on the validation and test datasets. If the training performance is much better than the validation performance, the model may be overtrained. Finding the right number of training epochs is often a difficult exercise. In order to avoid overfitting, a machine learning model requires a stopping criterion, as in other optimization problems. In traditional modeling, where models typically involve at least some form of simplification with respect to reality, this stopping criterion is typically based on the change in performance on the training dataset, as achieving a high accuracy on the training data is the main challenge due to the simplifications. Achieving accuracy on the training dataset is typically not the issue for machine learning models; rather, the challenge mainly lies in achieving high accuracy on data the model was not directly trained on. Therefore, the stopping criterion should be based on the performance of the model on "unseen" data, the so-called validation dataset. For rigorously testing the optimized model, a completely independent dataset, the test dataset, is required, as is also common practice in traditional modeling approaches.
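    A validation-based stopping criterion of this kind can be sketched as an explicit training loop with "patience": training stops once the validation error has not improved for a fixed number of epochs. The linear toy data, the SGD model, and the patience value are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import SGDRegressor
from sklearn.metrics import mean_squared_error

# Synthetic linear data with noise; the last 100 points act as validation set.
rng = np.random.default_rng(0)
X = rng.normal(size=(400, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + rng.normal(scale=0.1, size=400)
X_tr, X_val, y_tr, y_val = X[:300], X[300:], y[:300], y[300:]

model = SGDRegressor(random_state=0)
best_val, patience, wait = np.inf, 5, 0
for epoch in range(200):
    model.partial_fit(X_tr, y_tr)              # one pass over the training set
    val_mse = mean_squared_error(y_val, model.predict(X_val))
    if val_mse < best_val - 1e-6:              # validation error still improving
        best_val, wait = val_mse, 0
    else:
        wait += 1
    if wait >= patience:                       # no improvement: stop training
        break
print(best_val)
```

    Deep learning frameworks offer the same mechanism as a built-in early-stopping callback; the test set is deliberately absent from this loop and only enters afterwards.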

    Fig. 5. Unraveling the results from black-box models. A poor result is typically related to the training set used. When testing outside of the application range, a warning signal should be raised. Good results require validation to understand what the model learns.

    A final, but often critical, weakness in machine learning approaches is the data itself. If there are too many systematic errors in the dataset, the network will make systematic errors itself, in what is known as the "garbage in, garbage out" (GIGO) principle [136]. Some forms or sources of error can be identified relatively easily, while others, once made, are much harder to find. As in every statistical method, outliers may be present. A model trained on a small dataset is more affected by outliers than one trained on a large dataset; this is why not only quality, but also quantity matters in machine learning. One possible solution to systematic errors is to manually remove these points from the dataset; it is also possible to use algorithms for anomaly detection, such as PCA [69,92], t-SNE [137,138], DBSCAN [139,140], or recurrent neural networks (LSTM networks) [111,141,142]. Recently, self-learning unsupervised neural network-based methods for anomaly detection [143] have been developed [144–146]. Next to simple outliers, there is always the possibility that the data points are actually wrong. Such data points might be one sample from an experiment in which a measurement error was made, or a whole set of experiments that were conducted incorrectly. An example could be the results from a chemical analysis in which the apparatus was not calibrated. Training on a set of systematically false data is especially dangerous, since the model will perceive the false trend as truth. Identifying such cases is possible through diligent scrutiny of the published data. This example illustrates the importance of data curation, which ensures that the data used is accurate, reliable, and reproducible.
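    Of the anomaly-detection algorithms listed above, DBSCAN is particularly direct to apply, since it labels low-density points as noise out of the box; the dataset and the eps/min_samples settings below are illustrative assumptions.

```python
import numpy as np
from sklearn.cluster import DBSCAN

# A dense cluster of "good" measurements plus two gross measurement errors.
rng = np.random.default_rng(0)
normal = rng.normal(0.0, 0.3, size=(100, 2))
outliers = np.array([[3.0, 3.0], [-3.0, 2.5]])
X = np.vstack([normal, outliers])

# DBSCAN assigns the label -1 to points in low-density regions.
labels = DBSCAN(eps=0.5, min_samples=5).fit_predict(X)
print(np.where(labels == -1)[0])  # indices flagged as anomalies
```

    Flagged points should still be inspected manually before removal, since DBSCAN cannot distinguish a measurement error from a genuinely unusual operating point.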

    Obviously, data can only be curated when it is available. Although decades of modeling, simulating, and experimenting have provided the chemical engineering community with a massive amount of data, this data is often stored in research laboratories or companies and is hence not readily available. Even when data is accessible, such as from an in-house database, the available data might not be completely useful for machine learning. The same applies to data extracted from research papers or patents using text-mining techniques [147]. The reason such data might not be useful is that, in general, only successful experiments are reported, while failed experiments remain unpublished [148]. Furthermore, experiments or operating conditions that seem nonsensical to a human chemical engineer are not performed, because the engineer has insight and scientific knowledge. Machine learning algorithms, however, do not know these boundaries, and not including such "trivial" data might lead to bad predictions.

    5. Opportunities

    The many strengths of machine learning methods present various application opportunities, and recent developments have provided ways to mitigate some of the most important criticisms. The exceptionally high execution speed of almost any trained machine learning method makes such methods well-suited for applications in which accuracy and speed within predefined system boundaries are important. Examples of such applications include feed-forward process control and high-frequency real-time optimization [149–151]. While empirical models often lack the accuracy for these applications, detailed fundamental models are rarely fast enough to avoid computational delays. Machine learning models, trained on a fundamental model, can provide similar accuracy, yet at the computational cost of an empirical model. In this case, a model is trained on high-level data and tries to predict the difference between the empirical outcome and the true value [152,153]. Unsupervised algorithms can be used in process control applications for discovering outliers in real-time data [93]. The combination of more accurate, rapid prediction and reliable industrial data offers opportunities for the creation of digital twins and better control, leading to more efficient chemical processes.
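    The surrogate idea, training a fast machine learning model on data generated by a slow fundamental model, can be sketched as follows; the fundamental_model function is a made-up placeholder for an expensive simulation, not anything from the text.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def fundamental_model(x):
    """Stand-in for an expensive simulation (e.g., a detailed kinetic model)."""
    return np.sin(x[:, 0]) + 0.5 * x[:, 1] ** 2

# Generate training data offline with the slow model...
rng = np.random.default_rng(0)
X = rng.uniform(-2, 2, size=(1000, 2))
y = fundamental_model(X)

# ...then fit a fast surrogate that can be evaluated in real time.
surrogate = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=3000,
                         random_state=0).fit(X, y)
print(surrogate.score(X, y))
```

    The expensive evaluations happen once, offline; inside a real-time optimization loop, only the millisecond-scale surrogate is called, within the sampled input range.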

    A similar observation can be made in multiscale modeling approaches, where phenomena at a variety of different scales are modeled, resulting in a complex and strongly coupled set of equations. The potential of machine learning in such applications strongly depends on the aim of the multiscale approach. If the aim is to gain fundamental insights into the lower scale phenomena, then machine learning is not advisable, due to its black-box nature. However, if the smaller scales are incorporated into the approach in order to obtain a more accurate model for larger scale phenomena, then machine learning could be used to replace the slow fundamental models for the smaller scales, without impacting the interpretability of the larger scale phenomena.

    A final opportunity lies in providing an answer to one of the main flaws of machine learning: its non-interpretability. The issue of interpretable machine learning systems is not unique to chemical engineering problems; it is encountered in nearly any decision-making system [154–157]. An attempt has been made in the field of catalysis to rationalize what exactly machine learning models learn [158]. This attempt, however, still does not provide any level of direct interpretation of the model outcomes. Fig. 5 shows a workflow for explaining why a certain result is obtained. When the model outputs a good result, such as a chemical reaction predictor giving the correct product, the model should only be trusted after examining what the prediction is based on. A first step toward interpretation of the model results is to quantify the individual prediction uncertainties [159,160], as this gives an idea of the confidence the model has in its own decisions [115,161–164]. One relatively straightforward way of doing so is via ensemble modeling. This methodology has been used for decades in weather forecasting and can be used in combination with nearly any model type [165–167]. Several algorithms have also been created to determine how much certain input features influence the output [168], or to see which training points the model uses for a certain output [169,170]. When the results seem chemically or physically unreasonable, the model should be falsified instead of validated, by finding adversarial examples [159]. Furthermore, the reason is usually found in the dataset, with erroneous data or bias being present [171,172].
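    Ensemble-based uncertainty quantification can be sketched by training several models on bootstrap resamples and using the spread of their predictions as a confidence measure; the one-dimensional toy function and the ensemble size of five are illustrative assumptions.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Noisy data from a one-dimensional nonlinear function.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(300, 1))
y = np.sin(4 * X[:, 0]) + rng.normal(scale=0.05, size=300)

# Train an ensemble on bootstrap resamples; the spread of the members'
# predictions serves as a per-point uncertainty estimate.
ensemble = []
for seed in range(5):
    idx = rng.integers(0, len(X), size=len(X))        # bootstrap sample
    member = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000,
                          random_state=seed).fit(X[idx], y[idx])
    ensemble.append(member)

X_new = np.array([[0.0], [0.9]])
preds = np.stack([m.predict(X_new) for m in ensemble])
print(preds.mean(axis=0), preds.std(axis=0))          # prediction, uncertainty
```

    A large standard deviation across the members signals a query point where the model's output should not be trusted without further scrutiny, for example a point outside the training range.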

    Another way of making machine learning models more interpretable is to include chemically relevant and well-founded information in the models themselves. Interpretation will still require a considerable amount of postprocessing, but, if human-readable inputs are used and model architectures are not too complex, it remains a feasible task. Very complex recurrent neural networks using molecular fingerprints as input are nearly impossible to interpret, as the model input is already difficult for a human to decipher. In risk management, the ‘‘as low as reasonably practicable” (ALARP) principle is often applied [173]. Analogously, one could suggest an ‘‘as simple as reasonably possible” principle in order for machine learning models to be as interpretable as possible.
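    One model-agnostic way to probe which inputs a model actually relies on, in the spirit of the feature-influence algorithms discussed above, is permutation importance: shuffle a single input column and measure how much the prediction error grows. The sketch below uses a hypothetical two-descriptor dataset and a stand-in predictor, not any specific model from the literature:

```python
import random

random.seed(1)

# Hypothetical dataset: the target depends on descriptor x1 only,
# while descriptor x2 is irrelevant.
X = [[random.uniform(0.0, 1.0), random.uniform(0.0, 1.0)] for _ in range(200)]
y = [4.0 * x1 for x1, _ in X]

def model(row):
    # Stand-in for any trained black-box predictor.
    return 4.0 * row[0]

def mse(data, targets):
    return sum((model(r) - t) ** 2 for r, t in zip(data, targets)) / len(targets)

baseline = mse(X, y)

def permutation_importance(col):
    # Shuffle one input column; the error increase measures its influence.
    shuffled = [row[:] for row in X]
    values = [row[col] for row in shuffled]
    random.shuffle(values)
    for row, v in zip(shuffled, values):
        row[col] = v
    return mse(shuffled, y) - baseline

imp_x1 = permutation_importance(0)  # large: the model relies on x1
imp_x2 = permutation_importance(1)  # zero: x2 never enters the prediction
```

The near-zero importance of x2 correctly reveals that the model never uses it; applied to a real model, unexpectedly important or unimportant descriptors are a prompt for chemical scrutiny rather than blind trust.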

    6. Threats

    The accessibility of machine learning models is both a major strength and a major threat in research. While machine learning can be used by anyone with basic programming skills, it can also be misused due to a lack of algorithmic knowledge. Today, a plethora of machine learning algorithms is available, and a tremendous number of combinations of parameters and hyperparameters is possible. Even for experienced users, machine learning remains a reasoned trial-and-error method. Since researchers are often unable to explain why one algorithm works while another does not, some see machine learning as a type of modern alchemy [174]. Moreover, the majority of published articles do not provide source code, or provide only pseudocode, which makes it impossible to reproduce the work [175,176]. Although chemistry and chemical engineering do not face a reproducibility crisis to the extent that the social sciences do [177], skepticism might grow in the community due to the increasingly irreproducible use of machine learning in the field. In Gartner’s hype cycle [178], machine learning and deep learning are beyond the peak of inflated expectations [179], and there is a risk of entering a period of disillusionment in which interest is nearly gone. Next to irresponsible use of algorithms, and possibly more dangerous, is misinterpretation of the results. The black-box nature of the algorithms makes it difficult, and often nearly impossible, to understand why a certain result is obtained. In addition, a model might give the correct outcome for the wrong reasons [159]. Therefore, researchers should bear in mind an important rule from statistics when using machine learning: correlation does not imply causation.

    Another kind of unreasonable use of machine learning occurs when the model leaves the application range it was created for. The application range is determined by the training dataset and is finite. When testing unknown data points, the researcher should check whether or not these points lie within the application range. Points outside of this range should be seen as a warning signal that the model will perform poorly [92]. The lower part of Fig. 5 depicts how the reason for obtaining a poor result is generally found by looking at the training set. Open-source applications using clustering algorithms are available for evaluating data accuracy and the application range [180].
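    A minimal sketch of such an application-range check is given below; the descriptors, the nearest-neighbour rule, and the cut-off factor are all illustrative assumptions rather than a published method. A query point is flagged when its distance to the nearest training point greatly exceeds the typical spacing within the training set:

```python
# Hypothetical training set of two-dimensional descriptors.
train = [[0.1 * i, 0.2 * i] for i in range(20)]

def dist(a, b):
    # Euclidean distance between two descriptor vectors.
    return sum((p - q) ** 2 for p, q in zip(a, b)) ** 0.5

def nearest_distance(point, data):
    return min(dist(point, d) for d in data)

# Typical spacing: each training point's distance to its closest neighbour.
spacings = [min(dist(p, q) for q in train if q is not p) for p in train]
threshold = 3.0 * max(spacings)   # assumed cut-off factor

def in_application_range(point):
    # Inside the range only if some training point is reasonably close.
    return nearest_distance(point, train) <= threshold

inside = in_application_range([0.55, 1.10])   # near the training data
outside = in_application_range([5.0, -3.0])   # far from any training point
```

Production-grade implementations replace this simple nearest-neighbour rule with clustering or density estimates, but the principle is the same: predictions for out-of-range points should be flagged, not silently returned.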

    A last threat to applying machine learning in chemical engineering research is the growing educational gap when it comes to machine learning techniques. When applying computer and data science to chemistry and chemical engineering, it is important to understand not only the tool that is used, but also the process it is applied to. Therefore, simple training on how to use machine learning algorithms might become insufficient in the near future. Instead, a solid education in AI and statistical methods will become vital in chemical engineering undergraduate programs. On the other hand, there is a need for more collaboration between computer scientists and experts on the studied topic. Whereas undertrained researchers risk misusing the computational tools, computer and data scientists might obtain suboptimal results when they are not fully familiar with the topic being studied. More interdisciplinary research and a symbiosis between machine learning experts and chemical experts might be a way to avoid a phase of disillusionment.

    7. Conclusions and perspectives

    In the past decade, machine learning has become a new tool in the chemical engineer’s toolkit. Indeed, driven by its execution speed, flexibility, and user-friendly applications, there is a strong, growing interest in machine learning among chemical engineers. On the flip side of this popularity is the risk of misusing machine learning or misinterpreting black-box results, which can potentially lead to a distrust of machine learning within the chemical engineering community. The following three recommendations can help to improve the credibility of machine learning models and turn them into an even more valuable and reliable modeling method.

    First, it is important to maintain easy and open access to data and models within the community. High-quality data and open-source models encourage researchers to use machine learning as a tool and grant them the ability to focus on their topic rather than on programming and gathering data. Second, and related to the first point, is the creation of interpretable models. Since machine learning is already established in other research areas, new models for chemical applications are often inspired by existing algorithms. Therefore, the field will benefit most from studying why a certain output is generated from a given input, rather than from maintaining black boxes. The last recommendation is to invest in a profound algorithmic education. Although chemical engineers typically have very strong mathematical and modeling skills, understanding the computer science behind the graphical interface is a prerequisite for any modeler. This should also make it possible to define the application range of the model, which is crucial for understanding when the model is interpolating and when it is extrapolating. This last point is definitely the most crucial: Machine learning models should be credible models, which can only be achieved by remaining vigilant for situations in which the model is used outside of its training set.

    Acknowledgements

    The authors acknowledge funding from the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (818607). Pieter P. Plehiers and Ruben Van de Vijver acknowledge financial support, respectively, from a doctoral (1150817N) and a postdoctoral (3E013419) fellowship from the Research Foundation—Flanders (FWO).

    Compliance with ethics guidelines

    Maarten R. Dobbelaere, Pieter P. Plehiers, Ruben Van de Vijver, Christian V. Stevens, and Kevin M. Van Geem declare that they have no conflict of interest or financial conflicts to disclose.
