
    Deep Reinforcement Learning for Multi-Phase Microstructure Design

Computers, Materials & Continua, 2021, Issue 7

Jiongzhi Yang, Srivatsa Harish, Candy Li, Hengduo Zhao, Brittney Antous and Pinar Acar

Virginia Polytechnic Institute and State University, Blacksburg, VA 24061, USA

Abstract: This paper presents a de-novo computational design method driven by deep reinforcement learning to achieve reliable predictions and optimum properties for periodic microstructures. With recent developments in 3-D printing, microstructures can be fabricated with complex geometries and material phases to achieve targeted mechanical performance. These material property enhancements are promising for improving the mechanical, thermal, and dynamic performance of multiple engineering systems, ranging from energy harvesting applications to spacecraft components. The study investigates a novel and efficient computational framework that integrates deep reinforcement learning algorithms into finite element-based material simulations to quantitatively model and design 3-D printed periodic microstructures. These algorithms focus on improving the mechanical and thermal performance of engineering components by optimizing a microstructural architecture to meet different design requirements. Additionally, the machine learning solutions demonstrated results equivalent to the physics-based simulations while significantly improving computational time efficiency. The outcomes of the project show promise for automating the design and manufacturing of microstructures to enable their fabrication in large quantities with 3-D printing technology.

    Keywords: Deep learning; reinforcement learning; microstructure design

    1 Introduction

Advancements in 3-D printing technology have enabled the automation of the design and manufacturing of high-performance engineering materials. For example, computational models and machine learning (ML) techniques have an increasingly critical role in the design of 3-D printed microstructures, as testing all possible microstructural combinations with a purely experimental approach remains infeasible. Accordingly, the present work focuses on the development and integration of ML methods into a physics-based simulation environment to enable and accelerate the design of 3-D printed multi-phase microstructures for different engineering applications. Current 3-D printing technology can produce structures that are made of multiple phases by utilizing different base materials. Multi-phase structures can provide complementary properties by balancing the desired features of the base materials. Our goal in this study is to find the optimum material properties of 3-D printed microstructures to improve the performance of engineering components under mechanical and thermo-mechanical loads and dynamic effects. The design optimization is first performed using a finite element model. Although finite element methods produce high-fidelity solutions, the computation times required for design studies may be excessive. Therefore, this study investigates the compatibility of ML and design optimization to produce computationally efficient results while maintaining a high level of accuracy.

The ML paradigm has attracted a lot of interest from materials modeling and design communities [1–7] as it enables the integration of mathematical techniques/data science tools into experiments and/or physical models. ML has been recognized as a complementary and powerful tool to accelerate the design and discovery of new-generation materials [8,9]. Prominent applications in this field include the design of 3-D printed composites involving multiple phases to improve the mechanical performance of materials [10–12] and ML-driven crystal plasticity modeling and design of polycrystalline microstructures [13–16]. Although these previous studies focused on more rigorous mechanical problems in terms of the number of design variables and complexity of the objectives, the goal of the present work is to implement different mathematical strategies for the ML-driven design of 3-D printed multi-phase microstructures. Accordingly, different ML-driven design strategies are presented in this study by utilizing Q-Value Reinforcement Learning (RL), Deep RL, and feed-forward artificial neural networks (ANN). The design optimization strategy with the integration of the ML solutions demonstrates a promising methodology that can be adapted in the future to more complicated design problems for multi-phase materials. The organization of the paper is described next. Section 2 discusses the finite element-based design of multi-phase microstructures. Section 3 presents the ML techniques and the results obtained with ML-driven solutions. A summary of the study with potential future extensions is provided in Section 4.

    2 Finite Element Modeling

3-D printed multi-phase microstructures are assumed to demonstrate orthotropic material properties. Due to the directional dependence of orthotropic properties, there are 9 independent variables that make up the compliance matrix in the global frame. For assembly and solution purposes, the inverse of the compliance matrix is used to construct the stiffness matrix, which is used in the local and global finite element systems. The stress–strain equation for a three-dimensional orthotropic material is given as follows:
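In standard Voigt notation, the orthotropic stress–strain relation takes the form below; the exact ordering of the stress and strain components in the paper's original equation may differ.

\[
\begin{Bmatrix} \sigma_{xx} \\ \sigma_{yy} \\ \sigma_{zz} \\ \tau_{yz} \\ \tau_{zx} \\ \tau_{xy} \end{Bmatrix}
=
\begin{bmatrix}
C_{11} & C_{12} & C_{13} & 0 & 0 & 0 \\
C_{12} & C_{22} & C_{23} & 0 & 0 & 0 \\
C_{13} & C_{23} & C_{33} & 0 & 0 & 0 \\
0 & 0 & 0 & C_{44} & 0 & 0 \\
0 & 0 & 0 & 0 & C_{55} & 0 \\
0 & 0 & 0 & 0 & 0 & C_{66}
\end{bmatrix}
\begin{Bmatrix} \varepsilon_{xx} \\ \varepsilon_{yy} \\ \varepsilon_{zz} \\ \gamma_{yz} \\ \gamma_{zx} \\ \gamma_{xy} \end{Bmatrix}
\]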

where the stiffness coefficients (Cij values) are calculated from the orthotropic material properties (Young's modulus values: Exx, Eyy; Poisson's ratios: νxy, νyz; and shear moduli: Gxy, Gyz). This relationship is then used to solve an orthotropic plate problem that describes a periodic multi-phase microstructure. Carbon Epoxy and Boron Epoxy are used as the base materials of the microstructure; therefore, the design is expected to demonstrate properties that range within the property values of the base materials. The Carbon Epoxy and Boron Epoxy material properties can be found in Reference [17] and are shown in Tab. 1. The microstructure is assumed to demonstrate geometric periodicity, as visualized in Fig. 1. The plate problem is broken up into 2151 quadrilateral finite elements. The von Mises stress (VMS), which is traditionally used to predict yielding, is utilized as the stress measure. Three different design problems are considered to optimize the performance of the periodic microstructure: (i) minimization of the maximum VMS value throughout the microstructure under mechanical forces; (ii) minimization of the maximum VMS value throughout the microstructure under thermo-mechanical forces; (iii) maximization of the first natural frequency of the multi-phase microstructure to achieve targeted frequency values defined for cubic satellites (CubeSats).

Table 1: Material properties of carbon epoxy and boron epoxy [17]

Figure 1: Geometric definition/loading condition for mechanical and thermomechanical problems. (a) Visualization of the periodic microstructure; (b) mechanical problem definition using periodicity

2.1 Optimization for Mechanical Performance (Problem-1)

The first problem aims to minimize the maximum VMS value experienced by the microstructure under tensile loads (Fig. 1b). The design variables are the 9 material properties shown in Tab. 1. The optimization is performed using the Sequential Quadratic Programming (SQP) algorithm. The mathematical definition of the optimization problem is given next:
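A compact sketch of this formulation (the symbols in the paper's original equation may differ) is:

\[
\min_{\mathbf{x}} \;\; \max_{e=1,\dots,N_{el}} \; \sigma^{(e)}_{VMS}(\mathbf{x})
\qquad \text{subject to} \qquad
\mathbf{x}^{L} \le \mathbf{x} \le \mathbf{x}^{U}
\]

where e indexes the finite elements of the plate model.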

The objective function is defined as the minimization of the maximum VMS value, and x denotes the design variables, which take real values between the lower and upper bounds defined by the properties of the base materials, Carbon Epoxy and Boron Epoxy.

2.2 Optimization for Thermomechanical Performance (Problem-2)

The second problem aims to minimize the maximum VMS value experienced by the microstructure under thermo-mechanical loads. The microstructure is modeled as a plate that represents a low-Earth orbit satellite component. The change of temperature is assumed to be 293 degrees according to the data presented in Reference [18]. In the optimization problem, the thermal expansion coefficients along the x- and y-directions are defined as design variables in addition to the previous 9 material properties (Eq. (3)). The total strain is defined as the summation of the mechanical and thermal strains. The optimum results obtained with SQP for the minimization of the maximum VMS value for varying boundary conditions (BCs) can be seen in Tab. 2. The optimum VMS values are found to be around 1100 MPa for different BCs.

Table 2: Optimum material properties of the microstructures in P1 and P2. The elastic modulus values are given in GPa, the VMS values are given in MPa, and the thermal expansion parameters are given in (1/K)
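A sketch of the thermo-mechanical strain decomposition and the additional design-variable bounds used in Problem-2 (the exact form of the paper's Eq. (3) may differ) is:

\[
\boldsymbol{\varepsilon}_{total} = \boldsymbol{\varepsilon}_{mech} + \boldsymbol{\alpha}\,\Delta T,
\qquad
\boldsymbol{\alpha}^{L} \le \boldsymbol{\alpha} \le \boldsymbol{\alpha}^{U}
\]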

where α denotes the thermal expansion tensor that includes two independent thermal expansion coefficients, αxx and αyy. The feasible solution space for the thermal expansion tensor components is constrained by the lower and upper bound values that are determined using the base materials, Carbon Epoxy and Boron Epoxy. The distributions of the VMS and thermal stress throughout the optimum microstructure designs having roller BCs at each side of the plate in problem-1 and problem-2 are visualized in Fig. 2.

Figure 2: VMS and thermal stress distributions throughout the optimum microstructure designs that have roller BCs. (a) VMS distribution for problem-1; (b) thermal stress distribution for problem-2

2.3 Optimization for Natural Frequency (Problem-3)

The last application in this study is the optimization of the periodic microstructure to enhance the natural frequency of a plate that is assumed to be a nano-satellite component. Particularly, the natural frequency of a 2-Unit (with dimensions of 20 cm × 10 cm × 10 cm) CubeSat structure is considered. Natural frequency is an important performance metric as undesired resonance can lead to the failure of satellite components. The most important vibration implications may occur during the rocket launch. To account for the effects of the vibrations, the natural frequency of a 2-Unit CubeSat is optimized using the finite element scheme. The example CubeSat is chosen as a sample QB50 satellite of the European Union. Accordingly, the first natural frequency of a QB50 satellite should be more than 200 Hz for safety purposes [19]. Carbon Epoxy and Boron Epoxy are used as the base materials for this 2-Unit QB50 satellite. To obtain the natural frequency value, the following dynamic system is solved with finite element modeling:
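Using the symbols defined in the text, with u denoting the displacement, the undamped free-vibration system takes the standard form below; the paper's exact statement may differ.

\[
m\,a + k\,u = 0, \qquad a = \ddot{u}
\]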

where m is the mass, k is the stiffness of the system, and a defines the acceleration. For the orthotropic material system, the solution of the natural frequency leads to the following expression:
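In matrix form, this leads to the standard generalized eigenvalue problem sketched below, with M and K denoting the assembled mass and stiffness matrices and ω the natural (angular) frequency; the paper's exact expression may differ.

\[
\left(\mathbf{K} - \omega^{2}\,\mathbf{M}\right)\boldsymbol{\phi} = \mathbf{0}
\]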

Eq. (5) demonstrates an eigenvalue problem, where the eigenvalues of the system correspond to the natural frequencies of the satellite component. The frequency of the material is maximized to ensure that it is greater than 200 Hz. With the introduction of the mass matrix in the dynamic problem, one additional design variable (the density of the material) is added to the optimization problem, as seen in Eq. (6) below.
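A sketch of this extended design problem (symbols follow the earlier formulations; the paper's exact Eq. (6) may differ) is:

\[
\max_{\mathbf{x},\,\rho} \;\; \omega_{1}(\mathbf{x},\rho)
\qquad \text{subject to} \qquad
\mathbf{x}^{L} \le \mathbf{x} \le \mathbf{x}^{U}, \quad \rho^{L} \le \rho \le \rho^{U}
\]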

In this equation, ρ is the material density. Similar to the other material properties, the density is allowed to acquire an optimum value that is between the lower and upper bound values defined by the base materials. The optimum material properties that are obtained for problem-1 (P1, mechanical problem) and problem-2 (P2, mechanical + thermal problem) are the same for all different BCs, and they are shown in Tab. 2. When defining BCs, F is a fixed condition and R is a roller condition. The BC designation is ordered in the clockwise direction starting from the left side of the plate. For instance, for a plate that is fixed on the left and top sides and has rollers on the right and bottom surfaces, the BCs would be 'FFRR'. Additionally, the optimum material properties that are obtained for problem-3 (P3, natural frequency problem) are shown in Tab. 3.

Table 3: Optimum material properties of the microstructure in P3. The elastic modulus values are given in GPa, the VMS values are given in MPa, the thermal expansion parameters are given in (1/K), and the material density is given in (g/cm3)

    3 Machine Learning-Driven Design Optimization of Microstructures

To enhance the computational efficiency of the design solutions, concepts of ML (e.g., RL) are used to solve the problems involving the orthotropic material. RL is a subset of ML. It consists of an "agent" that learns how to behave in a given environment by performing actions and recording results based on those actions [20]. Multiple strategies have been utilized to solve the design problems using ML, and they are discussed in this section. First, a Deep RL neural network is implemented to improve the computational efficiency of design optimization. Next, a Q-Value RL program is used as a verification code for the finite element-based design results, as well as the Deep RL network. Lastly, a feed-forward ANN is implemented to improve the computational time efficiency. The computational cost requirements for the different ML techniques applied in this study are directly associated with the training data generation using finite element simulations. This is because the ML models achieve their predictions in the order of seconds once sufficient training data have been generated.

    3.1 Deep Reinforcement Learning

3.1.1 Advantage Actor-Critic (A2C) Overview

The Deep RL algorithm in this work is based on the advantage actor-critic RL, also known as Advantage Actor-Critic (A2C). Actor-critic systems receive information from their external environment and select an action based on that information. After performing a specific action, the environment returns feedback, or a reward in RL terms, to the actor. Furthermore, the critic uses the state of the environment and the output of the actor as its input, and then evaluates how good the actor's action is under the current environment. The critic output is similar to the reward that comes directly from the environment, since both the reward and the critic evaluate the actor output based on the current environment. After receiving the reward and the evaluation from the critic, the actor adjusts itself and learns which action provides the maximum reward under different kinds of environments. After many training iterations (200 iterations were used to train the deep RL programs in this study), the actor has a very high probability of selecting the best action under a specific environment. The workflow of a Deep RL framework is shown in Fig. 3.

Figure 3: Workflow of a deep RL framework

    3.1.2 Overall Structure of Deep RL

In this study, the material properties used in the finite element simulations define the environment for Deep RL. Deep RL starts at a random combination of material properties within the given ranges. The algorithm searches for the optimum combination of material properties by selecting different actions that either increase or decrease one of the material properties. After each action, the Deep RL program receives either a positive or a negative reward and then adjusts itself based on the reward so that it chooses a better action the next time it encounters a similar environment. The program selects an action, receives a reward, then adjusts itself based on the reward and selects an action again. The more iterations the program runs, the more accurate the selected action becomes. When the advantage is zero, RL stops adjusting itself, and the combination of material properties, which is also the environment, converges to the optimum properties.
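To make this loop concrete, the following is a minimal Python sketch of such a design environment. The property bounds, the step size, the improvement-based reward, and the run_fem_vms placeholder for the finite element evaluation are illustrative assumptions rather than the paper's implementation.

```python
import numpy as np

# Illustrative bounds on the 9 orthotropic properties (assumed values, not the Tab. 1 data)
LOWER = np.array([100.0, 8.0, 8.0, 0.25, 0.30, 0.30, 4.0, 3.0, 3.0])
UPPER = np.array([210.0, 20.0, 20.0, 0.30, 0.45, 0.45, 7.0, 6.0, 6.0])
STEP = 0.05 * (UPPER - LOWER)  # assumed step size for each +/- action

def run_fem_vms(props):
    """Placeholder for the finite element evaluation of the maximum VMS."""
    return float(np.sum(props ** 2))  # stand-in objective

class MicrostructureEnv:
    """State = vector of material properties; actions = +/- one step of one property."""
    def __init__(self):
        self.state = np.random.uniform(LOWER, UPPER)

    def step(self, action):
        # Actions 0..8 increase property i; actions 9..17 decrease property i - 9.
        i, sign = (action, 1.0) if action < 9 else (action - 9, -1.0)
        new_state = self.state.copy()
        new_state[i] = np.clip(new_state[i] + sign * STEP[i], LOWER[i], UPPER[i])
        # Positive reward if the action reduced the objective (maximum VMS), else negative.
        reward = 1.0 if run_fem_vms(new_state) < run_fem_vms(self.state) else -1.0
        self.state = new_state
        return new_state, reward
```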

    3.1.3 Actor and Critics

The main structure of the A2C consists of two convolutional neural networks: one of the networks serves as the actor, and the other serves as the critic. The actor network has one input layer, two hidden layers, and one output layer. The inputs of the actor network are the material properties, which also define the environment, and the outputs of the actor are the probabilities of selecting different actions. Additionally, the critic is designed using a convolutional neural network that consists of one input layer, one hidden layer, and one output layer. The critic network takes the actor output and the current environment (material properties) as its input and outputs a Q-Value that evaluates the goodness of the actor output [20].
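A minimal PyTorch sketch of this actor/critic pair is shown below. It uses fully connected layers and assumed layer widths purely for illustration; the paper describes convolutional networks, and the true layer sizes are not reported.

```python
import torch
import torch.nn as nn

N_PROPS, N_ACTIONS, HIDDEN = 9, 18, 64  # assumed sizes for illustration

class Actor(nn.Module):
    """Maps the environment state (material properties) to action probabilities."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(N_PROPS, HIDDEN), nn.ReLU(),   # hidden layer 1
            nn.Linear(HIDDEN, HIDDEN), nn.ReLU(),    # hidden layer 2
            nn.Linear(HIDDEN, N_ACTIONS), nn.Softmax(dim=-1),
        )

    def forward(self, state):
        return self.net(state)

class Critic(nn.Module):
    """Scores the actor's output in the current state with a single Q-Value."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(N_PROPS + N_ACTIONS, HIDDEN), nn.ReLU(),  # one hidden layer
            nn.Linear(HIDDEN, 1),
        )

    def forward(self, state, action_probs):
        return self.net(torch.cat([state, action_probs], dim=-1))
```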

    3.1.4 Monte Carlo Return and Neural Network Adjustment

For each action, the Deep RL receives a reward. The two convolutional neural networks can adjust themselves based on one reward from the environment, or they can adjust after selecting several actions and receiving several rewards from the environment. In this project, the one-step Monte Carlo return is chosen. Eq. (7) shows the calculation of the Monte Carlo return, which represents the cumulative reward after selecting one action. Here, γ is the discount factor, which is defined with a value of 0.8. This discount factor prevents the code from completely relying on future rewards. The one-step Monte Carlo return does not accumulate a second reward; consequently, the final result for the Monte Carlo return is simply the reward plus the discount factor times the next Q-Value from the critic. The advantage of the A2C scheme is that the next Q-Value is obtained with the critic, which makes the calculations easier.
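In the standard one-step actor-critic form implied here (a sketch; the paper's exact Eq. (7) may use different symbols), the return and the advantage are

\[
R_t = r_t + \gamma\, Q_w(s_{t+1}, a_{t+1}),
\qquad
A_t = R_t - Q_w(s_t, a_t),
\qquad
\gamma = 0.8
\]

where r_t is the immediate reward, s_t and a_t are the current state and action, and Q_w is the critic's estimate.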

The goal of the adjustment of the two convolutional neural networks is to minimize the loss function (l). Eq. (8) shows the loss function of the critic (with weights w) and Eq. (9) shows the loss function of the actor (with weights θ). These two convolutional neural networks use backpropagation to adjust the weight parameters in the network.
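Standard actor-critic losses consistent with this description (a sketch of the forms behind Eqs. (8)–(9); the paper's exact expressions may differ) are

\[
l(w) = \bigl(R_t - Q_w(s_t, a_t)\bigr)^{2},
\qquad
l(\theta) = -\log \pi_{\theta}(a_t \mid s_t)\, A_t
\]

where π_θ is the actor's action probability and A_t is the advantage defined above.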

3.1.5 Results for Problem-1, Problem-2, and Problem-3

In this study, the Deep RL is initialized with randomized values for the properties within the ranges of the base material properties. Next, the actor takes the current material properties as input and produces an output that determines whether to increase or decrease each of the material properties. Backpropagation of the convolutional neural networks only takes place after receiving the reward. The design problems discussed in Section 2 are then solved using the Deep RL framework. The changes in the objective function values are shown in Figs. 4–6 for problem-1 (mechanical problem), problem-2 (thermo-mechanical problem), and problem-3 (natural frequency problem), respectively.

In problem-1, the randomly generated initial data point provided a VMS value of around 1121.5 MPa. The Deep RL successfully learned and determined the optimum choice for the material properties. Although the Deep RL aims to learn the data to make the best decision, the output of the actor is the probability of increasing or decreasing one material property, which means that there is still a small chance for the program to select a wrong action. After 200 iterations, the minimum VMS was found to be 1116.507 MPa. This result is the same as the finite element-based optimization solution.

In problem-2, the final result produced by the Deep RL for the VMS is 1114.1 MPa, while the finite element-based optimization solution was found to be 1118.4 MPa. The Deep RL provided a better optimum solution in this problem due to the limitations arising from the gradient-based optimization algorithm (SQP) used with the finite element solution, such as the likelihood of producing local minimum solutions rather than the global solution. Gradient-based optimization is significantly dependent on the selection of the initial design point. For instance, with a change of the initial guess, the optimization result of the finite element solution also converged to 1114.1 MPa in this problem.

Figure 4: Changes in the objective function (VMS) obtained by deep RL in problem-1

Figure 5: Changes in the objective function (VMS) obtained by deep RL in problem-2

Figure 6: Changes in the objective function (natural frequency) obtained by deep RL in problem-3

In problem-3, the input environment consisted of 10 variables, and the output consisted of 20 probabilities corresponding to increasing or decreasing each of the material properties.

    3.2 Q-Value Reinforcement Learning

    3.2.1 Methodology

In this study, a value-based approach called Q-Learning is chosen as the second RL strategy. The Q-Learning algorithm utilizes two inputs from the environment, the state and the action, and returns the expected future reward of that state-action pair. The driving equation behind this algorithm is the Bellman equation [21], which is shown below.
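The standard Q-Learning update implied here (a sketch of Eq. (10), with α denoting the learning rate) is

\[
Q(s, a) \leftarrow Q(s, a) + \alpha \Bigl[\, r + \gamma \max_{a'} Q(s', a') - Q(s, a) \Bigr]
\]

where r is the reward and s' is the next state.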

The Bellman equation yields a "Q-Value," which is a measure of the quality of performing an action at a given state. These values are stored in a Q table that is continually updated until the learning has stopped or the values have converged. The rows in the Q table are the states and the columns are the actions. The highest value in any row defines the best action at that particular state.

The Epsilon-Greedy approach is used to determine which action is chosen at each episode in which a Q-Value is calculated. This approach balances exploration (choosing actions at random) and exploitation (choosing actions based on their Q-Value). The rate of exploration is determined by the exploration rate ε, which can range from 0 to 1, with 1 meaning full exploration. After choosing an action, the action is evaluated, and the reward and next state are determined. These values are then used in the Bellman equation (Eq. (10)).

The Q-Value RL model consists of a separate function that maps each action to the appropriate next state and objective function. The objective function given by the action is then used in a reward structure to assign the correct reward value for the action that was chosen. The next state is determined based on the action that was taken. Both the next state value and the reward value are then used to compute the Q-Value for the current state-action pair and added to the Q table. The process is repeated for 1000 iterations to allow the table to converge.

This code laid a foundational structure for the design problems involving an orthotropic material. For the first problem of finding the optimum material properties, vectors that equally distribute each of the 9 material properties across their respective ranges were created. For computational purposes, the increments were limited to steps of 20% of each range. The Q table was set up so that each material property increment would be a state, and a total of 18 actions represented increasing or decreasing each of the 9 material properties. In this case, there were 6 total states; each odd action would increase a material property by one increment, and each even action would decrease it by one increment. To keep track of where each material property lies within its respective range, an index vector was created.

To map each action to the correct next state and to calculate the resulting VMS, a new function was created. This function takes in the state-action pair as well as the index vector to properly assess each state and change the index vector according to the action that was called. One important note is that this function initializes the index vector according to the state that is selected. This means that if the third state is called, then all index vector entries will take the value of three. Then, based on the action value associated with a material property, the corresponding index vector entry will change. Since there are edge cases, the first and last states will not allow decreasing (first state) or increasing (last state) the index vector. Each new state is the next successive state until the sixth and final state, at which point the next state is randomly chosen. Finally, the VMS is calculated based on the new set of index vector variables.
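A minimal Python sketch of this Q-table workflow is given below: a 6-state by 18-action table, epsilon-greedy action selection, a hypothetical model function mapping a state-action pair and index vector to the next state and VMS, an improvement-based reward, and the Bellman update. The learning rate, exploration rate, and the evaluate_vms placeholder for the finite element/ANN evaluation are assumptions.

```python
import numpy as np

N_STATES, N_ACTIONS = 6, 18             # 6 increments per property, +/- actions for 9 properties
ALPHA, GAMMA, EPSILON = 0.1, 0.8, 0.3   # learning rate, discount, exploration rate (assumed)

def evaluate_vms(index_vector):
    """Placeholder for the mapping from an index vector to a maximum VMS value."""
    return float(np.sum(index_vector))

def model(state, action, index_vector):
    """Map a state-action pair to the next state, the updated index vector, and the VMS."""
    prop, decrease = divmod(action, 2)   # pair the 18 actions into (property, +/- direction)
    idx = index_vector.copy()
    idx[prop] = np.clip(idx[prop] + (-1 if decrease else 1), 0, N_STATES - 1)
    next_state = state + 1 if state < N_STATES - 1 else np.random.randint(N_STATES)
    return next_state, idx, evaluate_vms(idx)

q_table = np.zeros((N_STATES, N_ACTIONS))
state, old_vms = np.random.randint(N_STATES), np.inf
for episode in range(1000):
    index_vector = np.full(9, state)     # start all index entries at the selected state
    # Epsilon-greedy selection: explore with probability EPSILON, otherwise exploit.
    if np.random.rand() < EPSILON:
        action = np.random.randint(N_ACTIONS)
    else:
        action = int(np.argmax(q_table[state]))
    next_state, index_vector, new_vms = model(state, action, index_vector)
    reward = 1.0 if new_vms < old_vms else -1.0   # improvement-based reward
    # Bellman update of the current state-action entry.
    q_table[state, action] += ALPHA * (reward + GAMMA * q_table[next_state].max()
                                       - q_table[state, action])
    state, old_vms = next_state, new_vms
```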

The VMS that is given by the function is used in the reward structure to aid in the calculation of the Bellman equation. It is also an integral part of the quality of the Q table that is produced. The goal of this particular problem was to find the minimized value of the maximum VMS and, thus, the reward structure had to reflect that. Two different types of reward structures were used to see the effect of the reward structure on Q table quality and convergence. The first structure was a simple reward structure that would compare the new VMS value with the old VMS value. This is named the simple reward structure because the rewards are allocated using a simple principle: if the new VMS is greater than the old, a negative reward is given; similarly, if the new VMS is less than the previous value, a positive reward is given. The other reward structure used both the previous and the newly calculated VMS and then used the relative error to decide how rewards were given. If the relative error between the new and previous VMS values was less than or equal to 10 percent, then a positive reward was given. If the error was greater than 10 percent, then a negative reward was given. The relative error equation is given below for reference.
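In conventional form, with VMS_prev and VMS_new denoting the previously and newly computed values (the paper's exact expression may be written slightly differently):

\[
e_{rel} = \frac{\left|\, \mathrm{VMS}_{new} - \mathrm{VMS}_{prev} \,\right|}{\mathrm{VMS}_{prev}} \times 100\%
\]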

The two reward structures were compared against each other to see their effects on Q table quality and convergence. In terms of quality, the simple reward structure gave smaller Q-Values than the relative error structure. This made distinguishing the best actions for each state a bit harder compared to the relative error reward structure. The convergence of both structures was also very similar in that they took roughly the same amount of time to complete the desired number of iterations. Overall, they performed similarly, giving the same Q table trends when compared against each other.

The same Q-Value RL code was used to complete the second and third optimization problems for the orthotropic material. Minor adjustments were made to accommodate the new quantities, but each code is built on the same foundation of Epsilon-Greedy and a "model" function to map the states to actions of the different material properties.

The main purpose of the Q-Value RL framework is to serve as a verification tool and supplementary code that confirms the results achieved through the finite element and Deep RL solutions. Rather than finding one minimized or maximized property, the Q-Value RL code serves to narrow down the range of each property value to find the optimum values that minimize the VMS and maximize the natural frequency. The next section discusses the results obtained by the RL code for each of the three problems involving the orthotropic material microstructure.

3.2.2 Results for Problem-1, Problem-2, and Problem-3

The Q-Value RL framework was run for the three design problems (problem-1, problem-2, and problem-3). The optimum design parameters obtained by Q-Value RL using the RRRR boundary conditions are shown in Tabs. 4–6 for problem-1 (mechanical problem), problem-2 (thermo-mechanical problem), and problem-3 (natural frequency problem), respectively. A total of 1000 iterations and three trials were performed to find the optimum ranges for each material property.

Table 4: Q-Value RL results for problem-1

When comparing the results of the RL code and the optimization solution for problem-1, the results of the RL code are very similar to the finite element-based optimization solution, except for Gxy and νzx. This indicates that the RL code is sufficient to predict material property values for minimizing the VMS. It is suspected that the reason for the discrepancies in the Gxy and νzx values is their limited influence on the overall VMS value. An important note is that some trials experienced odd Q table tendencies in which there were inconclusive results. One such example is that sometimes the Q table will produce a zero value for both the increase and decrease columns for some material properties. The relative error reward structure was implemented for this and all other problems investigated. Additionally, only the RRRR boundary condition was explored in the Q-Value RL code.

For problem-2, the material properties do not match the results of the finite element-based optimization solution as closely. Some material properties were close, but others varied slightly. It is believed that this discrepancy is due to the separation of the y- and z-direction values, which may have caused fluctuations in the VMS and thus affected the Q-Values.

Table 5: Q-Value RL results for problem-2

Table 6: Q-Value RL results for problem-3

For problem-3, the results from the three trials show that the Q-Value RL code was able to successfully match the optimization solution with minor discrepancies. This is mainly due to the large number of iterations showing a clear trend in the Q table for this problem.

Though the Q-Value RL code functions properly, there are still a few disadvantages of using this Q-Value RL approach to find the optimum material properties for minimizing the VMS and maximizing the natural frequency. The first is that the computation time when using the finite element simulations is excessive; therefore, an ANN surrogate model is utilized in this study to obtain the results with the Q-Value RL framework. Another issue is that the framework does not provide a specific value for the optimum material properties. It is only capable of showing tendencies for each material property within its respective range. It also does not show the minimum VMS, or maximum natural frequency, that is computed using the optimum properties. Overall, it is a good validation technique for the development of more complex calculation methods and shows that rudimentary RL concepts can be applied to finite element-based design practices.

3.3 Artificial Neural Network (ANN)

    3.3.1 Methodology

Due to the iterative nature of optimization solutions, design studies can require significant computational times, especially for cases that involve multi-variable, non-linear problems like the finite element modeling optimization problems in this study. Therefore, this paper investigates and compares the benefit of incorporating ANNs with different network structures to estimate optimum microstructure designs. Instead of solely using the functions generated from finite element modeling, a neural network is also implemented with the SQP algorithm to find the desired material property combinations. Then, the differences between the estimated neural network and actual optimum values are compared against each other to analyze the integrity of using artificial networks to reduce the computational times for designing multi-phase microstructures.

Before creating the neural network, 1000 sample data points are collected using the finite element model. These input data points are chosen randomly within the given material property ranges referenced in Tab. 1. Afterward, the points are used to create various training sets that contain different numbers of data points, which are chosen randomly from the original 1000 collected sample points. Within the sets, 70% of the points go to training the neural network, 15% of the points go to validation, and the other 15% go to testing. Each training set corresponds to its own ANN generated by the built-in MATLAB Neural Network Fitting functions. To further understand the effect of different neural networks, the study also compares different architectures containing various numbers of hidden layers and nodes to identify the most suitable structures for each problem statement.

Additionally, the actual finite element results are tested against single-layer neural networks containing 10 and 20 nodes. Similar trials are repeated with double-layer networks containing 5 and 10 nodes. All neural network training is done using the Levenberg–Marquardt algorithm to fit the input and output data over 1000 epochs. Once the neural networks are trained, the function is put into an optimizer to determine the desired material property combination depending on the problem statement, which may utilize the maximum or minimum of the design function. For the three problems investigated in this study, the SQP algorithm was able to satisfy both conditions. Although the SQP optimizer was able to produce a result close to the actual values, it seemed to bounce between different answers depending on the initial guesses. This behavior suggested that the relationship between the material inputs and outputs contained several local minimum points, and the program would get stuck in one of them if given an uninformed initial point. Similar behavior was also referenced in the deep reinforcement learning algorithm discussed in Section 3.1. Thus, a GlobalSearch object was utilized to find a global minimum consistently for the optimization step of the problem.
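The study itself uses MATLAB's Neural Network Fitting tools and a GlobalSearch object; the Python sketch below only illustrates the same surrogate-plus-multi-start idea under stated assumptions. The bounds and the run_fem_vms placeholder are invented for illustration, scikit-learn has no Levenberg–Marquardt trainer (an L-BFGS solver is used instead), and SciPy's SLSQP stands in for the SQP optimizer.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from scipy.optimize import minimize

# Illustrative bounds on the 9 design variables (assumed, not the paper's Tab. 1 values)
lower = np.array([100.0, 8.0, 8.0, 0.25, 0.30, 0.30, 4.0, 3.0, 3.0])
upper = np.array([210.0, 20.0, 20.0, 0.30, 0.45, 0.45, 7.0, 6.0, 6.0])

def run_fem_vms(x):
    """Placeholder for the finite element evaluation of the maximum VMS."""
    return float(np.sum((x - lower) * (upper - x)))

# Collect 1000 random samples within the property ranges to train the surrogate.
rng = np.random.default_rng(0)
X = rng.uniform(lower, upper, size=(1000, 9))
y = np.array([run_fem_vms(x) for x in X])

# Single hidden layer with 20 nodes, trained with a quasi-Newton solver.
surrogate = MLPRegressor(hidden_layer_sizes=(20,), solver="lbfgs", max_iter=1000).fit(X, y)

# Multi-start SQP-type optimization over the surrogate (a GlobalSearch-like strategy).
best = None
for start in rng.uniform(lower, upper, size=(20, 9)):
    res = minimize(lambda v: surrogate.predict(v.reshape(1, -1))[0], start,
                   method="SLSQP", bounds=list(zip(lower, upper)))
    if best is None or res.fun < best.fun:
        best = res
print("Estimated optimum properties:", best.x, "with predicted VMS:", best.fun)
```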

Overall, the artificial networks were highly successful in predicting the correct output values for all three problems. In fact, after a certain number of data points in the training set, the neural networks consistently produced values within the target data. Furthermore, after running the network function through an optimizer, many of the cases had results within 1% of the expected value found from traditional methods. The best neural network architecture investigated in the study contained one layer with 20 nodes, since it provided accurate results without overfitting the training data and was relatively quick to run compared to the two-layer network.

3.3.2 Results for Problem-1, Problem-2, and Problem-3


The ANN framework was run for the three optimization design problems discussed in the previous sections (mechanical performance, thermo-mechanical performance, and natural frequency). For problem-1, the results from optimization converged to the expected value for training sets containing around 400 data points. Problem-2 converged at around 300 data points. Finally, problem-3 converged at around 400 data points. These values suggest that, to obtain a satisfactory artificial neural network, the training data set must contain at least 400 data points. The average error and convergence of the objective functions as a function of the number of training data points are visualized in Figs. 7–9 for problem-1, problem-2, and problem-3, respectively, for an ML framework consisting of one layer with 20 nodes.

Figure 7: ANN results for problem-1 for optimization of mechanical properties. (a) Average ANN error for problem-1; (b) change in objective function for problem-1

After comparing the Q-Learning, Actor-Critic, and ANN solutions to the finite element analysis calculations, ML consistently demonstrated that artificial intelligence can accurately predict optimum results. In particular, the ANN saved much more calculation time compared to the finite element methods since it only required training once before generating a vector function to estimate the relationship between the input and output variables. After collecting 1000 sample points for training, the framework can create and optimize 20 individual networks for 3 trials in under 10 min on a basic computational platform that utilizes a single processor. Compared to the computational times needed to run the finite element-based simulations, this new method is much faster while keeping high accuracy in the results.

Figure 8: ANN results for problem-2 for optimization of thermo-mechanical properties. (a) Average ANN error for problem-2; (b) change in objective function for problem-2

Figure 9: ANN results for problem-3 for optimization of the natural frequency. (a) Average ANN error for problem-3; (b) change in objective function for problem-3

    4 Conclusions and Future Work

    The summary of the research outcomes and potential extensions for future work are outlined below:

• A finite element analysis library that can analyze the deformation behavior and natural frequencies of periodic microstructures under coupled mechanical and thermal loads is developed.

• A Deep RL framework is developed and used to generate results that are compatible with the high-fidelity finite element solutions.

• The Q-Value RL is investigated to significantly narrow down the potential solutions for optimum designs while eliminating a significantly large number of microstructural degrees of freedom.

• Many different settings of the proposed three ML frameworks are investigated to better understand their nature and applicability to different design problems.

The results are demonstrated for multi-phase microstructure design for various objectives, as specified in the mechanical, thermo-mechanical, and natural frequency problems. Although the number of design variables was limited in the application problems, all of the different frameworks proposed and developed in this work are still applicable to microstructure optimization problems with larger design spaces. Therefore, future work on this topic will include an increased focus on extending the presented ML-driven design strategy to optimize multi-phase materials at larger length scales by utilizing solution spaces that involve millions of variables.

Funding Statement: The presented work was funded by the NASA Virginia Space Grant Consortium Grant (Project Title: "Deep Reinforcement Learning for De-Novo Computational Design of Meta-Materials").

Conflicts of Interest: The authors declare that they have no conflicts of interest to report regarding the present study.
