
Intelligent path planning for small modular nuclear reactors based on improved reinforcement learning

2025-08-09    Dong Yunfeng, Zhou Weizheng, Wang Zhezheng, Zhang Xiao

CLC number: O29    Document code: A    DOI: 10.19907/j.0490-6756.2025.240168

    Intelligent path planning for small modular reactors based on improved reinforcement learning

DONG Yun-Feng, ZHOU Wei-Zheng, WANG Zhe-Zheng, ZHANG Xiao (School of Mathematics, Sichuan University, Chengdu 610065, China)

Abstract: Small modular reactors (SMRs) are at the research forefront of nuclear reactor technology. The advancement of intelligent control technologies paves a new way to the design and construction of unmanned SMRs. The autonomous control process of an SMR can be divided into three stages, namely, state diagnosis, autonomous decision-making and coordinated control. In this paper, the autonomous state recognition and task planning of unmanned SMRs are investigated. An operating condition recognition method based on the knowledge base of SMR operation is proposed by using artificial neural network (ANN) technology, which constructs a basis for the state judgment in intelligent reactor control path planning. An improved reinforcement learning path planning algorithm is utilized to implement path transfer decision-making; this algorithm performs condition transitions with minimal cost under specified modes. In summary, the full-range control path intelligent decision-planning technology of SMRs is realized, thus providing a theoretical basis for the design and construction of unmanned SMRs in the future.

Keywords: Small modular reactor; Operating condition recognition; Path planning; Reinforcement learning

    1 Introduction

Small modular reactors (SMRs) have many advantages, such as low power density, small size, short construction period and high safety performance [1]. In some long-term unmanned environments, the stability of an SMR is crucial. On the other hand, many operations in SMRs currently still rely heavily on manual labor, which inevitably leads to errors and degraded stability. In condition monitoring or operating condition recognition of an SMR, subtle or rapid changes often make it difficult for the operator to interpret the trends in interacting variables. As a result, erroneous decisions may be made due to the lack of knowledge of the real operational status of the SMR [2].

Unmanned SMRs (USMRs) can be deployed in remote areas rarely visited by humans, or even in deep-sea and deep-space environments. Nowadays, SMRs need to evolve toward higher levels of automation and intelligence, and the industry urgently requires breakthroughs in the key technologies that make SMRs applicable across various environments with safety and stability [3].

The basic operations adopted for reactor condition monitoring in SMRs, both domestically and internationally, follow the human-machine collaboration mode. That is to say, machines autonomously control the system according to predefined paths, while humans supervise and analyze the control system's performance in real time. When deviations from expected outcomes or the loss of the system's ability to maintain automatic operation are detected, manual intervention is needed to ensure that the system resumes normal operation or that the device enters a relatively safe state. It is widely believed that the core role of humans throughout the operation process is information analysis and decision-making, which leads to the idea of replacing humans with machines.

In recent years, with the revolution of artificial intelligence (AI), industry has introduced various control methods into nuclear reactor control systems [1], such as PID control [4-6], intelligent control [7-12] and composite control [13-15], which creates partial conditions for the unmanned operation of SMRs. To achieve the stable operation of SMRs, it is crucial to possess capabilities for cognitive analysis of the system's operating condition and decision-making for task planning.

Operating condition refers to the current working state of the SMR. In the intelligent path planning decision across the entire spectrum of SMR operation, the capability for cognitive analysis of operating condition refers to the measurement and monitoring of equipment operational status information and characteristic parameters. This includes data such as temperature, cycle, power and pressure, used to determine whether or not the system is operating normally and to identify the current operating condition of the SMR. This can be regarded as an operating condition recognition problem.

Alternatively, path planning problems are essentially involved in the task-planning and decision-making capabilities of the SMR. Specifically, this refers to the problem of path planning in finite directed graphs and the associated combinatorial optimization problem. In an SMR, every operating condition can be regarded as a node in a graph, and the transition between any two operating conditions as an edge connecting the corresponding nodes. The ultimate goal is to solve the path planning problem for given requirements.

In this paper, to address the challenges of operating condition recognition and path planning in the intelligent planning of full-range control paths for SMRs, we utilize a feedforward neural network (FNN) for operating condition recognition and propose an improved reinforcement learning path planning algorithm based on lattice theory. This approach enables the intelligent planning of SMRs.

    2 Algorithms and models

There are both safe and accident operating conditions in an SMR. It is necessary to categorize specific types of operating conditions, such as "startup", "power operation", "power transition" and "shutdown", as typical operating conditions [16], and accident operating conditions such as "breaching accident" [17, 18] and "loss of flow" [19]. There are hundreds of specific operating conditions [20]. To enable the intelligent operation of SMRs, real-time operating condition recognition is imperative [20-22].

The operating condition is not static during the operation of an SMR. Depending on changes in operational requirements, it may be necessary to transition the reactor from its current state to a target state. When operational requirements change, the first step is to assess the current state, then plan a transition path based on the relationship between different operating conditions and the transition requirement. The transition process has three key features.

(i) Not all operating condition nodes are necessarily connected.
(ii) There may be multiple directed paths between two nodes.
(iii) Each directed path may carry several weight parameters, such as time, energy and overshoot.
Minimizing one of these weights corresponds to one of three transition modes, namely the economic mode, the maneuver mode and the stable mode (see the sketch below).
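To make the mode/weight coupling concrete, the following minimal MATLAB sketch shows one way an edge could carry the three weights, with the chosen transition mode selecting which weight the planner minimizes. The mode-to-weight pairing and all names here are illustrative assumptions, not taken from the paper.

    % One directed transition between two operating conditions (illustrative).
    edge.from = 1;  edge.to = 2;
    edge.w = struct('time', 3.0, 'energy', 1.2, 'overshoot', 0.4);  % assumed weights

    mode = 'maneuver';   % 'economic' | 'maneuver' | 'stable' (assumed pairing below)
    switch mode
        case 'economic',  cost = edge.w.energy;     % minimize energy consumption
        case 'maneuver',  cost = edge.w.time;       % minimize transition time
        case 'stable',    cost = edge.w.overshoot;  % minimize overshoot
    end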

2.1 Operating condition recognition

To address the issue of operating condition recognition in the intelligent planning of full-range control paths for a reactor, we utilize an artificial neural network (ANN) algorithm. An ANN constructs its network by using neurons as nodes. In particular, the feedforward neural network (FNN) possesses a relatively straightforward topology [23], which consists of an input layer, several hidden layers and an output layer, as shown in Fig. 1. Each layer comprises multiple neuron nodes. During the training process, the FNN adjusts its connection weights by using the backpropagation algorithm. By adjusting the parameters within neurons, the FNN repeatedly fits the input data to the output data. Nowadays, FNNs have been applied in diverse areas such as function approximation, pattern recognition and time series prediction.

In an SMR, various sensors constantly generate data reflecting the current state of the system, including parameters such as temperature, cycle, power and pressure, amounting to dozens of parameters. In the state recognition model of the SMR, the input layer receives parameters from the various sensors. Subsequently, by setting appropriate numbers of layers and neurons in the hidden part and determining the activation function of the neurons, the output layer provides the predicted operating condition of the system.

Fig. 1 The structure of the FNN

    2.2 Path planning

According to the research status of path planning, reinforcement learning algorithms [24], genetic algorithms [25, 26] and the Dijkstra algorithm [27] can be used to solve path planning and combinatorial optimization in finite directed graphs. However, the complexity of the operational parameters in a reactor means that the known path planning and combinatorial optimization algorithms cannot cover the full range of the needed control path planning decisions. Besides, the large volume of computation leads to low computational efficiency. In this paper, to implement the intelligent planning decision algorithm for control paths and the optimal path control for operational status transitions efficiently, we choose a reinforcement learning path planning algorithm.

Reactor operation planning can be described as follows: when the system receives a planning request toward a target operation state, it first identifies the current state, then finds the path with the least weight under the planned operation mode in the transition environment, and finally transforms to the target state through the intermediate states.

Reinforcement learning is a machine learning method that maps environmental states to actions. It can autonomously learn how to make correct decisions through interaction with the environment so as to maximize the expected cumulative reward. Traditionally, a reinforcement learning path planning algorithm refers to a class of algorithms that use reinforcement learning to solve the path planning problem, finding the optimal path from a starting point to an endpoint in a given environment.

Q-learning is a value-function-based reinforcement learning method that selects the best action by learning the value function of state-action pairs, where a state can be a position on a map and an action can be a movement such as up, down, left or right. The Q-value update formula for reinforcement learning is

$$Q(s,a) \leftarrow Q(s,a) + \alpha \left[ r + \gamma \max_{a'} Q(s',a') - Q(s,a) \right]$$

where $\alpha$ is the learning rate, $\gamma$ is the discount factor, $r$ is the immediate reward, and $s'$ is the successor state reached by taking action $a$ in state $s$.

By interacting with the environment to seek the maximum cumulative reward, the algorithm ultimately finds the shortest path from the current node to the target node.
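As an illustration of how this update drives path search on a weighted directed graph, the following MATLAB sketch runs tabular Q-learning on a small assumed 5-node environment. The edge weights and map are illustrative; the exploration rate, discount factor, learning rate and episode count reuse the values reported in Section 3.2, and negative edge weight serves as reward so that maximizing reward minimizes path weight.

    nS = 5;                                   % number of operating condition nodes
    W = inf(nS);                              % W(i,j): weight of edge i->j, inf = no edge
    W(1,2)=3; W(2,3)=2; W(1,4)=6; W(2,4)=2; W(4,3)=1; W(3,5)=4; W(4,5)=5;  % assumed map
    start = 1;  goal = 5;
    alpha = 0.001;  gamma = 0.98;  epsil = 0.2;   % constants as in Section 3.2
    Q = zeros(nS, nS);                        % Q(s,a): value of moving from node s to a
    for ep = 1:10000
        s = start;
        while s ~= goal
            nbrs = find(~isinf(W(s,:)));      % actions available in state s
            if rand < epsil
                a = nbrs(randi(numel(nbrs)));            % explore
            else
                [~, k] = max(Q(s, nbrs));  a = nbrs(k);  % exploit
            end
            r = -W(s, a);                     % reward = negative edge weight
            if a == goal
                target = r;                   % terminal step: no future value
            else
                nxt = find(~isinf(W(a,:)));
                target = r + gamma * max(Q(a, nxt));
            end
            Q(s,a) = Q(s,a) + alpha * (target - Q(s,a));  % Q-value update formula
            s = a;
        end
    end
    % Greedy readout of the learned path from start to goal:
    path = start;  s = start;
    while s ~= goal
        nbrs = find(~isinf(W(s,:)));
        [~, k] = max(Q(s, nbrs));  s = nbrs(k);  path(end+1) = s;  %#ok<SAGROW>
    end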

The reinforcement learning path planning algorithm based on lattice theory requires the definition of a partially ordered set and applies it to the path graph. The algorithm draws on the definition of a lattice by treating each optimal path as a lattice. Through the definition of sub-lattice, the algorithm decomposes the optimal path obtained from each reinforcement learning iteration into a collection of sub-lattices for storage.

A partial order is a binary relation on a set that allows elements of the set to be compared. However, it is not required that all elements of the set be comparable in terms of magnitude. The definitions of partially ordered set, lattice and sub-lattice can be found in Ref. [28]. Given a graph or path, with the starting point as the source and the endpoint as the sink, if the relation of proximity to the endpoint satisfies reflexivity, antisymmetry and transitivity, then the set of all nodes in this path, together with this relation, forms a partially ordered set.

Each path obtained from the path planning algorithm based on reinforcement learning can be described as a lattice. For example, consider the set of nodes X = {1, 2, 3, 4, 5}. Assume that the starting point is node 1 and the endpoint is node 5. Under this partial order, the least upper bound and the greatest lower bound of any two nodes belong to the set, and the structure can be regarded as a lattice. Meanwhile, node sets such as X1 = {1, 2, 3, 4}, X2 = {2, 3, 4, 5} and X3 = {2, 3, 4} are sub-lattices.
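A compact way to state why these subsets qualify, assuming the partial order is the usual numeric order on node indices (consistent with ordering nodes by proximity to the endpoint), is:

$$u \vee v = \max(u, v), \qquad u \wedge v = \min(u, v), \qquad u, v \in X,$$

so each of $X_1$, $X_2$ and $X_3$ is closed under $\vee$ and $\wedge$ and is therefore a sub-lattice of $(X, \le)$.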

When a path is obtained in the reinforcement learning process within the action space, the corresponding sub-lattice set is decomposed, and each sub-lattice is stored in accordance with specific requirements. When a matching requirement arises in a later task, reinforcement learning no longer needs to be repeated; the path with the smallest weight is output directly from the stored path set.

Based on lattice theory, path planning based on reinforcement learning can thus be improved. Here we propose the reinforcement learning path planning algorithm based on lattice theory.

    Algorithm 2.1 Reinforcement learning path planning algorithm based on lattice theory
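The original listing of Algorithm 2.1 did not survive extraction; as a stand-in, the following MATLAB sketch reconstructs the cache-then-learn logic described in the surrounding text. The function names, the map-based cache and the key format are assumptions; q_learning_path stands for a routine such as the Q-learning sketch above, and in the full setting the cache key would also include the transition mode.

    function [path, cache] = plan_path(src, dst, W, cache)
    % Serve a request from the sub-lattice cache when possible; otherwise
    % run reinforcement learning and store every contiguous sub-path.
    key = sprintf('%d-%d', src, dst);
    if isKey(cache, key)
        hit = cache(key);
        path = hit.nodes;                   % cache hit: no learning needed
        return;
    end
    path = q_learning_path(src, dst, W);    % assumed RL routine (see above)
    for i = 1:numel(path)-1                 % decompose into sub-lattices
        for j = i+1:numel(path)
            sub.nodes = path(i:j);
            sub.cost  = path_cost(path(i:j), W);
            k = sprintf('%d-%d', path(i), path(j));
            if ~isKey(cache, k)
                cache(k) = sub;
            else
                old = cache(k);
                if old.cost > sub.cost
                    cache(k) = sub;         % keep the cheapest known sub-path
                end
            end
        end
    end
    end

    function c = path_cost(p, W)
    % Sum of edge weights along a node sequence.
    c = 0;
    for i = 1:numel(p)-1, c = c + W(p(i), p(i+1)); end
    end

The cache would be initialized once with cache = containers.Map('KeyType', 'char', 'ValueType', 'any') and threaded through successive requests.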
For instance, consider the simple environmental map in Fig. 2. If there is a requirement from node 1 to node 3, the traditional algorithm generates the paths [1,2,3], [1,2,4,3] and [1,4,3]. In the process, the sub-path [2,4], which lies outside the initial requirement, is also obtained. If a path from node 2 to node 4 is required in the future, the reinforcement learning planner therefore need not be invoked again. If the next demand is from node 1 to node 4, reinforcement learning generates the paths [1,4], [1,2,4] and [1,2,3,4], and the sub-path [2,3,4], which lies outside the requirement, is also generated. This enriches the path collection from node 2 to node 4, gradually approaching the optimal solution.

Fig. 2 An example of transition environment

Traditional path planning algorithms cannot always generate the optimal solution, especially when the number of training episodes is lower than the number of possible actions in the action space. On the other hand, conducting reinforcement learning on other demands while simultaneously generating sub-lattice collections for existing demands can gradually enrich the path results for all demands.

In comparison with the traditional path planning algorithm, the reinforcement learning path planning algorithm based on lattice theory has two advantages.

(i) Faster calculation speed. After a given requirement has been completed once, its result can be accessed directly at any time, without having to run the path planning algorithm anew to find the optimal path each time.

(ii) Self-learning capability. When a specific demand is processed, the algorithm also plans paths for all sub-lattices corresponding to that demand.

    3 Simulation analysis

3.1 Operating condition recognition

The simulation uses MATLAB as the basic framework and is developed with MATLAB 2021a. The simulation data for operating condition recognition were provided by project SCU&DRSI-LHCX-6 of the Joint Innovation Fund of Sichuan University and the Nuclear Power Institute of China. The simulation system generates 10 data points per second, resulting in a total of 291 084 time-series data points. Each data point includes 38 sensor parameters, including temperature, pressure, power and cycle. Additionally, each data point is associated with a label derived from expert experience. These labels represent seven actual operating conditions in an SMR, such as "startup", "self-sustaining" and "low power", denoted "operating condition 1" to "operating condition 7", respectively. In the original data, "operating condition 1" to "operating condition 7" have 1853, 1546, 18 184, 154 251, 28 200, 35 799 and 51 251 labeled data points, respectively. These simulated data are used for neural network training and testing.

In this paper, the following operations are carried out on the data in sequence.

(i) The parameter data are standardized and randomly shuffled.
(ii) 80% of the data samples are used as the training set and 20% as the test set.
(iii) The SMOTE algorithm is used to augment the data of "operating condition 1" and "operating condition 2" in the training set up to the average level.
(iv) Setting of the FNN. The hidden layer structure is 50×50×50; the hidden layer activation function is "tansig" and the output layer activation is "purelin"; the gradient update uses "SCG" (scaled conjugate gradient); the performance function is MSE; the number of iterations is 1500 epochs; the computation is performed on a GPU (a configuration sketch follows this list).

(v) The test set is fed into the trained FNN for testing.
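As an illustration of setting (iv), the following MATLAB sketch configures such a network with the Deep Learning Toolbox. The placeholder data and variable names are assumptions, and the SMOTE step of (iii) is omitted; the layer sizes, transfer functions, training function, loss, data split and epoch count follow the settings above.

    rng(0);                                  % reproducibility (assumed)
    X = randn(38, 5000);                     % placeholder: 38 sensor parameters per sample
    T = full(ind2vec(randi(7, 1, 5000)));    % placeholder one-hot labels, 7 conditions

    net = fitnet([50 50 50], 'trainscg');    % 50x50x50 hidden layers, SCG training
    net.layers{1}.transferFcn = 'tansig';    % hidden activations
    net.layers{2}.transferFcn = 'tansig';
    net.layers{3}.transferFcn = 'tansig';
    net.layers{4}.transferFcn = 'purelin';   % linear output layer
    net.performFcn = 'mse';                  % mean squared error objective
    net.trainParam.epochs = 1500;
    net.divideParam.trainRatio = 0.8;        % 80/20 split as in (ii)
    net.divideParam.valRatio   = 0;
    net.divideParam.testRatio  = 0.2;

    net = train(net, X, T, 'useGPU', 'yes'); % GPU training as in (iv)
    [~, pred] = max(net(X), [], 1);          % predicted operating condition per sample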

According to the simulation, the total recognition accuracy is 99.9811%, which is a very good recognition effect. The convergence results are shown in Fig. 3. The mean squared error (MSE) of the training set gradually decreases as training progresses. With the gradual learning and optimization of the algorithm, the best performance is reached after 1500 training epochs, showing a good convergence effect.

Fig. 3 Mean squared error of the recognition on the training set

The recognition results are shown in Fig. 4, a multi-class confusion matrix representing the recognition performance of the FNN. The horizontal and vertical coordinates represent the 7 operating condition types, with rows representing predicted labels and columns representing true labels. In Fig. 4, the values are concentrated along the diagonal of the confusion matrix, with only scattered small numbers elsewhere, indicating good operating condition recognition ability.

Fig. 4 Confusion matrix of the operating condition recognition

Tab. 1 lists the evaluation indicators of the various operating conditions, described in terms of precision, accuracy, recall and F1-score. The closer an indicator is to 1, the better the recognition effect. The simulation results show that all the indicators are greater than 0.97, as shown in Tab. 1. Among them, all evaluation indexes of "operating condition 5" and "operating condition 6" equal 1, indicating strong recognition ability.

Tab. 1 The values of the indicators for different operating conditions

In conclusion, using the FNN to identify the current operating condition is quite effective and offers a basis for the subsequent operating condition transfer.

    3.2 Path planning simulation

In nuclear reactor control, the complex working environment, long transfer times and variable transfer conditions can cause demands to change in real time during the engineering conversion process. Building on the operating condition recognition, we resort to the reinforcement learning path planning algorithm based on lattice theory to plan the operating condition transitions.

The simulation is also developed with MATLAB 2021a. Five typical operating conditions are set as in Fig. 5. The figure contains 5 nodes and 20 paths; each connection corresponds to three weights, and the weights in the simulation are random numbers.

Fig. 5 Scheme for the operating condition nodes

In the simulation, we set the exploration rate in reinforcement learning to 0.2, the discount factor to 0.98, the learning rate to 0.001, and the number of training episodes to 10 000. Assume that we have several operating condition transfer tasks in succession, that is,

Task 1: node 1 is transferred to node 5;
Task 2: node 4 is transferred to node 2;
Task 3: node 3 is transferred to node 5.
The reinforcement learning path planning algorithm based on lattice theory and the traditional reinforcement learning path planning algorithm are simulated under the same settings.

According to Algorithm 2.1, when a new map runs for the first time, the reinforcement learning algorithm is invoked to find the path, so the running times of both algorithms are basically the same. In Tab. 2, the next planned node is node 2, and when the demand changes to the economic mode, the running time of the reinforcement learning path planning algorithm based on lattice theory is much lower than that of the traditional reinforcement learning path planning algorithm, because the path from node 2 to node 5 was already planned by the lattice-theory-based algorithm during the previous demand.

In Tab. 3, Task 2 is executed after Task 1. Since Task 1 does not generate the path needed for Task 2, Algorithm 2.1 must invoke reinforcement learning to find the required path, and the running times of the two algorithms are basically the same.

In Tab. 4, Task 3 is executed after Task 2: the system first switches to node 4 in maneuver mode and then switches to node 5 in economic mode. The simulation results show that the running time of Algorithm 2.1 is much lower than that of the traditional reinforcement learning path planning algorithm.

Tab. 2 The simulation results for Task 1
Tab. 3 The simulation results for Task 2

The above simulation results indicate that, compared with the traditional reinforcement learning path planning algorithm, Algorithm 2.1 does not require invoking the reinforcement learning algorithm for each demand, which results in lower computational complexity, reduced running time and better alignment with the path planning requirements of engineering nuclear reactors. When the number of nodes is particularly large and the number of training episodes in reinforcement learning is much lower than the size of the action space, the path results converge more and more closely to the global optimum as the algorithm runs repeatedly, and the self-learning advantage of Algorithm 2.1 becomes increasingly obvious.

Tab. 4 The simulation results for Task 3

    4 Conclusions

The development of SMRs that can adapt to various unmanned environments is of great strategic significance, and further research aimed at the safety, reliability and intelligence of SMRs is urgently needed. In this paper, an FNN is used to endow the reactor with state cognitive analysis ability, and the reinforcement learning path planning algorithm based on lattice theory is used to endow the reactor with task planning decision-making ability. According to the simulation results, the data generated by reactor sensors together with expert knowledge base labels are used to train and test the FNN, which attains strong recognition capability even in the reactor's strongly coupled, nonlinear complex system.

The operating condition conversion relation is described as a directed graph. By combining lattice theory with the traditional reinforcement learning path planning algorithm, a reinforcement learning path planning algorithm based on lattice theory is proposed. The simulation results indicate that, in comparison with the traditional reinforcement learning path planning algorithm, the new algorithm has lower complexity and better meets the technical requirements of nuclear reactor engineering.

    References:

[1] Zhang W W, He Z X, Wan X S, et al. A review on the control methods in small modular reactors [J]. J Sichuan Univ: Nat Sci Ed, 2024, 61: 020001.

[2] Chen Z, Wang P F, Liao L T, et al. Intelligent control of small pressurized water reactor [M]. Shanghai: Shanghai Jiao Tong University Press, 2022.

[3] Zhang B W. Research on key technologies of autonomous control for small modular reactor [D]. Harbin: Harbin Engineering University, 2020.

[4] Wang Q Q, Yin C C, Sun X J, et al. PID design and simulation of TMSR nuclear power control system [J]. Nucl Tech, 2015, 38: 58.

[5] Wang X K, Yang X H, Liu G, et al. Adaptive neuro-fuzzy inference system PID controller for SG water level of nuclear power plant [C]//2009 International Conference on Machine Learning and Cybernetics. Piscataway: IEEE, 2009.

[6] Yong E L. Robust H control approach to steam generator water level of CPR1000 nuclear power plant [D]. Harbin: Harbin Engineering University, 2014.

[7] Bartlett E B, Uhrig R E. Nuclear power plant status diagnostics using an artificial neural network [J]. Nucl Technol, 1992, 97: 272.

[8] Gernoth K A, Clark J W, Prater J S, et al. Neural network models of nuclear systematics [J]. Phys Lett B, 1993, 300: 1.

[9] El-Sefy M, Yosri A, El-Dakhakhni W, et al. Artificial neural network for predicting nuclear power plant dynamic behaviors [J]. Nucl Eng Technol, 2021, 53: 3275.

[10] Santosh T V, Vinod G, Saraf R K, et al. Application of artificial neural networks to nuclear power plant transient diagnosis [J]. Reliab Eng Syst Safe, 2007, 92: 1468.

[11] Ruan D, van der Wal A J. Controlling the power output of a nuclear reactor with fuzzy logic [J]. Inform Sciences, 1998, 110: 151.

[12] Nelson W R. REACTOR: An expert system for diagnosis and treatment of nuclear reactor accidents [J]. AAAI, 1982, 82: 296.

[13] Zeng W, Jiang Q, Liu Y, et al. Core power control of a space nuclear reactor based on a nonlinear model and fuzzy-PID controller [J]. Prog Nucl Energ, 2021, 132: 103564.

[14] Liu C, Peng J F, Zhao F Y, et al. Design and optimization of fuzzy-PID controller for the nuclear reactor power control [J]. Nucl Eng Des, 2009, 239: 2311.

[15] Wu P, Liu D C, Zhao J, et al. Fuzzy adaptive PID control-based load following of pressurized water reactor [J]. Power Sys Techno, 2011, 35: 76.

[16] Du Z Y, Ma Y G, Zhong R C, et al. Analysis of start-up characteristics of heat pipe reactor [J]. Nucl Power Eng, 2023, 44: 67.

[17] Yuan X B, Peng J Q, Zhang B H, et al. Analysis of fission product source term release and decay heat under pressurized water reactor rupture accident [J]. Nucl Tech, 2024, 47: 149.

[18] Zhan J X, Zheng Y T, Huang S L, et al. Study on feed and bleed based on SBLOCA of HPR1000 [J]. Nucl Sci Eng, 2024, 44: 142.

[19] Wang K, Yang J K, Zhao P C. Uncertainty analysis of loss of coolant flow accident in lead-bismuth reactor using subchannel code [J]. Nucl Tech, 2024, 47: 121.

[20] Mena P, Borrelli R A, Kerby L. Expanded analysis of machine learning models for nuclear transient identification using TPOT [J]. Nucl Eng Des, 2022, 390: 111694.

[21] Zubair M, Akram Y. Utilizing MATLAB machine learning models to categorize transient events in a nuclear power plant using generic pressurized water reactor simulator [J]. Nucl Eng Des, 2023, 415: 112698.

[22] Moshkbar-Bakhshayesh K, Ghofrani M B. Transient identification in nuclear power plants: A review [J]. Prog Nucl Energ, 2013, 67: 23.

[23] Svozil D, Kvasnicka V, Pospichal J. Introduction to multi-layer feed-forward neural networks [J]. Chemometr Intell Lab, 1997, 39: 43.

[24] Xiong W B, Guo L, Jiao T Y. A multi-agent path planning algorithm based on game theory and reinforcement learning [J]. J Shenzhen Univ Sci Eng, 2024, 41: 274.

[25] Zhou R, Long W, Li Y Y, et al. Research on AGV path planning for green remanufacturing systems [J]. J Sichuan Univ: Nat Sci Ed, 2019, 56: 883.

[26] Xia Q, Lei Y, Ye X Y. The application of genetic algorithm in global optimal path searching of AGV [J]. J Sichuan Univ: Nat Sci Ed, 2008, 45: 1129.

[27] Zhou Z, Long H, Li S, et al. Electric vehicles charging path planning method based on road network under multi parameters [J]. J Sichuan Univ: Nat Sci Ed, 2024, 61: 229.

[28] Zhang Z L, Zhang J L, Ying Y Q, et al. Fuzzy set theory and methods [M]. Wuhan: Wuhan University Press, 2010.

(Executive editor: Zhou Xingwang)
