
    Self-Awakened Particle Swarm Optimization BN Structure Learning Algorithm Based on Search Space Constraint

    Computers, Materials & Continua, 2023, Issue 9

    Kun Liu, Peiran Li, Yu Zhang, Jia Ren, Xianyu Wang and Uzair Aslam Bhatti

    1School of Information and Communication Engineering, Hainan University, Haikou, 570228, China

    2Payload Research Center for Space Electronic Information System, Academy of Space Electronic Information Technology, Xi’an, 710100, China

    3Research Center, CRSC Communication & Information Group Co., Ltd., Beijing, 100070, China

    ABSTRACT To obtain the optimal Bayesian network (BN) structure, researchers often use hybrid learning algorithms that combine the constraint-based (CB) method with the score-and-search (SS) method. This hybrid approach suffers from low search efficiency because the search space is large: the search process quickly falls into a local optimum and cannot reach the global optimum. To address this, a Particle Swarm Optimization (PSO) algorithm based on a search-space-constraint process is proposed. In the first stage, the method uses dynamic adjustment factors to constrain the structure search space and enrich the diversity of the initial particles. In the second stage, the update mechanism is redefined so that each update step corresponds one-to-one with the current structure. At the same time, a “self-awakened” mechanism is added to prevent premature particles from dominating the population: when the fitness value of a particle converges prematurely, an activation operation makes the particles jump out of local optima, preventing the algorithm from converging too quickly to a local optimum. Finally, the algorithm was compared with other algorithms on standard network datasets. The experimental results show that the algorithm finds the optimal solution within a small number of iterations and learns a more accurate network structure, verifying its effectiveness.

    KEYWORDS Bayesian network; structure learning; particle swarm optimization

    1 Introduction

    As a typical representative of probabilistic graphical models, the Bayesian Network (BN) is an essential theoretical tool for representing uncertain knowledge and performing inference [1,2]. BN is a crucial branch of machine learning; compared with other artificial intelligence algorithms [3], it has good interpretability. BNs are currently used in healthcare prediction [4], risk assessment [5,6], information retrieval [7], decision support systems [8,9], engineering [10], data fusion [11], image processing [12], and so on. Learning high-quality structural models from sample data is the key to solving practical problems with BN theory. Exact BN structure learning is an NP-hard problem [13]; therefore, some scholars have proposed heuristic algorithms to solve it. Common BN structure optimization methods include the Genetic Algorithm [14], the Hill Climbing Algorithm [15,16], the Ant Colony Algorithm [17,18], etc. However, these heuristic algorithms are usually prone to local extrema. To address this problem, the Particle Swarm Optimization (PSO) algorithm has gradually been applied to BN structure learning. The PSO algorithm offers information memory, parallelism, and robustness, and is a mature swarm intelligence method [19]. However, because of the particularity of the BN structure, PSO cannot be applied to structure learning directly. Based on this, researchers have proposed various improved algorithms.

    Li et al. [20] proposed a PSO algorithm based on the maximum amount of information. This algorithm uses a chaotic mapping method to represent the BN structure, and all calculation results are similarly mapped to 0–1 values; the algorithm complexity is high, and the accuracy of the results could be better. Liu et al. [21] proposed an edge-probability PSO algorithm, in which the position of a particle is regarded as the probability that the corresponding edge exists, and the velocity as the increase or decrease of that probability, thereby updating the particles in a continuous search space. However, the algorithm relies too much on expert experience to define the existence of edges. Wang et al. [22] used binary PSO (BPSO) to flatten the network structure matrix and redefined the update of particle velocity and position. Wang et al. [23] used a mutation operator and a nearest-neighbor search operator to avoid premature convergence of PSO. Although these two methods obtain a network structure based on the PSO algorithm to a certain extent, during the CB stage the search space remains large, so convergence is slow and the algorithm is prone to falling into local optima. Gheisari et al. [24] proposed a structure learning method that improves the PSO parameters, combining PSO with the update method of the genetic algorithm. However, because the initial particles are generated randomly, reasonable constraints on the initial particles are ignored, which lowers the precision of BN structure learning. Li [25] proposed an immune binary PSO method, combining immune theory from biology with PSO; the immune operator is added to prevent and overcome premature convergence. However, the algorithm complexity is higher and the convergence speed is slow.

    We propose a self-awakened PSO BN structure learning algorithm based on search space constraints to solve these problems. First, we use the Maximum Weight Spanning Tree (MWST) and Mutual Information (MI) to generate the initial particles, establishing an extended constraint space by dynamically increasing or decreasing adjustment factors. This increases initial particle diversity and reduces the network search space, lowering the complexity of the PSO search process. Then we establish a scoring function, and the structure search is carried out with a newly customized PSO update formula. Finally, a self-awakened mechanism is added to judge the convergence of particles, avoiding premature convergence and local optima.

    The remainder of this paper is organized as follows. Section 2 introduces the fundamental concepts of BN structure learning. In Section 3, we propose our self-awakened PSO BN structure learning algorithm. In Section 4, we present experimental comparisons between the proposed algorithm and other competitive methods. Finally, we give our conclusions and outline future work.

    2 Preliminaries

    2.1 Overview of BN

    BN is a probabilistic network used to address uncertainty and incompleteness. It is a graphical network based on probabilistic reasoning, and the Bayesian formula is the basis of this probability network.

    Definition: A BN can be represented by a two-tuple BN = (G, Θ), where G = (X, E) is a directed acyclic graph [26]. X = (x1, x2, ..., xn) is an n-element finite random variable set. E is the set of all directed edges between nodes, representing the dependencies among random variables. Θ = (θ1, θ2, ..., θn) is the network parameter vector, representing the set of conditional probability distributions of the nodes; π(xi) is the parent node set of node xi, and θi = P(xi | π(xi)) represents the conditional probability distribution of xi given its parent set π(xi). Therefore, the joint probability distribution of the variables can be expressed as:
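    The equation itself did not survive extraction; given the definitions above, Eq. (1) is presumably the standard factorization over parent sets:

```latex
P(x_1, x_2, \ldots, x_n) = \prod_{i=1}^{n} P\left(x_i \mid \pi(x_i)\right) \tag{1}
```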

    Among them, BN structure learning refers to the process of obtaining a BN by analyzing data, and it is the core of BN learning. Currently, structure learning mostly combines the CB and SS methods: the CB method first constrains the search, and the SS method then searches for the optimal solution. Swarm intelligence algorithms are widely used here, but they have two problems: they take a long time, and they converge prematurely and fall into local optima. Therefore, we design a hybrid structure learning method that, under reasonable constraints, obtains the optimal solution in a short time while avoiding local optima.

    2.2 MI for BN Structure Learning

    MI is a valuable information measure in information theory [27]. In probability theory, it measures the interdependence between variables. If two random variables are dependent, the state of one variable contains state information about the other; the stronger the dependency, the more state information it contains, that is, the greater the MI. We regard each node in the network as a random variable, calculate the MI between pairs of nodes, and find the related nodes of each.

    The MI between two nodes is calculated by Eq. (2):
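    Eq. (2) is an image in the original; given the terms defined below, it is presumably the standard identity relating MI to entropy and the joint distribution:

```latex
I(X;Y) = H(X) - H(X \mid Y) = \sum_{i}\sum_{j} P(X = x_i, Y = y_j)\,\log\frac{P(X = x_i, Y = y_j)}{P(X = x_i)\,P(Y = y_j)} \tag{2}
```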

    where H(X) represents the information entropy of a random variable, H(X|Y) represents the information entropy of X given Y, and both X and Y are discrete variables. P(X = xi, Y = yj) represents the joint probability distribution of X and Y. From Eq. (2) we can see that I(X;Y) ≥ 0. When I(X;Y) = 0, the random variables X and Y are independent of each other, and there is no connecting edge between X and Y. Conversely, when I(X;Y) > 0, there may be a directed edge between nodes X and Y; because the direction of the edge is uncertain, it is expressed as an undirected edge X–Y. In BN structure learning, MI is mainly used to constrain the search range and update the current BN structure, and it is an important basis for establishing the correlation between two variables. However, using MI to determine the dependence between nodes depends on an accurate threshold. Without sufficient expert experience, the problem we need to solve is how to avoid the negative impact of an MI scale that is too large or too small.

    2.3 Bayesian Information Criterion Scoring Function

    This article uses the Bayesian information criterion (BIC) score as the standard for measuring the quality of a structure [28]. The BIC score function consists of two parts: a log-likelihood term that measures how well the candidate structure matches the sample data, and a penalty term related to the dimensionality of the model and the size of the dataset. The equation is as follows:
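    The BIC equation is missing from the extracted text; for a structure G and a dataset D of N samples it presumably takes the standard form, matching the two parts described above:

```latex
\mathrm{BIC}(G \mid D) = \log P\left(D \mid G, \hat{\Theta}\right) - \frac{\dim(G)}{2}\,\log N
```

where dim(G) is the number of free parameters of the model and the first term is the maximized log-likelihood.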

    2.4 Particle Swarm Optimization Algorithm

    The PSO algorithm is a branch of evolutionary algorithms: a random search algorithm abstracted from the foraging behavior of bird flocks in nature [29]. First, a group of particles is initialized; during the iterative process, each particle updates two extreme values: the individual extremum Pbest and the group extremum Gbest. After evaluating the first set of optimal values Pbest and Gbest, the particle updates its velocity and position according to Eq. (6):
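    Eq. (6) did not survive extraction; given the symbols defined below, it is presumably the standard PSO update:

```latex
\begin{aligned}
V_{t+1} &= \omega V_t + c_1 r_1 \left(P_{\mathrm{best}} - X_t\right) + c_2 r_2 \left(G_{\mathrm{best}} - X_t\right) \\
X_{t+1} &= X_t + V_{t+1}
\end{aligned} \tag{6}
```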

    where ω is the inertia weight, which controls the influence of the previous iteration’s velocity on the current velocity; c1 and c2 are learning factors; r1 and r2 are random numbers in [0,1]; t is the iteration number; Vt is the particle velocity at the current moment; and Xt is the particle position at the current moment.

    The literature [23] proved that in the evolution of the PSO algorithm, the particle velocity can be replaced by the particle position, and proposed the more simplified PSO update Eq. (7):
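    The simplified equation is an image in the original; the description of its three terms in the next paragraph implies:

```latex
X_{t+1} = \omega X_t + c_1 r_1 \left(P_{\mathrm{best}} - X_t\right) + c_2 r_2 \left(G_{\mathrm{best}} - X_t\right) \tag{7}
```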

    In Eq. (7), ωXt represents the influence of the past on the present, with the degree adjusted by ω; c1r1(Pbest − Xt) represents the self-awareness of the particle; and c2r2(Gbest − Xt) represents resource sharing among particles and social information. In BN structure learning, however, the particle position represents the network structure as an adjacency matrix, and the parameters ω, c1, c2, r1, and r2 are all fractional. Multiplying them with the matrix turns its elements into fractions, so the matrix can no longer represent a network structure and the update loses its original meaning. For this reason, no standard PSO variant can be applied directly. Our algorithm instead uses the update idea of PSO to redefine the update equation and rationalize the update process. To shorten the search time of particles and improve the learning efficiency of the BN structure, the quality of the initial particles becomes particularly important.

    3 Method

    In this part, we first use the MWST to optimize the initialization of PSO. On the one hand, the formation of edges between nodes is constrained; on the other hand, initialization shortens the particle iteration time and increases learning efficiency. Second, we redefine the particle update formula, defining the position update of particles in multiple stages. Finally, a “self-awakened” mechanism is added during the iterative process to effectively prevent particles from converging too fast and falling into local optima. We call this new method self-awakened particle swarm optimization (SAPSO). The flow chart is shown in Fig. 1.

    Figure 1:Flow chart of SAPSO algorithm

    3.1 SAPSO Initialization Particle Structure Construction

    The MWST generates the initial undirected graph by calculating MI. The construction principle is as follows: if variable X has three candidate undirected edges X–Y, X–W, and X–V, with I(X;Y) > I(X;W) and I(X;Y) > I(X;V), then only X–Y is retained in the MWST. Following this principle, all nodes are traversed in turn to create the maximum-weight spanning tree. The structure obtained from the MWST is an undirected graph. Because the conditions for generating edges are strict, the tree contains no cycles, so it is impossible to produce a ring structure after orientation. Therefore, in the absence of expert knowledge of the node ordering, we adopt random orientation to generate a finite number of directed acyclic network structures with different node orderings.
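    The tree construction described above can be sketched as a Kruskal-style maximum-weight spanning tree over pairwise MI weights. This is an illustrative implementation, not the paper’s code; the function names and the empirical MI estimator are our own.

```python
import numpy as np
from itertools import combinations

def mutual_information(x, y):
    """Empirical MI between two discrete variables given as 1-D sample arrays."""
    mi = 0.0
    for xv in np.unique(x):
        for yv in np.unique(y):
            pxy = np.mean((x == xv) & (y == yv))
            if pxy > 0:
                mi += pxy * np.log(pxy / (np.mean(x == xv) * np.mean(y == yv)))
    return mi

def max_weight_spanning_tree(data):
    """Build the MWST over MI edge weights; data has shape (samples, nodes).
    Returns a list of undirected edges (i, j) with i < j."""
    n = data.shape[1]
    edges = sorted(((mutual_information(data[:, i], data[:, j]), i, j)
                    for i, j in combinations(range(n), 2)), reverse=True)
    parent = list(range(n))          # union-find over connected components
    def find(u):
        while parent[u] != u:
            parent[u] = parent[parent[u]]
            u = parent[u]
        return u
    tree = []
    for w, i, j in edges:            # greedily keep the highest-MI edges
        ri, rj = find(i), find(j)
        if ri != rj:                 # keep an edge only if it joins two components
            parent[ri] = rj
            tree.append((i, j))
    return tree
```

Each of the resulting n − 1 undirected edges can then be randomly oriented to produce acyclic initial structures, as the paper describes.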

    Since the spanning tree has few connected edges and a single choice of edges, the obtained network structure, although acyclic, is too simple, while the complexity of the true network grows as the number of nodes increases. This makes the initialized particle diversity insufficient, so the initial particles need to be further augmented, optimized, and expanded. To avoid producing too many useless directed edges, which would negatively affect the subsequent algorithm, we use MI to solve this problem.

    First, the degree of dependence between each node and the other nodes is calculated, and the MI results are sorted to obtain the corresponding Maximum Mutual Information (MMI). Then the dynamic adjustment factor is set as α = α + 0.1R, R = 1, 2, ..., so that α gradually increases. In this way, strong dependencies between node pairs are identified and the subsequently added directed edges are constrained. When MI(xi, xj) > αMMI(xi) and MI(xi, xj) > αMMI(xj), we assume a strong dependency between the pair of nodes, which generates a connected edge; otherwise, no edge is generated. When α increases, the constraint becomes more stringent: fewer node pairs satisfy the edge condition, and the generated initial particle structures are relatively simple. Conversely, when α decreases, the constraint loosens, the dependency requirement drops, more node pairs satisfy the edge condition, and the generated initial particle structures are relatively complicated.

    Assuming we have directed acyclic network structures with P different node orderings, we can obtain R undirected edge-addition combinations under different constraints by changing the adjustment factor α a total of R times. Then, P node orderings are generated by random orientation, and a new edge-free orientation is added. Finally, the acyclicity and connectivity of each initial network structure are checked. The overall process is shown in Fig. 2.

    Figure 2:SAPSO generates initial particles
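    The α-threshold rule for candidate edges can be sketched as follows (an illustrative helper, assuming a symmetric MI matrix with zeros on the diagonal):

```python
import numpy as np

def candidate_edges(mi, alpha):
    """Return undirected node pairs (i, j) whose MI exceeds alpha times the
    maximum MI of *both* endpoints, per the strong-dependency rule.
    mi is a symmetric (n, n) matrix of pairwise MI, zero on the diagonal."""
    n = mi.shape[0]
    mmi = np.max(mi + np.diag([-np.inf] * n), axis=1)  # MMI(x_i), ignoring the diagonal
    pairs = []
    for i in range(n):
        for j in range(i + 1, n):
            if mi[i, j] > alpha * mmi[i] and mi[i, j] > alpha * mmi[j]:
                pairs.append((i, j))
    return pairs
```

Raising α by 0.1 each round, as in the text, tightens the constraint and yields progressively simpler edge-addition combinations.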

    Definition: Two directed acyclic graphs DAG1 and DAG2 with the same node set are equivalent if and only if:

    (1) DAG1 and DAG2 have the same skeleton;

    (2) DAG1 and DAG2 have the same V structure.

    Since the initial PSO particles we obtain are based on the same spanning tree, it is easy to produce equivalent-class structures even if the directions of the edges differ. To ensure the difference and effectiveness of each particle, the initial particles need to be transformed once: among structures with the same score, the identical ones are selected and edge-addition and edge-deletion operations are performed on them, finally yielding the final initial particles.

    The pseudo-code at this stage is as follows:

    In this way, an initial set of network structures with some diversity and high scores is obtained. Initialization dramatically reduces the search space of the BN structure, which helps reduce the number of PSO iterations and shortens the structure learning time in the next step.

    3.2 Custom Update Based on PSO

    The BN structure is usually encoded as an adjacency matrix, so the PSO update equation cannot be used directly. There are generally two remedies: the first is to use the binary PSO algorithm; the second is to use an improved PSO algorithm adapted to updating the adjacency matrix of the BN structure. Since binary PSO requires a normalization function and the particle update must be performed several times, the results can be inaccurate. Therefore, we choose the second method and redefine the particle position and velocity update. The flowchart is shown in Fig. 3.

    Figure 3:PSO custom update process

    In BN structure learning, according to the characteristics of the search space, we define the adjacency matrix expressing the DAG as the particle position Xt. We then use the BIC score function as the fitness to determine the evaluation criteria: the higher the fitness value, the better the particle. From this we select the group extremum Gbest and, for each particle, the position with the highest fitness value, namely the individual extremum Pbest. In Eq. (7), (Pbest − Xt) and (Gbest − Xt) are used as the update terms of the particles.

    In BN structure learning, the position X takes the form of an adjacency matrix describing the network structure. For reasons of validity, it cannot correspond directly to the particle swarm update equation, so the update process must be redefined. To facilitate the subsequent description, the update part is denoted update.

    The update process adjusts the movement of a particle in terms of both direction and step length, so as to get better feedback on the current status of the particle during iteration. The position movement of a particle is divided into two steps: orientation and fixed length. First, we use the fitness value to assess the current position of the particle and determine the direction of its movement.

    Then we propose particle complexity as another update index, used to determine the moving step size by comparing the number of edges in each network structure. We set the structure with the higher fitness value as the target structure, calculate its complexity, and compare it with the complexity of the current particle.

    In cases V1 and V2, the fitness of the target structure is better, so the complexity of the current particle structure should move closer to that of the target structure. When the complexity of the target structure is higher, we accelerate the current structure, that is, add edges to make up for the current particle’s lack of complexity. Similarly, when the complexity of the target structure is lower, edges of the current particle are removed. In case V3, the fitness of the current particle is better, so we replace the individual extremum and the group extremum with the current particle and then move in a random direction with a fixed step; that is, the existing network structure has an edge randomly added, deleted, or reversed.
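    The direction-and-step rule above can be sketched as follows. This is a simplified illustration under our own naming; the actual edge operators and acyclicity checks are omitted.

```python
import random

def custom_update(current, target, fitness, complexity):
    """One SAPSO-style move decision. `fitness` and `complexity` (edge count)
    are scoring callbacks on a structure. Returns the chosen edge operation."""
    df = fitness(target) - fitness(current)        # fitness index
    dc = complexity(target) - complexity(current)  # complexity index
    if df > 0:
        # cases V1/V2: target is fitter, so move the current particle's
        # complexity toward the target's
        return 'add_edge' if dc > 0 else 'delete_edge'
    # case V3: current beats the target; random fixed-step self-update
    return random.choice(['add_edge', 'delete_edge', 'reverse_edge'])
```

In case V3 the caller would also replace Pbest/Gbest with the current particle, as described in the text.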

    The entire position update operation can be expressed in two-dimensional coordinates, as shown in Fig. 4.

    Figure 4:Two-dimensional coordinates of PSO update factors

    The complexity index is the difference between the number of directed edges in the target structure and in the current particle structure. The fitness index is the difference between the fitness value of the target structure and that of the current particle. The fitness index determines whether the particle performs a self-update or a directional update, and whether the current particle replaces the target particle; the complexity index determines whether the particle performs an edge-addition or edge-deletion operation. The pseudo-code of the particle update process is shown below.

    As the “group learning” part of the network structure search, this update can continuously find better solutions through iterative optimization and the interactive learning of group information. However, it still suffers from overly fast convergence and easily falls into local optima.

    3.3 Breaking Out of the Local Optimal Particle“Self-Awakened”Mechanism

    To further accelerate the learning efficiency of the algorithm while preventing the learning process from falling into a local optimum, we set up a “self-awakened” mechanism. The specific process is shown in Fig. 5.

    Figure 5:Particle“self-awakened”mechanism

    T is used as the “premature factor” to identify whether the result has converged. When T ≥ 3, i.e., Gbest has gone more than three loop iterations without updating, the result is considered converged. At this time, we force the deletion of the particle with the lowest BIC score and replace it with a new particle. There are two ways to generate new particles: one is to generate an adjacency matrix randomly; the other is to create new particles from existing information. Because the mechanism requires adding the best possible particles, the best way is to fine-tune the group extremum and add the result as a new particle to the iterative optimization process. Borrowing the update method of the hill-climbing algorithm, we reverse an edge of Gbest or randomly add an edge to it; the resulting “self-awakened” particle then replaces the lowest-scoring particle before re-entering the loop. The specific update mechanism is shown in Eq. (10):

    Through the “self-awakened” mechanism, we not only make the particles jump out of local optima but also update the node ordering, reducing the reversed edges caused by a misleading node ordering.
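    The mechanism above can be sketched as follows. The names and the `perturb` callback (which would reverse or randomly add an edge of Gbest, hill-climbing style) are illustrative assumptions.

```python
def self_awaken(particles, scores, gbest, stall_count, perturb, t_max=3):
    """If Gbest has stalled for t_max iterations (premature factor T), replace
    the worst-scoring particle with a perturbation of Gbest and reset T.
    Returns the (possibly modified) particle list and the new stall count."""
    if stall_count >= t_max:
        worst = min(range(len(particles)), key=lambda i: scores[i])
        particles[worst] = perturb(gbest)   # "self-awakened" particle
        stall_count = 0
    return particles, stall_count
```

The caller increments `stall_count` each iteration in which Gbest does not improve and resets it to zero whenever it does.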

    4 Experimental Results and Discussion

    4.1 Experimental Preparation

    In this section, to verify the performance of the algorithm, we chose the common standard ASIA [30] and ALARM [31] networks to complete the experiments. The ASIA network is a medical example reflecting respiratory disease, stating whether a patient attending a chest clinic has tuberculosis, lung cancer, or bronchitis; each variable takes two discrete states. The ALARM network has 37 nodes and is a medical diagnostic system for patient monitoring. Based on the standard network structures, the BNT toolbox is first used to generate datasets with sample sizes of 1000 and 5000 according to the standard probability tables. Then the algorithm in this paper is used to learn the structure from the generated data. Meanwhile, the learning results are compared with improvement on the K2 algorithm via Markov blanket (IK2VMB) [32], max-min hill climbing (MMHC) [33], the Bayesian network construction algorithm using PSO (BNC-PSO) [24], and BPSO [34] to verify the effectiveness of our algorithm.

    We used four evaluation indicators:

    (1) The final average BIC score (ABB). This index reflects the actual accuracy of the algorithm.

    (2) The average algorithm running time (ART). This index reflects the running efficiency of the algorithm.

    (3) The average number of iterations to reach the best individual (AGB). This index reflects the algorithm’s complexity.

    (4) The average Hamming distance between the optimal individual and the correct BN structure (AHD). The Hamming distance is defined as the sum of lost edges, redundant edges, and reversed edges compared with the original network. This index reflects the structural accuracy of the algorithm.
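    The Hamming distance defined in (4) can be computed as follows (a sketch under the assumption of 0/1 adjacency matrices with A[i][j] = 1 meaning an edge i → j):

```python
import numpy as np

def hamming_distance(learned, true):
    """Structural Hamming distance: lost + redundant + reversed edges,
    comparing two directed adjacency matrices of equal shape."""
    learned, true = np.asarray(learned), np.asarray(true)
    # reversed: learned has i->j where the true network has j->i
    reversed_ = int(np.sum((learned == 1) & (true == 0) & (true.T == 1)))
    # lost: true edge present in neither direction in the learned network
    lost = int(np.sum((true == 1) & (learned == 0) & (learned.T == 0)))
    # redundant: learned edge absent in both directions in the true network
    redundant = int(np.sum((learned == 1) & (true == 0) & (true.T == 0)))
    return lost + redundant + reversed_
```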

    The experimental platform is a personal computer with an Intel Core i7-5300U CPU at 2.30 GHz, 8 GB RAM, and a Windows 10 64-bit operating system. The programs were implemented in MATLAB R2014a, and the BIC score was used as the final standard for determining structural fitness. Each experiment was repeated 60 times and the average value was calculated.

    4.2 Comparison between SAPSO and Other Algorithms

    In this section, we compare experimentally with BPSO, BNC-PSO, MMHC, and IK2VMB. We set the population size to 100 and the number of iterations to 500, and repeated each independent experiment 60 times to obtain averages. Table 1 shows the four datasets used in the experiments.

    Table 1:Dataset standard information

    Table 2 compares the experimental results of the four structure learning algorithms on the different datasets, including the average time, BIC score, average Hamming distance, average convergence generation, and the BIC score of the initialized optimal structure for the two networks. Since MMHC and IK2VMB are not iterative algorithms, their average running time and convergence generations are not counted. As the table shows, on the ASIA network our algorithm outperforms BNC-PSO, BPSO, MMHC, and IK2VMB in learning accuracy (i.e., the average number of erroneous edges) and learning time, and its accuracy keeps improving as the dataset grows. Similarly, on the ALARM network, although the network structure is more complicated, our algorithm can still obtain the global optimal solution in a short time, completing its operations fastest.

    Table 2:Comparison of different algorithms in each dataset

    4.3 Performance Analysis of SAPSO

    To better analyze the efficiency and performance of the algorithm, we plot the statistics of the previous section, as shown in Fig. 6.

    The figure shows the relationship between the number of iterations and the BIC score on the ASIA network with the 1000-sample dataset. It can be seen that our algorithm obtains the best BIC score. Although the BPSO algorithm converges earliest, its accuracy is poor. BPSO converges faster because it is a global random search algorithm whose randomness is reinforced with each iteration; however, because it lacks local exploration, it is more likely to fall into a local optimum, which is why it converges so rapidly. Our algorithm, by contrast, quickly obtains an excellent initial population in the early stage of the search, and the self-awakened mechanism added during the search stage allows it to jump out quickly after falling into a local optimum and to obtain a globally optimal solution.

    Figure 6: The group’s extreme BIC score during structure learning on the 1000-sample dataset of the ASIA network

    Fig. 7 shows the relationship between the number of iterations and the BIC score on the 5000-sample dataset of the ALARM network. As on the ASIA network, our algorithm obtains the optimal network structure, but its advantage in BIC score over the other algorithms is smaller. This is because the algorithm’s advantage gradually weakens as the complexity of the network structure increases. Although the BIC scores are similar, our algorithm is still better than the other algorithms in Hamming distance and time efficiency.

    Figure 7: The group’s extreme BIC score during structure learning on the 5000-sample dataset of the ALARM network

    To further illustrate the role of the self-awakened mechanism, we show a specific iteration run on the ASIA network in Fig. 8. The algorithm reached a local extremum in the early stage, but after a while it eventually jumped out of the local extremum and obtained the global extremum. This verifies the effectiveness of the mechanism.

    Figure 8: A learning run on the ASIA network structure

    5 Conclusion

    This paper constructs a hybrid structure learning method based on the heuristic swarm intelligence algorithm PSO and the MWST. First, an undirected graph is constructed through the MWST; then new directed edges are added through random orientation and MI-constrained edge-generation conditions to increase the diversity of the initial particles, forming multiple connected directed acyclic graphs as the initial particles of PSO. Next, the idea of PSO is used to reconstruct the particle update process, with fitness and complexity as the criteria for judging particle quality, through edge addition, edge deletion, and edge reversal. Finally, a “self-awakened” mechanism is added to the PSO optimization process to constantly monitor the update dynamics of the particles’ optimal solution, letting the fittest survive and replacing “inferior” particles with new particles in time to avoid premature convergence and local optima. Experiments show that the particle initialization process improves the quality of the initial particles and speeds up particle optimization; the reconstructed PSO optimization method reasonably combines BN structure learning with the particle update, making the learning results more accurate; and the “self-awakened” mechanism effectively prevents the algorithm from prematurely converging to a local optimum. Compared with other algorithms, the method in this paper achieves shorter learning times, higher accuracy, and greater efficiency. It can be further applied to complex network structures and to solving practical problems.

    Although the proposed algorithm solves the BN structure learning problem to a certain extent, it is affected by the scoring function, since multiple structures can correspond to the same score; over-reliance on the scoring function therefore affects the final accuracy of the learned structure. In addition, the parameters selected for the PSO algorithm are not necessarily optimal. Future work can improve the scoring function and study how to set the PSO parameters more reasonably, which would further reduce the algorithm’s complexity and improve its efficiency.

    Acknowledgement: The authors wish to acknowledge Dr. Jingguo Dai and Professor Yani Cui for their help in interpreting the significance of the methodology of this study.

    Funding Statement: This work was funded by the National Natural Science Foundation of China (62262016), in part by the Hainan Provincial Natural Science Foundation Innovation Research Team Project (620CXTD434), in part by the High-Level Talent Project of the Hainan Natural Science Foundation (620RC557), and in part by the Hainan Provincial Key R&D Plan (ZDYF2021GXJS199).

    Author Contributions: The authors confirm their contributions to the paper as follows: study conception and design: Kun Liu, Peiran Li; data collection: Kun Liu, Uzair Aslam Bhatti; methodology: Yu Zhang, Xianyu Wang; analysis and interpretation of results: Kun Liu, Peiran Li, Yu Zhang, Jia Ren; draft manuscript preparation: Kun Liu, Peiran Li; writing, review & editing: Kun Liu, Yu Zhang; funding: Jia Ren; supervision: Yu Zhang, Jia Ren; resources: Xianyu Wang, Uzair Aslam Bhatti. All authors reviewed the results and approved the final version of the manuscript.

    Availability of Data and Materials: The data that support the findings of this study are available from the corresponding author, Yu Zhang, upon reasonable request.

    Conflicts of Interest: The authors declare that they have no conflicts of interest to report regarding the present study.
