
    STABC-IR: An air target intention recognition method based on bidirectional gated recurrent unit and conditional random field with space-time attention mechanism

CHINESE JOURNAL OF AERONAUTICS, 2023, Issue 3

    Siyuan WANG, Gang WANG, Qiang FU, Yafei SONG, Jiayi LIU, Sheng HE

    Air Defense and Antimissile School, Air Force Engineering University, Xi’an 710051, China

KEYWORDS Bidirectional gated recurrent network; Conditional random field; Intention recognition; Intention transformation; Situation cognition; Space-time attention mechanism

Abstract The battlefield environment is changing rapidly, and fast and accurate identification of the tactical intention of enemy targets is an important condition for gaining a decision-making advantage. The current Intention Recognition (IR) methods for air targets have shortcomings in temporality, interpretability and the back-and-forth dependency of intentions. To address these problems, this paper designs a novel air target intention recognition method named STABC-IR, which is based on Bidirectional Gated Recurrent Unit (BiGRU) and Conditional Random Field (CRF) with Space-Time Attention mechanism (STA). First, the problem of intention recognition of air targets is described and analyzed in detail. Then, a temporal network based on BiGRU is constructed to meet the temporal requirement. Subsequently, STA is proposed to focus on the key parts of the features and timing information, meeting certain interpretability requirements while strengthening the temporal modeling. Finally, an intention transformation network based on CRF is proposed to solve the back-and-forth dependency and transformation problem by jointly modeling the tactical intention of the target at each moment. The experimental results show that the recognition accuracy of the jointly trained STABC-IR model reaches 95.7%, which is higher than that of other recent intention recognition methods. STABC-IR addresses the problem of intention transformation for the first time and considers both temporality and interpretability, which is important for improving tactical intention recognition capability and has reference value for the construction of command and control auxiliary decision-making systems.

    1.Introduction

With the development of military technology and aviation technology, the confrontation and complexity of the air defense battlefield have increased significantly. Situational cognition is a core aspect of command and control in operations, and is a prerequisite and basis for effective decision making and correct action.1 In the highly real-time and highly adversarial information-based battlefield environment, the types and numbers of air targets have increased. We need to quickly and accurately perceive the battlefield state and realize a clear and robust battlefield situation assessment to provide a reliable basis for commanders' command decisions. Target intention recognition is a core component of situational cognition. Intention recognition is a key step in the transition from the information domain to the cognitive domain, and is a prerequisite and foundation for battlefield cognition and executive decision making. Correctly determining the operational intention of enemy targets helps commanders understand the battlefield situation and make reasonable decisions, so as to seize the initiative in the war.

Target intention recognition is essentially a pattern recognition problem under dynamic confrontational conditions. As technology has continued to develop, the amount of information in the battlefield environment has increased dramatically, and it is difficult to quickly and accurately identify the intention of a target from multiple battlefield data sources simply by relying on a commander's manual judgment. Intention recognition requires a series of highly abstract and complex thinking activities, such as key feature extraction, comparative analysis, association and reasoning, to achieve accurate target intention recognition based on professional knowledge and relevant experience, considering key information such as the battlefield environment, target attributes and target status. In a massive data environment, manual processing methods are inadequate in terms of real-time utilization and accuracy. Therefore, it is necessary to design intelligent intention recognition models that can combine the efficient processing power of computers with the pattern reasoning and cognitive experience of humans to achieve accurate recognition and reasoning of target intention in real time.

Intention recognition is widely used in fields such as human behavior prediction,2-5 vehicle lane change intention,6-9 and question and answer systems,10-12 and the related technical methods are relatively mature. Currently, intention recognition methods are divided into two main categories: model-based intention recognition methods and data-based intention recognition methods. The former relies on predefined models combined with adaptive adjustment of relevant parameters, resulting in a deterministic model. The latter is based on neural networks, which do not require the assumption of a priori prototypes and learn potential patterns directly from the data to eventually obtain an intention recognition model. Model-based intention recognition methods have made great progress and gradually become mature. This class of methods mainly includes template matching methods,13,14 expert systems,15,16 decision tree methods,17,18 Bayesian networks,19-22 etc. In recent years, increasing data acquisition and computational power have made data-driven intention recognition algorithms possible. The main methods in this category include neural networks,17,23-26 deep learning,27-29 etc. We briefly introduce the above air target intention recognition methods and compare the advantages and disadvantages of different methods in Section 2.

Most of the above studies only consider the state of the target at a single moment to perform intention recognition. However, in practical scenarios, the intention usually needs to be reflected by a change of action over a continuous period of time, the intentions of the target at different moments have a back-and-forth connection, and the process of intention recognition should have a certain degree of interpretability. In this paper, we propose a novel air target intention recognition method named STABC-IR based on deep learning. Here, STA refers to the space-time attention mechanism, B refers to BiGRU (Bidirectional Gated Recurrent Unit), C refers to CRF (Conditional Random Field), and IR refers to intention recognition.

    Our main contributions are summarized as follows.

(1) We describe and analyze the problem of air target intention recognition. The target intention space and intention recognition characteristic set are constructed and uniformly encoded. A hierarchical strategy is used to select twelve-dimensional target features, which are normalized and uniformly encoded, and the cognitive experience of decision makers is encapsulated as intention labels.

(2) We construct a temporal network based on BiGRU to address the temporal characteristics of the intention recognition problem. BiGRU can model the connection between the inputs of the preceding and following moments and mine the target intention information embedded in the temporal data.

(3) We propose the space-time attention mechanism to address the interpretability problem of neural network applications in the field of intention recognition. The relationships among different categories of features are first mined and analyzed using space attention, followed by the relationships among inputs at different moments using time attention. The model results are represented using visualization techniques, which further improves the model's temporal analysis capability while endowing it with a certain degree of interpretability.

(4) We design an intention transformation network based on CRF to address the possible back-and-forth dependency characteristic of intention sequences. The tactical intention of the target at each moment is jointly modeled using the intention transfer feature function to characterize the back-and-forth dependency of the intention, and the negative log-likelihood loss function is constructed as the loss of the whole network to finally obtain the target intention recognition result at the current moment.

Our proposed STABC-IR model for the first time simultaneously considers the temporal sequence and the back-and-forth dependency of intentions in the process of intention recognition, and gives the model a certain degree of interpretability through the space-time attention mechanism. To verify the performance of STABC-IR, comparative experiments and visualization experiments were conducted. The latest intention recognition methods used for comparison mainly include the model-based decision tree method17 and IE-DSBN,20 as well as the data-based DBP,25 SVM,26 PCLSTM,27 LSTM-Attention,28 and GRU-Attention.29 The experimental results show that the STABC-IR model has overall advantages and can effectively solve the key problems of temporality, interpretability, and back-and-forth dependency in the intention recognition process. In addition, we also conducted application experiments. We added STABC-IR to the command decision system, which helped us achieve a higher winning percentage in the confrontation simulation. The recognition process and results of STABC-IR are in line with the situational cognition thinking of commanders. STABC-IR is of great significance for improving the tactical intention recognition of air targets and has reference value for the construction of command and control auxiliary decision systems.

The remainder of this paper is organized as follows. We briefly describe several intention recognition methods and compare their advantages and disadvantages in Section 2. Section 3 provides a detailed description of the air target intention recognition problem and shows the intention space and the method to encode the intention features. The proposed STABC-IR model is introduced in detail in Section 4, followed by the model experiment and analysis in Section 5. Conclusions are drawn in Section 6.

    2.Related works

The basic principles of battlefield target intention recognition are as follows. The intention-related feature data are extracted from the static and dynamic attributes of the battlefield environment and the target, and then the intrinsic linkage of the feature data is synthesized for analysis and inference to determine the operational intention of enemy targets. Target intention recognition is a focus of combat research, and has become a hot research field in recent years. This section briefly introduces the current model-based and data-based intention recognition methods, and compares their advantages and disadvantages.

    2.1.Model-based intention recognition methods

    2.1.1.Template matching methods

The template matching method first establishes a template database based on the war experience summarized by military experts, and then extracts the intention feature information based on the actual actions of the enemy. Finally, the matching degree between the feature information and the template is deduced by the maximum similarity method, and the template with the maximum matching degree is the recognition result. Noble13 proposed a pattern-based knowledge extraction system that enables an operational expert to represent his understanding of situations as a hierarchy of schema-like templates. Each template models a class of activities. It specifies the participants in these activities, the events to be performed by each participant, and the temporal and logical relationships between events. Floyd et al.14 applied the template matching method to the field of over-the-horizon air warfare. By designing a model that can learn the enemy aircraft and missile behavior to classify the enemy aircraft types and weapon capabilities, they proved that the system has good adaptability to limited classification opportunities and noisy air combat scenarios.

The template matching method is easy to implement, conforms to the laws of human cognition, and is suitable for combat intention recognition with clear intention categories. However, this method mechanically fragments the battlefield situation without considering the concealment and deceptiveness of intentions. At the same time, the establishment of the template database depends on the prior knowledge of domain experts, so it is difficult to guarantee its objectivity and credibility. There are also difficulties in updating the template database.

    2.1.2.Expert systems

Expert systems use rules to describe knowledge, which usually consists of two parts. The first part is the conditions, which are the possible battlefield situations. The second part is the results, which are the intentions of the target. If the condition matches a known fact, the intention recognition result is output. The rule-based intention recognition model requires the construction of a knowledge base and an inference engine. The knowledge base is given by domain experts and can be extended and improved. The inference engine can be implemented by a combination of inference methods. The common inference methods include Bayesian inference, evidential inference, intuitionistic fuzzy inference, and so on. Carling15 analyzed the naval posture and threat assessment process and designed an expert system using real-time knowledge to implement naval battlefield situation assessment. Zhou et al.16 improved the combat intention recognition expert system to solve the problem of insufficient expert knowledge.

Although the expert system has strong knowledge expression and knowledge reasoning ability, it is difficult to realize because it needs to abstract a complete knowledge base and reasoning rules, and its fault tolerance and learning ability are not strong. Especially in the face of the complex information-based battlefield, it is difficult to generalize the complex evolution law of the battlefield by relying solely on mechanical reasoning rules.

    2.1.3.Decision tree methods

The decision tree method uses probability and the tree structure from graph theory to compare different solutions in decision-making to obtain the optimal solution. Zhou et al.17 first predicted the future state information of the target from real-time serial data based on LSTM networks and extracted the rules from uncertain and incomplete prior knowledge using the decision tree technique. Then the decision tree is used to obtain the target intention from the prediction data. Wang and Li18 proposed a method based on the XGBoost decision tree in order to improve the accuracy of tactical intention recognition of air targets.

The decision tree method is easy to understand and interpret, and the amount of data required for training is small. The structure of the tree can be visualized. However, the decision tree method tends to produce an overly complex model, and such a model can have poor generalization performance on the data, i.e., overfitting problems. In addition, decision trees can be unstable, as small changes in the data can lead to completely different tree generation.

    2.1.4.Bayesian networks

Bayesian networks originated from the Bayesian theorem and are the product of the combination of probability theory and graph theory. They have been widely used in solving uncertain reasoning problems.30 Bayesian networks consist of nodes representing battlefield events and their states, and directed arcs representing event transfer relations. After constructing the network, it is necessary to determine the network parameters. Prior probabilities are determined for top-level events without parent nodes, and conditional probabilities are given for events with parent nodes. New events are detected, and the network parameters are updated by backward propagation. When a certain intention hypothesis exceeds the preset threshold, that intention is the recognition result. Chen and Wu19 developed a hierarchical Bayesian network to represent the uncertain factors and their uncertain relationships related to the confrontation intention in the field of naval warfare. Xu et al.20 introduced information entropy theory in order to optimize the combat intention recognition algorithm based on dynamic sequential Bayesian networks. By analyzing the amount of useful information provided by different participating attributes, the attribute weights are allocated objectively. Jin et al.21 gave a Bayesian network optimization method for the problem of air target intention recognition. The Bayesian network is initialized with network nodes and conditional probabilities, and the network effectiveness is evaluated. Xu et al.22 proposed an improved method based on a semi-supervised naive Bayesian classifier with confidence in data classification. Air combat data can be classified by this method.

Bayesian networks are suitable for expressing and analyzing uncertain and probabilistic events and can make inferences from incomplete, imprecise or uncertain battlefield information. Bayesian networks have a strong causal probabilistic inference capability, which makes them a mainstream research method for intention recognition. They can dynamically adapt to changes on the battlefield through the constant updating of network parameters, which addresses the problem of uncertainty in reasoning about intentions. However, Bayesian networks are not sufficiently adaptable to complex battlefields to handle deception in adversarial intentions. There are also difficulties in determining the prior probabilities and conditional probabilities of events at each node.

    2.2.Data-based intention recognition methods

    2.2.1.Neural networks

The purpose of the neural network approach is to train an intention recognition network with a classification function. First, the battlefield situation information associated with the intention is extracted, and a network-recognizable situation feature vector is formed after data preprocessing. Then, the feature vectors are input to the network for solving, and the intention recognition results are directly obtained. Zhou et al.17 predicted the future state information of the target from real-time sequence data based on an LSTM network and combined the LSTM with a decision tree to obtain the intention classification results of the target. Chen et al.23 developed a fuzzy system model based on an integrated neural network. The neural network was trained using target attributes and intentions to obtain fuzzy affiliation and output functions for different intentions. Ahmed and Mohammed24 proposed a similarity approach for attack intention recognition using a fuzzy min-max neural network (SAIRF). The method is able to investigate the similarity between evidence and identify the intention of an attack. Through the introduction of the Rectified Linear Unit (ReLU) activation function and the Adaptive moment estimation (Adam) optimization algorithm, Zhou et al.25 designed a combat intention recognition model based on a neural network to improve the convergence speed of the model and effectively prevent the algorithm from falling into a local optimum. Meng et al.26 proposed a new method to identify the tactical intention of multi-aircraft coordinated air warfare by predicting the attack intention based on 19 low-correlation features through a Support Vector Machine (SVM).

The neural network method is closer to the human thinking mode and has a certain self-learning ability, which can simulate the thinking processes of association, memory, analogy, intuition, induction and learning of commanders. Neural networks do not need to organize a large number of generative rules, which better overcomes the knowledge acquisition difficulties of traditional intelligent methods. However, the traditional shallow neural network suffers from the deficiencies of difficult network training, difficult feature extraction and low computational accuracy. In addition, the recognition results are greatly affected by the generalization ability of the classification system.

    2.2.2.Deep learning

Deep learning networks are artificial neural networks with multiple hidden layers. The presence of multiple layers allows the network to learn more abstract situational features. Using the layer-by-layer feature processing in the network, the high-level features of the battlefield situation can be gradually extracted from the low-level features. Aiming at the limitation that traditional methods for combat intention recognition of air targets find it difficult to effectively capture the essential characteristics of intelligence information, Xue et al.27 designed a novel deep learning method, Panoramic Convolutional Long Short-Term Memory networks (PCLSTM), to improve the recognition ability. They designed a time series pooling layer to reduce the parameters of the neural network. To address the situation of incomplete information in practical situations, Liu et al.28 proposed an LSTM-based model for predicting air warfare target intentions under incomplete information, and introduced cubic spline interpolation function fitting as well as a mean filling method to repair incomplete data. Teng et al.29 proposed a combat intention recognition method based on the BiGRU network. An air combat feature prediction module is introduced before intention recognition, which further reduces the time of intention recognition and has a certain prediction effect.

Deep learning has excellent data feature learning ability and better overcomes the limitations of traditional neural networks such as weak feature extraction capability, ease of falling into local extremes and difficulty in training multilayer networks. At the same time, the application of deep learning in the field of intention recognition also suffers from the disadvantages of complex model design and poor interpretability. Different deep learning network structures usually need to be designed for different problems. With the increase of data volume and computing power, deep learning methods show very great technical advantages in the intelligent cognition of complex battlefield situations, and have great exploration space and application prospects in solving intention recognition problems at the battle and tactical levels.

Through the above comparative analysis, we choose the deep learning method for intention recognition of air targets. In the next section, we first describe the intention recognition of air targets in detail, followed by an introduction of our proposed new STABC-IR model based on deep learning in Section 4.

    3.Air target intention recognition problem description

Air target intention recognition refers to the combination of the analysis of information collected through various sensors in a dynamic, adversarial environment with commander-related knowledge and operational rules to infer the tactical intentions of air targets. Intention recognition is an important part of situational cognition, the results of which affect subsequent decisions and actions. Intentions usually represent the enemy's operational plans and reflect the thought patterns of enemy combatants, which cannot be directly reflected by data. However, to achieve their intention and thus further realize the battle plan, the enemy target's position, speed, radar state and other characteristics follow specific patterns; that is, the intention often needs to be expressed through the actions and state taken under the guidance of that intention. Fig.1 shows the disassembly and recognition processes of intention.

    Fig.1 Disassembly and recognition processes of intention.

Due to the complexity and uncertainty of the battlefield, it is difficult to express the above mapping relationship from the temporal feature set to the intention space through an explicit mathematical formula. In this paper, we train the STABC-IR model on the simulation dataset obtained from the battlefield simulation system to implicitly establish the mapping relationship from the target temporal feature set to the target intention space, and the target intention recognition process is shown in Fig.2.

The target intention recognition process is mainly divided into an offline training process and a real-time recognition process. To perform target intention recognition, the STABC-IR network is first trained using the simulated dataset annotated and encoded by experts to establish the mapping relationship from the temporal feature set to the intention space, during which the parameters are continuously optimized until the intention recognition model achieves optimal results. In the actual intention recognition process, the real-time battlefield situational data acquired by sensors are integrated, normalized and coded, and the processed data are input into the trained STABC-IR model to obtain the target intention recognition results. The description and encoding of the target intention space and target intention feature input are shown below.

    3.1.Target intention space

To accurately identify the tactical intention of incoming air targets, a reasonable tactical intention space of enemy targets, which is also a prerequisite for sample annotation of the simulation dataset, should first be provided. The intention space often varies greatly based on different backgrounds of thought, combat scenarios, and target entities. Therefore, the intention space of the target needs to be defined according to the corresponding operational context, as well as the basic attributes and possible operational tasks of the enemy target.

The simulation dataset and real-time temporal feature set in this paper are obtained from a battlefield simulation in which the enemy attacks and we defend. Integrating the operational context with the attributes and tasks of the enemy targets, the set of target tactical intentions is established as {attack, reconnaissance, surveillance, cover, electronic interference, retreat}. Table 1 shows a detailed description of these six main intentions.

With the development of current equipment, in combat missions a target may have multiple tactical intentions at the same time, and these tactical intentions may shift as time and posture change. In addition, a target usually has a limited number of intentions due to fixed attributes such as its type and a fixed range of variation in its state. The subsequent study in this paper assumes that each target has only one primary intention at a given moment and uses the primary intention of the target to label and identify the sample data.

The intention recognition problem is essentially a multi-classification problem and thus requires supervised learning during training. One of the key issues in applying neural networks to the intention recognition problem is how to abstract the human-set intention space into pattern labels that can be recognized by the classifier. In this process, the commander extracts the key feature information of the target from the battlefield situational data and identifies the real intention of the target by combining the existing rules and his experience. Therefore, the process of encapsulating the intention space into model training labels is also the process of endowing the model with knowledge-driven capability. For the six kinds of intentions in the above intention space, the method shown in Fig.3 is used to label them, which facilitates the training and recognition of the model.
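As a minimal illustration of this label encoding step (the exact coding scheme of Fig.3 is not reproduced here, so the integer and one-hot mapping below is an assumption), the six intentions can be turned into class indices and one-hot vectors for supervised training:

```python
import torch

# Hypothetical mapping from the six tactical intentions to class indices;
# the actual ordering used in Fig.3 may differ.
INTENTIONS = ["attack", "reconnaissance", "surveillance",
              "cover", "electronic interference", "retreat"]
INTENT_TO_ID = {name: idx for idx, name in enumerate(INTENTIONS)}

def encode_label(intention: str) -> torch.Tensor:
    """Return a one-hot label vector for a single annotated intention."""
    one_hot = torch.zeros(len(INTENTIONS))
    one_hot[INTENT_TO_ID[intention]] = 1.0
    return one_hot

print(INTENT_TO_ID["cover"])      # 3
print(encode_label("attack"))     # tensor([1., 0., 0., 0., 0., 0.])
```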

    Fig.2 Target intention recognition process.

    Table 1 Detailed description of 6 intentions.

In this paper, the intention labels of the samples were obtained by two main methods. The first method is that the sample data are generated according to the rules already available in the simulation system. A few templates of tactical intention actions are available in the simulation system. Selecting one or more intention patterns automatically generates the corresponding time series data. The second method obtains data from our simulation training center. The data are extracted from the back end of the air-ground joint combat simulation system by simulating a red-blue confrontation, in which the computer has classified some complete continuous data by intention. For some complex or ambiguously labeled sample data, revisions were made by domain experts based on personal experience.

    3.2.Target intention characteristics

A target's intention is reflected by its actions and state. Intention recognition requires the fusion and analysis of information obtained from sensors, so it is critical to select the appropriate feature information as input to the model. During combat, in addition to some inherent attributes of the target, such as target type and target volume, which remain basically unchanged, the target's real-time state has certain temporal change characteristics because the target needs to realize its intention through a series of tactical actions. In addition, in the process of identifying the target's intention, battlefield environmental information should also be considered, specifically including terrain, weather, wind direction and other factors. However, these types of environmental factors do not change drastically within a certain time frame. Therefore, this paper does not currently consider environmental information.

    Fig.3 Intention space coding.

After the intention space is determined, the feature information that needs to be input can be determined according to the relationship between features and intentions. For example, reconnaissance aircraft with reconnaissance intention generally use low-altitude or very high-altitude reconnaissance to avoid radar detection, their flight altitude is 100-1000 m or 15000 m or more, and the air-to-ground radar on the aircraft is kept on during the reconnaissance process. Jets usually use fast speeds to meet the enemy during air combat, flying at a speed of 700-1500 km/h, while bombers generally fly at a speed of 600-900 km/h and only keep the radar on during the preparation for the attack and the attack phase. Aircraft with surveillance and jamming intentions do not need to fly over defended areas for their own safety, so their course shortcuts are larger than those of aircraft with attack intentions, which have smaller course shortcuts. Considering the hostile relationship between ourselves and the enemy, in addition to some technical limitations, some target characteristics cannot be obtained directly, so other characteristics can be used instead. For example, the target type and volume cannot be obtained directly but can be reflected by the radar one-dimensional range profile and the radar cross section.

Based on a comprehensive analysis of the above, the input features selected in this paper for target intention recognition have 12 dimensions. (A) The first 9 features, height, velocity, acceleration, heading angle, azimuth, distance, course shortcut, One-Dimensional (1D) range profile, and radar cross section, are numerical characteristics. (B) The remaining 3 features, air-to-air radar state, air-to-ground radar state, and electronic interference state, are non-numerical characteristics. The classification of the intention recognition input characteristics is shown in Fig.4.

The network model can only handle numerical data, such as the target flight altitude, speed and other status data obtained by radar and other sensors. However, the order-of-magnitude difference among different features is large, so data normalization is required. Classified data, such as the target air-to-air radar status and electronic interference status obtained by electronic reconnaissance equipment, are non-numerical data, which need to be processed numerically.

Define the matrix V as the input temporal feature matrix for target intention recognition. For numerical features such as height, velocity, and acceleration, the normalization process is performed by using the maximum-minimum normalization method to map them to the interval [0,1], and the calculation process is shown as

$$v = \frac{v' - \min}{\max - \min + c}$$

where the element v′ is the initial input value, v is the normalized result, min is the minimum value under the dimension, max is the maximum value under the dimension, and c is a very small constant to prevent the denominator from being 0 when max = min. The constant c = 10^-5 is used as an example. Table 2 shows a detailed description of the numerical type characteristics and the input of a sample at a certain frame. The target radar state, electronic interference state and other non-numerical characteristics are categorical data, which cannot be directly processed by the neural network, so they need to be numericalized and transformed into results in the interval [0,1]. The process is shown as

where K is the total number of classifications under this dimension, and c is the offset, taking c = 10^-3. Then, the original input v′ corresponding to the k-th class under this dimension is mapped to the interval [0,1] as v. Table 3 shows a detailed description of the non-numerical type characteristics and the input of a certain sample at a certain frame.

The target state changes continuously during the target operation, and the continuously changing target state can often reflect the target's intention, so the target state input at each moment is in matrix form. Based on the above normalization and numerical processing, we can obtain the standard feature input matrix V, which can be expressed as

$$V = \begin{bmatrix} v_{1,1} & v_{1,2} & \cdots & v_{1,m} \\ v_{2,1} & v_{2,2} & \cdots & v_{2,m} \\ \vdots & \vdots & \ddots & \vdots \\ v_{n,1} & v_{n,2} & \cdots & v_{n,m} \end{bmatrix}$$

where element vi,j denotes the j-th characteristic value of the target at frame i in the input at moment t, and n is the number of sampled frames at each moment. The number of sampled frames is variable, and the optimal number of sampled frames can be obtained by testing during the training process. m is the total dimensionality of the target features, where m = 12 in this paper.
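The preprocessing described above can be sketched as follows. This is a minimal illustration: the per-dimension min/max ranges, the feature ordering, and the k/K form of the categorical mapping are assumptions for demonstration, not the paper's exact values.

```python
import numpy as np

N_FRAMES, N_FEATURES = 10, 12       # n sampled frames, m feature dimensions

def normalize_numerical(v_raw, v_min, v_max, c=1e-5):
    """Maximum-minimum normalization of a numerical feature into [0, 1]."""
    return (v_raw - v_min) / (v_max - v_min + c)

def encode_categorical(k, K, c=1e-3):
    """Map the k-th class (k = 1..K) of a K-class feature into [0, 1].
    The exact offset formula used in the paper is not shown above; k/K + c is assumed."""
    return k / K + c

def build_feature_matrix(frames):
    """frames: list of n dicts of raw sensor readings -> (n, m) standard matrix V."""
    V = np.zeros((N_FRAMES, N_FEATURES))
    for i, f in enumerate(frames):
        V[i, 0] = normalize_numerical(f["height"], 0.0, 20000.0)     # assumed range, m
        V[i, 1] = normalize_numerical(f["velocity"], 0.0, 1500.0)    # assumed range, km/h
        # ... features 2-8: acceleration, heading angle, azimuth, distance,
        #     course shortcut, 1D range profile, radar cross section
        V[i, 9]  = encode_categorical(f["air_to_air_radar"], K=2)    # classes 1..2 (off/on)
        V[i, 10] = encode_categorical(f["air_to_ground_radar"], K=2)
        V[i, 11] = encode_categorical(f["jamming_state"], K=2)
    return V
```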

    4.STABC-IR model for air target intention recognition

In this paper, an intention recognition model based on BiGRU and CRF with the space-time attention mechanism is constructed to address several major problems in the field of air target intention recognition. The steps of the STABC-IR model construction are given below.

Step 1. Considering the real-time, serial, complex and diverse characteristics of battlefield data, numerical and non-numerical data are integrated, normalized and uniformly coded to form a standard feature set.

    Fig.4 Target intention recognition characteristics.

    Table 2 Description and examples of numerical characteristics.

    Table 3 Description and examples of non-numerical characteristics.

Step 2. To address the temporal characteristics of the intention recognition problem, a temporal network based on BiGRU is constructed. BiGRU can model the connection between the inputs of the preceding and following moments and mine the target intention information embedded in the temporal data.

Step 3. To address the interpretability problem of neural network applications in the field of intention recognition, STA is proposed. The relationships among different categories of features are first mined and analyzed using space attention, followed by the relationships among inputs at different moments using time attention. The space-time attention mechanism can expand the influence of key features, and the network results can be interpreted after the visualization operation.

Step 4. An intention transformation network based on CRF is designed for the possible back-and-forth dependency characteristic of intention sequences. The tactical intention of the target at each moment is jointly modeled using the intention transfer feature function to characterize the back-and-forth dependency of the intention, and the negative log-likelihood loss function is constructed as the loss of the whole network to finally obtain the target intention recognition result at the current moment.

    The following is a detailed description of the general framework and the components of the STABC-IR intention recognition model.

    4.1.General framework

The general framework of the model is shown in Fig.5. Among the layers, the input layer solves the data preprocessing problem, the space attention layer solves the correlation problem among different features, the BiGRU layer and the time attention layer solve the temporal sequence problem, and the CRF layer solves the backward and forward dependency problem of intention.

As shown in Fig.5, V is the standard feature input matrix, which contains n time steps and m characteristics. X and S are the outputs of the space attention layer and the time attention layer, respectively. Both α and β are influence weights. H→, H← and H denote the forward hidden state, the backward hidden state and the hidden state, respectively. O and Y are the outputs of the fully connected (FC) layer and the CRF layer, respectively. It should be noted that the time spans of the input layer and the CRF layer are not the same. The input layer uses n to represent the input per frame, and the CRF layer uses t to represent the input per moment. Both n and t are changeable, and the time span of t is generally larger than that of n.

    Fig.5 Model framework.
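To make the layer stack concrete, the sketch below mirrors the framework of Fig.5 (input → space attention → BiGRU → time attention → FC → CRF) in PyTorch. Layer sizes, module names, and the single-layer attention parameterizations are illustrative assumptions, not the authors' released code; the CRF decoding and loss are shown separately in Section 4.4.

```python
import torch
import torch.nn as nn

class STABCIR(nn.Module):
    """Skeleton of the STABC-IR pipeline: space attention -> BiGRU -> time attention -> FC -> CRF."""
    def __init__(self, n_features=12, hidden=64, n_intentions=6):
        super().__init__()
        self.space_attn = nn.Linear(n_features, n_features)        # produces per-feature weights α
        self.bigru = nn.GRU(n_features, hidden, num_layers=1,
                            batch_first=True, bidirectional=True)
        self.time_attn = nn.Linear(2 * hidden, 1)                  # produces per-frame weights β
        self.fc = nn.Linear(2 * hidden, n_intentions)              # emission scores O
        # CRF transition scores between intentions (used for loss/decoding, omitted here).
        self.transitions = nn.Parameter(torch.randn(n_intentions, n_intentions))

    def emissions(self, v):
        # v: (batch, n_frames, n_features) standard feature matrix V
        alpha = torch.softmax(self.space_attn(v), dim=-1)          # space attention weights
        x = alpha * v                                              # re-weighted features X
        h, _ = self.bigru(x)                                       # (batch, n_frames, 2*hidden)
        beta = torch.softmax(self.time_attn(h), dim=1)             # time attention weights
        s = (beta * h).sum(dim=1)                                  # weighted summary S
        return self.fc(s)                                          # per-moment intention scores

model = STABCIR()
scores = model.emissions(torch.randn(4, 10, 12))                  # 4 samples, 10 frames, 12 features
print(scores.shape)                                                # torch.Size([4, 6])
```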

    4.2.Temporal network based on BiGRU

GRU31 introduces a memory mechanism and forgetting mechanism similar to those of the human brain on the basis of the RNN, specifically by adding a reset gate and an update gate, which effectively solves the problems of gradient disappearance and gradient explosion that exist in RNN training, and fits the commander's mindset for target intention recognition during operations. Compared with LSTM,32 another variant of the RNN, GRU can achieve comparable results with fewer parameters and a simpler structure, and possesses better convergence. Therefore, in this paper, the GRU network is chosen as the basis for the processing of temporal information. Fig.6 shows the structure schematic of the GRU. For ease of understanding, the time in this section is still denoted by t.

As shown in Fig.6, given the input Xt at moment t and the hidden state Ht-1 of the previous time step, the outputs of the reset gate Rt and the update gate Zt can be obtained after the calculation of fully connected layers with the sigmoid activation function:

$$R_t = \sigma(X_t W_{xr} + H_{t-1} W_{hr} + b_r)$$
$$Z_t = \sigma(X_t W_{xz} + H_{t-1} W_{hz} + b_z)$$
$$\tilde{H}_t = \tanh(X_t W_{xh} + (R_t \odot H_{t-1}) W_{hh} + b_h)$$
$$H_t = Z_t \odot H_{t-1} + (1 - Z_t) \odot \tilde{H}_t$$

where Wxr, Whr, Wxz, Whz, Wxh and Whh are weight parameters, br, bz and bh are bias parameters, and ⊙ is element-wise multiplication. The function of the reset gate Rt is to decide how much information in the hidden state of the previous moment needs to be reset. If it is close to 0, it means that the hidden state of the previous time is almost completely reset to the input of the current time. The function of the update gate Zt is to decide whether the information of the previous moment is discarded. The smaller its value is, the more information contained in the hidden node of the previous moment is discarded. The reason for the low complexity of the GRU network model is that certain useless information is ignored, while the reset gate captures short-term dependencies in the time series and the update gate captures long-term dependencies in the time series.

    Fig.6 GRU unit.

In the process of training the intention recognition model, it is found that the current time step is not only determined by the previous time steps but sometimes may also be determined by the later time steps. BiGRU solves this problem by adding a hidden layer that passes information from backward to forward on top of the GRU, so that it consists of a forward GRU network and a backward GRU network. The structure of BiGRU is shown in Fig.7.
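A minimal PyTorch sketch of this bidirectional temporal layer is shown below; the input sizes follow the n = 10 frames and m = 12 features used in this paper, while the hidden size and batch size are assumptions.

```python
import torch
import torch.nn as nn

# Bidirectional GRU over a temporal feature window: one forward and one backward GRU.
bigru = nn.GRU(input_size=12, hidden_size=64, num_layers=1,
               batch_first=True, bidirectional=True)

v = torch.randn(32, 10, 12)          # batch of 32 samples, 10 frames, 12 features
h_all, h_last = bigru(v)

# h_all concatenates the forward and backward hidden states at every frame.
print(h_all.shape)                   # torch.Size([32, 10, 128])
# h_last holds the final hidden state of each direction separately.
print(h_last.shape)                  # torch.Size([2, 32, 64])
```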

    4.3.Space-time attention mechanism

    Fig.7 BiGRU structure.

Attention is a mechanism used to improve the effectiveness of intention recognition models. The attention mechanism is usually applied in the field of language translation and can be used to learn the dependencies between words. The principle of attention is to calculate the degree of match between the current input sequence and the output vector, and in general the higher the degree of match, the higher the relative score of the attention points. Attention gives the neural network model the ability to distinguish each information point in a sequence and assign different weights, making the learning of neural network models more flexible. At the same time, attention can be used to interpret the arrangement relationship between inputs and outputs, which can greatly improve the interpretability of neural network models. This is particularly important for air target intention recognition.

In the process of target intention recognition, intentions are characterized by multidimensional features with temporal relationships, and there are connections among different input features as well as among inputs at different moments. For different input features at the same moment, the relationship is expressed as a spatial relationship. For input features at different moments, the relationship is expressed as a temporal relationship. As intention recognition is a part of situational cognition, its methods and results should be understandable by commanders, i.e., the interpretability of the model is required to be high. To address the temporal-spatial relationships and interpretability requirements of the input features, this paper proposes a Space-Time Attention mechanism (STA) to simulate the thought process of commander cognition from the spatial and temporal perspectives, respectively. More attention is applied to the key features, and the results are displayed with the help of visualization operations and other means to make them more easily accepted by commanders.

The space-time attention mechanism is divided into a space attention layer and a time attention layer. The space attention layer mainly mines the relationships among different categories of features and uses the attention mechanism to weigh the contributions of different features to expand the influence of key features. The time attention layer mainly mines the relationships among the input features at different moments and processes the temporal information. Since the BiGRU layer mainly processes temporal information as well, the time attention layer is combined with the BiGRU layer when designing the network structure. The final output of the neural network is not only the output of the BiGRU layer at the current moment, but is jointly determined by the outputs of the BiGRU layer at all moments. The structural diagram of STA is shown in Fig.8.

For the hidden state at each moment, the time attention layer computes

$$e_t = u_t \tanh(W_t H_t + b_t)$$
$$\beta_t = \frac{\exp(e_t)}{\sum_{i=1}^{n} \exp(e_i)}$$
$$S_t = \sum_{i=1}^{n} \beta_i H_i$$

where ut and Wt are the weight coefficient matrices at moment t; bt is the corresponding offset at moment t, and et is the energy value obtained from the hidden layer state Ht at moment t; n is the number of frames of input features at time t; the calculated βt is the influence weight at each moment, and St is the final weighted sum of the outputs obtained at each moment, i.e., the input of the fully connected layer in the next step.
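A minimal numerical sketch of these attention weights is given below, using torch; the hidden size and the use of shared (rather than per-moment) parameters are simplifying assumptions.

```python
import torch
import torch.nn as nn

hidden_dim, n_frames = 128, 10
H = torch.randn(n_frames, hidden_dim)        # BiGRU hidden states H_1..H_n

W = nn.Linear(hidden_dim, hidden_dim)        # stands in for W_t and b_t
u = nn.Linear(hidden_dim, 1, bias=False)     # stands in for u_t

e = u(torch.tanh(W(H)))                      # energy values e_t, shape (n_frames, 1)
beta = torch.softmax(e, dim=0)               # influence weights β_t, summing to 1 over frames
S = (beta * H).sum(dim=0)                    # weighted sum S fed to the FC layer

print(beta.squeeze())                        # per-frame attention weights
print(S.shape)                               # torch.Size([128])
```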

    4.4.Intention transformation network based on CRF

The results obtained from the attention layer are input to the fully connected layer, and the output can be directly transformed into the probabilities of various intentions by the softmax function to obtain the intention recognition results. This method gives good results for simple classification problems. However, when there is a before-and-after dependency between the results that need to be output, this method cannot characterize the transformation relationship between the preceding and following recognition results. Instead, it can only classify each time point independently, making the accuracy of the recognition results lower.

    Fig.8 Space-time attention mechanism.

For the target intention recognition problem, with the continuous improvement of the target's own capability and of tactics on the battlefield, the same target may have different intentions at different moments in the time series. There is a dependency relationship between the target's intentions at the preceding and following moments, i.e., the target's intention at the current moment depends to a certain extent on its intention at the previous moment. Table 4 shows the air target intention transfer probability matrix A obtained from the simulation data statistics. The element aij represents the probability that the intention at moment t shifts to j if the intention at moment t-1 is i. As seen from Table 4, the probability of shifting to other intentions is not equal for each intention. For example, when the intention at moment t-1 is attack, there is a 0.8921 probability that the attack intention will remain at moment t, a 0.0512 probability that it will change to cover, and almost no probability that it will change to interference. Thus, there is a back-and-forth dependence between the tactical intentions of air targets, i.e., the intention at each moment is not completely independent but depends on the intention at the previous moment.
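The sketch below illustrates how a linear-chain CRF layer of this kind scores an intention sequence from the per-moment emission scores and a learned transition matrix, and how the negative log-likelihood loss is formed. It is a simplified illustration with assumed tensor shapes, not the authors' implementation.

```python
import torch

def crf_neg_log_likelihood(emissions, transitions, tags):
    """emissions: (T, K) per-moment intention scores from the FC layer;
    transitions: (K, K) score of moving from intention i to intention j;
    tags: (T,) gold intention indices. Returns the sequence negative log-likelihood."""
    T, K = emissions.shape

    # Score of the gold intention sequence: emission scores plus transition scores.
    gold = emissions[0, tags[0]]
    for t in range(1, T):
        gold = gold + transitions[tags[t - 1], tags[t]] + emissions[t, tags[t]]

    # Log partition function over all possible intention sequences (forward algorithm).
    alpha = emissions[0]                                  # (K,)
    for t in range(1, T):
        # alpha[j] = logsumexp_i( alpha[i] + transitions[i, j] ) + emissions[t, j]
        alpha = torch.logsumexp(alpha.unsqueeze(1) + transitions, dim=0) + emissions[t]
    log_Z = torch.logsumexp(alpha, dim=0)

    return log_Z - gold                                   # negative log-likelihood

K = 6                                                     # six tactical intentions
emissions = torch.randn(5, K)                             # 5 moments
transitions = torch.randn(K, K)                           # learned jointly with the network
tags = torch.tensor([0, 0, 3, 3, 5])                      # e.g. attack, attack, cover, cover, retreat
print(crf_neg_log_likelihood(emissions, transitions, tags))
```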

    5.Experimental analysis

    5.1.Experimental data and environment

The experimental data are provided by the simulation system. Two scenarios set by the simulation system are shown in Fig.9. Both scenarios are simulated using red-blue confrontation. The blue units are mainly air targets with multiple intentions, while the red units are mainly defended strongholds and multiple anti-aircraft weapons. Scenario I shows the air defense of a stronghold in mountainous terrain, where the blue side has a single source of air targets and the red side has a more concentrated distribution of weapons. Scenario II shows the air defense of a stronghold in plain terrain, where the blue side has a more diverse source of air targets and the red side has a more dispersed distribution of weapons. It is important to note that our proposed STABC-IR model is mainly used to identify the intentions of the air targets and provide a reliable basis for subsequent command, but it does not involve subsequent decisions and actions. Although the settings of the two scenarios are different, the tactics adopted by the blue air targets are basically the same, and their intentions and temporal characteristics are similar. Therefore, we mixed the data from the two scenarios and used them together as experimental data. The temporal characteristics data of the air targets are provided by the system interface, and the data labels are obtained from the initial settings of the system and later revisions by experts in the field of air combat. After removing the unusable data, 20000 samples were randomly selected as the sample set for this experiment, covering 6 air target tactical intentions. The percentages of the intentions are 26.7% for attack, 20.3% for reconnaissance, 17.4% for surveillance, 18.1% for cover, 9.6% for jamming, and 7.9% for retreat. 80% of the sample set was used as the training sample set, and 20% was used as the test sample set. The number of time frames to be input for each sample input layer was determined by subsequent tests, and the input feature dimensions were the twelve-dimensional features determined in the previous section.

    Table 4 Intention transfer matrix.

The experimental computer system is Windows 10, the Python version is 3.8.0, an NVIDIA GeForce RTX 3060 GPU and CUDA 11.0 are used for acceleration, and the PyTorch 1.8.0 deep learning framework is used.

    5.2.Evaluation metrics

The performance of the STABC-IR model is validated. The model is trained using the training set, and the test set is used to evaluate the model performance. The following metrics are used to evaluate the classification performance of the network: Accuracy, Precision, Recall, F1 score and Loss. They are calculated as follows.

(1) Accuracy. The ratio of the number of correctly predicted samples in the test set to the total number of samples in the test set:

$$\text{Accuracy} = \frac{TP + TN}{TP + TN + FP + FN}$$

where TP denotes the number of samples whose true labels are positive and which are predicted to be positive, FP denotes the number of samples whose true labels are negative but which are predicted to be positive, TN denotes the number of samples whose true labels are negative and which are predicted to be negative, and FN denotes the number of samples whose true labels are positive but which are predicted to be negative. For each intention, the corresponding accuracy is calculated, while for the whole model, the combined accuracy is calculated, i.e., the accuracy of the multi-class classification problem.

(2) F1 score. The harmonic mean of Precision and Recall, averaged over all intentions. The expressions of Precision, Recall and F1 score are given as

$$\text{Precision} = \frac{TP}{TP + FP}, \quad \text{Recall} = \frac{TP}{TP + FN}$$
$$F1_k = \frac{2 \times \text{Precision}_k \times \text{Recall}_k}{\text{Precision}_k + \text{Recall}_k}, \quad F1 = \frac{1}{K}\sum_{k=1}^{K} F1_k$$

where K is the total number of intentions (there are 6 intentions in this paper, so K = 6), and F1k is the F1 score corresponding to the k-th category of intentions.

(3) Loss. The cross-entropy loss of the model on the test set is expressed as

$$\text{Loss} = -\frac{1}{m}\sum_{i=1}^{m}\sum_{k=1}^{6} p_{ik} \log q_{ik}$$

where m is the number of samples, k is the target intention index, and pik is the one-hot truth label (0 or 1). qik is the probability that the i-th sample predicted by the model belongs to the k-th intention, and the probabilities satisfy $\sum_{k=1}^{6} q_{ik} = 1$.
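A compact sketch of these metrics (overall accuracy, macro-averaged F1 and cross-entropy loss) is shown below; using scikit-learn here is our choice for illustration rather than the authors' stated tooling, and the sample labels are made up.

```python
import numpy as np
from sklearn.metrics import accuracy_score, f1_score

y_true = np.array([0, 0, 3, 3, 5, 1, 2, 4])           # gold intention indices
y_pred = np.array([0, 3, 3, 3, 5, 1, 2, 4])           # model predictions

accuracy = accuracy_score(y_true, y_pred)              # overall multi-class accuracy
macro_f1 = f1_score(y_true, y_pred, average="macro")   # mean of per-intention F1 scores

# Cross-entropy loss from predicted class probabilities q (each row sums to 1).
q = np.full((len(y_true), 6), 0.02)
q[np.arange(len(y_pred)), y_pred] = 0.90               # most mass on the predicted class
p = np.eye(6)[y_true]                                  # one-hot truth labels
loss = -np.mean(np.sum(p * np.log(q), axis=1))

print(accuracy, macro_f1, loss)
```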

    5.3.Frame rate determination

    Fig.9 Simulation system.

In the original dataset, the aerial target feature data are provided by the simulation system, and the system sets the sampling point interval to 3 s, i.e., 3 s per frame. The number of input frames n for each sample has a great influence on the training results of the model. Too few frames will prevent the model from learning the relationship between target features and intention, and the model will have weak generalization ability and low accuracy. Too many frames can lead to redundant input information, longer running time of the model, and even failure of the model to converge. Therefore, we first need to determine the appropriate number of input frames n with which to intercept all samples and obtain the final usable training and test sets.

In the STABC-IR model, BiGRU needs to process the input multidimensional temporal information, which occupies most of the running time of the model. Therefore, to determine the appropriate number of input frames n, this paper performs a comparative experimental analysis using the BiGRU model for n = {4, 6, 8, 10, 12, 14, 16}, where each frame is spaced by 3 s. The experimental results are shown in Table 5.

From the table, it is seen that the accuracy of the BiGRU model gradually improves and the inference time gradually increases as the number of input frames increases. When n > 10, the accuracy improvement of intention recognition is minimal, but the corresponding inference time increases sharply. Therefore, considering the model accuracy and running time, all samples in this paper are selected with 10 frames of target features as input, i.e., n = 10, where each frame is spaced by 3 s.
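For illustration, slicing each target track into fixed windows of n = 10 frames at the 3 s sampling interval might look like the following; the raw track array layout is an assumption.

```python
import numpy as np

FRAME_INTERVAL_S = 3          # simulation sampling interval
N_FRAMES = 10                 # chosen window length (30 s of data per sample)

def make_windows(track, n_frames=N_FRAMES):
    """track: (T, 12) feature matrix of one target -> list of (n_frames, 12) samples."""
    return [track[i:i + n_frames]
            for i in range(0, track.shape[0] - n_frames + 1, n_frames)]

track = np.random.rand(37, 12)            # a track of 37 frames and 12 features
samples = make_windows(track)
print(len(samples), samples[0].shape)     # 3 (10, 12)
```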

    5.4.Parameter tuning

Hyperparameters have a great impact on the classification performance of the network. Therefore, after determining the appropriate number of input frames, some hyperparameters in the STABC-IR model need to be set to improve the intention recognition performance of the model. We set the BiGRU layer to contain three hidden layers with 128, 64 and 64 neuron nodes, respectively. Other hyperparameters of the model mainly include the epoch ne, the batch size nb, the learning rate lr, and the hyperparameters of the selected optimizer.

    Table 5 Experimental results with different frame rates.

The optimizer updates and computes the network parameters that affect the model training and model output to approximate or reach the optimal value, thus minimizing the loss function. The Adam optimizer33 is selected for this model. The learning rate α of the optimizer is chosen as 0.001 by default, and the first-order moment decay coefficient β1 and second-order moment decay coefficient β2 are set to 0.9 and 0.999, respectively.

The epoch ne, batch size nb and learning rate lr are then set. The recognition accuracy for combinations of the three is evaluated using the test set, and the results are shown in Table 6.

From Table 6, the highest recognition accuracy of the model, which is 95.7%, is obtained when the parameters are ne = 100, nb = 256 and lr = 0.1. Therefore, the epoch ne, batch size nb, and learning rate lr are set to 100, 256 and 0.1, respectively.
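Expressed as a PyTorch training configuration, the selected hyperparameters would look roughly like this; the placeholder classifier, random data and cross-entropy loss stand in for the full STABC-IR network and its CRF loss, which are not reproduced here.

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

EPOCHS, BATCH_SIZE, LEARNING_RATE = 100, 256, 0.1   # n_e, n_b, l_r from Table 6

# Placeholder model and data; shapes follow n = 10 frames and m = 12 features.
model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(10 * 12, 6))
dataset = TensorDataset(torch.randn(1024, 10, 12), torch.randint(0, 6, (1024,)))
loader = DataLoader(dataset, batch_size=BATCH_SIZE, shuffle=True)

# Adam optimizer with the decay coefficients stated above (β1 = 0.9, β2 = 0.999).
optimizer = torch.optim.Adam(model.parameters(), lr=LEARNING_RATE, betas=(0.9, 0.999))
criterion = torch.nn.CrossEntropyLoss()

for epoch in range(EPOCHS):
    for features, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(features), labels)
        loss.backward()
        optimizer.step()
```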

    5.5.Results and analysis

    5.5.1.Analysis of STABC-IR model results

The experiments are designed based on the above analysis, and the experimental results of the STABC-IR air target intention recognition model are shown in Fig.10.

From the accuracy and loss value curves in Fig.10, we can see that the convergence time of the model is approximately 30 epochs, and there is no significant change in the accuracy and loss values after 30 epochs. After the model is trained, the accuracy on the training set is approximately 97%, and the loss value is approximately 0.13. The accuracy on the test set is approximately 95%, with the highest being 95.7%, and the loss value is approximately 0.15, with the lowest being 0.146.

Since the number of samples under each intention label in the set varies, the accuracy under each recognized intention needs to be further analyzed. The confusion matrix of the test set is generated, as shown in Fig.11, where different colors on the heat map scale indicate different recognition accuracies. The darker the color of a block, the higher the recognition accuracy. The diagonal indicates the accuracy of correct recognition.

From the confusion matrix, it can be seen that the STABC-IR model proposed in this paper has a high accuracy in identifying all six tactical intentions. Further analysis reveals that the accuracy rate of the retreat intention is the highest among the six intentions, reaching 98.1%. Combined with an analysis of the actual battlefield situation, this result arises mainly because the maneuver state of a target with retreat intention is relatively distinctive; for example, its distance and the course shortcut of its route continue increasing. The mutual recognition error rate between the attack intention and the cover intention is higher because targets executing the cover intention tend to confuse the enemy by using tactical behaviors such as feinting, which are similar to attack in terms of state characteristics and other aspects, thus leading to this behavior being misclassified as attack intention. From the perspective of the commander's cognition, this situation is in line with normal cognition, and the reduced recognition accuracy is within an acceptable range.

    Table 6 Accuracy with different parameters.

    Fig.10 Experimental results of test and training sets of STABC-IR model.

    5.5.2.Space-time attention weighting ratio analysis

The space-time attention mechanism introduced in the model can improve the correct recognition rate of the model by updating the weights of different dimensional features and different moments in the time series by means of feedforward neural networks. To verify the effectiveness of the space-time attention mechanism, the weights assigned to different dimensional features by the space attention layer and the weights assigned to different moments in the time series by the time attention layer are visualized, as shown in Fig.12. Figs.12(a) and (b) show the results of weight assignment for the space attention layer and the time attention layer, respectively. The horizontal axes are the twelve-dimensional input features and the 10 moment points, respectively, the vertical axes are the 6 intentions, and the shades of the color blocks represent the results of feature and temporal weight assignment under each intention. In Fig.12, w represents the attention weight.

    Fig.11 Confusion matrix of STABC-IR model.

As can be seen from Fig.12(a), each intention generally pays higher attention to features that may change frequently during the recognition process. For example, height, velocity, azimuth, distance, and course shortcut are of high interest because the target's height and velocity often reflect the tactical maneuvers being performed by the target, while azimuth, distance, and course shortcut mainly reflect the direct threat level of the target to the stronghold. They are all closely related to the target's intention. Conversely, other characteristics receive less attention. For example, features such as RCS and air-to-air radar state receive less attention because the variation of these features, or their range of variation, is small within the set range of sampling frames, which cannot provide a sufficient basis for the final determination of different intentions. However, the space attention layer can still capture the connection for some features that are closely linked to the corresponding intentions. For example, the attention of the interference intention to the electronic jamming status is significantly higher than that of other intentions, while the retreat intention is more sensitive to distance, both of which are consistent with the actual situation on the battlefield.

    From Fig.12(b), it can be seen that most intentions focus more attention on slightly earlier moments rather than always on the most recent moment. The attack intention focuses more attention on the states several moments earlier, because attack intention is usually reflected by a series of maneuvers and there is often a fairly obvious initiation behavior. For example, some aircraft enter the attack state by climbing to search for the target and then diving to attack, so the attack intention concentrates its attention on the start of the attack behavior several moments before. In addition, the attention distributions for the attack and cover intentions are basically the same, because aircraft executing the cover intention tend to use feinting tactics and exhibit behaviors similar to an attack, which also confirms the analysis of the confusion matrix in the previous section. The reconnaissance, surveillance, and interference intentions have a more even distribution of attention over the time series, which is related to the fact that all three are continuous behaviors spanning long periods of time. The retreat intention focuses on the later part of the time series, which is related to its behaviors such as moving away and shutting down.

    Fig.12 Results of visualization of space-time attention mechanism.

    The above analysis shows that the visualization results of the space–time attention mechanism are consistent with battlefield reality and can help commanders better understand the importance of different features and different moments. The space–time attention mechanism proposed in this paper clearly benefits key feature extraction in both the spatial and temporal dimensions of the target, improves the accuracy of the intention recognition model, and gives the model a degree of interpretability.

    5.5.3.Comparative analysis with model-based intention recognition methods

    Since there is currently no publicly available dataset in the field of aerial target intention recognition, we use other intention recognition methods from the references for comparative experiments on the same dataset. This section compares STABC-IR with the model-based intention recognition methods mentioned in the introduction, and the next section compares it with the data-based methods. In this section, we select the latest decision tree method17 and the IE-DSBN method20 for comparison.

    The authors of Ref.17 identified 7 intentions and 9 different target features, while our dataset includes 6 intentions and 12 features. Considering that the training set is too large for the decision tree method, we randomly selected 100 items from the training samples for each intention. These 600 samples are used as the prior knowledge for constructing the decision tree. We calculate the decision support of each feature state and the segmentation information entropy of each alternative cut point, and select the alternative cut point with the lowest segmentation information entropy as the optimal cut point for segmenting the conditional attributes. It should be noted that only 23 samples were used as prior knowledge to construct the decision tree in Ref.17, so the decision tree we construct is very large.
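    The sketch below illustrates, under our own assumptions rather than the exact formulation of Ref.17, how the optimal cut point of a continuous feature can be chosen by minimizing the segmentation (class-weighted) information entropy over candidate midpoints.

```python
# Illustrative sketch: pick the cut point with the lowest segmentation
# information entropy for one continuous conditional attribute.
import numpy as np

def entropy(labels):
    """Shannon entropy of an array of intention labels."""
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def best_cut_point(feature_values, labels):
    """Return the candidate cut point with the lowest segmentation entropy."""
    x = np.asarray(feature_values, dtype=float)
    y = np.asarray(labels)
    u = np.unique(x)
    candidates = (u[:-1] + u[1:]) / 2.0   # midpoints between distinct values
    best_cut, best_ent = None, np.inf
    for c in candidates:
        left, right = y[x <= c], y[x > c]
        seg_ent = (len(left) * entropy(left) + len(right) * entropy(right)) / len(y)
        if seg_ent < best_ent:
            best_cut, best_ent = c, seg_ent
    return best_cut, best_ent
```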

    In Ref.20, the authors proposed a dynamic sequence Bayesian network based on information entropy theory (IE-DSBN). The time series in the original IE-DSBN model has only 4 frames, and the intention space contains only 2 intentions. The target characteristics are divided into upper-level tactical actions and lower-level attribute parameters, and we divide the target characteristics in our dataset in the same way. The state transition probabilities between tactical actions, the affiliation of tactical actions with different attributes, and the prior probabilities of the attribute weights in the model are given by experts.

    We compare the trained STABC-IR model, decision tree model and IE-DSBN model on the test set, and the comparison results are shown in Table 7.

    From the results, it can be seen that the accuracy of the STABC-IR model proposed in this paper reaches 95.7%, which is much higher than the 83.1% of the IE-DSBN model and the 72.4% of the decision tree method. Although the STABC-IR model has the longest computation time, it is still within the acceptable range for real-time recognition.

    In constructing the intention recognition decision tree and Bayesian network, both were found to have obvious disadvantages. The decision tree method requires the construction of a large classification tree. Although the optimal cut point can be determined by calculating the decision support and segmentation information entropy, when the number of training samples used to construct the tree is large, the construction process becomes very slow and the structure of the decision tree becomes very complex. At the same time, an overly complex model reduces generalizability and leads to overfitting. Conversely, if a small number of training samples is used, the structure of the tree may be very simple, but it then cannot cover enough intention recognition cases, cannot classify accurately when it encounters similar data from different classes, and its classification accuracy drops greatly. In addition, decision tree methods mostly recognize data at the current moment and cannot take into account the temporal order of the intention recognition data or the back-and-forth dependency of intentions.

    Table 7 Comparison of different intention recognition models.

    Dynamic Bayesian networks take into account the temporal nature of the intention recognition data and can recognize temporal data. Their most significant disadvantage is the difficulty of determining the prior probabilities, conditional probabilities and transfer probabilities in the network, which are currently given mainly through expert judgment. Taking the intention recognition problem described in this paper as an example, there are 6 intentions and 12 features. Even if each feature is assumed to have only 3 different state classifications, a total of 3 × 12 × 6 = 216 conditional probabilities need to be determined in advance, in addition to 6 × 6 = 36 transfer probabilities. Determining these probabilities is difficult, and the objectivity and reasonableness of the values given by experts need to be assessed.

    Considering the above analysis, the construction process of traditional decision trees and Bayesian networks is difficult and the accuracy of their recognition results is low. These two methods cannot simultaneously satisfy the requirements of temporality, interpretability and back-and-forth dependency in intention recognition. Our proposed data-based deep learning method for air target intention recognition can address all of these problems simultaneously and ensures an objective and reasonable recognition process.

    5.5.4.Comparative analysis with data-based intention recognition methods

    The methods used for comparison are DBP,25 SVM,26 PCLSTM,27 LSTM-Attention,28 and GRU-Attention.29 Under the same sampling frame number and intention space, these models are trained and their performance metrics are calculated; the results of the comparison experiments are shown in Table 8.

    From the results, the recognition accuracy and loss value of the STABC-IR model proposed in this paper are much better than those of the other intention recognition models. The integrated accuracy and F1 score of the STABC-IR model for air target intention recognition are 95.7% and 0.865, respectively, both greatly improved compared with the other models. The cross-entropy loss of the proposed model on the test set is significantly lower than that of the comparison models, indicating that the tactical intentions of the targets in the test set are better recognized.
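    A minimal sketch of how the comparison metrics behind Table 8 can be computed is given below, assuming y_true holds integer intention labels and probs holds the model's softmax outputs on the test set; the macro averaging of F1 and all names are illustrative assumptions.

```python
# Illustrative sketch: overall accuracy, averaged F1 score and cross-entropy
# loss on the test set from predicted class probabilities.
import numpy as np
from sklearn.metrics import accuracy_score, f1_score, log_loss

def test_metrics(y_true, probs):
    y_pred = probs.argmax(axis=1)
    acc = accuracy_score(y_true, y_pred)                    # overall accuracy
    f1 = f1_score(y_true, y_pred, average="macro")          # averaged over the 6 intentions
    ce = log_loss(y_true, probs, labels=list(range(probs.shape[1])))  # cross-entropy loss
    return acc, f1, ce
```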

    The comparison shows that recognition models using an LSTM or GRU network, improved variants of the RNN, as the temporal feature network can capture the hidden features in the temporal data and are better suited to air target tactical intention recognition than the other models. Comparing the experimental results of the LSTM-Attention and GRU-Attention models, their recognition accuracy, loss value and F1 score are similar, but LSTM-Attention takes significantly longer to recognize a single sample. This is because the cell structures of the LSTM and GRU differ: the former uses forget, input and output gates, while the latter uses reset and update gates, and the GRU network has far fewer parameters than the LSTM network. These are the reasons this paper uses BiGRU to process the temporal information. In the battlefield environment, the time the command information system needs to perform intention recognition must also be considered, and when accuracies are comparable, the method with the shorter processing time is preferable.
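    The parameter gap between the two cell types can be checked directly, as in the small sketch below; the layer sizes are illustrative and are not the paper's network configuration.

```python
# Sketch: compare parameter counts of LSTM and GRU layers of the same size.
import torch.nn as nn

def n_params(module):
    return sum(p.numel() for p in module.parameters())

input_size, hidden_size = 12, 128
lstm = nn.LSTM(input_size, hidden_size, batch_first=True)  # 4 weight blocks (i, f, g, o)
gru = nn.GRU(input_size, hidden_size, batch_first=True)    # 3 weight blocks (r, z, n)

print("LSTM parameters:", n_params(lstm))
print("GRU parameters :", n_params(gru))
# The GRU layer needs roughly three quarters of the LSTM layer's parameters,
# which is one reason for its shorter single-sample recognition time.
```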

    Table 8 Comparison of different intention recognition models.

    5.5.5.Analysis of ablation experiments

    To further verify the effectiveness of the STABC-IR model for intention recognition, ablation experiments were conducted on the same dataset. The model structure settings and experimental results are shown in Table 9, and the variation curves of the accuracy and loss values are shown in Fig.13.

    From Table 9 and Fig.13, we can see that the accuracy, loss value and F1 score of STABC-IR are the best. Ablation of the bidirectional network layer, the space–time attention layer and the CRF layer shows that their contributions to accuracy are 0.022, 0.036 and 0.024, respectively, indicating that the bidirectional network structure, the space–time attention mechanism and the CRF all improve tactical intention recognition to some extent. From the changes in accuracy and loss values in the ablation experiments, the five models generally improve in accuracy and decrease in loss value as the number of training rounds increases, with the STABC-IR model consistently outperforming the other four models. Shortly after the start of training, the model with the bidirectional network structure significantly outperforms the base GRU model in accuracy and loss value, indicating that the bidirectional propagation mechanism effectively improves the training effect and enables the neural network to learn faster under the same batch size, learning rate and number of training rounds. Since the number of samples per intention in the test set varies, recall and F1 scores are used to reflect the recognition accuracy of the five models for each intention, and the results are shown in Table 10. The serial numbers i, ii, iii, iv, v and vi denote the 6 intentions: attack, reconnaissance, surveillance, cover, electronic interference and retreat, respectively.

    Analysis of Table 10 reveals that the STABC-IR model has the highest recall and F1 score for every intention. Comparing BiGRU-Attention and BiGRU-CRF shows that introducing only the space–time attention mechanism into the BiGRU-based model contributes slightly more than introducing only the CRF layer, while introducing both at the same time yields the best recognition. Comparing the six intentions, the recall and F1 scores of the cover intention are generally the lowest, because the cover intention involves the most kinds of maneuvers and tactical actions, and the target may be mistaken for having other intentions during recognition. The retreat intention obtains the highest recall and F1 score because its input features are the most distinctive, so the model learns its characteristics faster and better.
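    For reference, the per-intention recall and F1 scores underlying a table of this kind can be obtained as in the sketch below, assuming integer labels 0-5 correspond to intentions i-vi; the names are illustrative.

```python
# Illustrative sketch: per-intention recall and F1 scores for one model.
from sklearn.metrics import recall_score, f1_score

INTENTIONS = ["attack", "reconnaissance", "surveillance",
              "cover", "interference", "retreat"]

def per_intention_scores(y_true, y_pred):
    labels = list(range(len(INTENTIONS)))
    recalls = recall_score(y_true, y_pred, average=None, labels=labels)
    f1s = f1_score(y_true, y_pred, average=None, labels=labels)
    return {name: (r, f) for name, r, f in zip(INTENTIONS, recalls, f1s)}
```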

    Table 9 Results of ablation experiments.

    Fig.13 Results of ablation experiments.

    5.5.6.Analysis of exploiting experiments

    To verify the practicability of the STABC-IR method, we conducted exploiting experiments in the Alpha C2 system34 in our laboratory. The Alpha C2 system is a command and decision system based on deep reinforcement learning in a two-sided adversarial environment. Its inputs are the real-time status of air targets and various reward scores, and its direct outputs are the Weapon Target Assignment (WTA) strategy and the winning percentage. We modified the input part of the system. First, different intention recognition methods are used to judge the intention of the target. Then, the real-time state of the target and the recognized intention are used jointly as the input. Finally, the winning percentage in different situations is obtained through multiple iterations of reinforcement learning. The intention recognition methods selected are the trained STABC-IR method and the methods built into the system (Bayesian network and Fisher method).
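    A hedged sketch of how the recognized intention can be appended to the target state before feeding the reinforcement-learning decision system is shown below. The interface of the Alpha C2 system is not described in detail here, so every name and the 12-dimensional state size are assumptions for illustration only.

```python
# Hedged sketch: join the target state and the intention recognition output
# into a single observation vector for the decision system.
import numpy as np

def build_rl_observation(target_state, intention_probs):
    """Concatenate the encoded target state with the intention distribution."""
    p = np.asarray(intention_probs, dtype=np.float32)
    p = p / p.sum()                                  # keep a valid probability distribution
    return np.concatenate([np.asarray(target_state, dtype=np.float32), p])

# Example: a 12-dimensional encoded state plus a 6-way intention distribution.
obs = build_rl_observation(np.zeros(12),
                           np.array([0.7, 0.1, 0.05, 0.05, 0.05, 0.05]))
```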

    The virtual digital battlefield environment settings and scoring criteria in the experiment are the same as those in Ref.35. The winning percentage results and intention recognition accuracy are shown in Fig.14 and Table 11.

    From Fig.14 and Table 11, it can be found that the highest winning percentage of the system after introducing STABC-IR or the Bayesian network is much higher than the 58.6% achieved without intention recognition, with STABC-IR reaching the highest winning percentage of 81.7%. This indicates that adding the intention recognition module can optimize the final interception strategy and prompt the system to learn a better solution by itself, because the result of intention recognition carries historical information and the commander's cognitive experience, which is richer than the raw target state data alone. However, after using the Fisher method, the highest winning percentage of the system is only 53.4%, lower than without intention recognition. This is due to the low recognition accuracy of the Fisher method, which is only 85.4%. The accuracy of the intention recognition result directly affects the judgment of the final interception strategy on the reward score, and thus affects the final winning percentage.

    The STABC-IR model achieves an intention recognition accuracy of 97.2% in the Alpha C2 system, higher than the 95.7% in the training experiments. In addition, the recognition accuracy of the Bayesian network and Fisher method is also generally higher than the accuracy of the various recognition methods in the previous experiments. Analysis shows that this is because most enemy aircraft in the configured virtual digital battlefield environment intend to attack, so the target characteristics are more distinctive, leading to a higher intention recognition accuracy.

    The practicability of the STABC-IR method is verified through the exploiting experiments, and the method can provide powerful support for command decision making. In addition, the experimental results show that the STABC-IR method has obvious advantages over traditional methods such as the Bayesian network and Fisher method.

    6.Conclusions

    Table 11 Comparison of different intention recognition models.

    In this paper, we propose an air target intention recognition method based on BiGRU and CRF with STA (STABC-IR) to address the temporality, interpretability and back-and-forth intention dependency in the air target intention recognition process. First, a hierarchical strategy is used to select twelve-dimensional target features, which are normalized and uniformly encoded, and the cognitive experience of decision makers is encapsulated as intention labels. Second, a temporal network based on BiGRU is constructed to model the temporal relationship of the input features and solve the temporal problem of intention recognition. Third, the space–time attention mechanism is proposed: the relationships among different features and among different moments are mined with space attention and time attention, respectively, to amplify the influence of key features, and the results are presented with visualization techniques, which further improve the model's temporal analysis capability while giving it a certain degree of interpretability. Finally, an intention transformation network with CRF is proposed to jointly model the tactical intention of the target at each moment using the intention transfer feature function, solving the back-and-forth dependency of intentions. Comparison experiments with other advanced intention recognition methods show that STABC-IR has higher intention recognition accuracy and better overall performance than the other models, and the ablation and exploiting experiments further verify the effectiveness of the model.

    Our proposed STABC-IR model simultaneously considers, for the first time, the temporal sequence and the back-and-forth dependency of intentions in the intention recognition process, and the space–time attention mechanism gives the model a degree of interpretability. This is particularly important in the current battlefield situational cognition process, which is still human-centered and machine-assisted. The recognition process and results of the STABC-IR model are in line with the situational cognition thinking of commanders, which is important for improving the tactical intention recognition capability for air targets and provides reference value for the construction of command and control auxiliary decision-making systems. In the next step, we plan to study the possibly imperfect and incomplete information obtained in complex environments to further improve the adaptability, stability and robustness of the STABC-IR model. In addition, based on the STABC-IR intention recognition model for a single target, we will also investigate new methods for multi-target intention recognition.

    Declaration of Competing Interest

    The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

    Acknowledgements

    This study was supported by the National Natural Science Foundation of China (Nos.62106283 and 72001214).
