
    Multi-Modal Military Event Extraction Based on Knowledge Fusion

    Computers, Materials & Continua, 2023, Issue 10

    Yuyuan Xiang,Yangli Jia,Xiangliang Zhang and Zhenling Zhang

    School of Computer Science,Liaocheng University,Liaocheng,252059,China

    ABSTRACT Event extraction stands as a significant endeavor within the realm of information extraction,aspiring to automatically extract structured event information from vast volumes of unstructured text.Extracting event elements from multi-modal data remains a challenging task due to the presence of a large number of images and overlapping event elements in the data.Although researchers have proposed various methods to accomplish this task,most existing event extraction models cannot address these challenges because they are only applicable to text scenarios.To solve the above issues,this paper proposes a multi-modal event extraction method based on knowledge fusion.Specifically,for event-type recognition,we use a meticulous pipeline approach that integrates multiple pre-trained models.This approach enables a more comprehensive capture of the multidimensional event semantic features present in military texts,thereby enhancing the interconnectedness of information between trigger words and events.For event element extraction,we propose a method for constructing a priori templates that combine event types with corresponding trigger words.This approach facilitates the acquisition of fine-grained input samples containing event trigger words,thus enabling the model to understand the semantic relationships between elements in greater depth.Furthermore,a fusion method for spatial mapping of textual event elements and image elements is proposed to reduce the category number overload and effectively achieve multi-modal knowledge fusion.The experimental results based on the CCKS 2022 dataset show that our method has achieved competitive results,with a comprehensive evaluation value F1-score of 53.4%for the model.These results validate the effectiveness of our method in extracting event elements from multi-modal data.

    KEYWORDS Event extraction;multi-modal;knowledge fusion;pre-trained models

    1 Introduction

    Military informatization is the focus of modern military development.The application of event extraction technology in the military sector holds great potential for enhancing the efficiency of information acquisition.This technology enables the dynamic,real-time expansion of the information base and contributes to the effective management and analysis of military information.In recent years,internet-based equipment data has experienced significant growth.This kind of data is typically disseminated in the form of text,images,and other multi-modal content[1].Military equipment data has gradually become an important resource and the basis for equipment requirement justification.However,the currently available public datasets for multi-modal event extraction in the military domain are relatively limited.They suffer from a lack of diversity in data samples,exhibit a wide distribution of event elements,and pose challenges in effectively extracting crucial knowledge.Therefore,extracting relevant event types and elements from multi-modal military equipment data is of utmost importance.It facilitates the discovery of knowledge and application patterns that are suitable for equipment requirement argumentation.

    Event extraction is a widely studied topic in natural language processing research [2,3]. Its primary objective is to automatically extract user-desired events from unstructured event information and represent them in a structured format. Event extraction techniques have a wide range of applications in fields such as biomedicine [4,5], the judiciary [6,7], social media [8,9], journalism [10,11], etc. However, effective event extraction approaches are still lacking in the military domain due to the limited research on military event extraction [12].

    Usually, a military equipment event consists of triggers and arguments. Each trigger corresponds to a military equipment event and determines the corresponding event type. Arguments refer to the multiple elements of the event. As shown in Fig.1, the example consists of military equipment text and its corresponding image. In this example, we extracted information about the event type, the argument element, and the corresponding coordinate position of the event body in the image. If the object frame corresponding to the text is not detected in the image, it is marked as "-1". However, most event extraction approaches are aimed at extracting arguments from the sentences of a document, such as the Knowledge Base Population (KBP) dataset (https://tac.nist.gov/2017/KBP/), a popular event extraction dataset.

    Figure 1:An example of a multi-modal event element
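    To make the target output concrete, the following is a hypothetical structured record for an event like the one in Fig.1. The field names and values are illustrative assumptions, not the official CCKS 2022 schema.

    ```python
    # Illustrative only: field names and values are assumptions, not the CCKS 2022 schema.
    example_event = {
        "event_type": "attack",
        "arguments": [
            # each argument carries its text span and, when the corresponding object
            # is detected in the paired image, a bounding box [x1, y1, x2, y2];
            # otherwise the coordinate field is marked as -1
            {"role": "initiator", "text": "French Phantoms", "box": [120, 45, 360, 210]},
            {"role": "bearer",    "text": "Palmyra",         "box": -1},
            {"role": "location",  "text": "Syria",           "box": -1},
        ],
    }
    ```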

    In this paper,we propose a multi-modal event extraction method based on knowledge fusion to address the challenge of event arguments for multi-modal data.The method consists of three subtasks:event type recognition,event argument extraction,and multi-modal knowledge fusion.In event type recognition,we use an event multi-label classification model and a trigger word extraction model to jointly define event types.The event multi-label classification model is built with BERT [13] as the baseline to obtain the semantic features and contextual information of the text.The trigger word extraction model is built with ERNIE[14]as the baseline to obtain richer semantic information and distinguish ambiguity.In event argument extraction,we obtain a dynamic word vector representation based on contextual information from ERNIE.This representation captures bidirectional semantic information using a Bidirectional Gated Recurrent Unit(BIGRU)[15].Then,Conditional Random Field(CRF)[16]decoding is used to identify event arguments.In multi-modal knowledge fusion,we use the BERT model to recognize argument entities and the YOLOv5[17]model for target detection.

    In addition,there is a shortage of sufficient annotated data and a significant presence of overlapping event arguments in the military domain.To train and evaluate our model,we use a data augmentation approach based on full-domain random substitution of parameter entities.This approach allows us to implement event expansion while maintaining syntactic and semantic invariance.We then construct an a priori template by integrating the model output used for event type recognition.Moreover,we have designed a fusion method for the spatial mapping of textual event elements and image elements.This method aims to reduce category count overload and effectively achieve multimodal knowledge fusion.The contributions of this paper are summarized as follows:

    • We propose a multi-modal military event extraction framework based on knowledge fusion. In this framework, text event elements and image elements are both mapped to the same label space, effectively integrating multi-modal knowledge.

    • We propose a method to construct an a priori template of event types + trigger words based on the recognized event types. By effectively modeling the multidimensional semantics of overlapping arguments across different event types, more meaningful representations of the semantic relationships between event elements can be learned.

    • We conduct extensive experiments on the CCKS 2022 dataset (https://www.biendata.xyz/competition/KYDMTJSTP/) and demonstrate the effectiveness of the proposed method for multi-modal military event element extraction.

    The remainder of the paper is structured as follows: Section 2 discusses the related work.In Section 3,we provide an overview of multi-modal event element extraction approaches.We first outline the general framework and then elaborate on event type recognition,event element extraction,and multi-modal knowledge fusion.Section 4 provides details about the experiment results and a discussion of the proposed methods.Finally,Section 5 concludes this paper with an overall summary and future works.

    2 Related Works

    Our research involves three areas: event extraction approaches, object detection approaches, and multi-modal knowledge fusion. We review the major literature in these three areas.

    2.1 Event Extraction

    Event extraction methods can be mainly divided into pattern-matching-based and machine-learning-based methods. Early event extraction usually used pattern-matching-based methods. Riloff [18] captured event elements in the context of event trigger words by manually constructing a domain-specific dictionary for event extraction. However, the pattern-matching method depends on the specific form of domain-specific text and generalizes poorly.

    In recent years,machine learning methods have gradually become the mainstream approach for event extraction.Compared with pattern-matching-based methods,machine learning methods are more adaptable to different domains and have better portability.Deep learning has become a very popular machine learning method and is widely used in event extraction tasks [19].The first deep learning-based event extraction method utilized a pipeline-based model.Chen et al.[20]enhanced the traditional convolutional neural network model through a dynamic multi-pool mechanism and proposed a dynamic multi-pool convolutional neural network(DMCNN).This approach performs event extraction in two stages.To compensate for the shortcomings of the pipeline model,Tian et al.[21]employed a pre-trained language model for event extraction.They transformed the joint extraction task into an annotation problem and utilized an end-to-end model to extract entities and events.Lyu et al.[22] proposed a transformation-based neural network model that exploits the connection between the entity and event structures to perform joint entity and event extraction.

    Although various event extraction methods have been proposed,they still produce unsatisfactory performance due to the complexity of military texts and the universality of overlapping event elements.Therefore,we propose a method to construct an a priori template of event types+trigger words based on the recognized event types.Our method can comprehensively capture the inherent multidimensional semantic features in military texts.At the same time,it can fully utilize the detailed features of trigger words,thereby promoting a deeper understanding of the semantic relationships between elements.

    2.2 Object Detection

    Object detection algorithms can be mainly classified into traditional object detection algorithms and deep learning-based object detection algorithms.Traditional object detection algorithms usually extract features manually.Felzenszwalb et al.[23] proposed a deformable part model for object detection.The model combines a Histogram of Oriented Gradient (HOG) and a Support Vector Machine (SVM) classifier.However,traditional object detection methods can only extract low-level image features and have low performance.

    In recent years, most object detection methods have been based on deep learning. There are two main types of mainstream deep learning object detection algorithms: two-stage and one-stage object detection algorithms. Two-stage detection algorithms first generate candidate regions and then classify them. Girshick et al. [24] proposed the regions with CNN features (R-CNN) algorithm. The algorithm generates candidate regions for region-based feature extraction, uses Support Vector Machines (SVM) to detect the candidate regions, and determines their corresponding object classes and locations. One-stage detection algorithms are end-to-end object detectors that accomplish both object boundary prediction and object classification. The YOLOv1 algorithm proposed by Redmon et al. [25] divides the image into many grids and then localizes and classifies each grid of the image.

    Although various object detection methods exist,they tend to share common limitations.These limitations include slow processing speeds,inefficient resource utilization,and challenges in generalizing to new object classes that are significantly different from the training dataset.To address these issues,our study adopted the YOLOv5 model to improve processing speed and resource efficiency.In addition,we constructed a target detection and recognition dataset using a combined human-machine label transformation approach,which effectively improves the overall performance of the model.

    2.3 Multi-Modal Knowledge Fusion

    Multi-modal knowledge fusion usually extracts feature representations of different modal information to achieve a collaborative representation of multi-modal data.Zhang et al.[26] proposed a multi-modal data source fusion model that utilizes gated cyclic units to capture the diversity of data sources bi-directionally.Additionally,they employed a hierarchical attention network to obtain a holistic representation of the information.Ding et al.[27]first extracted visually relevant multi-modal knowledge and then represented the multi-modal knowledge through a fine-grained explicit triad.

    The majority of existing event extraction models predominantly concentrate on text-based scenarios,overlooking the potential of event element extraction from multi-modal data.As a result,the research on extracting event elements from multi-modal sources has received limited attention,leading to a relatively underexplored area of study.To effectively achieve multi-modal knowledge fusion,we propose a novel multi-modal label mapping method.This method facilitates the mapping of independent variables extracted from textual data and objects extracted from images into a unified label space,thus enabling the effective fusion of textual and visual information.

    3 Materials and Methods

    Our research proposes a multi-modal event element extraction framework that enables the extraction of a wider range of event types and elements from large-scale multi-modal military news documents.As shown in Fig.2,the proposed framework comprises four phases organized in a pipeline fashion.These phases encompass event type recognition,event argument extraction,object detection and recognition,and multi-modal knowledge fusion.

    Figure 2:A multi-modal event extraction framework

    In the first phase of event extraction,trigger words are discovered from event sentences,and an event trigger word is a keyword that reflects the occurrence of an event.Domain experts annotate trigger words for different types of events and then expand the trigger word library by word vector similarity.A BERT-based multi-label classification model and an ERNIE-based trigger word extraction model are used to recognize the types of events in military news.

    In the second phase,we constructed an a priori template of event types+trigger words based on the recognized event types to solve the problem of overlapping arguments of different event types in event sentences.Then,the ERNIE-BIGRU-CRF model is used to implement argument slot filling to extract the corresponding event arguments.In the third phase,the BERT model is used to recognize argument entities,and the YOLOv5 object detection algorithm is used to recognize object bounding boxes.Finally,the object bounding box coordinates are mapped to the text argument by using the multi-modal label mapping method.

    3.1 Event Type Recognition

    3.1.1 Event Trigger Word Extraction

    In event extraction, the trigger word characterizes the occurrence of an event and is the most important feature word for deciding the event type. However, an event can be expressed by triggers of different styles. Since there is a correspondence between the event type and the trigger word, the event type can be identified from the trigger word. For example, the news item "French Phantoms attacked Palmyra and Raqqa in Syria" indicates that an attack event occurred because of the trigger word "attack". Therefore, this study constructs a trigger lexicon by having domain experts label the trigger words of different event types. However, such a lexicon can hardly cover all event features and may miss words that can act as trigger words on their own. Therefore, this study uses the ERNIE-based trigger word model to fully extract trigger word information from military news and expand the trigger word lexicon. The trigger words for different event types are shown in Table 1; the left column of the table represents the event type, while the right column contains the corresponding trigger words.
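    As a rough illustration of the lexicon expansion step, the sketch below grows an expert-annotated seed lexicon with words whose embeddings are close to an existing trigger. The embedding table, vocabulary, and similarity threshold are assumptions for illustration, not values or code from the paper.

    ```python
    import numpy as np

    def cosine(u, v):
        """Cosine similarity between two word vectors."""
        return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-9))

    def expand_trigger_lexicon(seed_triggers, embeddings, vocab, threshold=0.75):
        """Add every vocabulary word whose embedding is close to any seed trigger.
        `embeddings` is a {word: np.ndarray} table; `threshold` is a hypothetical cut-off."""
        expanded = set(seed_triggers)
        for word in vocab:
            if word in expanded or word not in embeddings:
                continue
            if any(cosine(embeddings[word], embeddings[t]) >= threshold
                   for t in seed_triggers if t in embeddings):
                expanded.add(word)
        return expanded
    ```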

    Table 1:Military events taxonomy

    3.1.2 Event Multi-Label Classification

    In event type recognition, since a text can contain multiple event types, a sentence may belong to multiple event types. Therefore, we use a multi-label text classification algorithm [28] to identify event types. Since the texts also contain a large number of sentences that describe no labeled event, we add an empty event class to the multi-label classification model when performing event multi-label classification. Fig.3 shows an overview of the multi-label classification model.

    Figure 3:BERT model for multi-label text classification

    The multi-label classification model is shown in Fig.3.We encode the text using BERT to acquire a dynamic word vector representation of the sentence.Then,the encoded vectors are passed through a feedforward neural network that incorporates a sigmoid layer to classify the text and recognize the event type.
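    A minimal sketch of such a classifier, assuming the Hugging Face transformers library, a Chinese BERT checkpoint, and an extra label for the empty event class; it is illustrative rather than the authors' exact implementation.

    ```python
    import torch
    import torch.nn as nn
    from transformers import BertModel

    class EventMultiLabelClassifier(nn.Module):
        """BERT encoder + feed-forward head with a sigmoid per event type."""
        def __init__(self, num_event_types=7, pretrained="bert-base-chinese"):
            super().__init__()
            self.bert = BertModel.from_pretrained(pretrained)  # checkpoint name is an assumption
            # +1 output for the empty (no-event) class mentioned in the text
            self.classifier = nn.Linear(self.bert.config.hidden_size, num_event_types + 1)

        def forward(self, input_ids, attention_mask):
            outputs = self.bert(input_ids=input_ids, attention_mask=attention_mask)
            logits = self.classifier(outputs.pooler_output)
            # sigmoid gives an independent probability per event type (multi-label)
            return torch.sigmoid(logits)
    ```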

    3.2 Event Argument Extraction

    Event argument extraction aims to extract the relevant arguments of an event and the roles those arguments play. However, in the military domain, the scarcity of annotated data and the presence of overlapping event arguments pose significant challenges. To address these issues, we first use an argument-based full-domain random substitution data augmentation method to expand the events while keeping the syntax and semantics unchanged. The main idea of the algorithm is to replace the arguments corresponding to the initiator, bearer, time, and location with arguments of the same type drawn from other event texts; trigger words such as "attack," "strike," and "destroy" are likewise replaced randomly (a small sketch of this substitution is given below). We then extract the event arguments using the event argument extraction model. Fig.4 shows an overview of the event argument extraction model, which is divided into four main phases: construction of input text, model pre-training, model building, and model fine-tuning.
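    The following sketch illustrates the substitution idea under the assumption that each training sample stores its text and a role-to-span mapping; the helper and field names are hypothetical, not the authors' code.

    ```python
    import random

    def augment_by_argument_substitution(sample, role_pools):
        """Replace each argument span (initiator, bearer, time, location) with another
        span of the same role drawn from the corpus, keeping syntax and semantics intact.
        `role_pools` maps role -> list of candidate spans collected from the training set."""
        text = sample["text"]
        new_args = {}
        for role, span in sample["arguments"].items():
            candidates = [c for c in role_pools.get(role, []) if c != span]
            replacement = random.choice(candidates) if candidates else span
            text = text.replace(span, replacement, 1)  # swap the first occurrence only
            new_args[role] = replacement
        return {"text": text, "arguments": new_args}
    ```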

    Figure 4:ERNIE-BIGRU-CRF model for event argument extraction

    3.2.1 Construction of Input Text

    In the stage of constructing the input text, we aim to address the issue of overlapping arguments between different event types within a sentence. We first integrate the outputs of the event type recognition models and then construct an a priori template of event types + trigger words as the input text, implemented in the form [CLS] + event types + trigger words + [SEP] + text + [SEP]. With the input text constructed in this way, fine-grained input samples containing the event trigger words are obtained, enabling the model to understand the semantic relationships between arguments more fully. Meanwhile, trigger words that are unrelated to the event type are filtered out using the multi-label classification model. The final result is produced by voting between the trigger word extraction model and the multi-label classification model, which further reduces the propagation error of the pipeline.
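    A small helper illustrating how such an input string could be assembled; the exact separators used to join multiple event types and trigger words are assumptions.

    ```python
    def build_prior_template(event_types, trigger_words, text, cls="[CLS]", sep="[SEP]"):
        """Assemble the a priori template: [CLS] + event types + trigger words + [SEP] + text + [SEP].
        Joining the lists with the Chinese enumeration comma is an assumed serialization."""
        prefix = "、".join(event_types) + " " + "、".join(trigger_words)
        return f"{cls}{prefix}{sep}{text}{sep}"

    # e.g. build_prior_template(["attack"], ["attack"], "French Phantoms attacked Palmyra ...")
    ```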

    3.2.2 Model Pre-Training

    Sun et al. [29] stated that continued pre-training on an in-domain corpus can significantly improve a model's understanding of a specific domain. We divide the text into sequences of length less than 300. To improve the adaptation and modeling abilities of the language model on our data, we continued pre-training the language model on the training texts.

    3.2.3 Model Building

    The event argument extraction model is shown in Fig.4. First, ERNIE encodes the sentences to obtain a semantic feature vector for each sentence. Given the input token sequence $S=(s_0, s_1, \ldots, s_n)$, we incorporate each token into the transformer encoder to generate a word vector sequence $X=(x_{i_1}, x_{i_2}, \ldots, x_{i_n})$. This sequence is trained through the ERNIE model's embedding layer to obtain the word vector as follows:

    $$W_{i_n} = W_e \, x_{i_n}$$

    where $W_{i_n} \in \mathbb{R}^{768}$ is the representation of the $n$-th word and $W_e$ refers to the embedding layer's weight parameter.

    Then, the vector is fed into the BiGRU to capture long-range dependencies and output a sentence representation vector that incorporates deep semantic information, as follows:

    $$\overrightarrow{h_t} = \mathrm{GRU}\big(x_t, \overrightarrow{h_{t-1}}\big), \qquad \overleftarrow{h_t} = \mathrm{GRU}\big(x_t, \overleftarrow{h_{t+1}}\big), \qquad C = \mathrm{Concat}\big(\overrightarrow{h_t}, \overleftarrow{h_t}\big)$$

    where $\overrightarrow{h_t}$ denotes the hidden state passed forward to the next node, $\overleftarrow{h_t}$ denotes the hidden state passed backward to the next node, $\overrightarrow{h_{t-1}}$ denotes the hidden state of the previous node in the forward direction, $\overleftarrow{h_{t+1}}$ denotes the hidden state of the previous node in the backward direction, $\mathrm{Concat}$ represents the splicing of the forward and backward hidden-layer state vectors, and $C$ is the output vector of the BiGRU layer.

    Finally, the event arguments are labeled by the CRF layer, whose score is calculated as follows:

    $$\mathrm{score}(X, y) = \sum_{i} C_{i, y_i} + \sum_{i} A_{y_i, y_{i+1}}$$

    where $m$ is the number of label types, $C_{i, y_i}$ is the score of the tag $y_i$ of the $i$-th token in the sequence, and $A_{y_i, y_{i+1}}$ represents the score of a transition from tag $y_i$ to tag $y_{i+1}$.

    The event argument extraction model calculates the loss value of the CRF layer at the sentence level, as follows:

    $$\mathrm{Loss} = -\log \frac{e^{P_r}}{\sum_{m} e^{P_m}}$$

    where $P_m$ is the score corresponding to each predicted path, $m$ is the number of paths, and $P_r$ represents the score of the ground-truth path.
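    Putting the three layers together, the sketch below shows one plausible PyTorch realization of the ERNIE-BIGRU-CRF extractor, assuming the Hugging Face transformers library, the pytorch-crf package, and an openly available ERNIE checkpoint; the checkpoint name and hidden sizes are assumptions, not the authors' exact configuration.

    ```python
    import torch.nn as nn
    from transformers import AutoModel
    from torchcrf import CRF  # from the pytorch-crf package

    class ErnieBiGruCrf(nn.Module):
        """ERNIE encoder -> BiGRU -> emission layer -> CRF, following the layer order above."""
        def __init__(self, num_tags, pretrained="nghuyong/ernie-3.0-base-zh", gru_hidden=256):
            super().__init__()
            self.encoder = AutoModel.from_pretrained(pretrained)
            self.bigru = nn.GRU(self.encoder.config.hidden_size, gru_hidden,
                                batch_first=True, bidirectional=True)
            self.emission = nn.Linear(2 * gru_hidden, num_tags)
            self.crf = CRF(num_tags, batch_first=True)

        def forward(self, input_ids, attention_mask, tags=None):
            hidden = self.encoder(input_ids=input_ids,
                                  attention_mask=attention_mask).last_hidden_state
            hidden, _ = self.bigru(hidden)          # bidirectional context C
            emissions = self.emission(hidden)       # per-token tag scores
            mask = attention_mask.bool()
            if tags is not None:
                # sentence-level CRF loss: negative log-likelihood of the gold path
                return -self.crf(emissions, tags, mask=mask, reduction="mean")
            return self.crf.decode(emissions, mask=mask)  # best tag sequence per sentence
    ```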

    3.2.4 Model Optimization

    To enhance the performance of the event element extraction model, we incorporate a fine-tuning process that adjusts the learning rates. During experimentation, we observed that a single uniform learning rate around 2e-5 did not always yield the desired convergence. To address this issue, we design the learning rates with a layer-by-layer decreasing LayerRate [29], where lower layers of the network are assigned lower learning rates during the training phase. The learning rate is computed as follows:

    $$\eta^{k-1} = \xi \cdot \eta^{k}$$

    where $\eta^{k}$ represents the learning rate of the $k$-th layer and $\xi$ represents the decay factor, $\xi = 0.95$.
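    A brief sketch of the layer-by-layer decreasing schedule implied by the formula above; taking the 2e-5 rate mentioned earlier as the top-layer rate is an assumption for illustration.

    ```python
    def layerwise_learning_rates(num_layers, base_lr=2e-5, decay=0.95):
        """Return {layer_index: learning_rate}: the top layer keeps base_lr and each
        lower layer is multiplied once more by the decay factor (0.95 in the paper)."""
        return {k: base_lr * (decay ** (num_layers - k)) for k in range(1, num_layers + 1)}

    # e.g. for a 12-layer encoder, layer 12 gets 2e-5 and layer 1 gets 2e-5 * 0.95**11
    ```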

    During model training, the CRF layer failed to converge at the same learning rate as the encoder. To compensate for this mismatch between the model and the CRF layer, we increase the learning rate of the CRF layer by a factor of 100. In addition, the Fast Gradient Method (FGM) [30] forms adversarial samples by adding perturbations to the embedding layer. We therefore use FGM to improve the robustness of the model and train a better-performing event argument extraction model.
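    For reference, a minimal FGM routine in its usual formulation: perturb the word-embedding weights along the normalized gradient, run an extra forward-backward pass, then restore the weights. Matching parameters by the name "word_embeddings" and the perturbation size are assumptions, not details given in the paper.

    ```python
    import torch

    class FGM:
        """Fast Gradient Method: L2-normalized perturbation on the embedding weights."""
        def __init__(self, model, epsilon=1.0, emb_name="word_embeddings"):
            self.model, self.epsilon, self.emb_name = model, epsilon, emb_name
            self.backup = {}

        def attack(self):
            # add r_adv = epsilon * grad / ||grad|| to the embedding parameters
            for name, param in self.model.named_parameters():
                if param.requires_grad and self.emb_name in name and param.grad is not None:
                    self.backup[name] = param.data.clone()
                    norm = torch.norm(param.grad)
                    if norm != 0:
                        param.data.add_(self.epsilon * param.grad / norm)

        def restore(self):
            # put the original embedding weights back after the adversarial pass
            for name, param in self.model.named_parameters():
                if name in self.backup:
                    param.data = self.backup[name]
            self.backup = {}
    ```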

    3.3 Multi-Modal Knowledge Fusion

    This article employs the BERT model to detect the types of arguments to achieve multi-modal knowledge fusion of text and images.Then,we use the YOLOv5 model to identify the object bounding boxes in the images and extract the corresponding type information.We propose a multi-modal label mapping method that jointly maps the classification results from argument identification and object detection to the same label space.By employing rule-based post-processing techniques,we establish links between spatial information(bounding box coordinates)and textual information(arguments)to effectively connect the visual and textual modalities.

    3.3.1 Argument Recognition


    In argument recognition,we first classify the arguments using a set of predefined rules.The unrecognizable arguments are then manually labeled using a crowdsourcing architecture,and the labeled dataset is transformed into a dataset for argument recognition.

    Recently, pre-trained language models have achieved remarkable results on many natural language processing tasks. Given the input token sequence $W=(w_0, w_1, \ldots, w_n)$, we train this sequence through the embedding layer of the BERT model to obtain the feature vectors $T_i=(T_0, T_1, \ldots, T_n)$, as follows:

    $$T_i = \mathrm{BERT}(w_0, w_1, \ldots, w_n)$$

    where $T_i$ represents the text vector representation obtained using the BERT pre-trained model and $i$ indexes the $i$-th text in the multi-modal dataset.

    3.3.2 Object Detection and Recognition

    In object detection and recognition, image data annotation is combined with textual information. For example, a "helicopter gunship" belongs to the category "aircraft". Therefore, each object in the image corresponds to a specific category. In this paper, we classify the weaponry in the image data into six "parent types": aircraft, ships, missiles, trucks, submarines, and others.

    To efficiently construct the object detection and recognition dataset, we use a combined human-machine label transformation method. The argument types are first converted to their "parent types" using the argument recognition model; the data are then labeled using Labelme; finally, labeling errors are corrected through a manual review process.

    $T_i$ here represents the image features obtained through the YOLOv5 model, where $i$ indexes the $i$-th image in the multi-modal dataset.

    3.3.3 Multi-Modal Label Mapping

    In the multi-modal label mapping stage, we first fuse the feature outputs of argument recognition and object detection to obtain the fused text-image features, as follows:

    $$A_i = T_i^{\text{text}} \oplus T_i^{\text{img}}$$

    where $\oplus$ represents the fusion of feature vectors and $A_i$ represents the feature vector after the text and image fusion.

    Then, we input the fused feature $A_i$ into the self-attention module and perform feature mapping through a fully connected layer, as follows:

    $$\alpha = \mathrm{softmax}\big(m^{T}\tanh(P A_i)\big), \qquad E_i = \alpha A_i, \qquad X_i = \mathrm{FC}(E_i)$$

    where $m^{T}$ and $P$ are learnable parameters of the hidden layer, $\alpha$ is the normalized attention weight, $E_i$ is the feature representation output by the attention layer, and $X_i$ is the feature representation obtained through the fully connected layer.

    Finally,the features are processed by the Softmax layer and calculated as follows:

    $$F_i = \mathrm{Softmax}(W X_i + b)$$

    where $F_i$ is the final classification result, $W$ is the weight matrix, and $b$ is the bias term.
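    The sketch below is one plausible reading of this fusion head: concatenation for $\oplus$, a small attention gate for $\alpha$, a fully connected projection for $X_i$, and a softmax output for $F_i$. Feature dimensions and the exact attention form are assumptions, since only the symbols are defined above.

    ```python
    import torch
    import torch.nn as nn

    class LabelMappingHead(nn.Module):
        """Fuse text and image features, weight them with an attention gate, classify with softmax."""
        def __init__(self, text_dim=768, img_dim=512, hidden=256, num_labels=6):
            super().__init__()
            self.att_proj = nn.Linear(text_dim + img_dim, hidden)   # plays the role of P
            self.att_vec = nn.Linear(hidden, 1, bias=False)         # plays the role of m^T
            self.fc = nn.Linear(text_dim + img_dim, hidden)         # fully connected mapping
            self.out = nn.Linear(hidden, num_labels)                # softmax classifier

        def forward(self, text_feat, img_feat):
            a = torch.cat([text_feat, img_feat], dim=-1)             # A_i = T_text (+) T_img
            alpha = torch.sigmoid(self.att_vec(torch.tanh(self.att_proj(a))))  # scalar gate ~ alpha
            e = alpha * a                                            # E_i
            x = torch.relu(self.fc(e))                               # X_i
            return torch.softmax(self.out(x), dim=-1)                # F_i
    ```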

    In this stage, we classify the "initiator", "bearer", and "using the device" arguments. Because "time" and "location" cannot be extracted from the image, these two arguments are assigned a value of "-1". We use the object detection algorithm to identify the object bounding boxes and their types. A single image may contain multiple object boxes of the same type, so we select the object box with the largest area. If the argument type corresponds to the image type, the object box coordinates are assigned to the argument; otherwise, the value "-1" is assigned. Since this method alone is not accurate enough, we use predefined rules to filter wrong arguments and overlapping arguments.
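    A compact sketch of the mapping rule just described, assuming detections arrive as (class, box) pairs: time and location always get -1; otherwise the largest box whose class matches the argument's parent type is returned, or -1 when no class matches. The data format is an assumption for illustration.

    ```python
    def map_box_to_argument(arg_role, arg_parent_type, detections):
        """Return the bounding box assigned to a textual argument, or -1.
        `detections` is a list of (class_name, (x1, y1, x2, y2)) tuples."""
        if arg_role in ("time", "location"):
            return -1                                    # never grounded in the image
        candidates = [box for cls, box in detections if cls == arg_parent_type]
        if not candidates:
            return -1                                    # no object of the matching type
        # several boxes of the same type: keep the one with the largest area
        return max(candidates, key=lambda b: (b[2] - b[0]) * (b[3] - b[1]))
    ```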

    4 Experiments

    In this section,we conduct a series of experiments to evaluate the effectiveness of our proposed approach.We first describe the implementation details,including data and hyperparameter settings.Then,we show the experimental results,including the performance of the model at each stage,and the entire multi-modal event element extraction approach.

    4.1 Dataset and Evaluation Metrics

    In our experiments,we use the CCKS 2022 dataset oriented to the open-source multi-modal military event element extraction evaluation task.The dataset includes seven different event types:attack,scouting,safeguard,blocking,deployment,defensive,and maneuvering events.In this dataset,1400 annotated military news texts are used as the training set,200 annotated military news texts are used as the validation set,and 400 military news texts are used as the test set for evaluating the multi-modal event element extraction approach.

    We use Precision (P), Recall (R), and F-Measure (F1) as the major metrics to evaluate model performance. A prediction is considered correct when it accurately identifies the event type, the event argument, and the location coordinates of the argument in the image. For the coordinates of the argument in the image, the criterion is that the intersection-over-union (IoU) between the predicted position and the labeled position of the argument is greater than 0.5. If the argument has no corresponding coordinates in the image, an output of -1 is counted as correct. We use the event element matching F1 as the final evaluation metric, computed with the following equation:

    $$P = \frac{\#\,\text{correctly predicted event elements}}{\#\,\text{all predicted event elements}}, \qquad R = \frac{\#\,\text{correctly predicted event elements}}{\#\,\text{all annotated event elements}}, \qquad F1 = \frac{2 \times P \times R}{P + R}$$
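    For clarity, a sketch of how a single prediction could be scored under these criteria; the record layout and the IoU helper are assumptions used only for illustration.

    ```python
    def iou(box_a, box_b):
        """Intersection over union of two (x1, y1, x2, y2) boxes."""
        x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
        x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
        inter = max(0, x2 - x1) * max(0, y2 - y1)
        area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
        area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
        return inter / (area_a + area_b - inter + 1e-9)

    def element_correct(pred, gold):
        """Correct when event type, argument text, and image position all match;
        boxes match when IoU > 0.5, or when both sides are -1."""
        if pred["event_type"] != gold["event_type"] or pred["argument"] != gold["argument"]:
            return False
        if pred["box"] == -1 or gold["box"] == -1:
            return pred["box"] == gold["box"]
        return iou(pred["box"], gold["box"]) > 0.5

    def precision_recall_f1(num_correct, num_predicted, num_gold):
        p = num_correct / num_predicted if num_predicted else 0.0
        r = num_correct / num_gold if num_gold else 0.0
        f1 = 2 * p * r / (p + r) if (p + r) else 0.0
        return p, r, f1
    ```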

    4.2 Implementation Details

    We implement the multi-modal event element extraction method with PyTorch [31] and PaddlePaddle [32]. We split the 1400 training samples into 7 folds for 7-fold cross-validation experiments. Suitable optimizers, learning rates, batch sizes, and weight decay values are used in our method. More detailed hyperparameter settings can be found in Table 2.

    Table 2:Hyperparameters in the model

    4.3 Experimental Results

    4.3.1 Evaluations of Event Type Recognition Approaches

    We use a fusion of the event multi-label classification and trigger word extraction models for event type recognition. Tables 3 and 4 show the experimental results for the event multi-label classification task and the trigger word extraction task on the CCKS 2022 dataset, respectively. From Table 3 we can see that the BERT model improves the overall performance of the event multi-label classification task compared to other pre-trained language models. In contrast, Table 4 shows that the ERNIE model performs better on the trigger word extraction task. This is because the trigger word extraction task requires more semantic information, and the ERNIE model can learn more semantic knowledge than other pre-trained models.

    Table 3:Performance comparisons of multi-label classification algorithms

    Table 4:Performance comparisons of trigger words extraction algorithms

    4.3.2 Evaluations of Event Argument Extraction Approaches

    Tables 5 and 6 show the experimental results of the event argument extraction task on the CCKS 2022 dataset. From Table 5, we can see that the ERNIE model slightly outperforms the other pre-trained models on the event argument extraction task. This is because ERNIE uses a knowledge masking strategy in the pre-training phase, which masks spans at three different granularities (token, phrase, and entity) in stages to learn semantic association information and entity boundary information. Therefore, ERNIE performs better on event argument extraction tasks.

    Table 5:Performance comparisons of event argument extraction algorithms

    Table 6 shows the improvement obtained with our model optimization methods. We use FGM to perturb the embeddings and further improve model performance. The BIGRU+CRF model based on pre-trained ERNIE shows improved evaluation metrics compared to the pre-trained ERNIE model alone, because BIGRU fuses deeper semantic information and the CRF then computes the maximum-probability label sequence, which alleviates the label bias problem. We use the argument-based full-domain random replacement data augmentation method to mitigate category imbalance and improve model performance. We also construct the a priori template of event type + trigger word to solve the problem of overlapping arguments, which effectively improves the F1 value. The trigger words we used are shown in Table 1. The loss value reflects the convergence of the model during training; Fig.5 shows that the loss of the event argument extraction model consistently remains at a low and stable level, indicating the model's excellent convergence on this dataset.

    Table 6:Performance comparison of algorithms under different optimization strategies

    4.3.3 Evaluations of Multi-Modal Knowledge Fusion Approaches

    In the multi-modal knowledge fusion stage,two datasets were constructed for model evaluation based on the CCKS 2022 dataset.The first dataset includes the text of six“parent types”of argument entities.It comprises 1468 annotated argument entities as the training set,and 203 unannotated argument entities as the test set to evaluate the performance of the argument entity recognition approach.The second dataset consists of equipment images from the CCKS 2022 dataset.It includes 1400 images annotated with 2024 bounding boxes as the training set,and 200 images annotated with 318 bounding boxes as the test set for evaluating the performance of the object detection approach.

    Figure 5:Change curve of loss value

    (1) Argument Recognition.In this set of experiments,we used the first dataset containing the argument entities to train and evaluate the performance of the model using a 7-fold cross-validation approach.We use four pre-trained models,including BERT,XLNET,ERNIE,and RocBert,to identify six types of argument entities.The experimental results are shown in Table 7,where the BERT model obtained the best performance for P,R,and F1.This model accuracy can be used as a basis for label mapping in the multi-modal knowledge fusion phase.

    Table 7:Performance comparisons of argument recognition algorithms

    (2) Object Detection and Recognition. This experiment evaluates the object detection method on the second dataset, which consists of object detection images. We used the YOLOv5 model for object detection and recognition. The model was fine-tuned on the equipment images and achieved an F1 value of 0.753.

    (3) Multi-modal Label Mapping. We evaluated the proposed multi-modal knowledge fusion approach using the CCKS 2022 dataset. This method fuses the event arguments extracted from the text with the objects detected in the images. The final evaluation metric F1 value was 0.53403, a competitive result on the CCKS 2022 dataset.

    4.3.4 Performance Analysis of Model Usage Memory

    During the training process,we observed that the proposed model can have significant memory requirements,especially when working with larger datasets.These memory limitations can present challenges in real-world applications,particularly when deploying pre-trained models on devices with limited memory.To address potential memory issues in real-world scenarios,we explore several strategies.First,hardware accelerators like GPUs can be employed.For instance,we used a GeForce RTX 3090 in our implementation,which greatly improves memory utilization and accelerates model training.Additionally,techniques such as gradient checkpointing,gradient accumulation,and batch size reduction can be utilized to alleviate memory constraints.When optimizing memory usage,achieving the appropriate balance is crucial to ensure that the model’s predictive ability is not compromised.

    5 Conclusions

    In this paper,we propose a multi-modal event extraction method based on knowledge fusion,to address the challenges of multi-modal event elements in the military domain.The method consists of three subtasks:event type recognition,event argument extraction,and multi-modal knowledge fusion.We first use a multi-label classification BERT model and a trigger word extraction ERNIE model to jointly recognize event types.Then the ERNIE-BIGRU-CRF model is used to extract event arguments.Finally,we use the BERT model to recognize argument entities and the YOLOv5 model to detect and recognize image objects for multi-modal knowledge fusion of images and text.In addition,we use a full-domain random substitution data enhancement method based on arguments to overcome the problem of insufficient labeled data in the military domain.We construct an a priori template of event types+trigger words to solve the argument overlap problem.The aforementioned methods demonstrate the ability to effectively extract event types and event elements from extensive multimodal military data.This process enables the rapid extraction of valuable information,which holds great significance in enhancing the efficiency of military resource utilization and facilitating applied research on military knowledge.

    The experimental results on the CCKS 2022 dataset demonstrate the effectiveness of the proposed method and yield competitive results.The extracted multi-modal event elements can be effectively used to support the informational analysis of military equipment.However,our proposed multi-modal knowledge fusion method suffers from propagation errors.Therefore,in the future,we will investigate fusing textual knowledge and image information in the feature space under small sample conditions to further improve the proposed multi-modal event element extraction method.

    Acknowledgement:We are grateful to all those with whom we have enjoyed working on this and other related papers.

    Funding Statement:This research was supported by the National Natural Science Foundation of China(Grant No.81973695)and Discipline with Strong Characteristics of Liaocheng University–Intelligent Science and Technology(Grant No.319462208).

    Author Contributions:Study conception and design: Y.Xiang,X.Zhang;data collection: Y.Xiang;analysis and interpretation of results: Y.Xiang,Z.Zhang,Y.Jia;draft manuscript preparation: Y.Xiang,Y.Jia.All authors reviewed the results and approved the final version of the manuscript.

    Availability of Data and Materials:The multi-modal military event extraction dataset used in this paper is available at https://github.com/xyy313/MMEE/tree/main/dataset.

    Conflicts of Interest:The authors declare that they have no conflicts of interest to report regarding the present study.
