
    Ghost Module Based Residual Mixture of Self-Attention and Convolution for Online Signature Verification

    2024-05-25 14:40:26 Fangjun Luan, Xuewen Mu and Shuai Yuan
    Computers, Materials & Continua, No. 4, 2024

    Fangjun Luan, Xuewen Mu and Shuai Yuan

    1 Department of Computer Technology, School of Computer Science and Engineering, Shenyang Jianzhu University, Shenyang, 110168, China

    2 Liaoning Province Big Data Management and Analysis Laboratory of Urban Construction, Shenyang Jianzhu University, Shenyang, 110168, China

    3 Shenyang Branch of National Special Computer Engineering Technology Research Center, Shenyang Jianzhu University, Shenyang, 110168, China

    ABSTRACT Online Signature Verification (OSV), as a personal identification technology, is widely used in various industries. However, it faces challenges such as incomplete feature extraction, low accuracy, and heavy computation. To address these issues, we propose a novel approach for online signature verification using a one-dimensional Ghost-ACmix Residual Network (1D-ACGRNet), a residual network that combines convolution with a self-attention mechanism and is further improved with the Ghost method. The Ghost-ACmix residual structure is introduced to leverage both self-attention and convolution for capturing global feature information and extracting local information, effectively complementing whole and local signature features and mitigating the problem of insufficient feature extraction. Then, the Ghost-based Convolution and Self-Attention (ACG) block is proposed to simplify the computation shared between convolution and self-attention using the Ghost module, employing a cheap feature transformation to obtain the intermediate features and thus reducing computational costs. Additionally, feature selection is performed using the random forest method, and the data is dimensionally reduced using Principal Component Analysis (PCA). Finally, tests are implemented on the MCYT-100 and SVC-2004 Task2 datasets; the equal error rates (EERs) for small-sample training with five genuine and forged signatures are 3.07% and 4.17%, respectively, and the EERs for training with ten genuine and forged signatures are 0.91% and 2.12%. The experimental results illustrate that the proposed approach effectively enhances the accuracy of online signature verification.

    KEYWORDS Online signature verification; feature selection; ACG block; Ghost-ACmix residual structure

    1 Introduction

    Online Signature Verification (OSV) technology is a biometric authentication technique aimed at building a high-accuracy, robust system that determines whether a signature sample is genuine or forged. Unlike offline signatures, online signatures are real-time signals that vary with time and primarily capture dynamic information, such as signature trajectory, pressure, vertical and horizontal deviations, and velocity, collected by signature devices such as electronic tablets [1–3]. Online signatures are widely accepted in society due to their ease of collection, resistance to imitation, and high security [4,5]. However, the instability of online signatures makes them susceptible to changes in the signer's physiological state, psychological fluctuations, and surrounding environment. Additionally, the limited number of online signature samples makes high verification accuracy difficult to achieve [6–8]. Therefore, effective personal identification through OSV technology is crucial for determining the legal validity of handwritten signatures. In-depth research on online signature verification can address challenges related to stability and accuracy, thereby fostering its application and development in areas such as e-commerce and e-government.

    Convolution and self-attention mechanisms, as potent techniques in deep learning, are frequently employed in current methods for handwritten signature verification. Consequently, these methods can be categorized into convolution-based and self-attention-based approaches, each with its own advantages and limitations. Convolution demonstrates excellent performance in data processing, capturing local features and spatial information. However, it is constrained by the design of local receptive fields and parameter sharing, which may impede its ability to handle long-range dependencies and global information. On the other hand, the self-attention mechanism offers advantages in capturing global dependencies and dynamically computing attention weights, making it suitable for processing sequential data and global information. Nevertheless, when dealing with large-scale data, it may incur significant computational overhead and may focus excessively on global information, lacking the ability to handle local features.

    To enhance the performance and accuracy of handwritten signature verification, while minimizing computational overhead and effectively handling local and global features, we propose a novel one-dimensional Ghost-ACmix Residual Network (1D-ACGRNet) that integrates convolution and self-attention mechanisms. The main contributions are as follows:

    First of all, we introduce a novel Ghost-ACmix residual structure that combines the strengths of convolution and self-attention mechanisms to extract feature information. This approach allows for a synergistic utilization of global and local signature features, effectively addressing the issue of insufficient feature representation in signatures.

    Then, recognizing that the fusion of convolution and self-attention increases model complexity and computational overhead, we propose a Ghost-based Convolution and Self-Attention (ACG) module. In this module, during the underlying convolution operations of both convolution and self-attention, a straightforward feature transformation replaces the conventional convolution process, acquiring more comprehensive feature information. The advantage of this approach lies in reducing feature redundancy and lowering computational costs, thereby enhancing computational efficiency while maintaining model performance.

    Finally, we employ the random forest algorithm for feature selection on the global features, filtering out redundant information based on feature importance scores. We also apply the Principal Component Analysis (PCA) method for feature dimensionality reduction, thereby further reducing the computational load of the model.

    2 Related Work

    2.1 Previous Work

    After decades of exploration and research, numerous online handwritten signature verification methods have been proposed. Generally, these methods can be categorized into two types according to the approach of feature extraction: Parameter-based methods and function-based methods. The parameter-based signature verification methods primarily extracted a fixed-length feature vector from the time-series data of the entire or partial sampling points of signatures. Then, they transformed the comparison between two signatures into a comparison between their respective feature vectors. Jain et al. [9] considered the stroke count as a global feature. Lee et al. [10] used average speed, average pressure, and the number of pen lifts during the signature as global features. Ibrahim et al. [11] extracted 35 global features, including maximum signature velocity, pen-down count, signature aspect ratio, etc., for signature verification. This method showed acceptable overall verification performance, but some signers had relatively low verification accuracy. Fierrez-Aguilar et al. [12] introduced 100 global features and ranked them based on their discriminatory power in differentiating genuine and forged signatures. Vorugundi et al. [13] combined a one-dimensional convolutional neural network (1D-CNN) with a Siamese neural network (SNN) and used the extracted global feature set to represent online handwritten signatures. Additionally, some transform methods, such as wavelet transform [14], Fourier transform [15], and i-vector [16], have also been applied for extracting global features from online handwritten signatures.

    With the continuous development of deep learning and signature devices, many researchers have been devoted to adopting deep learning methods for online signature authentication. Recurrent Neural Networks (RNN) and Long Short-Term Memory (LSTM) networks were widely used in this context. Tolosana et al. [17] investigated different types of RNN networks for signature authentication and proposed a multi-RNN Siamese network as well as a Siamese network combining LSTM and the Gated Recurrent Unit (GRU). Greff et al. [18] conducted an analysis of eight different LSTM variants across three distinct tasks: Speech recognition, handwriting recognition, and polyphonic music modeling. Furthermore, Li et al. [19] proposed a Stroke-Based RNN model that combined signature stroke information with RNN networks in 2019. Lai et al. [20] proposed a method for synthesizing dynamic signature sequences to address the problem of lacking forged signatures during training. Vorugunti et al. [21] merged manually extracted features with features extracted by Autoencoders (AE) and applied the Depthwise Separable Convolutional Neural Network (DWSCNN) for the first time to online signature verification tasks. Additionally, a language-independent shallow model, the Shallow Convolutional Neural Network (sCNN) [22], based on a convolutional neural network (CNN), has been introduced. This model had a simple structure, using a custom shallow CNN to automatically learn signature features from the training data for signature authentication.

    The function-based online signature verification methods treated the time-series data of all or partial sampling points as feature information and performed signature verification by calculating the distance between the template signature and the test signature. This approach retained information from all sampling points, providing more data and higher security. Kholmatov et al. [23] improved the Dynamic Time Warping (DTW) method by constructing a three-dimensional feature vector for each signature using the distance between the current sample and all reference samples. Song et al. [24] selected five stable local temporal features: x-axis velocity, y-axis velocity, pressure, y-axis acceleration, and centripetal acceleration, and used the DTW-SCC algorithm for signature verification. Lai et al. [25] proposed an online handwritten signature verification method based on RNN and the signature Length-Normalized Path Signature (LNPS) descriptor to address the problem of inconsistent signature lengths, but this method involved heavy model computation. In addition, the Information Divergence-Based Matching Strategy [26], the Gaussian Mixture Model (GMM) [27], and other methods have also been applied to function-based signature verification.

    2.2 Existing Challenges

    From the above-mentioned research work, it can be concluded that researchers have made significant progress in the field of online signature authentication over the years. However, some challenges still need to be addressed. Firstly, improving the accuracy of online signature verification remains a primary focus and a difficult task in current research. Previous studies mainly used traditional methods or convolutional methods for feature extraction from online signatures. However, online signatures are complex, contain multiple types of information, and are susceptible to various external factors. In the process of feature extraction, the global or local information of the features is often overlooked, leading to limited effectiveness in feature extraction. Secondly, reducing the computational cost of signature verification, eliminating redundant information, and lightening the model structure pose significant challenges in online signature authentication methods. Moreover, signatures that have been deliberately practiced over time are difficult to distinguish from genuine signatures. Therefore, distinguishing between proficient forgeries and genuine penmanship is a challenging aspect of online signature authentication.

    2.3 Residual Network Architecture

    ResNet, short for Residual Network, is a widely recognized and significant neural network architecture in deep learning, proposed by He et al. in 2016 [28]. ResNet was designed to address the vanishing and exploding gradients that occur in deep neural networks. As the number of layers increases, information propagation may suffer from vanishing gradients, which makes it difficult for very deep networks to converge during training. ResNet addresses this issue by introducing a residual learning approach. It utilizes "residual blocks," which allow the network to directly learn the residual part of the target function. With residual blocks, even in a very deep network, the information propagation path becomes more direct, reducing gradient decay and making deeper networks easier to train and optimize. The core idea of ResNet is to introduce skip connections, or "shortcut connections," enabling the direct transfer of information between different layers. By incorporating skip connections, ResNet enables the flow of information across layers, facilitates the training of very deep neural networks, addresses the vanishing gradient problem, and ultimately leads to improved performance and convergence.
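    The residual idea above reduces to y = F(x) + x: the block learns only the residual F, and the identity path carries the input (and its gradient) through unchanged. A minimal numpy sketch, with a single hypothetical weight matrix standing in for the block's convolutional layers:

```python
import numpy as np

def residual_block(x, weight, activation=np.tanh):
    # F(x): a simple learned transform, a stand-in for the block's conv layers
    fx = activation(x @ weight)
    # Skip connection: the input is added back, so gradients can flow
    # through the identity path even when F(x) saturates.
    return fx + x

rng = np.random.default_rng(0)
x = rng.standard_normal((4, 8))        # a batch of 4 feature vectors
w = rng.standard_normal((8, 8)) * 0.1  # illustrative weights
y = residual_block(x, w)               # y has shape (4, 8)
```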

    2.4 Integration of Self-Attention and Convolution

    Convolution and self-attention are two powerful representation learning techniques that are usually treated as independent methods. However, they have strong inherent connections, because much of their computation actually involves similar operations. Specifically, a traditional convolution can be decomposed into smaller convolutions, which are then shifted and summed. Similarly, in the self-attention module, the projections of queries, keys, and values can be seen as multiple convolutions, followed by the computation of attention weights and the aggregation of values. As a result, the first stage of both modules involves similar operations. Moreover, the first stage is computationally more intensive than the second. These shared operations can be used to perform an elegant fusion of the two seemingly distinct methods, known as the hybrid model (ACmix) [29], which simultaneously leverages the advantages of both self-attention and convolution while incurring minimal computational overhead compared with pure convolution or self-attention counterparts.

    2.5 GhostNet

    The Ghost block [30] is a lightweight convolutional block that achieves performance comparable to standard convolution with fewer parameters and lower training costs. Traditional convolutions require significant computation and many parameters to extract feature maps, often producing feature maps containing rich or redundant information, with many of them being similar. The Ghost block obtains some "ghost" feature maps through linear transformations, which can be derived from other feature maps. It divides the convolution process into two parts: Standard convolution and transformations (identity mapping and depth-wise convolution). During this process, the Ghost block introduces the concept of "ghost," dividing the output feature maps into a main part and a ghost part. The main part consists of a small portion of feature maps generated by convolutions, preserving the primary feature information, while the ghost part consists of feature maps generated by cheap transformations, assisting the computation by providing additional information. This design allows the Ghost block to maintain low computational and memory overhead while offering high model expressiveness and accuracy.
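    The two-part split described above can be sketched in a few lines of numpy. This is an illustrative 1D simplification, not the paper's implementation: a 1×1 convolution produces the main maps, and cheap per-map 1D convolutions (standing in for depth-wise convolutions) derive the ghost maps:

```python
import numpy as np

def ghost_block(x, primary_filters, cheap_kernels):
    """x: (C_in, W) input feature map.
    Stage 1: an ordinary 1x1 convolution produces a few 'main' maps.
    Stage 2: cheap per-map linear operations derive the 'ghost' maps."""
    primary = primary_filters @ x                        # (m, W)
    ghosts = np.stack([
        np.convolve(primary[i % primary.shape[0]], k, mode="same")
        for i, k in enumerate(cheap_kernels)             # one cheap op per ghost
    ])                                                   # (g, W)
    return np.concatenate([primary, ghosts], axis=0)     # (m + g, W)

rng = np.random.default_rng(1)
x = rng.standard_normal((8, 32))                 # 8 input channels, width 32
pf = rng.standard_normal((4, 8))                 # 4 main maps via 1x1 conv
ck = [rng.standard_normal(3) for _ in range(4)]  # 4 cheap 3-tap kernels
out = ghost_block(x, pf, ck)                     # 8 output maps in total
```

The cheap stage costs only g small per-map convolutions instead of g full convolutions over all input channels, which is where the parameter savings come from.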

    Another advantage of the Ghost block is its adaptability and pluggability. It can be integrated into various convolutional neural network structures and model architectures. By applying the Ghost block to different parts of a network model, lightweight and efficient goals can be achieved without redesigning the entire model.

    3 Method

    3.1 Whole Framework of the Proposed Method

    The overall framework of our proposed online signature verification method based on the 1D-ACGRNet model is illustrated in Fig. 1. It primarily consists of two main components: Feature extraction and selection, and the Ghost-ACmix residual module for signature verification. In the first part, the original dataset undergoes feature extraction, resulting in a 54-dimensional global feature vector. Subsequently, a random forest algorithm [31] is employed to determine the importance of each feature dimension, and redundant features with importance scores of zero are removed. Additionally, PCA [31] is used to reduce the dimensionality of the 54 global features; the resulting four principal components are combined with the selected features to form the final feature information. The second part is signature authentication with the 1D-ACGRNet model. The stable global features processed in the first stage are input into a convolutional layer, followed by LeakyReLU activation and Average Pooling. The output is then fed into four Ghost-ACmix residual modules for feature modeling, and passed to a classifier composed of fully connected layers and a SoftMax function, which finally yields the authenticity decision for the signature. Table 1 provides the specific parameters of the model.

    Table 1: Detailed parameters of 1D-ACGRNet

    Figure 1: Whole framework of the proposed method

    3.2 Ghost-ACmix Residual Structure

    In traditional deep neural networks, as the network depth increases, the gradient signal becomes very small, making training difficult or even impossible. This paper employs the Ghost-ACmix residual structure shown in Fig. 2, which introduces skip connections that allow information to bypass certain layers, effectively overcoming the problem of vanishing gradients.

    Figure 2: Ghost-ACmix residual structure

    Considering that conventional online signature verification methods often overlook either global or local information, leading to incomplete feature extraction and insufficient accuracy, and given that online signature authentication techniques are often deployed on small-capacity embedded devices, requiring a reduction in overall model parameters and computational complexity, we adopt the ACG block, which combines convolution and self-attention with the Ghost method, to replace the standard convolutional part in the residual structure. This approach aims to improve model performance while remaining lightweight. Assuming that the input to the residual structure is denoted as x, the output y(x) can be represented by the following equation:

    y(x) = F(x) + x,

    where F(x) denotes the mapping learned by the ACG block on the residual branch.

    3.3 ACG Block

    The convolutional and self-attention modules typically follow different design paradigms. Traditional convolutional operations utilize aggregation functions over local receptive fields, applying a weighted summation to the data within each window based on the convolution filter weights. In contrast, self-attention modules apply weighted averaging operations based on the context of the input features, where attention weights are dynamically computed as similarity functions between relevant pairs of pixels. This flexibility allows the attention module to adaptively focus on different regions and capture more diverse feature information. The specific operation is shown in Fig. 3.

    For the convolution operation, given a convolution kernel K of size 1×k, it can be decomposed into k individual 1×1 convolution kernels, followed by subsequent shifting and summation operations. The specific decomposition process is as follows.

    Figure 3: ACG block

    The convolutional kernel K of size 1×k is decomposed into k individual 1×1 convolutional kernels, denoted as K_p, where p ∈ {0, 1, ..., k−1}. The input feature map is F ∈ R^{C_in×W}, and the output feature map is G ∈ R^{C_out×W}. The tensors f_i ∈ R^{C_in} and g_i ∈ R^{C_out} represent the feature tensors of pixel i in F and G, respectively. Utilizing these 1×1 convolutional kernels, convolution operations are performed at the corresponding positions of f_i, resulting in k feature maps denoted as g_i^p. Due to the high computational complexity of tensor displacement, a fixed-kernel depth convolution is used to perform the tensor displacement operation. Shifting and summing the k feature maps yields the pixel value g_i in the final output feature G.

    Through such decomposition, the standard convolution can be realized by a series of smaller 1×1 convolutional kernels and a two-step process of shifting and summation.
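    The equivalence just described can be verified numerically: each tap K_p of a 1×k kernel acts as a 1×1 convolution (a per-pixel multiply), and shifting the k partial maps before summing reproduces the direct convolution exactly. A small single-channel numpy sketch:

```python
import numpy as np

def conv1d_direct(f, kernel):
    # standard 1 x k convolution (as correlation) with zero padding
    k, pad = len(kernel), len(kernel) // 2
    fp = np.pad(f.astype(float), pad)
    return np.array([np.dot(fp[i:i + k], kernel) for i in range(len(f))])

def shift(x, s):
    # shift right by s (left if s < 0), filling vacated positions with zeros
    out = np.zeros_like(x, dtype=float)
    if s > 0:
        out[s:] = x[:-s]
    elif s < 0:
        out[:s] = x[-s:]
    else:
        out[:] = x
    return out

def conv1d_shift_sum(f, kernel):
    # decomposition: each tap kernel[p] is a 1x1 convolution; the k
    # partial maps are then shifted and summed
    pad = len(kernel) // 2
    return sum(shift(kp * f.astype(float), pad - p)
               for p, kp in enumerate(kernel))

f = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
k3 = np.array([0.25, 0.5, 0.25])
# conv1d_direct(f, k3) and conv1d_shift_sum(f, k3) agree elementwise
```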

    For self-attention operations, the attention module can also be decomposed into two stages. In the first stage, the input features are projected into the query q, key k, and value v using 1×1 convolutions. As before, let the input features be F ∈ R^{C_in×W} and the output features G ∈ R^{C_out×W}, with f_i ∈ R^{C_in} and g_i ∈ R^{C_out} representing the feature tensors of the corresponding pixel i in F and G. Here q_i, k_i, and v_i denote the query, key, and value tensors for pixel i, and W_q, W_k, and W_v are the projection weights for the query q, key k, and value v, respectively.

    The second stage involves computing attention weights and performing a weighted sum. The attention weights A are computed by taking the dot product of the query q and key k, scaled by the factor 1/√d to prevent gradient vanishing or exploding, i.e., A_{i,j} = softmax_j(q_i^T k_j / √d), where d is the feature dimension of q_i.

    Therefore, by weighting and summing the value vectors v with the attention weights, the pixel g_i in the output feature G is computed as g_i = Σ_j A_{i,j} v_j.
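    The two stages can be sketched directly in numpy: the projections of stage one are plain matrix products (1×1 convolutions), and stage two applies the scaled softmax weighting. Projection weights here are random placeholders:

```python
import numpy as np

def softmax(z, axis=-1):
    # numerically stable softmax
    e = np.exp(z - z.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(F, Wq, Wk, Wv):
    # Stage 1: three 1x1 convolutions (plain projections) give q, k, v
    q, k, v = Wq @ F, Wk @ F, Wv @ F              # each (d, W)
    d = q.shape[0]
    # Stage 2: A = softmax(q^T k / sqrt(d)); each output pixel g_i is
    # the attention-weighted sum of the value vectors
    A = softmax((q.T @ k) / np.sqrt(d), axis=-1)  # (W, W), rows sum to 1
    return v @ A.T                                # (d, W)

rng = np.random.default_rng(2)
F = rng.standard_normal((8, 16))                  # C_in = 8, W = 16 pixels
Wq, Wk, Wv = (rng.standard_normal((4, 8)) for _ in range(3))
G = self_attention(F, Wq, Wk, Wv)                 # output shape (4, 16)
```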

    Decomposing self-attention into two stages makes the computation more efficient, as a significant portion of the computation is concentrated in the first stage, similar to the computation pattern of traditional convolutions. Since both convolution and self-attention apply convolutional operations in the first stage, it is possible to perform the first-stage computation only once and then pass the intermediate features separately to the convolution and self-attention paradigms, finally combining their outputs through a weighted summation. This integration fully leverages the advantages of both while reducing computational costs.

    Although this processing approach reduces the parameter count to some extent, a significant amount of redundancy still exists in the intermediate features. To address this issue, Fig. 4 illustrates the idea of the Ghost block introduced in this paper. In the first stage of obtaining intermediate features, the first set of intermediate features is generated through a 1×1 convolution. Subsequently, the Ghost block generates the remaining sets of intermediate features. By using the Ghost block, the overall parameter count of the model is reduced, and the problem of feature redundancy is effectively avoided.

    Figure 4: Ghost block

    3.4 Classifier

    As depicted in Fig. 1, the classifier of the 1D-ACGRNet model is composed of several layers, including a Flatten layer, a Dropout layer, ReLU activation layers, and three fully connected layers [128, 64, 2]. In the Flatten layer, the features produced by the four Ghost-ACmix residual structures are flattened into a one-dimensional array and fed into the fully connected layers. Considering that the front-end network has already captured significant features relevant to signature authentication through max-pooling layers, we introduce a single Dropout layer (0.5) after the first fully connected layer to eliminate redundant feature information. Ultimately, the SoftMax activation function is employed to predict the authenticity of each signature sample. The design of the entire classifier aims to fully exploit effective features and prevent overfitting, thus enhancing the performance of signature verification.
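    The head described above (Flatten → FC-128 → FC-64 → FC-2, ReLU activations, one Dropout(0.5), SoftMax) can be sketched as follows. This is a minimal numpy illustration with random placeholder weights, not the trained model:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def classifier_head(feat, W1, W2, W3, drop_mask=None):
    x = feat.reshape(feat.shape[0], -1)    # Flatten layer
    h = np.maximum(x @ W1, 0.0)            # FC-128 + ReLU
    if drop_mask is not None:              # Dropout(0.5), training time only
        h = h * drop_mask / 0.5            # inverted dropout scaling
    h = np.maximum(h @ W2, 0.0)            # FC-64 + ReLU
    return softmax(h @ W3)                 # FC-2 + SoftMax

rng = np.random.default_rng(3)
feat = rng.standard_normal((5, 16, 8))       # batch of 5 feature maps
W1 = rng.standard_normal((128, 128)) * 0.05  # 16 * 8 = 128 flattened inputs
W2 = rng.standard_normal((128, 64)) * 0.05
W3 = rng.standard_normal((64, 2)) * 0.05
probs = classifier_head(feat, W1, W2, W3)    # (5, 2), each row sums to 1
```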

    4 Experimental Preparation

    4.1 Datasets and Preprocessing

    The experimental datasets in this study include the MCYT-100 datasets and the SVC-2004 Task2 datasets, described as follows:

    The MCYT-100 datasets are publicly available datasets commonly used in online signature verification research, released by the BiDA Laboratory at the Autonomous University of Madrid. The MCYT-100 Spanish datasets consist of 100 users, each with 25 genuine and 25 forged signature samples. The signature feature information includes five time-series features: Abscissa, ordinate, pen pressure, pen horizontal angle, and pen vertical angle.

    The SVC-2004 Task2 Chinese-English datasets are subset datasets provided at the First International Signature Verification Competition, held by the Hong Kong University of Science and Technology in 2004. The datasets contain a relatively small number of signatures and consist of 40 users, each with 20 genuine and 20 forged signature samples. The signature feature information includes seven time-series features: Abscissa, ordinate, pen pressure, time, pen horizontal angle, pen vertical angle, and pen-up/pen-down flag. For the sake of model generalization, the time and pen-up/pen-down flag time-series features were not used during the experiment.

    To mitigate the impact of input device variations and the influence of the number, size, and location of sampling points on each signature input, we applied smoothing and normalization to the X and Y coordinates of the signature feature sequences. This eliminates data errors originating from the data acquisition equipment and individual user factors. We utilized a five-point cubic smoothing filter to process the feature data. Here X and Y represent the abscissa and ordinate trajectory feature sequences, P denotes the pressure sequence feature, and AZ and AL denote the horizontal and vertical angular features, respectively. The time sequence of a signature is represented as {T1, T2, ..., Tn}, where Ti ∈ {X, Y} corresponds to the data points to be smoothed, i = 1, 2, ..., n, and n denotes the number of sampling points. The smoothing process is as follows:

    Subsequently, the selected features are normalized to scale the data of the five feature sequences within the range [0, 1]. The normalization uses the min-max formula x'_i = (x_i − min(x)) / (max(x) − min(x)), as follows:
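    The two preprocessing steps can be sketched in numpy. The interior coefficients below are the standard five-point cubic (Savitzky–Golay) smoothing weights, assumed here to match the paper's filter; endpoint handling is simplified by leaving the first and last two samples unchanged:

```python
import numpy as np

def five_point_smooth(x):
    # five-point cubic smoothing at interior points; coefficients sum to 1,
    # so constant sequences pass through unchanged
    c = np.array([-3.0, 12.0, 17.0, 12.0, -3.0]) / 35.0
    y = x.astype(float).copy()
    for i in range(2, len(x) - 2):
        y[i] = np.dot(c, x[i - 2:i + 3])
    return y

def min_max_normalize(x):
    # scale a feature sequence into [0, 1]
    x = x.astype(float)
    return (x - x.min()) / (x.max() - x.min())

t = np.linspace(0.0, 1.0, 50)
noisy = np.sin(2 * np.pi * t) + 0.05 * np.random.default_rng(4).standard_normal(50)
processed = min_max_normalize(five_point_smooth(noisy))
```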

    After computation, a total of 54 global signature features are extracted, as detailed in Table 2.

    Table 2: Signature global features

    4.2 Feature Selection

    When using multiple global features, the issue of feature redundancy inevitably arises. These redundant features not only increase the complexity of model training but may also negatively impact the model's performance in authenticating genuine and forged signatures. To mitigate this drawback, this study employs a random forest feature selection method based on ensemble learning strategies to select features from the 54 extracted global features. This method assigns an importance weight to each feature, and features with an importance weight of zero are removed. Simultaneously, the PCA method is applied to reduce the 54 global features to four dimensions, filtering effective features out of the numerous global features.
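    The selection-plus-reduction pipeline can be sketched as follows. The importance scores here are random placeholders standing in for a trained random forest's feature importances, and PCA is implemented directly via the eigendecomposition of the covariance matrix:

```python
import numpy as np

def drop_zero_importance(X, importances):
    # remove features whose importance score is zero (in the paper the
    # scores come from a trained random forest)
    keep = np.asarray(importances) > 0
    return X[:, keep], keep

def pca_reduce(X, n_components=4):
    # center the data, eigendecompose the covariance matrix, and
    # project onto the top principal components
    Xc = X - X.mean(axis=0)
    cov = np.cov(Xc, rowvar=False)
    vals, vecs = np.linalg.eigh(cov)
    order = np.argsort(vals)[::-1][:n_components]
    return Xc @ vecs[:, order]

rng = np.random.default_rng(5)
X = rng.standard_normal((100, 54))             # 100 signatures, 54 features
imp = rng.random(54)
imp[rng.choice(54, 6, replace=False)] = 0.0    # six redundant features
X_sel, keep = drop_zero_importance(X, imp)     # (100, 48)
X_low = pca_reduce(X, n_components=4)          # (100, 4)
```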

    4.3 Training Configuration

    This study conducted network hyperparameter optimization, training, and validation experiments using Python 3.8 and the TensorFlow 2.3 deep learning library. The adaptive moment estimation (Adam) optimizer is employed. To ensure the convergence of the network model and mitigate overfitting, the model weights with the highest validation rate are selected as the final trained model after multiple iterations of training. Table 3 shows the results of hyperparameter optimization during the model learning process.

    Table 3: Hyperparameter results

    4.4 Evaluation Methods

    In this research, a variety of metrics are employed to assess the model. The formulas for these evaluation metrics are as follows:

    Accuracy (ACC) is utilized to measure overall correctness. The False Acceptance Rate (FAR) quantifies the rate of incorrectly accepting forged signatures, while the False Rejection Rate (FRR) gauges the rate of incorrectly rejecting genuine signatures. The Equal Error Rate (EER) represents the point where FAR equals FRR, with lower values indicating superior performance. The Receiver Operating Characteristic (ROC) curve illustrates the trade-off between FAR and FRR. The Confusion Matrix (CM) presents detailed classification outcomes, including True Positives, True Negatives, False Positives, and False Negatives.

    5 Results and Analysis

    5.1 Analysis of Model Performance

    To assess the overall performance of the model, this section presents a comprehensive analysis of signature authentication on the MCYT-100 and SVC-2004 Task2 datasets, evaluating authentication performance for all users. During the experimental process, the feature set obtained after feature selection is employed as the input to the network. The experiment employed five genuine and five forged signatures for training.

    To further validate the effectiveness of the proposed method, we conducted experiments on both the 1D-ACGRNet model and the 1D-RNet model, comparing their EERs and plotting ROC curves. From Fig. 5, it can be observed that on the MCYT dataset, the EER for signature verification with the 1D-RNet model is 4.37%, while that of the 1D-ACGRNet model is 3.07%; the improved 1D-ACGRNet model thus shows a significant decrease in EER of 1.30%. On the SVC dataset, the 1D-RNet model's EER is 4.76%, while the 1D-ACGRNet model's EER is 4.17%, a reduction of 0.59% compared with the former.

    Figure 5: The ROC curves of the MCYT dataset and the SVC dataset for two model architectures

    In this subsection, we conduct comprehensive performance experiments using both the SVC dataset and the MCYT dataset, and the experimental results are presented in Table 4. It can be observed that as the amount of training data increases, FAR, FRR, and EER all show a decreasing trend on both datasets, while ACC improves. The detailed data of the confusion matrix corresponding to Table 4 is illustrated in Fig. 6. The experimental results show that on both the MCYT and SVC datasets, the 1D-ACGRNet model exhibits a substantial overall improvement in signature verification, confirming its effectiveness.

    Table 4: Results for the two datasets

    Figure 6: Confusion matrix of SVC and MCYT datasets

    5.2 Analysis of Ablation Experiments

    We employ ablation experiments to qualitatively and quantitatively analyze the impact of the various components of the research model. To ensure the robustness of the results, the experiment adopts a stratified k-fold cross-validation method, dividing the signature data of each user into k equal parts, with one part as the training data, and conducting k rounds of training and validation so that each data subset participates in the testing phase. In this experiment, the training data for each round consists of five signatures. Table 5 provides the specific results. In conclusion, the enhanced model employed in this study demonstrates significant performance improvements. It not only enhances accuracy but also reduces computational costs, rendering it more suitable for online signature verification.

    Table 5: Comparison of ablation experiment results

    5.3 Comparison with Similar Work

    To further validate the effectiveness of the proposed method, this section compares the performance of our approach with other methods. Table 6 presents experimental results obtained by previously proposed methods on the MCYT dataset and the SVC dataset. When training with 5 genuine and forged signatures, as well as with 10 genuine and forged signatures, the equal error rates (EER) of our approach are consistently lower than those of the other methods. This further validates the effectiveness of the proposed online handwritten signature verification method based on the one-dimensional Ghost-ACmix Residual Neural Network (1D-ACGRNet) introduced in this study.

    Table 6: Comparison of EER across different methods

    6 Conclusion

    This paper proposes an online signature verification method based on 1D-ACGRNet, which integrates convolution and self-attention mechanisms into a residual network; it is the first application of a residual network embedded with a mixture of convolution and self-attention in this field. Convolution and self-attention are proficient at extracting local and global information, respectively; combining them yields a more comprehensive representation of the signature and thereby improves the overall accuracy of the model. Feature selection removes redundant global features and, combined with dimensionality reduction, retains the effective feature information, while the Ghost method further reduces the model's computational complexity. The experimental results indicate that, when training with 5 genuine and forged signatures on the MCYT-100 and SVC-2004 Task2 datasets, the equal error rates are 3.07% and 4.17%, respectively; with 10 genuine and forged signatures, they are 0.91% and 2.12%. Compared with previous research, the proposed method achieves a clear improvement in performance.

    Acknowledgement: The authors would like to express their appreciation to the National Natural Science Foundation of China and the Liaoning Provincial Science and Technology Department Foundation.

    Funding Statement: This work is supported by the National Natural Science Foundation of China (Grant No. 62073227) and the Liaoning Provincial Science and Technology Department Foundation (Grant No. 2023JH2/101300212).

    Author Contributions: The authors confirm contribution to the paper as follows: study conception and design: F. Luan, X. Mu; data collection: X. Mu; analysis and interpretation of results: F. Luan, X. Mu, S. Yuan; draft manuscript preparation: X. Mu, S. Yuan. All authors reviewed the results and approved the final version of the manuscript.

    Availability of Data and Materials: Due to the nature of this research, the participants of this study did not agree for the code to be shared publicly; the supporting datasets, however, are publicly available. The SVC-2004 Task2 database is openly available at: https://cse.hkust.edu.hk/svc2004/download.html. The MCYT-100 database is openly available at: http://atvs.ii.uam.es/atvs/mcyt100s.html.

    Conflicts of Interest:The authors declare that they have no conflicts of interest to report regarding the present study.
