
    Improved Shark Smell Optimization Algorithm for Human Action Recognition

Computers, Materials & Continua, 2023, Issue 9

Inzamam Mashood Nasir, Mudassar Raza, Jamal Hussain Shah, Muhammad Attique Khan, Yun-Cheol Nam and Yunyoung Nam

1 Department of Computer Science, COMSATS University Islamabad, Wah Campus, Wah Cantt, 47040, Pakistan

2 Department of Computer Science, HITEC University, Taxila, Pakistan

3 Department of Architecture, Joongbu University, Goyang, 10279, South Korea

4 Department of ICT Convergence, Soonchunhyang University, Asan, 31538, Korea

ABSTRACT Human Action Recognition (HAR) in uncontrolled environments aims to recognize different actions from a video. An effective HAR model can be employed for applications such as human-computer interaction, health care, person tracking, and video surveillance. Machine Learning (ML) approaches, specifically Convolutional Neural Network (CNN) models, have been widely used and have achieved impressive results through feature fusion. The accuracy and effectiveness of these models remain the biggest challenge in this field. In this article, a novel feature optimization algorithm, called improved Shark Smell Optimization (iSSO), is proposed to reduce the redundancy of extracted features. The proposed technique is inspired by the behavior of white sharks and how they find the best prey in the whole search space. The proposed iSSO algorithm divides the Feature Vector (FV) into subparts, and a search is conducted to find optimal local features in each subpart of the FV. Once local optimal features are selected, a global search is conducted to further optimize these features. The proposed iSSO algorithm is employed on nine (9) selected CNN models, chosen based on their top-1 and top-5 accuracy in the ImageNet competition. To evaluate the model, two publicly available datasets, UCF-Sports and Hollywood2, are selected.

KEYWORDS Action recognition; improved shark smell optimization; convolutional neural networks; machine learning

    1 Introduction

Human Action Recognition (HAR) is the recognition of a person's actions from imaging data and has various applications. Recognition approaches can be divided into three categories: multi-model, overlapping categories, and video sequences [1]. The data used for recognition is the major difference between the image and video categories. Data in the form of images and videos is acquired through cameras in controlled and uncontrolled environments. With the advancement of technology in past decades, various smart devices have been developed to collect image and video data for HAR, health monitoring, and disease prevention [2]. Extensive research has been carried out on HAR through images or videos over the last three decades [3,4]. The human visual system acquires visual information about an object, such as its movement, shape, and their variations. This information is used to investigate the biophysical processes of HAR. Computer vision systems have achieved very good accuracy while catering to different challenges such as occlusion, background clutter, scale and rotation invariance, and environmental changes [5].

Depending on action complexity, HAR can be divided into primitive, single-person, interaction, and group action recognition [6]. The basic movement of a single human body part is considered a primitive action; a set of primitive actions by one person constitutes a single-person action; an interaction involves a collection of humans and objects; and collective actions performed by a group of people are group actions. Computer vision-based HAR systems are divided into hand-crafted feature-based methods and deep learning-based methods. Combined frameworks of hand-crafted and deep features have also been employed by many researchers [7].

Data plays an important role in efficient HAR systems. HAR data is categorized into color channels, depth, and skeleton information. Texture information can be extracted from color channels, i.e., RGB, which is close to the visual appearance, but illumination variations can affect the visual data [8]. Depth map information is invariant to lighting changes, which is helpful for foreground object extraction. 3D information can also be captured through a depth map, but noise factors should be considered while capturing it. Skeleton information can be gathered through color channels and depth maps, but it can be distorted by environmental factors [9]. HAR systems use features at different levels; for example, the whole data stream is used as the input for HAR in [10]. Apart from features, motion is an important factor that can be incorporated into the feature computation step; this includes optical flow for capturing low-level feature information across multiple video frames. Some researchers have included motion information in the classification step with Conditional Random Fields, Hidden Markov Models, Long Short-Term Memory (LSTM), Recurrent Neural Networks (RNN), and 3D Convolutional Neural Networks (CNN) [11–15]. These HAR systems achieve good recognition accuracy by using the most appropriate feature set.

A CNN-based convolutional 3D (C3D) network was proposed in [16]. The major difference between standard 3D CNNs and the proposed network was that it utilized the whole video as input instead of a few frames or segmented frames, which makes it robust for large databases. The architecture of the C3D network comprises eight convolutional layers, five max-pooling layers, two fully connected layers, and a final softmax loss layer. The UCF-101 dataset was utilized to evaluate the best configuration of the proposed network architecture; the best performance was achieved using a 3×3×3 convolutional filter without updating the other parameters. Researchers proposed RNNs [17] to overcome the limitation of CNN models in deriving information across long time lapses. RNNs have proved robust at extracting time-dimension features but have one drawback: gradient disappearance. This problem was addressed by the Long Short-Term Memory network (LSTM) [18], which utilizes processor cells to gauge the integrity and relevance of information. Normally, input gates, output gates, and forget gates are utilized in the processor. The gates control the information flow so that unnecessary information, which would otherwise require large memory chunks, is not stored during long-term tasks.
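For reference, the gating mechanism described above follows the standard LSTM formulation (the textbook form, not reproduced from the original paper), where σ is the logistic sigmoid, x_t the current input, h_{t-1} the previous hidden state, and ⊙ element-wise multiplication:

$$
\begin{aligned}
f_t &= \sigma(W_f[h_{t-1}, x_t] + b_f) &&\text{(forget gate)}\\
i_t &= \sigma(W_i[h_{t-1}, x_t] + b_i) &&\text{(input gate)}\\
o_t &= \sigma(W_o[h_{t-1}, x_t] + b_o) &&\text{(output gate)}\\
c_t &= f_t \odot c_{t-1} + i_t \odot \tanh(W_c[h_{t-1}, x_t] + b_c) &&\text{(cell state)}\\
h_t &= o_t \odot \tanh(c_t) &&\text{(hidden state)}
\end{aligned}
$$

The forget gate is what allows the network to drop unnecessary information instead of storing it across long-term tasks.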

A ConvNet architecture for the spatiotemporal fusion of video fragments was evaluated on the UCF-101 dataset, achieving an accuracy of 93.5%, and on HMDB-51, achieving an accuracy of 69.2% [19]. Another architecture, the Factorized Spatio-Temporal Convolutional Network (FSTCN), was proposed to handle 3D signals effectively and efficiently; it achieved 88.1% accuracy on UCF-101 and 59.0% on HMDB-51 [20]. In another method, LSTM models were trained with a differential gating scheme, which focuses on the varying gain due to slow movements between successive frames based on the Derivative of States (DoS); the combination is called differential RNN (dRNN). The method was implemented on the KTH and MSRAction3D datasets, achieving accuracies of 93.96% and 92.03%, respectively [21].

This article presents an improved form of the Shark Smell Optimization (SSO) algorithm, which reduces redundant features. The proposed algorithm utilizes properties of both SSO and White Shark Optimization (WSO) to solve the redundancy issue. The proposed iSSO divides the population into sub-spaces to find locally and globally optimal features; in the end, the extracted local features are used to optimize the global features. Features are extracted using 9 pre-trained CNN models, which are selected based on their top-1 and top-5 accuracies in the ImageNet competition. The model is tested on two publicly available datasets, UCF-Sports (D1) and Hollywood2 (D2), and obtains better results than state-of-the-art (SOTA) methods.

    2 Proposed Methodology

In uncontrolled environments with various viewpoints, illuminations, and changing backgrounds, traditional hand-crafted features have proved insufficient [22]. In the age of big data and the evolution of ML methods, Deep Learning (DL) has achieved remarkable results [23–25]. These results have motivated researchers around the globe to apply DL methods to domains involving video data. The ImageNet classification challenge drastically changed the direction of DL methods when CNNs made a huge breakthrough. The main difference between CNN methods and local feature-based methods is that a CNN iteratively and automatically extracts deep features through its interconnected layers.

    2.1 Transfer Learning of Pre-Trained CNN Models

Artificial Intelligence (AI) and Machine Learning (ML) have a sub-domain called Transfer Learning (TL), which transfers the learned knowledge of one problem (the base problem) to another problem (the target problem). TL improves the learning of a model through the data provided for the target problem. A model trained to classify Wikipedia text can be utilized to classify the text of simple documents after TL. A model trained to classify cars can also classify birds: the nature of the problem is the same, which is to classify objects. TL provides scalability to a trained model, enabling it to recognize different types of objects. Since 2012, when the landmark CNN model AlexNet [22] was proposed, many CNN architectures have been introduced. The basis for all these models was a competition built around the ImageNet dataset [26], which has 1000 classes. The efficiency of CNN models proposed to date is still measured by how well they perform on the ImageNet dataset. In this research, nine of the most widely used CNN models are selected; through TL, features of input images from the selected datasets are extracted. Table 1 lists all selected CNN models along with their depth, size, input size, number of parameters, and their top-1 and top-5 accuracies on the ImageNet dataset.

Table 1: Different characteristics of selected pre-trained CNN models
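As a minimal sketch of the TL workflow just described (illustrative only: the authors' pipeline is MATLAB-based, and the model choice, class count, and learning rate below are assumptions for demonstration), a pre-trained backbone is frozen and only a new classification head is trained on the target problem:

```python
import torch
import torch.nn as nn
from torchvision import models

# Backbone pre-trained on the base problem (ImageNet, 1000 classes).
net = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)

# Freeze the knowledge learned on the base problem.
for param in net.parameters():
    param.requires_grad = False

# Replace the ImageNet head with one sized for the target problem,
# e.g., 10 action classes as in UCF-Sports (D1).
net.fc = nn.Linear(net.fc.in_features, 10)

# Only the new head is updated during fine-tuning.
optimizer = torch.optim.Adam(net.fc.parameters(), lr=1e-4)
```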

The structure of these selected pre-trained models differs because of the nature and arrangement of their layers; the chosen feature extraction layer and the number of extracted features per image vary from model to model. For Vg, the fc7 layer is selected, extracting 4096 features for a single image. 1280 and 4032 features are extracted from the global_average_pooling2d_1 and global_average_pooling2d_2 layers of the Mo and Na models, respectively. avg_pool is selected as the feature extraction layer for the Re, De, Xe, and In models, which extract 2048, 1920, 2048, and 1536 features, respectively. avg1 is selected as the feature extraction layer for Da, extracting 1024 features per image. When the Ef model is used as a feature extractor, it extracts 1280 features from the GlobAvgPool layer. All extracted features are forwarded to iSSO for optimization.
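A hedged sketch of how such per-layer deep features can be pulled from a pre-trained network (PyTorch used for illustration; the layer names quoted above come from the MATLAB model zoo, so the hook target here is an assumed analogue, not the authors' exact layer):

```python
import torch
from torchvision import models, transforms
from PIL import Image

net = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
net.eval()

# Hook the global average pooling layer, analogous to 'avg_pool'
# of the Re model above (yields a 2048-dimensional feature vector).
features = {}
net.avgpool.register_forward_hook(
    lambda mod, inp, out: features.update(fv=out.flatten(1)))

preprocess = transforms.Compose([
    transforms.Resize(256), transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

img = preprocess(Image.open("frame.jpg").convert("RGB")).unsqueeze(0)
with torch.no_grad():
    net(img)                 # forward pass fills features['fv']
fv = features["fv"]          # shape (1, 2048); forwarded to iSSO
```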

2.2 Improved Shark Smell Optimization (iSSO)

The meta-heuristic model used in this article is an improved form of Shark Smell Optimization (SSO) [33]. SSO was inspired by shark species, which are considered among the most dangerous and strongest predators [34]. Sharks are creatures with a keen sense of smell, highly contrasted vision due to their sturdy eyesight, and powerful muscles. They have more than 300 sharp, pointed, triangular teeth in their gigantic jaws. Sharks usually strike with a large and abrupt bite, which proves so sudden that the prey cannot avoid it. They hunt prey by using their extreme senses of smell and hearing to pick up the traits of prey. The iSSO algorithm initially divides the whole search space into subparts. The algorithm then performs local and global searches to find the optimum prey in both the local and global search spaces. Once an optimum prey is located, the search continues to find the optimal prey in the remaining subparts. The process described below is for a single subpart; the whole process is repeated for all subparts. Another factor is the quantity of selected optimal features, which is controlled by a parameter denoting the total number of selected features.
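The following sketch captures only the divide/local-search/global-search structure described above. It is a reconstruction from the prose, not the authors' implementation; the greedy ranking stands in for the full shark-movement updates of Sections 2.2.1–2.2.2, the subpart count (14) and selection fraction (0.65) mirror the values reported later, and `fitness` is an assumed scoring function:

```python
import numpy as np

def isso_select(fv, fitness, n_subparts=14, keep_frac=0.65):
    """Structural sketch of iSSO feature selection.

    fv:      (n_samples, n_features) feature matrix from a CNN.
    fitness: callable scoring a single feature column (assumed).
    """
    subparts = np.array_split(np.arange(fv.shape[1]), n_subparts)

    # Local search: find the best "prey" inside each subpart.
    local_best = []
    for idx in subparts:
        scores = np.array([fitness(fv[:, j]) for j in idx])
        k = max(1, int(keep_frac * len(idx)))
        local_best.extend(idx[np.argsort(scores)[-k:]])

    # Global search: re-optimize over the pooled local winners.
    local_best = np.array(local_best)
    scores = np.array([fitness(fv[:, j]) for j in local_best])
    k = max(1, int(keep_frac * len(local_best)))
    return np.sort(local_best[np.argsort(scores)[-k:]])
```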

    2.2.1 Prey Tracking

Sharks wander the ocean freely, like any other sea organism, searching for prey. During that search, sharks update their positions based on the traits of the prey. They apply all their tricks to locate, stalk, and track down the prey. All shark senses, along with their average distance ranges, are illustrated in Fig. 1. These senses help sharks exploit and search the whole space when hunting prey.

Figure 1: Senses of a shark along with their average distance ranges

2.2.2 Prey Searching (Exploration and Exploitation)

Sharks have a highly unusual sense of hearing: they can detect wavelengths along the full length of their body. Their whole body can sense changes in water pressure, revealing the nearby movements of targeted prey. A shark's attention is usually attracted by moving prey, which leaves a disturbance in the water pressure. Sharks even have body organs that can detect the tiny electromagnetic fields produced by the swimming of prey. Turbulence due to the prey's motion helps sharks sense the frequency of the waves and accurately predict the size and location of the prey. The velocity of the waves detected by sharks is described as:
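The equation itself did not survive in this copy of the text; given the definitions that follow, it is presumably the standard wave relation between velocity, wavelength, and frequency:

$$\upsilon = \omega \cdot \omega_f$$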

where υ denotes the velocity of the wavy motion, ω denotes the wavelength that defines the distance between shark and prey, and ωf denotes the frequency of the waves during the wavy motion. This frequency is determined by the total number of cycles completed by the shark in a second. Sharks utilize this extraordinary sense to exploit the whole space and detect prey. Once prey is in the nearby area, the shark's senses grow exponentially, and it travels towards the pinpointed position of the prey. The following equation is assumed to update the position of a shark under constant acceleration:
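This equation is also missing from this copy; the variables defined below match the standard constant-acceleration kinematics update, which is presumably the intended form:

$$\rho = \rho_i + \upsilon_i\,\Delta T + \tfrac{1}{2}\,Acc\,\Delta T^2$$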

here, the new position of the shark is denoted by ρ, the initial position by ρi, and the initial velocity by υi. The interval taken to travel between the current and initial positions is represented by ΔT, and Acc denotes the constant acceleration factor. Many prey disperse their scent when they leave a position. When a shark reaches that position and finds no prey, it starts to search for the prey randomly, exploring nearby areas using its senses of smell, hearing, and sight. The first step of the algorithm is to generate a search space of all possible solutions. The search space of m sharks in n dimensions, with the positions of all sharks, is presented as:
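The matrix is not reproduced in this copy; from the description below, the population presumably has the usual m × n layout, one row per shark and one column per decision variable:

$$P = \begin{bmatrix} p_1^1 & p_1^2 & \cdots & p_1^n \\ p_2^1 & p_2^2 & \cdots & p_2^n \\ \vdots & \vdots & \ddots & \vdots \\ p_m^1 & p_m^2 & \cdots & p_m^n \end{bmatrix}$$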

here, P is a 2D matrix containing the positions of all sharks in the search space, n denotes the total number of decision variables, and each entry represents the xth shark in the nth dimension. This population is generated with randomly initialized upper and lower bounds as:
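The initialization equation is likewise missing; the standard uniform-bound initialization implied by the sentence above would be:

$$p_x^j = lb_j + r \cdot (ub_j - lb_j), \qquad r \sim U(0, 1)$$

where lb_j and ub_j are the lower and upper bounds of the jth decision variable.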

Now it is time for the shark to move toward the prey. When a shark detects the waves of moving prey, it locks onto its target and starts moving towards that prey, which is defined as:
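The movement equation is missing from this copy. In the WSO family of algorithms that iSSO builds on, this step typically takes a PSO-like velocity update; one plausible reconstruction, consistent with the acceleration coefficient C and the ?1, ?2 terms defined below, and offered only as an assumption rather than the authors' exact rule, is:

$$\upsilon_{s+1} = \upsilon_s + C\,?_1\,r_1\,(\rho_{best} - \rho_s) + C\,?_2\,r_2\,(\rho_{gbest} - \rho_s)$$

where r_1 and r_2 are uniform random numbers in [0, 1], and ρ_best and ρ_gbest are the local and global best positions found so far.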

here, C represents the coefficient of acceleration; after extensive experiments, its value in this work is set to 2.145. ?1 and ?2 are calculated as:

here, the maximum and current iterations are denoted by S and s. Active motion of sharks is achieved using the subordinate and initial velocities, denoted by ?max and ?min. For this work, ?max and ?min are set to 0.14 and 1.35, respectively.

Sharks spend most of their time searching for optimal prey and constantly change their positions to achieve it. Their position changes when they either smell the scent of prey or feel the movement in the waves caused by prey. Sometimes a potential prey leaves its position, leaving some scent behind, either because it senses a shark approaching or while searching for food. In this case, the shark starts to stray randomly in search of other prey. The position of the shark is then updated as per the following equation:

here, scd is a factor that changes the direction of the moving shark, ωfmax and ωfmin denote the maximum and minimum frequencies during its motion, and p and q denote positive constants that maintain the exploitation and exploration behavior of the shark. In this work, the values of ωfmax and ωfmin are kept at 0.31 and 0.03 after in-depth analysis. Sharks also exhibit a behavior of maintaining their position close to the prey:

Sense is a parameter denoting the key senses of a shark while moving towards the prey; it is defined as:

here, r is a positive constant used to manage the exploitation and exploration behavior of sharks. During the evaluation of this study, the value of r is kept at 0.002.

The behavior of sharks is simulated mathematically by preserving the first two optimal solutions and updating the white shark position with respect to these optimum solutions. The following equation is used to preserve the stated behavior:

This relation shows that the position of the shark is always updated with respect to the optimal position of the prey. The final location of the shark will be somewhere in the search space near the optimum prey. The complete iSSO procedure is presented in Algorithm 1.

After extensive experiments, the number of subparts and the feature-selection fraction are set to 14 and 0.65, respectively. The impact of these values is also presented in the results section.

    3 Experimental Results

The proposed iSSO algorithm is evaluated by performing multiple experiments under different parameters, which verify the performance of the algorithm. This section provides an in-depth view of the performed experiments, along with an ablation analysis and a comparison with existing techniques.

    3.1 Experimental Setup and Datasets

The proposed iSSO algorithm is evaluated on two (2) benchmark datasets: the UCF-Sports dataset (D1) [35] and the Hollywood2 dataset (D2) [36]. D1 contains a total of 150 videos from 10 classes, representing human actions from different viewpoints and a range of scenes. D2 contains a total of 1,707 videos across 12 classes, extracted from 69 Hollywood movies.

The proposed iSSO model is trained, tested, and validated using an HP Z440 workstation with an NVIDIA Quadro K2000 GPU with 2 GB of GDDR5 memory. This card has 382 CUDA cores, a 128-bit memory interface, and 17 GB/s memory bandwidth. MATLAB R2021a was used for training, testing, and validation. All selected pre-trained models are transfer-learned with an initial learning rate of 0.0001, decreased by an average of 5% every 7 epochs. The whole process runs for 160 epochs with an overall momentum of 0.45. The selected datasets are split using the standard 70-15-15 ratio for training, testing, and validation. During testing of the proposed model, eight (8) classifiers were trained: Bagged Tree (BTree), Linear Discriminant Analysis (LDA), three kernels of k-Nearest Neighbor (kNN), i.e., Ensemble Subspace kNN (ES-kNN), Weighted kNN (W-kNN), and Fine kNN (F-kNN), and three kernels of Support Vector Machine (SVM), i.e., Cubic SVM (C-SVM), Quadratic SVM (Q-SVM), and Multi-class SVM (M-SVM). The performance of the proposed iSSO algorithm is evaluated using six metrics: Sensitivity (Sen), Correct Recognition Rate (CRR), Precision (Pre), Accuracy (Acc), Prediction Time (PT), and Training Time (TT). All experimental results presented in the next section were obtained by performing each experiment at least five times using the same environment and factors.
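For clarity, the classification metrics listed above follow their standard confusion-matrix definitions (standard forms, assumed rather than restated in the paper), with TP, TN, FP, and FN the true/false positives and negatives:

$$Sen = \frac{TP}{TP + FN}, \qquad Pre = \frac{TP}{TP + FP}, \qquad Acc = \frac{TP + TN}{TP + TN + FP + FN}$$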

    3.2 Recognition Results

The efficiency of the proposed model is evaluated through multiple experiments. Initially, the impact of each selected pre-trained model is noted by feeding the dataset through it and extracting features from the selected output layer. In the next experiment, the proposed iSSO algorithm is employed on the extracted deep features. Finally, the iSSO-enabled CNN model with the highest accuracy is forwarded to the other classifiers. It is noteworthy that all selected classifiers were used during this experiment, but F-kNN achieved the highest accuracy, so Table 2 contains the results of F-kNN. On D1, the Na model achieved the highest average Acc of 97.44%, with a variation of ±1.36% across the five experiments. Similarly, Na obtained a 96.97% CRR. The F-kNN classifier took 206 min on average to train and 0.53 s to predict an input image. The lowest average Acc of 73.02% was obtained by the Vg model, whereas Ef took the highest TT of 347 min.

Table 2: Performance of iSSO on selected CNN models on D1

Once the best-performing model is selected in the first experiment, it is used to train all selected classifiers. As mentioned earlier, F-kNN performed best on D1 when Na was selected as the base CNN model. This classifier achieved an average Sen of 97.37%, an average CRR of 96.97%, and a Pre of 97.28%. The second-best average Acc of 91.75% was achieved by ES-kNN. The worst-performing classifier was BTree, which could only achieve an 80.83% average Acc. The lowest average TT of 193 s and the lowest average PT of 0.39 s were recorded for LDA, but it could only achieve 84.16% Acc.

The proposed model is also evaluated on D2, where the Da network achieved the maximum average Acc of 80.66%, with a variation of ±1.04% across five runs of the same experiment. The average CRR of this model is 79.68%. The best classifier for this model is M-SVM, which took 139 min on average to train and 0.48 s on average to predict an input image. The second-best average Acc of 78.27% is achieved by De, which also achieves a 78.66% CRR; for this model, M-SVM took 221 min to train and 0.54 s to predict. The lowest average accuracy of 60.02% on D2 is again obtained by Vg, for which the selected classifier took 297 min to train and 1.45 s to predict an input image. The performances of all selected CNN models with and without the iSSO algorithm are compared in Table 3.

Table 3: Performance of iSSO on selected CNN models on D2

After selection of the best-performing CNN model, all selected classifiers are trained on the extracted features of that CNN model. During this experiment, the selected evaluation metrics are used to record the performance of each classifier. M-SVM achieved the best average Sen of 79.22%, the best average CRR of 79.68%, the best Pre of 79.84%, and the best average Acc of 80.66%. This classifier requires 280 min for training and 0.48 s to predict an input image. The second-best average Acc of 75.88% is obtained by W-kNN, which took 280 min to train and 0.36 s to predict. The lowest TT of 115 min is recorded for BTree, but its average Acc is only 50.95%.

    3.3 Ablation Analysis of iSSO

This section discusses the importance of the parameter values used in the iSSO algorithm. Note that all readings in this section are obtained using the network that achieved the highest accuracy for each dataset, i.e., Na for D1 and Da for D2. Secondly, the classifier used for this analysis is also taken from the best experiment for each dataset, i.e., F-kNN for D1 and M-SVM for D2. All experiments in this analysis are performed three times, and the average reading of the three runs is reported for each parameter.

The first and most important factor of the iSSO algorithm is the number of subparts into which the whole search space (the feature vector) is divided. Table 4 shows the impact of different values of this parameter on accuracy and training time. It is noteworthy that a smaller number of subparts decreases TT but reduces the performance of the algorithm.

Table 4: Impact of different numbers of subparts

Another important parameter is the selection fraction, which determines the total number of features retained after the algorithm completes. Its impact on TT and Acc is shown in Table 5. It is visible that with an increase in selected features, Acc and TT increase for both datasets until the fraction reaches 0.65.

Table 5: Impact of different values of the selection fraction


The coefficient of acceleration C determines how quickly the shark moves from its current position. The quicker the movement, the less exploration it performs. The acceleration must be neither too fast nor too slow: a faster shark will skip important potential prey, while a slower shark will take too much time in exploration. Another factor is the shark behavior constant r used during the exploitation and exploration process. The value of r determines the intervals at which each prey should be searched. A smaller value of r increases the searching time and ultimately increases TT. Table 6 compares different values of C and r.

Table 6: Impact of different values of C and r

The values of ?max, ?min, and the remaining parameters do not majorly impact the overall performance of iSSO, specifically in terms of Acc and TT. At the selected values of these parameters, iSSO obtains the highest possible performance; tweaking them changes the results only marginally, which can be ignored. The validation accuracy and validation loss of the proposed model on both datasets are shown in Fig. 2, where Figs. 2a and 2b show the validation accuracy and validation loss on D1, respectively, while Figs. 2c and 2d show the validation accuracy and validation loss on D2. It can be seen that 50% accuracy on both datasets is reached within the first 40 epochs, and the validation loss drops below 50% in the same number of epochs, which shows the fast convergence of the proposed model.

Figure 2: Validation accuracy and validation loss on D1 and D2

    3.4 Comparison with Existing Techniques

A hybrid model was proposed in [37] by combining Speeded-Up Robust Features (SURF) and Histogram of Oriented Gradients (HOG) for HAR. This model was capable of extracting global and local features, as it obtained motion regions by adopting background subtraction. Motion edge features, effectively described by directional controllable filters, were utilized in HOG to extract local edge information. A Bag of Words (BoW) model was also built by performing k-means clustering. In the end, Support Vector Machines (SVM) were used to recognize the motion features. This model was tested on the SBU Kinect Interaction, UCF Sports, and KTH datasets and achieved accuracies of 98.5%, 97.6%, and 98.2%, respectively. The QWSA-HDLAR model was proposed in [38] for the recognition of human actions. This model utilized a TL-enabled CNN architecture, called NASNet, for feature extraction. The NASNet model also employs a hyper-parameter tuning process to optimally increase performance. In the end, a hybrid model containing CNN and RNN, called CNN-BiRNN, was used to classify different human actions. This model was tested on D1 and KTH, achieving average recognition rates of 99.0% and 99.6%, respectively.

An attention mechanism based on bi-directional LSTM (BiLSTM) and dilated CNN (dCNN) was proposed in [39], which extracted effective features from HAR frames. Salient features were extracted using the dCNN and fed to the BiLSTM model for learning. The learning process helped the model capture long-term dependencies, which boosted evaluation performance and extracted HAR-related cues and patterns. This model was evaluated on J-HMDB, D1, and UCF11, achieving 80.2%, 99.1%, and 98.3% accuracy, respectively. A DCNN-based model was proposed in [40], which took globally contrasted frames as input. The ResNet-50 model was transfer-learned, and features were extracted from a fully connected layer and a global average pooling layer. Both feature sets were fused using Canonical Correlation Analysis (CCA) and then fine-tuned using a Shannon entropy-based technique. The proposed model was tested on the KTH, UT-Interaction, YouTube, D1, and IXMAS datasets, achieving accuracies of 96.6%, 96.7%, 100%, 99.7%, and 89.6%, respectively. The authors in [41] proposed a HAR model using feature fusion and optimization techniques. Before feature engineering, a color transformation was applied to enhance the video frames. Optical flow extracted the moving regions after frame fusion, and these regions were forwarded for texture and shape feature extraction. Finally, weighted entropy was utilized to select related features, and M-SVM was used to classify the actions. This model was evaluated on the UCF YouTube, D1, KTH, and Weizmann datasets, achieving 94.5%, 99.3%, 100%, and 94.5% accuracy, respectively. Table 7 compares the proposed model with existing techniques.

Table 7: Comparison with existing techniques on D1

HAR was carried out using three models in [44], covering extraction of compact features, re-sampling of the shot framerate, and detection of shot boundaries. The main objective of this research was to emphasize the extraction of relevant features. This model was tested on the Weizmann, UCF, KTH, and D2 datasets; using the second model, it achieved 97.8%, 95.6%, 97.0%, and 73.6% accuracy, respectively. A lightweight deep learning model was proposed in [45], which recognizes human actions in surveillance streams using CNN models. An ultra-fast object tracker named Minimum Output Sum of Squared Error (MOSSE) locates the subject in a video, while the LiteFlowNet CNN model extracts pyramid convolutional features from successive frames. Finally, a Gated Recurrent Unit (GRU) was trained to perform HAR. Experiments were conducted on the YouTube, Hollywood2, UCF-50, UCF-101, and HMDB51 datasets, achieving overall average accuracies of 97.1%, 71.3%, 95.2%, 95.5%, and 72.3%, respectively.

Double-constrained BoW (DC-BOW) was presented in [46], which utilized spatial information of features at three different scales: the hidden scale, the presentation scale, and the descriptor scale. The Length and Angle Constrained Linear Coding (LACLC) method was obtained by constructing a loss function between local features and visual words. To optimize the features, spatial differentiation between the extracted features of every cluster was considered. LACLC and a hierarchical weighted approach were applied to extract the related features. The proposed model was tested on the UCF101, D2, UCF11, Olympic Sports, and KTH datasets, achieving accuracies of 88.9%, 67.13%, 96%, 92.3%, and 98.83%, respectively. A Spatiotemporally Attentive 3D Network (STA3D) was proposed in [42] for propagating important temporal descriptors and refining spatial descriptors in 3D Fully Convolutional Networks (3D-FCN). An adaptive up-sampling module was also proposed to refine spatial descriptors and propagate temporal descriptors. This technique was evaluated on D1 and D2, achieving 90% and 71.3% accuracy, respectively. A DCNN-based model was proposed in [43] with three modules: reasoning and memory, attention, and high-level representation. The first module concentrated on temporal and spatial reasoning so that temporal and spatial patterns could be efficiently discriminated. The second and third modules were mainly utilized for learning from captured spatial saliencies. This model was evaluated on D1 and D2, achieving 88.9% and 78.9% accuracy, respectively. Table 8 compares the performance of the proposed model with existing techniques.

Table 8: Comparison with existing techniques on D2

    4 Conclusion

In this article, an analysis of pre-trained CNN models is presented, where 9 models are selected based on their total parameters, size, and top-1 and top-5 accuracies. These selected pre-trained CNN models are trained on the selected datasets using TL. The output layer of each pre-trained model is specified, and no experiments are performed on the selection of the output layer. The extracted features of these CNN models are forwarded to the proposed iSSO, an improved version of the traditional SSO algorithm. The iSSO algorithm divides the feature vector into subsets, and each subset is then used to find the locally and globally best features. The selection of local and global best features is inspired by the searching capabilities of the white shark, which uses its senses to find the optimal prey. Once the features are selected, results are obtained on the selected publicly available datasets. The limitation of this work is the training time, which is too high: the lowest training time is 194 min for D1 and 139 min for D2. One reason for this high TT is the datasets, which consist of videos; the main reason, however, is the architecture of these models, which contain many repeated blocks of layers that could be reduced. In the future, the architectures of the best-performing CNN models from this article will be analyzed to detect and reduce repeated blocks of layers, and the impact of these repeated blocks will also be analyzed.

    Acknowledgement:Not applicable.

Funding Statement: This work was supported by the Collabo R&D between Industry, Academy, and Research Institute (S3250534) funded by the Ministry of SMEs and Startups (MSS, Korea), the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (No. RS-2023-00218176), and the Soonchunhyang University Research Fund.

Author Contributions: The authors confirm their contributions to the paper as follows: study conception and design: I.M.N, M.A.K, and M.R; data collection: I.M.N, M.A.K, and M.R; draft manuscript preparation: I.M.N, M.A.K, M.R, and J.H.S; funding: Y-C.N and Y.N; validation: J.H.S, Y-C.N, and Y.N; software: I.M.N, M.A.K, Y.N, and Y-C.N; visualization: J.H.S, Y-C.N, and Y.N; supervision: M.A.K, M.R, and Y.N. All authors reviewed the results and approved the final version of the manuscript.

Availability of Data and Materials: The datasets used in this work are publicly available for research purposes.

    Conflicts of Interest:The authors declare that they have no conflicts of interest to report regarding the present study.
