
    IoT-Cloud Empowered Aerial Scene Classification for Unmanned Aerial Vehicles

Computers, Materials & Continua, 2022, Issue 3

K. R. Uthayan, G. Lakshmi Vara Prasad, V. Mohan, C. Bharatiraja, Irina V. Pustokhina, Denis A. Pustokhin and Vicente García Díaz

1Department of Information Technology, Sri Sivasubramaniya Nadar College of Engineering, Chennai, 603110, India

2Department of Information Technology, QIS College of Engineering & Technology, Ongole, 523001, India

3Department of Electronics and Communications Engineering, Saranathan College of Engineering, Trichy, 620012, India

4Department of Electrical and Electronics Engineering, SRM Institute of Science and Technology, Chennai, 603203, India

5Department of Entrepreneurship and Logistics, Plekhanov Russian University of Economics, Moscow, 117997, Russia

6Department of Logistics, State University of Management, Moscow, 109542, Russia

7Department of Computer Science, University of Oviedo, Oviedo, 33003, Spain

Abstract: Recent trends in communication technologies and unmanned aerial vehicles (UAVs) find their applications in several areas such as healthcare, surveillance, transportation, etc. Besides, the integration of the Internet of Things (IoT) with the cloud computing environment offers several benefits for UAV communication. At the same time, aerial scene classification is one of the major research areas in UAV-enabled MEC systems. In UAV aerial imagery, efficient image representation is crucial for the purpose of scene classification. The existing scene classification techniques generate mid-level image features with limited representation capabilities that often end up producing average results. Therefore, the current research work introduces a new DL-enabled aerial scene classification model for UAV-enabled MEC systems. The presented model enables the UAVs to capture aerial images which are then transmitted to MEC for further processing. Next, a Capsule Network (CapsNet)-based feature extraction technique is applied to derive a set of useful feature vectors from the aerial image. It is important to have an appropriate hyperparameter tuning strategy, since manual parameter tuning of a DL model tends to produce several configuration errors. In order to achieve this and to determine the hyperparameters of the CapsNet model, the Shuffled Shepherd Optimization (SSO) algorithm is implemented. Finally, the Backpropagation Neural Network (BPNN) classification model is applied to determine the appropriate class labels of aerial images. The performance of the SSO-CapsNet model was validated against two openly-accessible datasets, namely the UC Merced (UCM) Land Use dataset and the WHU-RS dataset. The proposed SSO-CapsNet model outperformed the existing state-of-the-art methods and achieved a maximum accuracy of 0.983, precision of 0.985, recall of 0.982, and F-score of 0.983.

    Keywords: Artificial intelligence; mobile edge computing; unmanned aerial vehicles; deep learning; optimization

1 Introduction

In recent days, the Internet of Things (IoT) has become a hot research topic and has received huge attention among researchers for offering enormous services and applications. At the same time, cloud computing (CC) technologies support IoT applications and offer several benefits such as low latency, location awareness, scalability, etc. [1]. Meanwhile, Unmanned Aerial Vehicle (UAV) technology has developed significantly and is used in many applications. UAVs can provide fast, cost-effective, and safe deployments for many civil and military applications [2]. Fig. 1 shows the architecture of Unmanned Aerial Vehicles (UAVs).

    Figure 1: General structure of UAV networks

The popularity of autonomous UAVs and their applications, involving search and rescue operations, surveillance, and infrastructure observance, has grown tremendously in recent years. Though land cover classification is an essential UAV application, it is complex to construct wholly independent methods. Object identification processes are highly integrated, which makes it difficult to reduce their cost demands. The movement of UAVs creates multiple hindrances in the generated images in terms of clarity, i.e., blurred images and noise, since the onboard cameras often generate low-resolution images. In most UAV applications, it is difficult to perform the identification process because of the need for realistic efficiency. Various researches have been conducted on UAVs and their associated challenges such as tracking and detecting specific objects, types of vehicles, landmarks, land sites, and persons (involving pedestrian motion). But only a few studies considered multiple object identification [3], even though multiple targeted object identification is essential for most UAV applications. The gap between application requirements and practical capability might be a result of two critical limitations: 1) it is difficult to build and store numerous methods to target the objects; and 2) high computation strength is required for technical object identification in case of individual objects.

Once aerial image scenes are acquired, they undergo aerial image classification. The images are categorized into sub-regions covering several ground objects and a variety of land-cover types belonging to different semantic classes. Thus, aerial image classification is an important process for several real-world applications like computer cartography, urban planning, remote sensing, and resource management [4]. Generally, some identical object classes or land-cover varieties are shared across a pool of scenes. For example, commercial and residential are two main scene classes which may both include roads, buildings, and trees. However, these two classes differ in the spatial distribution and density of these three object classes. Thus, in aerial scenes, classification depends on structural and spatial pattern complications, which is a challenging issue to overcome [5]. The common method is to construct a holistic scene representation for scene classification. Among remote sensing studies, Bag of Visual Words (BoVW) is a familiar technique for scene classification. The Bag of Words (BoW) technique was originally developed for text analysis, where a document is represented via the frequency of its words. To identify an image via occurrences of 'visual words', local features are quantized into words by a clustering method. BoVW is the form of the BoW technique used for image analysis, where every image is represented as a histogram of visual words drawn from a visual dictionary [6].
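
To make the BoVW pipeline above concrete, the following minimal sketch (not part of the original study) clusters precomputed local descriptors into a visual vocabulary with k-means and encodes each image as a normalized visual-word histogram. The descriptor dimensionality, vocabulary size, and the use of scikit-learn's KMeans are illustrative assumptions.

```python
# Minimal Bag-of-Visual-Words sketch (illustrative; assumes precomputed
# local descriptors per image, e.g., 128-D SIFT-like vectors).
import numpy as np
from sklearn.cluster import KMeans

def build_vocabulary(descriptor_sets, num_words=256, seed=0):
    """Cluster all local descriptors into a visual vocabulary (codebook)."""
    all_desc = np.vstack(descriptor_sets)              # (total_descriptors, dim)
    kmeans = KMeans(n_clusters=num_words, random_state=seed, n_init=10)
    kmeans.fit(all_desc)
    return kmeans

def encode_image(descriptors, kmeans):
    """Represent one image as a normalized histogram of visual-word counts."""
    words = kmeans.predict(descriptors)                # nearest visual word per descriptor
    hist = np.bincount(words, minlength=kmeans.n_clusters).astype(float)
    return hist / max(hist.sum(), 1.0)                 # frequency of each visual word

# Example with random stand-in descriptors (two "images", 128-D features).
rng = np.random.default_rng(0)
images = [rng.normal(size=(500, 128)), rng.normal(size=(420, 128))]
vocab = build_vocabulary(images, num_words=64)
bovw_features = np.stack([encode_image(d, vocab) for d in images])
print(bovw_features.shape)                             # (2, 64)
```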

The Deep Learning (DL) method [7] is highly beneficial in resolving conventional challenges such as object recognition and detection, Natural Language Processing (NLP), speech identification, and a number of other real-world applications. It is more efficient than the usual processes and has gained much attention in academia and industry. This technique attempts to acquire general hierarchical feature learning in terms of various abstraction stages. UAV images are processed in a real-time environment in two distinct ways, namely, onboard processing of images with a GPU board and computation offloading through the transfer of DL algorithm processing from the UAV to MEC. But there are several issues observed in the design of UAV-enabled MEC systems.

The current research work presents an efficient DL-enabled aerial scene classification model for UAV-enabled MEC systems. The presented model allows the UAVs to capture aerial images and then forward the images to MEC for further processing. In addition, a Capsule Network (CapsNet)-based feature extractor is applied to derive a set of useful feature vectors from the aerial image. Moreover, for hyperparameter optimization of the CapsNet model, the Shuffled Shepherd Optimization (SSO) algorithm is executed. Finally, the Backpropagation Neural Network (BPNN) classification model is applied in the determination of appropriate class labels of aerial images. The presented SSO-CapsNet model was validated for its effectiveness against two openly accessible datasets.

    2 Literature Review

The Deep Convolutional Neural Network (CNN) [8] is a Deep Learning technique which is familiar and gaining popularity in various identification and detection processes, since it produces optimum outcomes for regular datasets. In image classification, CNN achieves the highest accuracy and is the most preferred technique nowadays. For industrial usage, it is difficult to adapt the traditional Deep CNN (DCNN) due to the complications involved in fine-tuning the hyperparameters manually and the trade-off between computation cost and accurate classification. Several studies have attempted to reduce the computation cost incurred in its execution [9]. When used for UAV aerial scene classification, the complication involved in the traditional CNN gets reduced [10]. A particular type of CNN structure is chosen to decrease the search space, and this smaller search space is constructed with the knowledge of experts.

Zhang et al. [11] utilized a so-called standard NN sparse autoencoder (AE) to train a group of chosen image patches, and the model was tested by saliency degree to extract the local features. Coates et al. [12] improved the conventional Unsupervised Feature Learning (UFL) pipeline by feature learning. The acquaintance with CNN seems to be beneficial in various applications. In the study conducted by Lecun et al. [13], the CNN model was trained by the backpropagation (BP) method and the study obtained adequate efficiency in character identification. In recent times, CNN has often been utilized in computer vision research works. However, it is complicated to train a deep CNN due to the possession of numerous features that are frequently utilized in a particular process and the presence of a low number of training instances. The study was designed to extract the intermediate features from DCNN. This model undergoes training on sufficiently large-scale datasets such as ImageNet, which are utilized for a wider view of visual identification processes such as scene classification, object recognition, and image recovery.

Cimpoi et al. [14] achieved an optimum outcome when investigating texture by pooling CNN features acquired from the convolutional layer with a Fisher coding procedure. Research studies are still being conducted using CNN in UAV scene classification. In the literature [15], a pretrained CNN was employed and tuned completely on a scene dataset, demonstrating excellent classification outcomes. But the pretrained CNN method was transferred to the scene dataset due to the lack of trained models. In an earlier study [16], the generalization capability of CNN features acquired from the fully connected layer underwent testing. In this study, the aerial images were categorized and optimum outcomes were achieved over comparative techniques on open-source scene datasets. Although various techniques have been proposed for UAV image classification in the literature, there is still a need to improve their classification efficiency. Simultaneously, a few techniques have provided optimum outcomes on specific datasets but were never employed on large datasets. Thus, the current research work develops a new advanced DL-based UAV image classifier.

    3 The Proposed SSO-CapsNet Model

The working principle of the presented SSO-CapsNet model is illustrated in Fig. 2. As shown in the figure, the UAV captures the aerial images which are then processed in MEC. The captured aerial images are then fed into the CapsNet-based feature extractor to derive an effective set of feature vectors. Followed by, hyperparameter tuning of the CapsNet model is performed using the SSO algorithm. Finally, the BPNN model is applied to allocate the class labels of the applied aerial test images. The detailed operations of these sub-processes are explained in the succeeding sections.
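
The following high-level sketch mirrors the pipeline in Fig. 2 with placeholder components; the function names, the random-search stand-in for SSO, and the nearest-class-mean stand-in for BPNN are illustrative assumptions rather than the authors' implementation.

```python
# High-level sketch of the SSO-CapsNet pipeline (illustrative only).
import numpy as np

def capsnet_features(images, hyperparams):
    """Placeholder CapsNet feature extractor: one feature vector per image."""
    rng = np.random.default_rng(0)
    return rng.normal(size=(len(images), hyperparams["feature_dim"]))

def sso_tune(objective, search_space, iterations=10):
    """Placeholder for SSO hyperparameter tuning (here: simple random search)."""
    rng = np.random.default_rng(1)
    best, best_score = None, -np.inf
    for _ in range(iterations):
        candidate = {k: rng.choice(v) for k, v in search_space.items()}
        score = objective(candidate)
        if score > best_score:
            best, best_score = candidate, score
    return best

def bpnn_classify(features, labels):
    """Placeholder for the BPNN classifier (here: a nearest-class-mean rule)."""
    classes = np.unique(labels)
    means = np.stack([features[labels == c].mean(axis=0) for c in classes])
    dists = np.linalg.norm(features[:, None, :] - means[None, :, :], axis=2)
    return classes[np.argmin(dists, axis=1)]

# UAV images arrive at the MEC node (stand-in data), hyperparameters are tuned,
# CapsNet-style features are extracted, and BPNN assigns the class labels.
images = [object()] * 40                                    # stand-in aerial images
labels = np.repeat(np.arange(4), 10)                        # 4 assumed scene classes
space = {"feature_dim": [64, 128], "routing_iters": [2, 3]}
best_hp = sso_tune(lambda hp: -abs(hp["feature_dim"] - 128), space)
features = capsnet_features(images, best_hp)
print(bpnn_classify(features, labels)[:5])
```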

    Figure 2: Working principle of SSO-CapsNet method

3.1 Capsule Network (CapsNet) Based Feature Extraction

CapsNet [17] was developed as an alternative model to CNNs. Being equivariant, the capsules are composed of networks of neurons that take in and output vectors, in contrast with the scalar values of CNNs. In the CapsNet model, every capsule is composed of a set of neurons whose outputs demonstrate various properties of the same feature. This gives the benefit of identifying whole entities through the initial identification of their parts. A capsule's outcome is made up of the probability that the feature encoded by the capsule exists, together with a group of vector values generally named 'instantiation parameters.' The probability of existence of the capsule's feature ensures network invariance, while the instantiation parameters are utilized in the representation of network equivariance, based on the capability of recognizing pose, texture, and deformation. Invariance is a property of methods which makes the latter remain unchanged even though the input value changes. This is called 'translational invariance', which is a peculiar characteristic of CNNs. For example, when a CNN detects a face, it remains invariant to the position of the eye as long as it identifies the face. But equivariance makes sure that the spatial position of the features belonging to the face is taken into account. Thus, in terms of the outcome, equivariance considers not only the occurrence of an eye in the image, but also its position in the image. Equivariance is the required property for CapsNets.

    The three commonly available operations for capsule execution are discussed here.They are transformation of AE, vector capsule depending on dynamic routing, and matrix capsule depending on Expectation-Maximization (EM) routing.Fig.3 shows the structure of CapsNet model.

    Figure 3: The architecture of CapsNet

    3.1.1 Transformation of Auto-Encoders

An initial CapsNet was published with the transforming auto-encoders. It was constructed to emphasize the capability of the network in recognizing pose. The aim is not to identify an object from the images, but to take the image and its pose as input and, as output, form a similar image from the original pose. An output vector of a capsule, from this initial execution, is composed of output values. One of the signified outcomes is the probability that the feature exists, while the rest are the representative instantiation parameters. The capsules are ordered in various levels: the lower level l holds the initial (primary) capsules, whereas the upper level l+1 holds the secondary capsules. A lower-level capsule extracts the 'pose' parameters from pixel intensities, since it has the ability to initiate a part-whole hierarchy [18]. This part-whole hierarchy is an advantage of the CapsNet model, since by identifying the parts it is able to identify the whole entity too. In order to realize this, the feature signified by a lower-level capsule needs to be in the correct spatial relationship before it activates higher-level capsules at level l+1. For instance, assume that eyes and a mouth are signified by lower-level capsules. Then, each one can forecast the pose of the higher-level capsule which signifies a face, in case the predictions agree. To describe the basis of the initial-level capsules, an ANN is learned to change the pixel intensities into pose parameters. In a simple setting, 2D images are used, a capsule is described by its x and y positions, and its only pose output is utilized. Once the learning process is over, the network takes an image and the required shifts Δx and Δy; the output is an image with the identified shift in pose. In order to prevent an inactive capsule from affecting the output of the 'generation unit', the capsule output is multiplied by its probability, p.
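
As a toy illustration of the gating described above (not the paper's network), the snippet below shows a single capsule whose 2-D pose is shifted by (Δx, Δy) and whose contribution to a linear generation unit is multiplied by its presence probability p; the 16-pixel output size and the linear generation weights are assumptions.

```python
# Toy illustration of the transforming auto-encoder idea: each capsule outputs
# a presence probability p and a 2-D pose (x, y); the requested shift (dx, dy)
# is added to the pose, and the capsule's contribution to the generation unit
# is gated by p so inactive capsules do not affect the reconstruction.
import numpy as np

def capsule_contribution(p, pose_xy, shift, generation_weights):
    """Return this capsule's (gated) contribution to the output image."""
    shifted_pose = pose_xy + shift                    # (x + dx, y + dy)
    generated = generation_weights @ shifted_pose     # linear generation unit
    return p * generated                              # gate by presence probability

rng = np.random.default_rng(0)
W = rng.normal(size=(16, 2))                          # assumed 16 output pixels
out = capsule_contribution(p=0.9, pose_xy=np.array([3.0, 5.0]),
                           shift=np.array([1.0, -2.0]), generation_weights=W)
print(out.shape)                                      # (16,)
```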

    3.1.2 Dynamic Routing Between Capsules

The next level of changes in CapsNets is determined by the capsules, which are a set of neurons with instantiation parameters. These are signified by an activity vector, where the length of the vector signifies the probability that the feature exists. The enhancement over the detailed prior execution is that there is no need for pose information in the input [19]. The network is composed of three layers, namely the Convolutional (Conv) layer, the Primary Capsule (PC) layer, and the Class Capsule layer. The PC layer is the initial capsule layer, which may be followed by an undetermined number of capsule layers. The final capsule layer is named the Class Capsule layer. The feature extraction process from an image is completed by the Conv layer and the output is fed to the PC layer. Every capsule i (where 1 ≤ i ≤ N) in layer l takes the activity vector u_i into account for encoding spatial data in the form of instantiation parameters. The output vector u_i of the i-th lower-level capsule is then fed to every capsule in the next layer, l+1. The j-th capsule at layer l+1 obtains u_i, and their product is defined with the equivalent weight matrix W_ij. The resultant prediction vector û_j|i = W_ij u_i represents capsule i at level l's view of the entity which is signified by capsule j at level l+1. In the prediction vector û_j|i, i refers to the PC whereas j corresponds to the class capsule.

The product of the prediction vectors and the coupling coefficients, which together signify the agreement between the capsules, gives a single PC i's forecast for class capsule j. When the agreement is higher, the two capsules are more strongly related, and the corresponding coupling coefficient is increased; otherwise, it is decreased. The weighted sum s_j of all the individual PC forecasts for class capsule j is computed and passed through the squashing function to obtain the candidate output v_j.

The squashing function makes sure that the length of a capsule's output lies between 0 and 1, so that it can be treated as a probability. The output v_j of one capsule layer is sent to the next capsule layer and processed in a similar manner. The coupling coefficient c_ij makes sure that the forecast of capsule i in level l is connected to capsule j in layer l+1. In every iteration, c_ij is updated by determining the dot product of û_j|i and v_j. To be specific, the vector values connected to the capsules are observed as two components: the probability that signifies the presence of the feature which the capsule encapsulates, and a group of instantiation parameters that assist in establishing consistency among the layers. Thus, routing by agreement implies that, when a lower-level capsule agrees with a higher-level capsule, a 'part-whole' connection is constructed along the relevant path.
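
A compact sketch of the squashing function and routing-by-agreement loop described above is given below; the capsule counts, vector dimensions, and softmax-based computation of c_ij are assumptions consistent with the description, not the authors' code.

```python
# Sketch of routing-by-agreement between a primary-capsule layer and a class-
# capsule layer. u_hat[i, j] holds capsule i's prediction for class capsule j.
import numpy as np

def squash(s, axis=-1, eps=1e-8):
    """Squash a vector so its length lies in (0, 1) while keeping its direction."""
    sq_norm = np.sum(s ** 2, axis=axis, keepdims=True)
    return (sq_norm / (1.0 + sq_norm)) * s / np.sqrt(sq_norm + eps)

def dynamic_routing(u_hat, num_iterations=3):
    """u_hat: (num_primary, num_class, dim) prediction vectors u_hat_{j|i}."""
    num_primary, num_class, _ = u_hat.shape
    b = np.zeros((num_primary, num_class))                    # routing logits
    for _ in range(num_iterations):
        c = np.exp(b) / np.exp(b).sum(axis=1, keepdims=True)  # coupling coefficients c_ij
        s = (c[..., None] * u_hat).sum(axis=0)                # weighted sums s_j
        v = squash(s)                                          # squashed outputs v_j
        b += np.einsum('ijd,jd->ij', u_hat, v)                # agreement: dot(u_hat_{j|i}, v_j)
    return v

rng = np.random.default_rng(0)
u_i = rng.normal(size=(32, 8))                        # 32 primary capsules, 8-D outputs
W = rng.normal(size=(32, 10, 16, 8)) * 0.1            # W_ij maps 8-D u_i to 16-D predictions
u_hat = np.einsum('ijkd,id->ijk', W, u_i)             # prediction vectors u_hat_{j|i}
v_j = dynamic_routing(u_hat)
print(np.linalg.norm(v_j, axis=1))                    # lengths = class presence probabilities
```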

    3.1.3 Matrix Capsules with EM Routing

In contrast to the utilization of vector outputs, the literature [20] presented the input and output of a capsule as matrices. This is essential to decrease the size of the transformation matrices between capsules, since a matrix grows by n elements rather than the n² required when utilizing vectors. Dynamic routing by agreement, in which the agreement is the cosine between two pose vectors, is exchanged with the EM technique. Also, the probability of existence of an entity, previously illustrated by the length of the capsule's vector, is exchanged with a parameter a. For capsule i at level L and capsule j at level L+1, the trainable transformation weight matrix is W_ij. The EM mechanism ensures that the pose matrix of capsule i is changed by the transformation weight matrix W_ij to cast a vote for the pose matrix of capsule j at level L+1. The vote is the product of the output matrix M_i and the transformation matrix W_ij [20].

The poses and activations of every level-L+1 capsule are established by feeding V_ij and a_i into the non-linear EM routing procedure. During an iteration, EM updates the means, variances, and activation probabilities of the layer-L+1 capsules, together with the assignment probabilities between the lower- and higher-level capsules.
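
The vote computation can be sketched as follows, assuming 4 × 4 pose matrices; the EM clustering of the votes itself is omitted for brevity.

```python
# Minimal sketch of the vote computation in matrix capsules (assumed 4x4 poses).
import numpy as np

rng = np.random.default_rng(0)
num_lower, num_upper = 8, 4
M = rng.normal(size=(num_lower, 4, 4))                # pose matrices M_i of level-L capsules
W = rng.normal(size=(num_lower, num_upper, 4, 4))     # trainable transforms W_ij
a = rng.random(num_lower)                             # activations a_i of level-L capsules

# Vote of capsule i for capsule j at level L+1: V_ij = M_i @ W_ij.
V = np.einsum('iab,ijbc->ijac', M, W)                 # shape (num_lower, num_upper, 4, 4)
# V and a are then fed to the EM routing procedure, which estimates the means,
# variances and activation probabilities of the level-(L+1) capsules.
print(V.shape)
```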

    3.2 Hyperparameter Optimization

In order to tune the hyperparameters involved in the CapsNet model effectively, the SSO algorithm is applied, and thereby the classification performance is enhanced. The SSO algorithm offers several benefits such as high accuracy, a good convergence rate, and reduced parameter dependency. It is based on the herding behaviour of shepherds. Humans have learned this behaviour through long-term observation so as to utilize animal capabilities and attain their objectives [21]. Shepherds try to steer their herd in the right way. To achieve this, they generally assign animals such as a horse or a herding dog to the herd. These animals are utilized to manage the herd through their herding behaviour. They further guard the herd animals from wild animals and theft. This behaviour is the fundamental inspiration behind the SSO technique.

    Step 1: Initialization

    SSO begins with an arbitrarily-created primary Member Of Community (MOC) for search space as given herewith.

where rand refers to an arbitrary vector with all components created between 0 and 1; MOC_min and MOC_max denote the lower and upper bounds of the design variables; m implies the number of communities; and n defines the count of members belonging to every community. In this regard, it is supposed that the entire number of communities is attained as follows [21].
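
A minimal sketch of this initialization step is shown below. Since the paper's equation numbers are not reproduced here, the uniform sampling between MOC_min and MOC_max follows directly from the stated where-clause, and the bound values are purely illustrative.

```python
# Sketch of SSO initialization (Step 1): each member of community is sampled
# uniformly between the variable bounds, for m communities of n members each.
import numpy as np

def initialize_moc(m, n, moc_min, moc_max, seed=0):
    """Return an (m*n, dim) matrix of randomly created members of community."""
    rng = np.random.default_rng(seed)
    dim = len(moc_min)
    rand = rng.random((m * n, dim))                   # components in [0, 1]
    return moc_min + rand * (moc_max - moc_min)

moc_min = np.array([0.0, 1e-4])                       # e.g., lower bounds of two hyperparameters
moc_max = np.array([1.0, 1e-1])
population = initialize_moc(m=4, n=5, moc_min=moc_min, moc_max=moc_max)
print(population.shape)                               # (20, 2) -> m*n members
```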

    Step 2: Shuffling process

In this method, the first m members of the communities are chosen depending on their objective function values. These are arbitrarily located in the first column of the Multi-Community (MC) matrix (Eq. (7)); in other words, they become the initial member of every community. Then, to create the 2nd column of MC, the next m members are selected like in the preceding step and are arbitrarily located in that column. These procedures are carried out n times independently, until the MC matrix is formed as given herewith.

It is worth mentioning that every row of MC refers to the members of one community. This procedure ensures that the members in the first column of MC are the best members of their respective communities. Moreover, the members placed in the final column are the worst agents of every community.
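
The shuffling step can be sketched as follows; the minimization convention for the objective function and the array layout of the MC matrix are assumptions.

```python
# Sketch of the shuffling step (Step 2): members are sorted by objective value,
# split into n groups of m, and each group is randomly placed into one column
# of the multi-community (MC) matrix, so the first column holds the best member
# of every community and the last column holds the worst.
import numpy as np

def build_mc_matrix(population, objective, m, n, seed=0):
    rng = np.random.default_rng(seed)
    order = np.argsort([objective(x) for x in population])   # best (lowest) first
    ranked = population[order]
    mc = np.empty((m, n, population.shape[1]))
    for col in range(n):
        group = ranked[col * m:(col + 1) * m]                 # next m members
        mc[:, col] = group[rng.permutation(m)]                # random placement in the column
    return mc

pop = np.random.default_rng(1).random((20, 2))
mc = build_mc_matrix(pop, objective=lambda x: float(np.sum(x ** 2)), m=4, n=5)
print(mc.shape)                                               # (m, n, dim): rows = communities
```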

    Step 3: Movement of Community Member

It is obvious that, as the iteration number t increases, β increases whereas the value of α decreases correspondingly. Thus, the exploration rate decreases whereas the exploitation rate increases [22].

    Step 4: Update the position of each community member

Based on the prior step, the new location of MOC_i,j is computed utilizing Eq. (13). Next, the location of MOC_i,j is updated only when the new objective function value is not worse than the old one [22]:

    Step 5: Checking termination conditions

If the count of iterations has reached the end condition (Max-iteration), then the optimization procedure is finished. Otherwise, the algorithm goes back to Step 2 for a new round of iterations.
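
Steps 3-5 can be combined into a single optimization loop, as sketched below. Because Eq. (13) is not reproduced in the text above, the movement rule is a simplified stand-in (a move toward a better community member scaled by β plus an exploratory term scaled by α), while the greedy acceptance of non-worse positions and the Max-iteration stopping rule follow Steps 4 and 5.

```python
# Combined sketch of Steps 3-5 (movement, greedy position update, termination)
# with an assumed, simplified movement rule in place of the paper's Eq. (13).
import numpy as np

def sso_optimize(objective, population, m, n, max_iter=50,
                 alpha0=0.5, beta0=1.0, beta_max=2.0, seed=0):
    """m communities of n members each; population has m*n rows in [0, 1]."""
    rng = np.random.default_rng(seed)
    pop = population.copy()
    fitness = np.array([objective(x) for x in pop])
    for t in range(max_iter):
        # Step 3: exploration (alpha) decreases, exploitation (beta) increases with t.
        alpha = alpha0 * (1.0 - t / max_iter)
        beta = beta0 + (beta_max - beta0) * t / max_iter
        ranking = np.argsort(fitness)                 # members ranked best -> worst
        elite = ranking[:m]                           # roughly the first MC column
        for i in range(m * n):
            guide = pop[rng.choice(elite)]            # a better member to move toward
            step = beta * rng.random() * (guide - pop[i]) \
                 + alpha * rng.normal(size=pop.shape[1])   # exploratory component
            candidate = np.clip(pop[i] + step, 0.0, 1.0)   # stay inside assumed bounds
            cand_fit = objective(candidate)
            if cand_fit <= fitness[i]:                # Step 4: accept only non-worse moves
                pop[i], fitness[i] = candidate, cand_fit
    # Step 5: stop after Max-iteration rounds and return the best member found.
    return pop[np.argmin(fitness)], float(fitness.min())

best_x, best_f = sso_optimize(lambda x: float(np.sum(x ** 2)),
                              np.random.default_rng(2).random((20, 2)), m=4, n=5)
print(best_x, best_f)
```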

    3.3 BPNN Based Image Classification

At the final stage, the extracted feature vectors from the hyperparameter-tuned CapsNet are fed into the BPNN model to perform the classification. BPNN is a multi-layer network which has a set of input, hidden, and output layers. All the layers contain a number of neurons. To adjust the weights and biases of the neurons, BPNN uses the error BP function. It operates in a gradient-descent fashion, and this technique was developed as an efficient function approximation technique [23]. The classical BPNN has a number of m inputs and n outputs.

In the feedforward network, the outputs from the former layer act as inputs to every neuron in the next layer. Afterwards, the output is fed as input to the next neuron layer. For one neuron j, assume n refers to the number of neurons in the former layer; o_i refers to the output of the i-th neuron; w_i represents the weight equivalent to o_i; and θ_j implies the bias of neuron j. Then, neuron j computes the input to the sigmoid function, I_j, utilizing the equation in [23].

Assume o_j indicates the output of neuron j, which is expressed as follows.

When neuron j lies in the output layer, BPNN begins the BP stage. Assume t_j refers to the expected target output. This technique calculates the output error Err_j of neuron j in the output layer with the help of the following equation.

Assume k signifies the number of neurons in the next layer; w_p refers to the corresponding weight; and Err_p defines the error of neuron p in the next layer. The error Err_j of the j-th neuron is then expressed as follows.

Assume η indicates the learning rate. Neuron j tunes its weight w_j and bias θ_j with the help of [23].

Once BPNN finishes tuning the network with one training sample, it begins with the second training sample, using the weights updated by the first sample, to train on the second sample. For executing the classifier, BPNN requires only the execution of the feedforward network. The outputs at the output layer are the final classification outcome.
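
A self-contained sketch of the BPNN computations described above is given below; the referenced equations from [23] are assumed to be the standard sigmoid-based backpropagation formulas, and the layer sizes and learning rate are illustrative.

```python
# Sketch of one BPNN training step: the sigmoid input I_j, the output o_j,
# the output/hidden-layer errors Err_j, and the weight/bias updates with
# learning rate eta (standard backpropagation formulas assumed).
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def bpnn_train_step(x, target, W1, b1, W2, b2, eta=0.1):
    """One feedforward + backpropagation step for a one-hidden-layer BPNN."""
    # Feedforward: I_j = sum_i w_i * o_i + theta_j,  o_j = sigmoid(I_j)
    hidden = sigmoid(W1 @ x + b1)
    output = sigmoid(W2 @ hidden + b2)
    # Output-layer error: Err_j = o_j (1 - o_j)(t_j - o_j)
    err_out = output * (1.0 - output) * (target - output)
    # Hidden-layer error: Err_j = o_j (1 - o_j) * sum_p Err_p * w_p
    err_hid = hidden * (1.0 - hidden) * (W2.T @ err_out)
    # Updates: w_j <- w_j + eta * Err_j * o_i,  theta_j <- theta_j + eta * Err_j
    W2 += eta * np.outer(err_out, hidden)
    b2 += eta * err_out
    W1 += eta * np.outer(err_hid, x)
    b1 += eta * err_hid
    return output

rng = np.random.default_rng(0)
W1, b1 = rng.normal(scale=0.1, size=(8, 16)), np.zeros(8)   # 16-D feature input (assumed)
W2, b2 = rng.normal(scale=0.1, size=(3, 8)), np.zeros(3)    # 3 assumed scene classes
feat, label = rng.normal(size=16), np.array([1.0, 0.0, 0.0])
for _ in range(100):
    out = bpnn_train_step(feat, label, W1, b1, W2, b2)
print(np.argmax(out))                                       # predicted class index
```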

    4 Experimental Validation

The proposed SSO-CapsNet model was simulated using the Python 3.6.5 tool. It was validated using two datasets, namely the UCM and WHU-RS datasets. The UCM dataset is composed of large-sized aerial images under 21 classes. Every class holds a total of 100 images with an identical size of 256 × 256 pixels. The WHU-RS dataset includes a total of 950 images with an identical size of 600 × 600 pixels. These images undergo uniform distribution under a set of 19 classes. A few sample test images are shown in Fig. 4.

Tab. 1 and Fig. 5 show the results of the accuracy analysis attained by the SSO-CapsNet model against different CNN models. The table values report that the PlacesNet model obtained the least accuracy of 0.914. Simultaneously, the VGG-VD19 technique exhibited a slightly higher accuracy of 0.932, whereas the VGG-VD16 model achieved an improved accuracy of 0.941. Concurrently, the AlexNet, CaffeNet, and VGG-F methods accomplished the same reasonable accuracy of 0.944. In line with these, the VGG-M model demonstrated improvement over the earlier models and achieved an accuracy of 0.945. Meanwhile, the VGG-S model showcased a reasonable accuracy of 0.946. However, the presented SSO-CapsNet model surpassed the other methods and attained a maximum accuracy of 0.983.

Tab. 2 and Fig. 6 examine the performance of the SSO-CapsNet model against the existing models in terms of different measures. When analyzing the results in terms of F-score, the table values show that the VGGNet model obtained the least F-score of 0.785. Likewise, the VGG-RBFNN model exhibited a slightly increased F-score of 0.788, whereas the CA-VGG-LSTM model accomplished a certain improvement in F-score, i.e., 0.796. Followed by, the ResNet-50 model depicted a manageable outcome with an F-score of 0.797, while the CA-VGG-BiLSTM model resulted in an F-score of 0.798. Simultaneously, the ResNet-RBFNN model achieved a reasonable F-score of 0.806. Concurrently, the GNet model accomplished an F-score of 0.807, whereas an even higher F-score of 0.814 was offered by the CA-ResNet-LSTM model. In line with this, the GoogLeNet-RBFNN model demonstrated an improvement in the results over the earlier models with an F-score of 0.815, and the CA-ResNet-BiLSTM model provided an F-score of 0.815. Meanwhile, the CA-GNet-LSTM model showcased a reasonable F-score of 0.818, whereas the CA-GNet-BiLSTM model accomplished a competitive F-score of 0.818. However, the presented SSO-CapsNet model surpassed all the other methods and achieved a high F-score of 0.983.

    Figure 4: Sample images

When investigating the outcomes with respect to precision and recall, the table values portray that the CA-ResNet-BiLSTM model obtained the least precision and recall values of 0.779 and 0.890 respectively. At the same time, the VGG-RBFNN approach exhibited slightly higher precision and recall values, i.e., 0.782 and 0.839 respectively, whereas the CA-GNet-LSTM model reached superior precision and recall values of 0.785 and 0.886 respectively. Followed by, the VGGNet model depicted a manageable outcome with precision and recall values of 0.791 and 0.823 respectively. Further, CA-VGG-BiLSTM yielded precision and recall values of 0.793 and 0.840 respectively. Simultaneously, the ResNet-RBFNN model achieved a reasonable precision and recall of 0.799 and 0.846 correspondingly. Concurrently, the CA-ResNet-LSTM model accomplished a precision and recall of 0.799 and 0.861 correspondingly. A further increase was observed in the precision and recall values of 0.799 and 0.871 by the CA-GNet-BiLSTM model. Followed by, the GoogLeNet-RBFNN model demonstrated an improved outcome over the earlier models with a precision and recall of 0.800 and 0.868 correspondingly. The GNet method attained a 0.805 precision and a 0.843 recall value. Meanwhile, the CA-VGG-LSTM model demonstrated reasonable precision and recall values of 0.806 and 0.825 respectively. Further, the ResNet-50 method resulted in competitive precision and recall values of 0.809 and 0.820 respectively. However, the proposed SSO-CapsNet technique surpassed all the other methods and offered the highest precision and recall values of 0.985 and 0.982.

    Table 1: Accuracy analysis of SSO-CapsNet with various CNN models

    Figure 5: Accuracy analysis of SSO-CapsNet model with distinct CNN models

Table 2: Performance analysis of SSO-CapsNet against various models in terms of precision, recall and F-score

    Figure 6: Result of SSO-CapsNet Model under distinct measures

Tab. 3 and Fig. 7 portray the results of the accuracy analysis conducted on the SSO-CapsNet model against the existing models. The table values infer that the SCK model achieved a minimum accuracy of 0.725. At the same time, the SPM model exhibited a slight increase in accuracy up to 0.740, whereas the SPCK++ model accomplished a higher accuracy of 0.774. Followed by, the SC+Pooling model depicted a manageable outcome with an accuracy of 0.817. Likewise, the SG+UFL and CCM-BOVW models accomplished similar results, i.e., an accuracy of 0.866. Simultaneously, the PSR model achieved a reasonable accuracy of 0.891. Concurrently, the UFL-SC model accomplished an accuracy of 0.903, whereas an enhanced accuracy of 0.909 was attained by the OverFeat model.

In line with these, the MSIFT and COPD models exhibited higher outcomes over the earlier models and their accuracy values were 0.910 and 0.913 respectively. The Dirichlet model provided moderate results with an accuracy of 0.928. Meanwhile, the VLAT and GoogLeNet+Finetune models showcased reasonable accuracy values of 0.943 and 0.971 correspondingly. The CCP-Net and MOPSO-SC methodologies provided competitive accuracies of 0.975 and 0.979 respectively. However, the presented SSO-CapsNet model surpassed all the other models and accomplished a superior accuracy of 0.983.

    Table 3: Accuracy analysis of SSO-CapsNet against various state-of-the-art methods

    Figure 7: Accuracy analysis of SSO-CapsNet model against existing techniques

Tab. 4 illustrates the results of the running time analysis of the SSO-CapsNet model against other CNN models [5,24-26]. The results portray that both the VGG-16 and VGG-19 models demonstrated the worst performance and recorded high running times of 423 and 496 s respectively. In line with these, the VGG-M and VGG-S models demonstrated slightly better results with respective running times of 124 and 135 s. Followed by, the CaffeNet and AlexNet models reported moderate running times of 85 and 86 s respectively. Moreover, the PlacesNet and VGG-F models showcased the same reasonable running time of 82 s. But the SSO-CapsNet model displayed superior performance and accomplished the task at a running time of 73 s.

    Table 4: Running time (s) analysis of SSO-CapsNet method against various CNN models

    5 Conclusion

The current study developed a new DL-enabled aerial scene classification model for UAV-enabled MEC systems, i.e., the SSO-CapsNet model. The presented model allows the UAVs to capture aerial images and send them to MEC for further processing. At MEC, the captured aerial images are fed into the CapsNet-based feature extractor to derive an effective set of feature vectors. Followed by, the SSO algorithm is used to fine-tune the hyperparameters of the CapsNet model. The application of the SSO algorithm helps in effectively tuning the hyperparameters. Thus, the accuracy of the overall aerial image scene classification is enhanced. Finally, the BPNN model is applied to allocate the class labels of the applied aerial test images. The simulation results of the proposed SSO-CapsNet model were validated against the benchmark UCM and WHU-RS datasets. The obtained experimental values inferred that the SSO-CapsNet model outperformed the other classifiers and accomplished the maximum accuracy of 0.983, precision of 0.985, recall of 0.982, and F-score of 0.983. In future, the SSO-CapsNet model can be extended to handle various input sizes with multiple scaling. Further, the model can be assessed for its performance on big datasets such as NWPU-RESISC45.

    Funding Statement:The authors received no specific funding for this study.

    Conflicts of Interest:The authors declare that they have no conflicts of interest to report regarding the present study.
