
Deep-Net: Fine-Tuned Deep Neural Network Multi-Features Fusion for Brain Tumor Recognition

Computers, Materials & Continua, 2023, Issue 9

Muhammad Attique Khan, Reham R. Mostafa, Yu-Dong Zhang, Jamel Baili, Majed Alhaisoni, Usman Tariq, Junaid Ali Khan, Ye Jin Kim and Jaehyuk Cha*

1 Department of Computer Science, HITEC University, Taxila, 47080, Pakistan

2 Department of Informatics, University of Leicester, Leicester, UK

3 Information Systems Department, Faculty of Computers and Information Sciences, Mansoura University, Mansoura, Egypt

4 Department of Computer Engineering, College of Computer Science, King Khalid University, Abha, 61413, Saudi Arabia

5 Computer Sciences Department, College of Computer and Information Sciences, Princess Nourah bint Abdulrahman University, Riyadh, 11671, Saudi Arabia

6 Management Information System Department, College of Business Administration, Prince Sattam Bin Abdulaziz University, Al-Kharj, 16278, Saudi Arabia

7 Department of Computer Science, Hanyang University, Seoul, 04763, Korea

ABSTRACT Manual diagnosis of brain tumors from magnetic resonance images (MRI) is a hectic and time-consuming process that always requires an expert for the diagnosis. Therefore, many computer-controlled methods for diagnosing and classifying brain tumors have been introduced in the literature. This paper proposes a novel multimodal brain tumor classification framework based on two-way deep learning feature extraction and a hybrid feature optimization algorithm. NasNet-Mobile, a pre-trained deep learning model, has been fine-tuned and two-way trained on original and enhanced MRI images. The haze-convolutional neural network (haze-CNN) approach is developed and employed on the original images for contrast enhancement. Next, transfer learning (TL) is utilized for training the two-way fine-tuned models and extracting feature vectors from the global average pooling layer. Then, using a multiset canonical correlation analysis (CCA) method, the features of both deep learning models are fused into a single feature matrix; this step aims to enrich the feature information for better classification. Although the information was increased, the computational time also jumped. This issue is resolved using a hybrid feature optimization algorithm that chooses the best features for classification. The experiments were done on two publicly available datasets, BraTs2018 and BraTs2019, and yielded accuracy rates of 94.8% and 95.7%, respectively. The proposed method is compared with several recent studies and outperforms them in accuracy. In addition, we analyze the performance of each middle step of the proposed approach and find that the feature selection technique strengthens the proposed framework.

KEYWORDS Brain tumor; haze contrast enhancement; deep learning; transfer learning; features optimization

    1 Introduction

Cancer is rapidly becoming a major public health concern worldwide [1]. It is the second leading cause of death after cardiovascular disease, accounting for one out of every six deaths worldwide [2]. Brain cancer is one of the deadliest diseases due to its aggressive nature, low survival rate, and diverse features. Tumor shape, texture, and location are some of the features that can be used to classify brain tumors [3]. Meningiomas are the most common benign intracranial tumors; they inflame the thin membranes surrounding the brain and spinal cord. Astrocytomas, ependymomas, glioblastomas, oligoastrocytomas, and oligodendrogliomas are all types of brain tumors known as gliomas [4]. Pituitary tumors, which develop at the base of the pituitary gland in the brain, are frequently benign. However, these lesions may prevent the generation of pituitary hormones, which would have a systemic impact [5].

In medical care, the incidence rates of meningiomas, gliomas, and pituitary tumors are about 15%, 45%, and 15%, respectively [6]. Physicians may diagnose and predict a patient's survival rate based on the type of tumor. At the same time, the best treatment method, from surgery, chemotherapy, or radiotherapy to the "wait and see" strategy that avoids invasive procedures, can also be agreed upon. Classification of the tumor is vital for planning and monitoring the course of treatment [7]. MRI is a non-invasive, painless medical imaging technique and one of the most accurate techniques for cancer detection and classification. A radiologist's knowledge is required for the highly technical, error-prone, and time-consuming task of identifying the type of malignancy from MRI scans. In the artificial intelligence (AI) area, a new and creative computer-aided diagnosis (CAD) system is urgently required to enable doctors and radiologists to ease the workload of diagnosing and classifying tumors.

A CAD system typically consists of three steps: (1) tumor segmentation; (2) extraction of the segmented tumor's characteristics based on statistical or mathematical parameters evaluated throughout the learning process using a collection of labeled MRI images; and (3) implementing an accurate machine learning classifier to estimate an anomaly class [8]. Before the classification stage, many conventional machine learning (ML) approaches are required for lesion identification [9]. The segmentation process is a computationally intensive phase, which can be unpredictable depending on image contrast and intensity normalization variance and can influence classification performance. Feature extraction is a critical method of producing informative features that identify a raw image's contents. However, this phase has the drawback of being a time-consuming task requiring prior knowledge of the problem domain. The extracted features are then utilized as ML inputs, and an image class label is assigned according to these crucial features [10]. Alternatively, deep learning (DL) is a subfield of artificial intelligence (AI) that automatically learns data representations and makes predictions and conclusions [11]. DL is independent of hand-crafted feature extraction methods and can learn features directly from the sample data through several hidden layers. Convolutional, ReLU activation, pooling (to avoid overfitting), normalization, fully connected, and softmax layers are among the hidden layers in a straightforward convolutional neural network (CNN) model. The fully connected layer is the most crucial layer, in which characteristics are obtained in 1D for classification purposes.
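As an illustrative sketch (not the authors' MATLAB implementation), the final stages of such a CNN, ReLU activation, global average pooling to a 1D vector, a fully connected layer, and softmax, can be expressed in NumPy; the 7×7×32 feature-map and 4-class shapes below are arbitrary stand-ins.

```python
import numpy as np

def relu(x):
    # ReLU activation applied to convolutional feature maps
    return np.maximum(0.0, x)

def global_average_pool(fmap):
    # fmap: (H, W, C) feature map -> (C,) vector, one value per channel
    return fmap.mean(axis=(0, 1))

def softmax(z):
    # numerically stable softmax over class logits
    e = np.exp(z - z.max())
    return e / e.sum()

rng = np.random.default_rng(0)
fmap = relu(rng.standard_normal((7, 7, 32)))       # toy last-conv output
feats = global_average_pool(fmap)                  # 1D feature vector (32,)
W, b = rng.standard_normal((32, 4)), np.zeros(4)   # hypothetical 4-class FC head
probs = softmax(feats @ W + b)                     # class probabilities
print(probs.shape)
```

The global average pooling output is exactly the kind of 1D vector the paper later extracts as features.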

Pre-trained deep learning models include AlexNet, VGG16, ResNet, and NasNet-Mobile. Compared to previous models, NasNet-Mobile is lighter and performs better while analyzing medical images. Transfer learning (TL) is used to fine-tune and train the pre-trained deep models on the target dataset. TL is a method of recycling a model with the least memory and computation required [12]. Later, some activation functions, such as sigmoid or tanh, are employed on the newly trained feature extraction models. Generally, these deep learning models are trained using raw images without any region of interest (ROI) detection; therefore, there is a high chance of irrelevant and redundant feature extraction. Researchers try to resolve this issue by introducing feature selection techniques. Feature selection is the selection of an optimal subset from the original feature set. A few famous feature selection techniques employed in medical imaging are moth-flame optimization, the t-distributed stochastic neighbor embedding (t-SNE) approach, correlation, etc. Sometimes, feature extraction from one type of data does not give better results; therefore, researchers have introduced several feature fusion techniques. As a result, they have improved accuracy through feature fusion but faced the limitation of high computational time [13].

Deepak et al. [14] suggested a CNN-via-transfer-learning framework for classifying brain tumors. They utilized a pre-trained GoogleNet model and extracted features through TL. The extracted features were subsequently classified utilizing machine learning classifiers, which improved classification accuracy. The main observation of this work was the suitability of TL for fewer training images. Mohsen et al. [15] designed a classifier based on CNN for brain tumor classification. They used the discrete wavelet transform and principal component analysis (PCA) to combine the features of the CNN in the designed classifier. Sharif et al. [16] presented an end-to-end system for brain tumor classification. They used the DenseNet201 pre-trained model to extract features that were later refined through two feature selection techniques called entropy-kurtosis and modified GA. The selected features are merged utilizing a non-redundant serial-based approach and then employed as input to the classifiers for final classification. The observation of this study was that selecting the best features reduces the computational time of the classifiers and enhances classification accuracy. An automatic brain tumor segmentation method for 3D medical images was proposed by Kamnitsas et al. [17]. The method has two main parts: a 3D CNN for highly accurate soft segmentation and, more importantly, a 3D CRF for post-processing the created soft segmentation labels, which successfully produces the hard segmentation labels and eliminates false positives. The model offers more effective performance and has been tested on the BraTs2015 and ISLES 2015 datasets. An entirely automatic model for segmenting 3D tumor images was proposed by Alqazzaz et al. [18]. They trained four SegNet models for analysis, and post-processing combined the data from those models. The original images' maximum intensity information is encoded into deep features for improved representation. This work categorizes the extracted features using a decision tree as the classifier.

On BraTs2017, experiments are conducted to arrive at an average F1 score of 0.80. A multimodal automatic brain tumor classification technique utilizing deep learning was presented by Khan et al. [19]. The model performed the following tasks sequentially: linear contrast stretching; transfer learning-based feature extraction utilizing pre-trained CNN models (VGG16 & VGG19); feature selection based on correntropy; and finally, the classification of brain cancers using a fused matrix sent to an extreme learning machine (ELM). A deep learning model for brain tumor segmentation merging long short-term memory (LSTM) and convolutional neural network (ConvNet) concepts was introduced by Iqbal et al. [20]. Following pre-processing, the class-weighting concept is presented to solve the problems with class imbalance. The ConvNet produced a single score (exactitude) of 75% using the BraTS 2018 benchmark dataset, while an LSTM-based network generated 80% of the results, with an overall fusion accuracy of 82.29 percent. A few other recent techniques have also been introduced for brain tumor classification, such as DarkNet with color-map superpixels [21], exemplar deep features [22], and a few more [23] (a summary is given in Table 1).

Table 1: Summary of recent state-of-the-art (SOTA) techniques

The techniques mentioned above focused on the fusion of deep learning features and selecting the optimal or best features for multimodal brain tumor classification. However, in the fusion process, they mainly focused on classification accuracy and computational time. Moreover, the major challenge is accurately classifying tumor sequences such as T1, T1CE, T2, and FLAIR. Each class has a high similarity in shape and texture; therefore, it is not easy to classify them correctly. Another challenge is the high computational time, which can be resolved using feature optimization techniques. In this work, we propose a framework for brain tumor classification based on two-way deep learning feature extraction and best feature selection via a hybrid feature optimization algorithm. Our major efforts and contributions in this work are listed as follows:

• A two-way deep learning framework is introduced and trained using transfer learning. Then, the trained deep learning framework's average pooling layers are used to extract features.

• A multiset canonical correlation analysis (MCCA)-based feature fusion technique is employed to obtain better information for accurate classification.

• A hybrid enhanced whale optimization algorithm with crow search (HWOA-CSA) is proposed to select the best features and reduce the computational time.

• A comparison is conducted among the middle steps of the proposed framework. Also, we evaluate the proposed framework by comparing its accuracy with recent state-of-the-art (SOTA) techniques.

The proposed methodology of this work, which includes a two-way deep learning framework and a hybrid feature optimization algorithm, is presented in Section 2. Detailed experimental results are shown in Section 3. Finally, we conclude the proposed framework in Section 4.

    2 Proposed Methodology

The proposed two-way deep learning model and optimal feature selection framework consist of a few important steps, as illustrated in Fig. 1. In the first step, a two-way fine-tuned deep learning model is trained on original and enhanced images. The main purpose is to obtain the most important features for accurate classification results. In the second step, the fine-tuned networks are trained through TL, and features are extracted from the average pooling layers. The extracted average pooling layer features are fused using a multiset CCA-based approach in the third step. In the next step, a hybrid HWOA-CSA feature selection algorithm is proposed and applied to the fused feature vector, which is finally classified using an ELM classifier. The description of each step is given below.

Figure 1: Architecture of the proposed framework for multimodal brain tumor classification

    2.1 Dataset Preparation and Contrast Enhancement

In this work, two datasets, BraTs2018 and BraTs2019, are employed for the experimental process. Both datasets consist of four types of tumor sequences: T1, T1CE, T2, and FLAIR.

BraTs2018 [31]: This dataset consists of 385 scans for training and 66 scans for validation. All the MRI scans of this dataset have a volume of 240×240×155. Each volume is segmented manually by expert neuroradiologists and includes T1, T1CE, T2, and FLAIR sequences, as illustrated in Fig. 2.

Figure 2: Sample brain tumor images

BraTs2019 [32]: This dataset consists of 335 scans for training, and the validation scans are the same as for BraTs2018. Four sequences, T1, T1CE, T2, and FLAIR, are contained in each volume. Similar to the BraTs2018 dataset, the ground truths are generated manually with the help of expert neuroradiologists. All the MRI scans of this dataset have a volume of 240×240×155. A few sample images are shown in Fig. 2.

In this work, we utilized 48,000 MRI samples from the BraTs2018 dataset (12,000 in each class), whereas for BraTs2019, the extracted samples total 56,000 (14,000 in each class). Initially, the extracted MRI samples had a dimension of 240×240; therefore, we resized them all to 512×512. In addition, a contrast enhancement technique named haze-CNN has been proposed and applied to the original MRI images of the selected datasets.

Consider Δ a database that consists of two datasets, BraTs2018 and BraTs2019. Each image in the dataset is represented by I(x, y) with a dimension of 512×512. The first operation on the image is applied to clear the tumor region as follows:

I1(u) = I(u) T(u) + Lg (1 − T(u))

where I1(u) is the observed intensity value, I(u) is the original intensity value, T(u) is the transmission map, Lg is the atmospheric light, and u ∈ (x, y). The following formula is applied to get brighter pixels based on the observed intensity value:

where α is the global atmospheric light and h(u) is the medium transmission. Later on, to apply the updated intensity values to the original image, the following transformation is performed:

Here, the value of const is 0.7, which is selected based on the higher intensity values in the original image. After that, we employed a deep neural network-based approach [33] for the final enhancement. The VGG16 pre-trained model is trained on the authentic (original) grayscale MRI images and applied on each channel of the image I3(x, y). It is described mathematically as follows:

Here, i represents the number of extracted channels, and C defines the concatenation operation. The output of the above equation, in the form of enhanced images, is illustrated in Fig. 3. This figure clarifies that the tumor region in the enhanced images is more precise than in the original images.
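The standard atmospheric scattering (haze) model underlying the variables I1(u), T(u), and Lg defined above can be sketched in NumPy, along with its inversion for recovering scene radiance. The constant transmission value and atmospheric light below are illustrative assumptions, not values fitted by the paper.

```python
import numpy as np

def haze_observe(I, T, Lg):
    # observed intensity under the haze model: I1(u) = I(u)*T(u) + Lg*(1 - T(u))
    return I * T + Lg * (1.0 - T)

def haze_recover(I1, T, Lg, t_min=0.1):
    # invert the model to recover radiance; clamp T so that near-zero
    # transmission regions do not blow up the division
    return (I1 - Lg) / np.maximum(T, t_min) + Lg

rng = np.random.default_rng(1)
I = rng.uniform(0, 1, (8, 8))     # toy grayscale MRI slice
T = np.full_like(I, 0.7)          # constant transmission (illustrative only)
Lg = 0.9                          # assumed atmospheric-light value
I1 = haze_observe(I, T, Lg)
I_rec = haze_recover(I1, T, Lg)
print(np.allclose(I_rec, I))      # recovery exactly inverts the forward model
```

In practice, T(u) is estimated per pixel (e.g., via a dark-channel-style prior) rather than set constant.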

Figure 3: Enhanced MRI samples using the haze-CNN approach

    2.2 Fine-Tuned NasNet-Mobile

The NasNet CNN architecture is a scalable network with building blocks optimized through reinforcement learning. Each building block consists of several layers (convolutional & pooling) and is repeated according to the network capacity. This network consists of 12 building blocks and a total of 913 layers. The total number of parameters is 5.3 million, less than VGG and AlexNet [34]. In the fine-tuning process, the last classification layer is removed and replaced by a new FC layer. The main reason for replacing this layer is that the network was previously trained on the ImageNet dataset, where the number of labels totals 1000. However, the selected datasets BraTs2018 and BraTs2019 include four classes; therefore, it is essential to modify this deep network.
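At the shape level, this head replacement can be sketched in NumPy: the 1056-dimensional GAP feature size and the four BraTs classes follow the text, while the random weight matrices are stand-ins for the real pre-trained and newly initialized layers.

```python
import numpy as np

rng = np.random.default_rng(0)
feat_dim, imagenet_classes, brats_classes = 1056, 1000, 4

# stand-in for the pre-trained ImageNet classification head (discarded)
W_old = rng.standard_normal((feat_dim, imagenet_classes)) * 0.01

# fine-tuning: attach a freshly initialised 4-way FC layer in its place
W_new = rng.standard_normal((feat_dim, brats_classes)) * 0.01
b_new = np.zeros(brats_classes)

gap_features = rng.standard_normal(feat_dim)   # one image's GAP features
logits = gap_features @ W_new + b_new          # 4 class logits instead of 1000
print(W_old.shape, "->", W_new.shape, logits.shape)
```

Only the new head (and optionally the later blocks) is then trained on the target data.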

After fine-tuning, deep transfer learning is employed to train this model; TL's main purpose is to reuse a pre-trained model for another task with less time and memory. Moreover, TL is useful when the training (target) data is smaller than the source data.

In the training of the deep learning model, several hyperparameters are employed: the learning rate is 0.0001, the mini-batch size is 64, the maximum number of epochs is 100, the optimization method is Adam, and the loss function is cross-entropy. The activation function for feature extraction is sigmoid. Visually, this process is shown in Fig. 4. This figure illustrates that the ImageNet dataset is used as source data for the pre-trained network. After fine-tuning, the model is trained on the BraTs2018 and BraTs2019 datasets. As shown in Fig. 1, the fine-tuned NasNet-Mobile is trained separately on original and enhanced samples; therefore, two deep learning models are obtained. From both trained models, features are extracted from the GAP layers, yielding feature vectors of dimensions N×1056 and N×1056, respectively. After that, the extracted features are fused using an MCCA-based fusion approach.
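The GAP-layer extraction that produces the N×1056 matrices can be sketched as follows; the (N, 7, 7, 1056) activation shape is a hypothetical stand-in for the last-block output of NasNet-Mobile.

```python
import numpy as np

def extract_gap_features(batch_fmaps):
    # batch_fmaps: (N, H, W, C) activations of the last convolutional block;
    # global average pooling collapses H and W -> an (N, C) feature matrix
    return batch_fmaps.mean(axis=(1, 2))

rng = np.random.default_rng(0)
# hypothetical activations for N=16 MRI slices with C=1056 channels
acts = rng.standard_normal((16, 7, 7, 1056))
F = extract_gap_features(acts)
print(F.shape)
```

Running this once per trained model (original-image and enhanced-image) yields the two feature matrices that are fused next.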

Figure 4: Feature extraction using transfer learning

    2.3 MCCA-Based Deep Features Fusion

Given J feature sets, J ∈ (j1, j2), the canonical variates Gj can be computed through a deflationary approach as follows:

Based on this formula, both deep feature vectors are fused to obtain a new fused feature vector of dimension N×1450. Later, this fused vector is refined using a hybrid feature selection algorithm.
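The paper's exact MCCA formulation is not reproduced here (its fused dimension of N×1450 implies a specific combination rule), but a minimal two-set CCA fusion can be sketched in NumPy: whiten each deep feature set, take the SVD of their cross-correlation to get canonical directions, and concatenate the canonical variates.

```python
import numpy as np

def cca_fuse(X, Y, k):
    # centre both deep feature sets
    Xc, Yc = X - X.mean(0), Y - Y.mean(0)

    def whiten(Z):
        # regularised ZCA whitening: covariance becomes (approximately) identity
        C = Z.T @ Z / (len(Z) - 1) + 1e-6 * np.eye(Z.shape[1])
        vals, vecs = np.linalg.eigh(C)
        return Z @ vecs @ np.diag(vals ** -0.5) @ vecs.T

    Xw, Yw = whiten(Xc), whiten(Yc)
    # SVD of the cross-correlation gives the canonical direction pairs
    U, s, Vt = np.linalg.svd(Xw.T @ Yw / (len(X) - 1))
    Gx, Gy = Xw @ U[:, :k], Yw @ Vt[:k].T   # canonical variates
    return np.hstack([Gx, Gy])              # fused feature matrix (N, 2k)

rng = np.random.default_rng(0)
X, Y = rng.standard_normal((50, 8)), rng.standard_normal((50, 6))
fused = cca_fuse(X, Y, k=4)
print(fused.shape)
```

The choice of k controls the fused dimensionality, which is how a fusion can land between the two input sizes rather than at their sum.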

    2.4 Hybrid Feature Selection Algorithm

This work utilizes a hybrid optimization algorithm for best feature selection. In the selection process, initially, a fused feature vector of dimension N×1450 is passed to the hybrid optimization algorithm (HOA). The features are first processed in the HOA through a modified whale optimization algorithm, and a global best vector is obtained as output. The entropy of the global best vector is computed and passed through a threshold function (Eqs. (14)–(15)). The resultant features are again evaluated through a fitness function, and the resultant vector is passed to the crow search algorithm for further dimensionality reduction. In the end, a best-fitted feature vector of dimension N×726 is finally passed to an extreme learning machine (ELM) for final classification. The mathematical working of the HOA selection algorithm is given below.

The hybrid method is based on an enhanced WOA and the crow search algorithm (CSA). Mirjalili introduced the swarm intelligence algorithm known as WOA [35]. This algorithm is based on the predatory approach of humpback whales. Humpback whales catch a school of small fish or krill close to the surface. This procedure generates specific bubbles along a ring path, and the operator is divided into three components. In the first phase of the algorithm, the prey is surrounded and attacked via a spiral bubble net (exploitation phase); in the second phase, whales randomly search for food (exploration phase). Mathematically, the WOA process is described as follows:

Encircling prey: Initially, the optimal position is not known; therefore, it is supposed that the current best solution is the objective prey or near the optimal solution. After defining the best search agent, the other search agents try to update their locations with respect to the best search agent.

Bubble-net attacking: In the local search process, each humpback whale relocates to get near the prey within a shrinking ring while following a spiral-shaped path. A probability of 0.5 is set to choose between the shrinking-ring and spiral mechanisms when renewing the location of the humpback whale. The mathematical formula is defined as follows:

Searching for prey: The location of the current humpback whale is revised based on a random walk approach, defined as follows:

P(it + 1) = Prand − A · |C · Prand − P(it)|

where Prand shows the placement of a random humpback whale picked from the given population. After selecting the global best features through the whale optimization algorithm, entropy is computed to address the problem of uncertainty, and it is calculated as follows:

E = − Σk p(fk) log2 p(fk)
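The three WOA operators described above (encircling the best solution, the spiral bubble-net move, and the random search) can be sketched as a minimal NumPy loop on a toy sphere function. This is a generic Mirjalili-style WOA, not the paper's enhanced variant, and the population size, iteration count, and bounds are arbitrary choices.

```python
import numpy as np

def woa(f, dim, n_whales=20, iters=300, lb=-5.0, ub=5.0, seed=0):
    # minimal Whale Optimization Algorithm sketch
    rng = np.random.default_rng(seed)
    P = rng.uniform(lb, ub, (n_whales, dim))
    best = min(P, key=f).copy()
    for t in range(iters):
        a = 2.0 - 2.0 * t / iters                  # linearly decreasing control
        for i in range(n_whales):
            r, l = rng.random(dim), rng.uniform(-1, 1)
            A, C = 2 * a * r - a, 2 * rng.random(dim)
            if rng.random() < 0.5:                 # probability-0.5 switch
                if np.all(np.abs(A) < 1):          # exploit: encircle best prey
                    P[i] = best - A * np.abs(C * best - P[i])
                else:                              # explore: follow random whale
                    Pr = P[rng.integers(n_whales)]
                    P[i] = Pr - A * np.abs(C * Pr - P[i])
            else:                                  # spiral bubble-net move
                D = np.abs(best - P[i])
                P[i] = D * np.exp(l) * np.cos(2 * np.pi * l) + best
            P[i] = np.clip(P[i], lb, ub)
            if f(P[i]) < f(best):                  # keep the global best
                best = P[i].copy()
    return best

sphere = lambda x: float((x ** 2).sum())
best = woa(sphere, dim=5)
print(sphere(best))
```

In feature selection, f would score a candidate feature subset (e.g., classifier error) rather than a continuous test function.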

Based on the entropy value, a threshold function is defined to enhance the selection performance of the whale optimization algorithm, and it is defined as follows:
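The paper's exact entropy and threshold functions (Eqs. (14)–(15)) are not reproduced above, but a generic entropy-based filter, keeping only features whose histogram-estimated Shannon entropy exceeds a threshold, can be sketched as follows; the bin count and threshold value are illustrative assumptions.

```python
import numpy as np

def feature_entropy(col, bins=16):
    # Shannon entropy of one feature's value distribution (histogram estimate)
    hist, _ = np.histogram(col, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def entropy_filter(F, thresh):
    # keep the feature columns whose entropy exceeds the threshold
    H = np.array([feature_entropy(F[:, j]) for j in range(F.shape[1])])
    return F[:, H >= thresh], H

rng = np.random.default_rng(0)
informative = rng.standard_normal((200, 5))   # spread-out, high-entropy features
constant = np.zeros((200, 3))                 # zero-entropy (useless) features
F = np.hstack([informative, constant])
F_sel, H = entropy_filter(F, thresh=1.0)
print(F_sel.shape)
```

Constant columns carry no information and score zero entropy, so the filter discards them.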

The best-selected features are passed to the crow search algorithm to reduce redundant features. This metaheuristic's purpose is for a given crow i to be able to follow another crow j to identify its concealed food location. Therefore, it is important that crow i's position is constantly updated during this process. Furthermore, when food is stolen, crow i must move it to a new location. Accordingly, based on a swarm of crows (N), the CSA algorithm begins by initializing two matrices.

Location matrix: All the possible solutions for the problem in this study are represented by the location of crow e in the search space. The crow's position at iteration it is denoted as a vector.

where N is the population size, d is the problem's dimension, and Itmax denotes the maximum number of iterations.

Memory matrix: This matrix represents the memory of the crows, storing the locations where their food is kept. Crows have an exact recollection of where their food is hidden, which is also believed to be the best position found so far by that particular crow. Therefore, at each iteration, there are two scenarios for the crows' movement in the search space:

In the first scenario, crow g is completely unaware that crow e follows him. Consequently, the next position of crow e in the direction of crow g's hidden food site is denoted as follows:

The second scenario occurs when crow g realizes it is being followed by crow e. Consequently, to fool crow e, crow g will move to an entirely random location in the search region, so crow e's new location is determined randomly. These two scenarios of the crows' position update can be modeled as follows:

where re and rg are random numbers with uniform distribution in [0, 1]. The awareness probability of crow g at iteration it is denoted by AP. It is AP's job to maintain a balance between exploration and exploitation: large AP values encourage diversification, while small AP values increase intensification. The new crow position is assessed using an objective function f(·) at each iteration, and then the crows update their memorized positions, denoted as follows:
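The two CSA scenarios and the memory update can be sketched as a minimal NumPy loop on a toy sphere function. The flight length fl and awareness probability AP below are conventional defaults assumed for illustration, not values from the paper.

```python
import numpy as np

def crow_search(f, dim, n_crows=20, iters=200, fl=2.0, ap=0.1,
                lb=-5.0, ub=5.0, seed=0):
    # minimal Crow Search Algorithm sketch
    rng = np.random.default_rng(seed)
    x = rng.uniform(lb, ub, (n_crows, dim))   # location matrix
    m = x.copy()                              # memory matrix (best-known spots)
    fm = np.array([f(v) for v in m])
    for _ in range(iters):
        for e in range(n_crows):
            g = rng.integers(n_crows)
            if rng.random() >= ap:            # crow g unaware: follow its memory
                x[e] = x[e] + rng.random() * fl * (m[g] - x[e])
            else:                             # crow g aware: move randomly
                x[e] = rng.uniform(lb, ub, dim)
            x[e] = np.clip(x[e], lb, ub)
            fe = f(x[e])
            if fe < fm[e]:                    # update memory if improved
                m[e], fm[e] = x[e].copy(), fe
    return m[fm.argmin()]

sphere = lambda x: float((x ** 2).sum())
best = crow_search(sphere, dim=5)
print(sphere(best))
```

In the hybrid scheme, the CSA refines the WOA-selected feature subset, trading a little extra search time for a smaller final vector.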

    3 Results and Analysis

Two publicly available datasets, BraTS2018 [31] and BraTs2019 [32], are employed for the experimental process. In the validation process, 50 percent of the dataset images are utilized for training, and the remaining 50 percent are utilized to test the proposed deep learning framework. In the deep learning model training, several hyperparameters are employed: the learning rate is 0.0001, the mini-batch size is 64, the maximum number of epochs is 100, the optimization method is Adam, the loss function is cross-entropy, and the activation function for feature extraction is sigmoid. Furthermore, five different classifiers, namely multiclass support vector machine (MCSVM), weighted K-nearest neighbor (WKNN), Gaussian Naïve Bayes (GNB), ensemble bagged tree (EBT), and extreme learning machine (ELM), have been implemented to compare classification performance. Finally, the testing results are computed through 10-fold cross-validation. The entire framework is implemented in MATLAB R2021b on a desktop PC with 16 GB of RAM, a 256 GB SSD, and a 16 GB graphics card.
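The 10-fold cross-validation protocol can be sketched in NumPy with a stand-in nearest-centroid classifier (not the ELM used by the paper): shuffle the sample indices, split them into ten folds, train on nine, test on one, and average the test accuracies.

```python
import numpy as np

def kfold_indices(n, k=10, seed=0):
    # shuffle sample indices and split them into k roughly equal folds
    idx = np.random.default_rng(seed).permutation(n)
    return np.array_split(idx, k)

def nearest_centroid_cv(X, y, k=10):
    # average test accuracy over k folds with a nearest-centroid classifier
    folds, accs = kfold_indices(len(X), k), []
    for i in range(k):
        test = folds[i]
        train = np.hstack([folds[j] for j in range(k) if j != i])
        cents = np.array([X[train][y[train] == c].mean(0)
                          for c in np.unique(y)])
        # assign each test sample to the class of its nearest centroid
        pred = np.argmin(((X[test][:, None] - cents) ** 2).sum(-1), axis=1)
        accs.append((pred == y[test]).mean())
    return float(np.mean(accs))

rng = np.random.default_rng(0)
# toy 4-class problem with well-separated clusters (stand-in for tumor classes)
y = np.repeat(np.arange(4), 50)
X = rng.standard_normal((200, 8)) + y[:, None] * 5.0
acc = nearest_centroid_cv(X, y)
print(acc)
```

Averaging over folds gives a less optimistic accuracy estimate than a single train/test split.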

    3.1 Results

Experimental process: The proposed deep learning framework is validated through several experiments: i) deep learning feature extraction from the average pooling layer of a fine-tuned NasNet-Mobile CNN trained on original dataset images, followed by classification; ii) deep learning feature extraction from the average pooling layer of a fine-tuned NasNet-Mobile CNN trained on enhanced MRI images, followed by classification; iii) fusion of the deep features of both fine-tuned CNN models using the MCCA approach; and iv) application of the proposed hybrid feature optimization algorithm to the fused feature vector to obtain the best features for final classification.

Results of BraTs2018 dataset: Table 2 presents the classification results for the middle steps of the proposed deep learning framework. This table reports the results for NasNet-Org, NasNet-Enh, and fusion. NasNet-Org represents deep learning features extracted through NasNet-Mobile from the original sample images of the selected dataset, while NasNet-Enh represents feature extraction using the enhanced MRI images. The ELM classifier gives better results than the other classifiers, with 88.9%, 90.6%, and 91.8% accuracy, respectively. According to the results given in this table, classification performance is improved for the enhanced images and is further boosted after the fusion of both deep feature vectors. The computational time is also mentioned in this table, and it is observed that the time of the NasNet-Enh and fusion steps is higher than that of NasNet-Org. To resolve the challenge of high computational time and improve the classification accuracy, the hybrid feature optimization algorithm is applied to the fused feature vector, with the results presented in Table 3. The ELM classifier achieved the best accuracy of 94.8 percent, with a sensitivity rate of 94.62 percent. The computational time of ELM is 13.660 (sec), which was previously 66.3264 (sec) for NasNet-Org.

Table 2: Step-wise classification results of the proposed framework on the BraTs2018 dataset

Fig. 5 illustrates the confusion matrix of the ELM classifier after the best feature selection. In this figure, it is noted that the prediction rate for each tumor class is above 90 percent. A time-based comparison among the middle steps is also conducted, as plotted in Fig. 6. This figure shows that the fusion process consumed more time than the rest.

Figure 5: Confusion matrix of the ELM classifier for the BraTs2018 dataset

Results of BraTs2019 dataset: Table 4 presents the classification results for the middle steps of the proposed deep learning framework using the BraTs2019 dataset. The results given in this table are computed for NasNet-Org, NasNet-Enh, and deep feature fusion. The ELM classifier gives better results than the other classifiers for the first three middle steps, achieving accuracies of 88.0%, 89.3%, and 90.6%. According to the results given in this table, classification performance is improved for the enhanced images and is further boosted after feature fusion. The computational time of each step for all classifiers is also mentioned in this table, and it is observed that the time of the NasNet-Enh and fusion steps is higher than that of NasNet-Org. This challenge is resolved through the hybrid feature optimization algorithm applied to the fused feature vector, with the results presented in Table 5. The ELM classifier achieved the best accuracy of 95.7 percent. These values show that selecting the best features not only reduced the computational time but also increased classification accuracy. The computational time of ELM is 23.100 (sec) after the feature selection process, whereas the previous minimum time was 77.3484 (sec) for NasNet-Org features.

Figure 6: Time-based comparison of each middle step on the selected classifiers using the BraTs2018 dataset

Fig. 7 illustrates the confusion matrix of the ELM classifier after the best feature selection. A time-based comparison among the middle steps is also conducted, as plotted in Fig. 8. The times plotted in this figure show the importance of the feature selection step. This figure also shows that the deep feature fusion step consumes more time than the other steps, such as NasNet-Org and NasNet-Enh.

Figure 7: Confusion matrix of the ELM classifier for the BraTs2019 dataset

    3.2 Discussion and Comparison

In Figs. 9 and 10, a comparison of the middle stages is also performed and plotted. As shown in these figures, the precision is improved following the fusion procedure. The fusion accuracy is further tuned, while the optimization approach enhances each classifier's accuracy by over 4%. Furthermore, the computational times shown in Tables 2–5 indicate that the fusion process consumes more time, which is then saved by the proposed hybrid feature selection technique. Finally, the accuracy of the proposed framework is compared to several previous approaches, as given in Table 6. Khan et al. [19] obtained an accuracy of 92.5 percent using the BraTs2018 dataset. Rehman et al. [26] used the BraTs2018 dataset and achieved an accuracy of 92.67 percent. Sharif et al. [25] presented a deep learning model and achieved an accuracy of 92.5% using the BraTs2018 dataset. In this article, the proposed framework obtained an accuracy of 94.8 percent on the BraTs2018 dataset and 95.7 percent on the BraTs2019 dataset. Overall, the proposed accuracy is improved compared with the existing techniques [19,25,26].

Figure 9: Comparison of each middle step of the proposed framework based on accuracy for the BraTs2018 dataset

Figure 10: Comparison of each middle step of the proposed framework based on accuracy for the BraTs2019 dataset

Table 6: Comparison of the proposed brain tumor classification framework with SOTA techniques

    4 Conclusion

This research proposes a multimodal brain tumor classification framework based on two-way deep learning and a hybrid feature optimization algorithm. The framework's first stage was to fine-tune a two-way deep learning model, which was then trained on original and enhanced images using TL. A multiset CCA-based scheme is employed to fuse the features extracted from the average pooling layers. Before being classified using an ELM classifier, the fused feature vector is refined using the hybrid HWOA-CSA feature optimization approach. The experiments used two publicly available datasets, BraTs2018 and BraTs2019, with 94.8 and 95.7 percent accuracy rates, respectively. In terms of accuracy, the proposed strategy outperforms recent SOTA techniques. According to the results, combining two-way CNN features improves classification accuracy; although it increases the time consumed, the proposed hybrid feature optimization approach overcame the long processing time. Tumor segmentation and CNN model training will be done in the future. In addition, the BraTs2020 dataset will be used in the testing process.

    Acknowledgement: The authors extend their appreciation to the Deanship of Scientific Research at King Khalid University for funding this work through a large group research project.

    Funding Statement: This work was supported by the “Human Resources Program in Energy Technology” of the Korea Institute of Energy Technology Evaluation and Planning (KETEP), granted financial resources from the Ministry of Trade, Industry & Energy, Republic of Korea (No. 20204010600090). The authors extend their appreciation to the Deanship of Scientific Research at King Khalid University for funding this work through a large group research project under Grant Number RGP.2/139/44.

    Author Contributions: Conceptualization, Methodology, Software, Original Writing Draft: Muhammad Attique Khan, Reham R. Mostafa, Yu-Dong Zhang, and Jamel Baili; Validation, Formal Analysis, Visualization, Project Administration: Majed Alhaisoni, Usman Tariq, Junaid Ali Khan; Data Curation, Supervision, Funding, Resources: Ye Jin Kim and Jaehyuk Cha.

    Availability of Data and Materials: The datasets used in this work are publicly available for research purposes.

    Conflicts of Interest:The authors declare that they have no conflicts of interest to report regarding the present study.
