
    3D Kronecker Convolutional Feature Pyramid for Brain Tumor Semantic Segmentation in MR Imaging

Computers, Materials & Continua, 2023, Issue 9

Kainat Nazir, Tahir Mustafa Madni, Uzair Iqbal Janjua, Umer Javed, Muhammad Attique Khan, Usman Tariq and Jae-Hyuk Cha

1 Medical Imaging and Diagnostics Lab, NCAI, Department of Computer Science, COMSATS University Islamabad, Islamabad, 44000, Pakistan

2 Department of Electrical and Computer Engineering, COMSATS University Islamabad, Wah Campus, Wah, Pakistan

3 Department of Computer Science, HITEC University, Taxila, Pakistan

4 Department of Management Information Systems, College of Business Administration, Prince Sattam Bin Abdulaziz University, Al-Kharj, 16273, Saudi Arabia

5 Department of Computer Science, Hanyang University, Seoul, 04763, Korea

ABSTRACT A brain tumor significantly impacts the quality of life and changes everything for a patient and their loved ones. Diagnosing a brain tumor usually begins with magnetic resonance imaging (MRI). Manual brain tumor diagnosis from MRI images always requires an expert radiologist. However, this process is time-consuming and costly. Therefore, a computerized technique is required for brain tumor detection in MRI images. Using MRI, a novel mechanism of the three-dimensional (3D) Kronecker convolution feature pyramid (KCFP) is used to segment brain tumors, resolving the pixel loss and weak processing of multi-scale lesions. A single dilation rate was replaced with the 3D Kronecker convolution, while local feature learning was performed using the 3D Feature Selection (3DFSC) network. A 3D KCFP was added at the end of 3DFSC to resolve the weak processing of multi-scale lesions, yielding efficient segmentation of brain tumors of different sizes. A 3D connected component analysis with a global threshold was used as a post-processing technique. The standard Multimodal Brain Tumor Segmentation 2020 dataset was used for model validation. Our 3D KCFP model performed exceptionally well compared to other benchmark schemes, with a dice similarity coefficient of 0.90, 0.80, and 0.84 for the whole tumor, enhancing tumor, and tumor core, respectively. Overall, the proposed model was efficient in brain tumor segmentation, which may help medical practitioners make an appropriate diagnosis for future treatment planning.

KEYWORDS Brain tumor segmentation; connected component analysis; deep learning; Kronecker convolution; magnetic resonance imaging

    1 Introduction

A tumor is an uncontrollable growth of cancer cells in the human body [1]. A tumor that grows inside the brain and spreads to nearby locations constitutes a primary tumor. By contrast, a secondary brain tumor originates elsewhere in the body and then reaches the brain via the process known as brain metastasis [2]. Meanwhile, a glioma is a brain tumor originating from surrounding infiltrating nerve tissues and glial cells [3]. Like other cancers, brain tumors comprise two types: benign and malignant. They are divisible into four grades, i.e., I, II, III, and IV (World Health Organization, 2000). Grades I and II brain tumors are low-grade benign gliomas, while grades III and IV are high-grade gliomas. Benign brain tumors include meningioma and glioma, while malignant ones include astrocytoma and glioblastoma. Grade IV tumors are the most hazardous; histopathology is the primary method used to distinguish grade IV tumors from other grades [4].

Based on the attributes of the intra-tumoral regions, brain tumors can be separated into four groups, i.e., edema, non-enhancing nucleus, necrotic nucleus, and active nucleus [5]. These groups are combined into three classes that form the segmentation map. The first class is the whole tumor, which comprises all four tumor groups. The second class is the tumor core, which encompasses all tumor groups except the edema, and the third class is the enhancing tumor, which consists of just the enhancing core [6].

With the technological advancement of medical imaging, imaging modalities are crucial for treating brain tumors. Modalities such as positron emission tomography, ultrasonography, and Magnetic Resonance Imaging (MRI) help to understand many aspects of a brain tumor [4]. In MRI, soft tissues are contrasted to provide vital information about different parameters of brain tumors with nearly no harmful effects of high radiation on humans [7]. MRI scans for diagnosing tumor regions are produced in three anatomical views: coronal, sagittal, and axial [8], segmenting gliomas and other tumor structures for more efficient treatment planning. However, the intensity of MRI images is not homogeneous [9], and scanners are costly [10]. Also, manual segmentation is time-consuming. Consequently, physicians often use rough measures and are prone to errors [11]. Automatic segmentation also encounters challenges with the different sizes, shapes, and dimensions of abnormal brain tumors [12].

Meanwhile, the Convolutional Neural Network (CNN) is widely used to automatically segment brain tumors, extracting more relevant and accurate features by enlarging the receptive field. However, CNN is computationally intensive, requiring a large kernel size [13]. Different CNN architectures were proposed to resolve the complexity of high computational costs [14]. Other techniques were also used to reduce the filter parameters and improve network performance. For example, Atrous convolution is used to enlarge the receptive field while maintaining a similar feature resolution. It captures more contextual information at the same kernel size. Specifically, Atrous filters use zeros to fill the vacant positions, preserving the feature size from one layer to another, capturing global information, and keeping the number of parameters constant [15]. However, as the dilation rate increases, Atrous convolution loses between-pixel information, missing some vital data and yielding inaccurate segmentation [16]. Besides, the Deep Convolutional Neural Network (DCNN) uses repeated striding convolutional kernels and multiple pooling layers to extract more tissue features [17]. However, it reduces the feature resolution. In general, most existing DCNN techniques have limited capacity for multi-scale processing, hindering the model from improving the performance of brain tumor segmentation. Some deep learning techniques used 3D convolutional networks [18], parallelized long short-term memory (LSTM) [19], and fully convolutional networks (FCN) [20]. Machine learning-based probabilistic models were merged in many studies to produce a deep-learning model [21], performing automated brain tumor segmentation. Other segmentation methods used the capabilities of 3D-CNN [22] and 2D-CNN [23]. However, 3D-CNN used all 3D information of the MRI data, yielding computational complexity due to the increased number of parameters. Therefore, 2D-CNN was used for efficient and cost-effective brain tumor segmentation.

Another study [24] proposed a Kronecker approach to resolve the Atrous convolution problem while keeping the number of parameters consistent. Kronecker convolution yields a lightweight model that can be trained with many batches without extensive hardware, while increasing the receptive field and recovering the features that Atrous convolution loses. Also, the discrimination of cancerous and healthy cells becomes possible with contexts around the lesion. The feature maps generated by the three-dimensional (3D) Feature Selection (FSC) block are fed into the multi-input branch pyramid to fuse contexts with lesion features. This ultimately improves model identification for accurate and efficient segmentation without information loss. The key contributions of this study are as follows:

• We used 3D Kronecker convolution to resolve the loss of pixels in Atrous convolution caused by increased dilation rates.

• The 3D Kronecker Convolution Feature Pyramid (KCFP) model captured multi-scale features of brain tumors.

• Pyramid features are used through the skip connections from the 3D FSC network to overcome the vanishing gradient problem while preserving local features.

• Connected component analysis is combined with a global threshold to reduce false positives for effective structural segmentation.

The rest of this paper is organized as follows: Section 2 presents the literature review; Section 3 gives a detailed overview of our proposed KCFP model; Section 4 presents the results and discussion; and Section 5 contains the conclusion.

    2 Literature Review

In 2012, the Medical Image Computing and Computer-Assisted Intervention (MICCAI) challenge was introduced. The Medical Image Computing and Computer-Assisted Intervention society provided the BraTs dataset to facilitate brain tumor segmentation [25]. Two families of automated techniques were introduced to segment brain tumors in the past decade. The first was the machine-learning techniques that used various classifiers to learn different and diverse features, solving the issue of multiple classes [26]. In addition, these techniques yielded hierarchical segmentations with effective fine scales [27].

Another study [28] used a 2.5D CNN architecture but began with 2D kernels that overlooked inter-slice interactions, and hence, crucial contextual information was not captured [29]. Meanwhile, many FCNs predicted segmentation masks effectively [30], but they could not model the context of the label domain [31]. Consequently, a new variant of FCN, that is, U-Net, was developed [32]. Fully connected layers were absent in U-Net. Thus, it would miss the context when identifying the boundary images. The missing contexts were retrieved by merging the images in a mirrored manner. Compared to FCN, U-Net was better because it could capture the skip connections among different pathways.

These skip connections allowed the original image data to repair the details. Also, a modified U-Net was proposed [33], implementing a dice-loss function to resolve overfitting in tumor segmentation. Besides, zero-padding was used to keep the output dimension constant in the down- and up-sampling paths [34]. Meanwhile, a U-Net with multiple input channels was used for identifying lesions [35]. In the same way, many other approaches were used to improve the segmentation process [36]. Another study proposed a fully connected CNN [37], using low- and high-level feature maps in the final classification.

Similarly, another study [38] developed a V-Net, i.e., a modified 3D U-Net with a dice-loss function, to capture crucial information from the 3D data. Also, a 3D U-Net was developed, using the ground truth of the whole tumor to detect the tumor core. Besides, two U-Nets were used in post-processing to improve the prediction, yielding a better classification of brain tumors [39].

The primary drawback of FCN was that the up-sampling results were unclear, thus decreasing the analytical performance on the medical images. The cascaded architecture was then used to overcome this problem, converting multi-class segmentation into binary segmentation [40]. In this respect, two cascaded V-Nets were used to ensure that the training methods concentrated on the essential voxels [41]. A multi-class cascaded classifier was also reported [42]. Besides, feature fusion was accomplished using various feature-extraction methods. In general, cascaded networks accounted for the spatial linkages between sub-regions. However, training numerous sub-networks was more complex than training a single end-to-end network. Attention mechanisms were also used to improve brain tumor segmentation [43].

Another study [44] introduced a novel attention gate that targeted structures of various sizes and shapes. Models trained with attention gates suppressed superfluous elements of an input image while emphasizing key features. Also, a 2D U-Net-based confined parameter network was developed [45]. It contained an attention learning algorithm to prevent the model from becoming redundant by adaptively weighing each input channel. Besides, a multi-scale network was used to provide enough information to interpret segmentation features [46].

These architectures (machine learning and deep learning) were computationally intensive, requiring a costly hardware setup. Consequently, the segmentation process became highly expensive. Numerous studies were conducted to mitigate the complexity of 3D-CNN.

In particular, Atrous convolution-based methods were widely used to address this issue. In this respect, a multi-fiber network with Atrous convolution effectively integrated group convolution [47]. It reduced the computational cost of the 3D convolution by merging features at various scales. Besides, weighted Atrous convolutions were used to collect multi-scale information, reducing inference time and model complexity. Likewise, a multi-scale Atrous convolution was used [48] to sample the high-level refined characteristics of objects. However, Atrous convolution suffers from the loss of information due to missing pixels. Hence, Kronecker convolution was used to mitigate information loss by increasing the receptive field while keeping the same parameters [21]. The related work on medical image segmentation is presented in Table 1.

    Table 1:Related work on medical image segmentation

The next section elaborates on brain tumor segmentation using the 3D Kronecker convolution feature pyramid, with details of dataset preprocessing, the proposed model, post-processing, and performance evaluation metrics.

3 Brain Tumor Segmentation Using 3D Kronecker Convolution Feature Pyramid

This section presents the dataset and pre-processing, the proposed model, performance evaluation, post-processing, and training and model validation.

    3.1 Dataset and Preprocessing

This study used the standard Multimodal Brain Tumor Segmentation (BraTs) data for brain tumor segmentation. In the BraTs dataset, the primary focus is on the segmentation of intrinsically heterogeneous tumors (gliomas) by utilizing MRI scans from distinct sources. The BraTs 2020 training set consisted of 369 volumes, and 125 volumes were provided for validation. These BraTs datasets comprised MRI scans with four modalities, namely, T1, T2, Flair, and T1-Contrast Enhanced [50]. Each MRI volume has dimensions of 240×240×155. The ground truth of the training dataset for each patient was also given. The ground truth contained four classes of segmentation: necrotic, edema, non-tumor, and non-enhancing core. These MRI scans were re-sampled to an isotropic voxel resolution of 1 mm³ and skull-stripped. Segmentations were validated on the leaderboard to check the effectiveness of our proposed model. The intensity of the MRI depended on the image acquisition. In this study, variations in intensity and contrast in the MRI volumes were reduced via normalization, and the Z-score [51] was used to normalize the MRI modalities via Eq. (1).
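Assuming the standard z-score form, consistent with the symbol definitions given below, Eq. (1) can be written as:

Z = (X − μ) / α   (1)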

Here, Z describes the Z-score, μ represents the mean pixel intensity, α denotes the standard deviation of the pixel intensity, and X is the pixel value. Normalization warped and aligned the image data in the anatomic template for model convergence by attaining the optimal global solution.

Segmentation with the proposed model comprises four steps. Firstly, the NiBabel library was used to load the BraTs multi-modality data, followed by Z-score normalization, since deep learning models are sensitive to data diversity. We trained our model simultaneously with all three brain views, i.e., axial, coronal, and sagittal. The data were flipped to the coronal, sagittal, and axial planes randomly with a probability of 0.5 to benefit from the multi-view nature of the brain while augmenting the data to generalize our model. Consequently, when visualizing a slice in a single view (e.g., axial), neighboring pixels in the region of interest could be compared with the two other views (sagittal and coronal) [45]. Also, Gaussian blurring was applied to the data to remove noise from the MRI images.
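As a minimal sketch of this first step, the following assumes NiBabel, NumPy, and SciPy; the flip logic, blur strength, and helper names are illustrative assumptions rather than the authors' exact implementation:

```python
import numpy as np
import nibabel as nib
from scipy.ndimage import gaussian_filter

def zscore(volume):
    """Z-score normalization: zero mean, unit standard deviation per volume."""
    mu, sigma = volume.mean(), volume.std()
    return (volume - mu) / (sigma + 1e-8)

def augment(volume, p=0.5, rng=np.random.default_rng()):
    """Randomly flip along each anatomical axis with probability p, then blur lightly."""
    for axis in range(3):                      # axial / coronal / sagittal axes
        if rng.random() < p:
            volume = np.flip(volume, axis=axis)
    return gaussian_filter(volume, sigma=0.5)  # mild Gaussian smoothing to suppress noise

def load_modalities(paths):
    """Load the four BraTs modalities (T1, T2, FLAIR, T1ce) and stack them as channels."""
    vols = [zscore(nib.load(p).get_fdata(dtype=np.float32)) for p in paths]
    return np.stack(vols, axis=0)              # shape: (4, 240, 240, 155)
```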

Secondly, the model was trained on all three brain views at once, with brain-view augmentation at run time. This pre-processed, multi-view data was then used to train the model. Thirdly, the model was validated on the validation data. Lastly, the predictions were post-processed. In this respect, a 3D connected component analysis with a global threshold was used to reduce false positives in the model predictions.

    3.2 The Proposed Model

This study used a 3D KCFP model to segment brain tumors automatically. The model consisted of three modules: 3D feature selection using the 3DFSC network, multi-scale feature learning using feature pyramids, and post-processing based on 3D connected component analysis with a global threshold. The main flow is shown in Fig. 1.

    Figure 1:Overview of the methodology

In this study, all pixels contained small but essential information that must be captured for better segmentation. The inter-dilation factor, represented by r1 (Fig. 2), controlled the dilation rate. The intra-sharing factor, denoted by r2, regulated the sub-region size, capturing feature vectors and sharing filter vectors. Kronecker convolution averaged all intra-sharing factors, i.e., r2×r2 pixel values, to capture the partial information missed by Atrous convolution. In this study, Kronecker convolution functioned at r1=4 and r2=3 (Fig. 2a) while Atrous convolution at r1=4 (Fig. 2b).

Figure 2: (a) Kronecker convolution with r1=4 and r2=3, and (b) Atrous convolution with r1=4
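A minimal PyTorch sketch of this idea follows; it assumes the Kronecker-expanded kernel is built as the Kronecker product of the learned 3D kernel with an r1×r1×r1 sharing factor whose leading r2×r2×r2 block averages the shared voxels, and the exact normalization and block placement in the original formulation may differ:

```python
import torch
import torch.nn.functional as F

def kronecker_conv3d(x, weight, r1=4, r2=3, bias=None):
    """3D Kronecker convolution sketch: expand each learned kernel by a sharing factor.

    x:      input tensor of shape (batch, in_ch, D, H, W)
    weight: learned kernel of shape (out_ch, in_ch, k, k, k)
    r1:     inter-dilation factor (spacing between kernel taps)
    r2:     intra-sharing factor (size of the averaged sub-region at each tap)
    """
    out_ch, in_ch, k, _, _ = weight.shape
    # Sharing factor: ones in the leading r2^3 block (averaged), zeros elsewhere.
    share = torch.zeros(r1, r1, r1, dtype=weight.dtype, device=weight.device)
    share[:r2, :r2, :r2] = 1.0 / (r2 ** 3)
    # Kronecker-expand every kernel: the effective kernel size becomes k * r1.
    expanded = torch.kron(weight.reshape(out_ch * in_ch, k, k, k), share)
    expanded = expanded.reshape(out_ch, in_ch, k * r1, k * r1, k * r1)
    # 'same' padding keeps the spatial size (stride 1 only).
    return F.conv3d(x, expanded, bias=bias, padding="same")

# Illustrative usage on a small random volume.
x = torch.randn(1, 4, 32, 32, 32)        # 4 MRI modalities
w = torch.randn(8, 4, 3, 3, 3) * 0.01    # a 3x3x3 learnable kernel
y = kronecker_conv3d(x, w)               # -> (1, 8, 32, 32, 32)
```

The number of learnable parameters stays that of the small kernel; only the effective receptive field grows, which is the point of the Kronecker expansion.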

This study used the 3DFSC network associated with a feature pyramid to generate hierarchical, intrinsically multi-scale features for segmenting tumors. The 3DFSC network learned the dense and non-destructive features of detailed lesions. The pyramid fused multi-scale features of lesions to handle tumors of various sizes. As the network got deeper, local features were preserved using skip connections to overcome the vanishing gradient, ensuring proper gradient flow within the network.

The contextual information supplied by these local features was sufficient to determine the boundaries of various lesion tissues. The contexts around the lesion became valuable auxiliary information for discriminating different tissues, including cancerous and healthy cells. Each feature map in the network was combined using concatenation, aiming to learn the valuable features of the boundary and improve the model's identification of the lesion anatomy. Therefore, our network efficiently propagated complex and vital information without compromising essential data features. Our model segmented different cancerous lesions while preventing information loss with no increase in the number of parameters. Also, a single model was used to capture the contextual information from multiple views for brain tumor segmentation. An adequate kernel size was used to address the varying tumor sizes among patients and the different sizes of the tumor sub-regions.

Also, this study used feature maps to develop a 3D structure that helped multi-scale feature learning, as shown in Fig. 3. This block was added at the end of the 3DFSC network, while the KCFP fused the local and global features using a multi-input pyramid structure. Besides, the mapped features of the 3DFSC network were then propagated to each branch of the 3D Kronecker convolution with different intra-dilation and inter-dilation rates. Different dilation rates were used at each pyramid level to generate three receptive fields of different sizes for capturing multi-scale lesions. The small receptive field of this pyramid was responsible for segmenting the enhancing tumor, the medium for the non-enhancing tumor, and the large for the whole tumor. The last layer in our proposed model is the classification layer, which uses multi-class logistic regression to segregate the classes. This regression-based classification outputs a probability distribution in the range [0, 1].

Up-sampling layers were inserted to concatenate the feature maps at the three different pyramid levels. Up-sampling kept the dimension of each branch consistent with the previous one. This study also used an up-sample 3D block in the proposed pyramid learning mechanism to scale up the size of different feature maps without any learnable parameters. We also used Group Normalization (GN) for all layers in the network, further improving the performance of our model.
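For illustration, a minimal PyTorch sketch of such a pyramid is shown below, using ordinary dilated 3D convolutions as stand-ins for the Kronecker branches; the channel widths, pooling scales, and dilation rates are assumptions rather than the authors' exact configuration:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PyramidBranch(nn.Module):
    """One pyramid level: pool to a coarser scale, convolve, GroupNorm, ReLU."""
    def __init__(self, in_ch, out_ch, scale, dilation):
        super().__init__()
        self.scale = scale
        self.conv = nn.Conv3d(in_ch, out_ch, kernel_size=3,
                              padding=dilation, dilation=dilation)
        self.norm = nn.GroupNorm(num_groups=8, num_channels=out_ch)

    def forward(self, x):
        if self.scale > 1:
            x = F.avg_pool3d(x, self.scale)
        return F.relu(self.norm(self.conv(x)))

class FeaturePyramid3D(nn.Module):
    """Fuse three receptive-field scales, up-sample to a common size, then classify."""
    def __init__(self, in_ch=64, branch_ch=32, num_classes=4):
        super().__init__()
        # (scale, dilation): small, medium, and large receptive fields.
        self.branches = nn.ModuleList(
            [PyramidBranch(in_ch, branch_ch, s, d) for s, d in [(1, 1), (2, 2), (4, 4)]])
        self.head = nn.Conv3d(3 * branch_ch, num_classes, kernel_size=1)

    def forward(self, feats):
        target = feats.shape[2:]
        # Parameter-free trilinear up-sampling brings every branch back to the input size.
        outs = [F.interpolate(b(feats), size=target, mode="trilinear", align_corners=False)
                for b in self.branches]
        logits = self.head(torch.cat(outs, dim=1))
        return torch.softmax(logits, dim=1)   # per-voxel class probabilities in [0, 1]

# Illustrative usage on 3DFSC-like feature maps.
feats = torch.randn(1, 64, 32, 32, 32)
probs = FeaturePyramid3D()(feats)             # -> (1, 4, 32, 32, 32)
```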

    Figure 3:The architecture of the proposed methodology

    3.3 Performance Evaluation

The performance of our segmentation model was evaluated using three metrics, i.e., the dice similarity coefficient (DSC), sensitivity, and specificity. The DSC metric was used for three labels, i.e., the tumor core (TC), whole tumor (WT), and enhancing core (EC). WT and TC contained foreseen regions of EC, non-EC, and necrosis. However, WT had an additional prediction region, i.e., edema. The DSC metric was calculated using Eq. (2) [52].
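Assuming the standard form of the dice similarity coefficient, consistent with the symbol definitions below, Eq. (2) is:

DSC = 2|P ∩ T| / (|P| + |T|)   (2)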

where T is the manual label, P is the predicted region, |P| is the total area of P, |T| is the total area of T, and |P ∩ T| denotes the overlapped region between P and T.

Meanwhile, sensitivity is the proportion of correctly identified positives [5]. It was estimated using Eq. (3) [52].
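With TP and FN as defined below, Eq. (3) takes the standard form:

Sensitivity = TP / (TP + FN)   (3)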

where TP is the true positive and FN denotes the false negative. Specificity, the measure of the accurately identified proportion of actual negatives [7], was estimated using Eq. (4) [52].
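Similarly, with TN and FP as defined below, Eq. (4) takes the standard form:

Specificity = TN / (TN + FP)   (4)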

where TN is the true negative and FP denotes the false positive. Also, the Hausdorff distance, for which a smaller value indicates a prediction closer to the actual region, was calculated using Eq. (5).
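Assuming Eq. (5) is the standard symmetric Hausdorff distance (the results report its 95th-percentile variant, Hausdorff 95), it can be written as:

Haus(X, Y) = max{ sup_{x∈X} inf_{y∈Y} d(x, y), sup_{y∈Y} inf_{x∈X} d(x, y) }   (5)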

where X is the volume of the mask, Y is the volume predicted by the model, and d(x, y) for x ∈ X denotes the distance from X to Y.

    3.4 Post-Processing

Post-processing was performed to mitigate the impact of false positives. Images were processed using the 3D Connected Component Analysis (CCA) algorithm. This algorithm grouped the pixels into components based on pixel connectivity to reduce false positives by removing outliers. Removing small, isolated clusters from the prediction results was essential because brain tumors were assumed to form a single connected domain.

Besides, the connected domain was analyzed to remove other small clusters. Some patients had benign gliomas consisting of non-enhancing tumor and edema. In such cases, uncertainty arose when clusters of benign tumors were misclassified as enhancing tumors, causing inefficient segmentation. Therefore, volumetric constraints were imposed to remove enhancing-tumor predictions with volumes lower than the threshold. The 3D connected components were used to remove non-tumoral regions using a threshold. We experimented extensively with different thresholds, i.e., 80, 100, 120, 150, 200 pixels, etc. Given that a global threshold of 500 pixels yielded the best prediction, all connected components smaller than this value were removed.
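A minimal sketch of this post-processing step using SciPy's connected-component labeling is given below; the 26-connectivity structure and the whole-tumor masking are assumptions about details the paper does not spell out:

```python
import numpy as np
from scipy import ndimage

def remove_small_components(seg, min_voxels=500):
    """Remove connected tumor components smaller than a global threshold (in voxels).

    seg: 3D integer label map (0 = background, >0 = tumor sub-region labels).
    """
    mask = seg > 0
    structure = np.ones((3, 3, 3), dtype=bool)      # 26-connectivity (assumption)
    labels, n_components = ndimage.label(mask, structure=structure)
    if n_components == 0:
        return seg
    sizes = np.bincount(labels.ravel())[1:]         # voxel count of each component
    keep_ids = np.flatnonzero(sizes >= min_voxels) + 1
    return np.where(np.isin(labels, keep_ids), seg, 0)

# Illustrative usage on a predicted label volume.
pred = np.random.randint(0, 4, size=(240, 240, 155))
cleaned = remove_small_components(pred, min_voxels=500)
```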

    3.5 Training and Model Validation

We implemented our model with the PyTorch framework. For model training, cross-validation on the training set was used. For validating the trained model, the BraTs 2020 validation dataset was used. The patch size for training and validation was 128×128×128, containing most of the brain. This patch size was ideal for maximal information. Besides, we used the Adam optimizer with a learning rate of 0.001, updated using a cyclic learning rate strategy after each iteration. For brain tumor segmentation, we utilized 3 million trainable parameters. Furthermore, we trained our model for 400 epochs. Also, L2 regularization was used to prevent overfitting, while the Generalized Dice Loss function was used to resolve the class imbalance. The Generalized Dice Loss is the cost function for our segmentation process and is estimated using Eq. (6).
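Assuming the commonly used form of the Generalized Dice Loss over classes l and voxels n, with ground-truth labels r_{ln}, predicted probabilities p_{ln}, and class weights w_l = 1 / (Σ_n r_{ln})², Eq. (6) can be written as:

GDL = 1 − 2 · (Σ_l w_l Σ_n r_{ln} p_{ln}) / (Σ_l w_l Σ_n (r_{ln} + p_{ln}))   (6)

The inverse-squared-volume weights w_l give small classes (such as the enhancing tumor) a larger contribution, which is how this loss counteracts class imbalance.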

Meanwhile, an NVIDIA Titan X Pascal GPU with 3584 CUDA cores was used to train the model for the experiments. Also, we measured the computational time of the network on the same GPU and on an Intel(R) Core(TM) M-5Y10c computer with a 1.80 GHz processor and 8 GB RAM.

    4 Results and Discussion

    The semantic segmentation process before and after the post-processing is shown in Fig.4.Overall,segmented tumor boundaries became smooth and accurate after post-processing.

Figure 4: Patient samples from BraTs 2020 data: (a) brain volumetric MRI, (b) results before post-processing, and (c) results after post-processing

Table 2 shows the results of the evaluation with and without post-processing. Overall, post-processing improved the segmentation results.

    Table 2:Evaluation with post-processing and without post-processing

Table 3 compares the results of our KCFP model with those of other benchmark schemes. The multilayer dilated convolutional neural network (MLDCNN) model [53] achieved a DSC of 0.76 on ET, 0.87 on WT, and 0.77 on TC. Its Hausdorff distance was promising compared to other methods. The Lesion Encoder model used the spherical coordinate transformation as a pre-processing strategy, in combination with normal MRI volumes, to increase segmentation accuracy. It achieved a DSC of 0.71 on ET, 0.86 on WT, and 0.80 on TC [54]. Similarly, the ME-Net model used a multi-encoder for feature extraction with a new loss function known as Categorical Dice. It achieved a DSC of 0.73 on ET, 0.85 on WT, and 0.72 on TC. The AFPNet used dilated convolution at multiple levels, fusing multi-scale features with context, with a conditional random field as post-processing [55]. This model achieved a DSC of 0.71 on ET, 0.83 on WT, and 0.74 on TC. The AEMA-Net model achieved a DSC of 0.71 on ET, 0.83 on WT, and 0.74 on TC [56]. It used a 3D asymmetric expectation-maximization attention network to capture long-range dependencies for segmentation. The TransBTS model incorporated the transformer into a 3D CNN using an encoder-decoder structure. It achieved a DSC of 0.76 on ET, 0.86 on WT, and 0.77 on TC [57]. The proposed model outperformed the others in DSC, sensitivity, and Hausdorff 95. Our model achieved an average DSC of 0.80, 0.90, and 0.84 on the BraTs 2020 dataset for ET, WT, and TC, respectively.

    Table 3:Comparison of the KCFP model with other state-of-the-art models

Fig. 5 shows the box plots of sensitivity, Hausdorff distance, and dice scores for the three tumor regions. The dice scores for the three tumor regions ET, TC, and WT are 0.80, 0.84, and 0.90, respectively (Fig. 5a). Based on the dice score, our model effectively improved the segmentation.

Figure 5: Box plots: (a) dice score, (b) sensitivity, and (c) Hausdorff distance. The horizontal line in each box represents its median value. The hollow circles show the outliers

Meanwhile, sensitivity reflects the impact of features on the model's predictions [58]. Fig. 5b shows that the sensitivity of our model is 0.83, 0.92, and 0.83 for ET, WT, and TC, respectively. Our KCFP model used size, location, and texture as features to identify the defective tissues, enhancing the final prediction that distinguished all healthy tissues from the damaged ones. Fig. 5c shows that the Hausdorff distance of our model is 3.06, 4.66, and 6.43 for ET, WT, and TC, respectively.

Fig. 6 shows the performance of the segmentation model with FLAIR-input MRI in three views: coronal, sagittal, and axial. The tumor predicted by our proposed model was very close to the ground truth, indicating that our model effectively segmented the brain tumors. The figure presents the segmentation of a patient taken from the BraTs 2020 training set. The first column shows the brain MRI views, the second displays the ground truth labels, and the third depicts the predicted segmentation using KCFP. The aquamarine-colored area represents a non-enhancing tumor, the red zone denotes an enhancing tumor, and the dandelion-colored area indicates edema.

    Figure 6:Results of a patient taken from BraTs 2020 training set

The automatic segmentation of brain tumors minimizes the burden on doctors while enhancing treatment planning for saving cancer patients [59]. However, automatic segmentation of brain tumors faces many challenges due to the lesions' different sizes, locations, and positions [60]. In this study, the problem of pixel loss in Atrous convolution caused by an increased dilation rate was resolved using a 3D Kronecker convolution. The features lost in Atrous convolution were captured by Kronecker convolution. The 3D Kronecker convolution increased the receptive field while keeping the number of parameters constant and minimizing the pixel loss (Fig. 1). These pixels contained crucial information whose capture was essential for better segmentation. The differences in intensity value and contrast in the MRI volumes were reduced by normalization [61]. The proposed model used the Kronecker convolution feature pyramid to learn the characteristics of the lesion and preserve local features, thus improving the model's capacity to distinguish TC from other lesions. The features learned by the DCNN were naturally multi-scale and nonlinearly abstract. This property allowed the model to combine distinct hierarchies of abstract information, increasing the focus on the target areas [62]. The proposed model was trained with axial, coronal, and sagittal views. The multi-scale features of brain tumors were captured using the 3D KCFP with various inter- and intra-dilation factors. Lesion and context information was incorporated using this module to improve the segmentation.

The hierarchical, intrinsically multi-scale features generated by the 3DFSC network were used for effective and reliable segmentation. The 3DFSC learned the dense and non-destructive features of detailed lesions. Tumors of various sizes were handled using the feature pyramid. The vanishing gradient was resolved, and local features were preserved, using skip connections from the 3DFSC network. This way, crucial and complex information was collected without losing essential data features. The proposed model was trained by providing all three possible brain views, i.e., axial, coronal, and sagittal. Besides, the feature maps at three different pyramid levels were concatenated using up-sampling layers. With the help of up-sampling, the dimension of each branch was kept consistent with the others, further improving our model performance. The 3D CCA post-processing technique reduced false positives by combining connected component analysis with a global threshold to remove outliers. Upon post-processing, tumor boundaries became smoother, effectively distinguishing the lesions from other tissues. Skip connections ensured the backpropagation of the gradient flow to any layer without losing crucial information. Meanwhile, when evaluated on the BraTs dataset, the proposed KCFP model took 5 s on average to segment one patient. This average time was deemed reasonable. For future work, different inter-dilation and intra-dilation rates will be used to further evaluate and enhance our model performance.

    5 Conclusion

In this study, a 3D KCFP was used for brain tumor segmentation to overcome the problems of pixel loss and weak processing of multi-scale lesions. The pixel loss was due to an increment in the dilation rate of Atrous convolution. We designed an integrated 3DFSC network with a multi-scale feature pyramid to learn local and global features interactively, overcoming the problem of weak processing. The 3DFSC learned the WT feature and its substructures effectively. By contrast, the feature pyramid dealt with multi-scale lesions. In this way, the proposed model distinguished the boundaries of various tumor tissues. Finally, false positives were reduced using the 3D CCA post-processing technique with a global threshold, attaining more structural segmentations. Our proposed model outperformed other benchmark schemes. Overall, our proposed KCFP model might benefit clinical medical image segmentation. However, the class imbalance problem is not completely mitigated in our proposed model. In the future, we will derive a cost function to solve the class imbalance problem completely and focus on different dilation rates to evaluate model performance.

    Acknowledgement:This study was conducted at the Medical Imaging and Diagnostics Lab at COMSATS University Islamabad.

Funding Statement: This work was supported by the "Human Resources Program in Energy Technology" of the Korea Institute of Energy Technology Evaluation and Planning (KETEP), with financial resources granted by the Ministry of Trade, Industry & Energy, Republic of Korea (No. 20204010600090). In addition, it was funded by the National Center of Artificial Intelligence (NCAI), Higher Education Commission, Pakistan, Grant/Award Number: Grant 2(1064).

    Author Contributions:All listed authors have substantially contributed to the manuscript and have approved the final submitted version.The authors confirm their contribution to the paper as follows:

• Kainat Nazir contributed to the literature review, design and development of the AI-based architecture, the experimentation process, data collection, and manuscript writing.

• Tahir Mustafa Madni conceived the idea for the study and designed the overall research framework. He led the team throughout the entire experimentation process.

• Uzair Iqbal Janjua supervised data collection and performed in-depth data analysis to draw meaningful conclusions. He critically reviewed the experiment's outcomes in the context of existing literature, addressing potential implications and limitations of the AI models.

• Umer Javed contributed to the literature review to identify the gaps in knowledge and establish the context for our study.

• Muhammad Attique Khan guided the team throughout the study and provided overall leadership and expertise in this domain.

• Usman Tariq also actively participated in the interpretation and discussion of the results.

• Jae-Hyuk Cha performed in-depth data analysis, participated in the discussion of the results, and critically reviewed the final version of the manuscript.

    Availability of Data and Materials:A publicly available dataset was used for analyzing our model.This dataset can be found at http://ipp.cbica.upenn.edu.

    Conflicts of Interest:The authors declare that they have no conflicts of interest to report regarding the present study.
