
    Brain Tumor Auto-Segmentation on Multimodal Imaging Modalities Using Deep Neural Network

    2022-11-11 10:44:56
    Computers Materials & Continua, 2022, Issue 9

    Elias Hossain, Md. Shazzad Hossain, Md. Selim Hossain, Sabila Al Jannat, Moontahina Huda, Sameer Alsharif, Osama S. Faragallah, Mahmoud M. A. Eid and Ahmed Nabih Zaki Rashed

    1Department of Software Engineering, Daffodil International University, Dhaka 1207, Bangladesh

    2Department of Computing and Information System, Daffodil International University, Dhaka 1207, Bangladesh

    3Department of Computer Science & Engineering, BRAC University, Dhaka 1212, Bangladesh

    4Department of Information and Communication Engineering, Bangladesh University of Professionals (BUP), Dhaka 1216, Bangladesh

    5Department of Computer Engineering, College of Computers and Information Technology, Taif University, P.O. Box 11099, Taif 21944, Saudi Arabia

    6Department of Information Technology, College of Computers and Information Technology, Taif University, P.O. Box 11099, Taif 21944, Saudi Arabia

    7Department of Electrical Engineering, College of Engineering, Taif University, P.O. Box 11099, Taif 21944, Saudi Arabia

    8Electronics and Electrical Communications Engineering Department, Faculty of Electronic Engineering, Menouf 32951, Egypt

    Abstract: Because brain tumor segmentation is difficult, this paper proposes a strategy for extracting brain tumors from three-dimensional Magnetic Resonance Imaging (MRI) and Computed Tomography (CT) scans using a 3D U-Net design and ResNet50, followed by conventional classification strategies. In this research, ResNet50 achieved 98.96% accuracy and the 3D U-Net scored 97.99% among the deep learning methods evaluated; a traditional Convolutional Neural Network (CNN) reached 97.90% accuracy on the 3D MRI data. In addition, an image fusion approach combines the multimodal images into a single fused image so that more features can be extracted from the medical images. We also evaluated the loss function using several dice measurement approaches and report dice results on specific test cases: the average dice coefficient over three test cases was 0.9053 and the average soft dice loss was 0.0980, while for two test cases the sensitivity and specificity were recorded as 0.0211 and 0.5867 using patch-level predictions. Furthermore, a software integration pipeline was built to deploy the trained model on a web server so that it can be accessed from a software system through a Representational State Transfer (REST) API. Finally, the suggested models were validated with the Area Under the Receiver Operating Characteristic curve (AUC-ROC) and the confusion matrix, and compared with existing research articles to understand the underlying problem. Through this comparative analysis, we extracted meaningful insights regarding brain tumour segmentation and identified potential gaps. The proposed model can be adapted in daily life and in the healthcare domain to identify infected regions and brain cancer through various imaging modalities.

    Keywords: Brain cancer segmentation; 3D U-Net; ResNet50; Dice measurement; ROC-AUC

    1 Introduction

    A brain tumor is one of the most devastating disorders globally [1]. According to the Central Brain Tumor Registry of the United States (CBTRUS), 83,570 people were expected to be diagnosed with brain and other Central Nervous System (CNS) tumors in the US (24,530 malignant tumors and 59,040 nonmalignant tumors), and 18,600 individuals will die from the disease [2].

    Most brain tumor detection and diagnosis methods depend on the decision of neuro specialists and radiologists for image evaluation, which is complicated, time-consuming, and vulnerable to human error [3]. Therefore, computer-aided diagnosis is currently in demand. In addition, in a low-income country like Bangladesh, the expense of tertiary health care is likely to be high for the majority of inhabitants [4]. As a result, ordinary people find it difficult and expensive to diagnose a disease further. On the contrary, by optimizing and standardizing the way that electronic medical record (EMR) systems are designed, we can lower the cost of supporting them with machine learning [5]. Many companies employ these techniques to improve medical diagnostics and early disease prediction [6,7].

    The proposed study considers multimodal images from MRI and CT scans. Based on image fusion, a novel technique has been used to combine the multimodal images. As a result, feature extraction was easier, and an accurate result was produced when the model was trained using these fused images. This research aims to develop a robust and automatic brain cancer detection system with comparatively higher accuracy. As a result, both physicians and patients can detect brain cancer from the brain's scan images in less time than that required for a conventional strategy.

    The main purpose of this work is to create a computerized framework for recognizing brain cancer factors utilizing cutting-edge innovation like machine learning, as follows:

    1. This study demonstrates a novel image fusion technique for identifying brain tumors that can provide quick and promising accuracy.

    2. We employed both the U-Net architecture and the ResNet50 architecture for image segmentation and compared them to show the better approach.

    3. A robust system that applies state-of-the-art techniques combined with machine learning will be developed and made publicly available to diagnose brain tumors from multimodal brain images.

    This study is organized into five interconnected sections. Section 2 reviews the existing literature. Section 3 presents the research methodology and the proposed system architecture. Section 4 discusses the results and the corresponding comparisons. Finally, Section 5 concludes the paper with future work.

    2 Literature Review

    Several machine learning and deep learning schemes have been used extensively for brain tumor identification. The authors in [8] used a capsule neural network. The capsule network for classifying cancer types in the brain was trained on an open database and tested on the TGCA-GBM dataset from the TCIA. Compared with other systems, it improved classification accuracy while reducing the computational effort and the training and prediction time. The evaluation of the proposed strategy shows that the capsule-network-based classifier outperformed convolutional networks.

    The paper [9] used TensorFlow to detect brain cancer from MRI. It implemented a convolutional neural network (CNN) with five layers in TensorFlow. In total, the authors used 1800 MRIs in the dataset, of which 900 images were cancerous and 900 were non-cancerous. Data augmentation techniques were applied after data acquisition. The training accuracy of the approach was 99%, and validation accuracy was 98.6% after 35 epochs. The system is still in development.

    The authors in [10] propose an improved approach using residual networks for classifying brain tumor types. The dataset contains 3064 MRIs of three brain tumor types (meningiomas, gliomas, and pituitary tumors). The authors used several strategies, including ResNet, to enlarge the dataset and improve accuracy. The papers [11] and [12] used convolutional neural networks to identify brain tumors from brain MRIs. According to [11], the Softmax classifier has the highest accuracy within CNN, and the strategy reached a precision of 99.12% on the test data. The authors in [13] apply deep learning through a deep wavelet autoencoder (DWA) for image compression. The core idea of this approach is to combine the feature-reduction property of the autoencoder with the image-decomposition property of the wavelet transform. The authors compared the proposed method with an autoencoder deep neural network (autoencoder-DNN) and showed the superiority of the DWA. In contrast, the paper [14] used VGG-19 for transfer learning, with the features optimized through entropy for accurate and fast classification.

    The authors in [15] proposed an automated multimodal classification strategy using deep learning for brain tumor type classification. The approach comprises four important steps. First, linear contrast stretching uses edge-based histogram equalization and the discrete cosine transform (DCT). Second, deep learning feature extraction is performed. Third, a correntropy-based joint learning approach is implemented alongside the extreme learning machine (ELM) to choose the best features. Fourth, the partial least squares (PLS)-based robust covariant features are fused into one matrix. The combined matrix was afterwards fed to the ELM for the final classification. This technique improved accuracy from 92.5% to 97.8% on different BraTS datasets.

    The authors in [16] tackled the problem of multiclass classification of brain cancer by first transforming it into several binary problems. First, features are selected using a strategy based on the Support Vector Machine Recursive Feature Elimination (SVM-RFE) rule. Then the Twin Support Vector Machine (TWSVM) is used as the classifier to reduce computational complexity. Finally, the study reports 100% accuracy in classifying data of the normal and MD classes.

    The literature [17] used an ensemble learning approach to recognize the different brain cancer grades. The study proposed a strategy to identify the components of an ensemble learner. The proposed system is assessed with 111,205 brain magnetic resonance images from two freely available datasets for investigation purposes. As a result, the proposed method improved accuracy up to 99%. The authors in [18] developed a tumor segmentation algorithm using image processing techniques with extreme learning machines with local receptive fields (ELM-LRF) to improve accuracy up to 97.18%.

    Based on the above literature review, previous research did not make extensive use of the novel approach proposed here. The majority of the studies are limited to building a classifier or segmentation system using typical deep learning architectures. Secondly, it is crucial to optimize the model's loss while developing any clinical model. Real-time handling of complex data will be highly significant when installing such a model in the healthcare sector. A comprehensive pipeline that reduces the loss is still missing. We used multimodal imaging modalities in the proposed research, e.g., CT scan and MRI images. We combined them into a fused image with the help of an image fusion algorithm so that significant features can be extracted quickly, and the fused image can simultaneously be fed into the model to get a better and faster diagnosis. We have shown a comprehensive architecture for developing a clinical tool, because a software integration approach is demonstrated in this research for the consideration of researchers. In our study, state-of-the-art methods such as 3D U-Net and ResNet50 are integrated to build an automated and faster diagnosis framework.

    3 Research Methodology

    The methodology of this proposed research is categorized into six segments: Research Dataset, Data Preprocessing, Suggested Algorithms, Selecting Loss Function, Image Fusion Approach (IFA), and Software Integration Pipeline. Fig. 1 shows the model of this proposed research; the entire study has been carried out by following these procedures. Taking a close look at Fig. 1, it can be observed that the overall research has been conducted in three interconnected steps: Data Preprocessing, Model Preparation, and Deployment. In the beginning, the necessary code and data are passed through the Data Extraction module. Then, using the MRI and CT scan images, the signals are forwarded to the subsequent component, Data Preprocessing. In the second module, the research dataset is preprocessed appropriately to feed the data into the machine. After that, the concentrated models are trained and validated in the third component, Model Preparation; eventually, the proposed segmentation models are deployed on the web server so they can be accessed from the software system. The detailed sequence and consequences are shown visually in Fig. 1.

    To describe the proposed pipeline in more detail: we start with local or cloud storage for collaborating on our work; this repository is updated with ongoing changes, and each change executes the whole pipeline. After a change is made to the dataset or code, the pipeline steps into the data extraction module, where each part is isolated. The process begins with the image fusion method, in which we combine MRI and CT images into a fused picture. Then we pass those pictures to the image segmentation module, where the segmentation module creates diverse segments and produces a mask and a label, handling the images one by one. The data processing is then complete, and the processed data is split into training, validation, and test sets, finishing the entire data preprocessing process. In the next part, a training setup is created and the model is trained, after which its performance is evaluated to check its efficiency. Finally comes the deployment stage, where the trained model is saved to model storage. The chosen updated model is pushed to the server for use, and by connecting to its APIs, a web or mobile application can execute the brain cancer detection process.

    Figure 1: Block diagram of the research model. The comprehensive pipeline follows three steps, e.g., data preprocessing, model preparation, and deployment, to develop effective clinical tools

    3.1 Research Dataset

    The BraTS 2015 dataset for brain tumor image segmentation is considered in this study. It contains 220 MRIs of high-grade gliomas (HGG) and 54 MRIs of low-grade gliomas (LGG). The four intra-tumoral classes segmented as "ground truth" are edema, enhancing tumor, non-enhancing tumor, and necrosis. Each training example is divided into two files. The first is an image file containing a 4D array of MRI data with shape (240, 240, 155, 4). The first three dimensions are the X, Y, and Z coordinates of each point in the 3D volume, generally referred to as a voxel. The fourth dimension holds the values of four separate sequences: FLAIR stands for "Fluid Attenuated Inversion Recovery", T1w stands for "T1-weighted", T1gd stands for "T1-weighted with gadolinium contrast enhancement" (T1-Gd), and T2w stands for "T2-weighted". In each training example, the second file is a label file that contains a 3D array of shape (240, 240, 155). The integer values in this array represent the "label" for each voxel in the related image file: 0 denotes the background, 1 denotes edema, 2 denotes a non-enhancing tumor, and 3 denotes an enhancing tumor. We have 484 training images, divided into training (80%) and validation (20%) sets.

    3.2 Data Preprocessing

    To start, we focus on making "patches" of our data, i.e., sub-volumes of the complete magnetic resonance imaging (MRI) pictures. The patches were made because a network capable of processing the whole volume at once would simply not fit within our current environment's memory/GPU capabilities. As a result, we employ this standard method to produce spatially consistent sub-volumes of our data that can be fed into our network. In particular, we generate randomly sampled sub-volumes [16] from our images. Furthermore, since a considerable chunk of the MRI volumes is just brain tissue or dark background with no tumors, we need to make sure that we select patches containing some tumor information. As a result, we only choose patches with at most 95% non-tumor regions. This is done by filtering the volumes according to the values in the background labels. Finally, since the numbers in an MRI image span such a wide range, we standardize them to a mean of zero and a standard deviation of one. Because standardization makes learning much easier, this is a common technique in deep image processing. After that, the sub-volume sampling is performed: a randomly produced sub-volume of size [160, 160, 16], with its corresponding label in a one-hot format of shape [3, 160, 160, 16]. It is guaranteed that at most 95% of the returned patch consists of non-tumor regions while applying this sub-sampling strategy.
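
    The patch-extraction and standardization steps above can be sketched as follows. `sample_patch` is a hypothetical helper written for illustration, not the authors' code; the shapes and the 95% background threshold follow the description in the text:

```python
import numpy as np

def sample_patch(image, label, patch_shape=(160, 160, 16),
                 max_background=0.95, max_tries=100, rng=None):
    """Randomly sample a sub-volume whose background fraction is at
    most `max_background`, then z-score standardize each channel.

    image: (H, W, D, C) MRI volume; label: (H, W, D) integer mask
    where 0 denotes background.
    """
    rng = rng if rng is not None else np.random.default_rng()
    px, py, pz = patch_shape
    for _ in range(max_tries):
        x = rng.integers(0, image.shape[0] - px + 1)
        y = rng.integers(0, image.shape[1] - py + 1)
        z = rng.integers(0, image.shape[2] - pz + 1)
        lab = label[x:x+px, y:y+py, z:z+pz]
        # Reject patches that are almost entirely background.
        if (lab == 0).mean() <= max_background:
            img = image[x:x+px, y:y+py, z:z+pz].astype(np.float64)
            # Standardize each channel to mean 0, std 1.
            for c in range(img.shape[-1]):
                ch = img[..., c]
                img[..., c] = (ch - ch.mean()) / (ch.std() + 1e-8)
            return img, lab
    raise RuntimeError("no acceptable patch found")
```

    In practice this sampler is called once per training step to produce a fresh sub-volume, so the network sees a different spatial crop on every iteration.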

    Given that our network expects the channels of our images to appear as the first dimension (rather than the last one, as in our current setting), we reorder the image's dimensions to place the channels first. We likewise reorder the dimensions of the label array so that the classes come first, and reduce the label array to include only the non-background classes (three rather than four). Finally, the standardization method is applied to standardize the values over each channel and each Z plane to a mean of zero and a standard deviation of one.
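
    The reordering and one-hot encoding just described can be sketched in a few lines. `to_network_format` is a hypothetical helper following the text, not the authors' exact implementation:

```python
import numpy as np

def to_network_format(patch, label, num_classes=4):
    """Move the channel axis first and one-hot encode the label,
    dropping the background class: patch (H, W, D, C) -> (C, H, W, D)
    and label (H, W, D) -> (num_classes - 1, H, W, D)."""
    x = np.moveaxis(patch, -1, 0)        # channels become the first axis
    onehot = np.eye(num_classes)[label]  # (H, W, D, num_classes)
    onehot = np.moveaxis(onehot, -1, 0)  # classes become the first axis
    return x, onehot[1:]                 # drop background (class 0)
```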

    3.3 Suggested Algorithm

    Several state-of-the-art techniques have been utilized for this work, but among them, the 3D U-Net architecture and ResNet50 [12] showed satisfactory performance. It should be mentioned that, in the case of fused images, we achieved better accuracy with ResNet50. Moreover, the 3D U-Net architecture can segment images using very few annotated examples. This is because 3D images contain many repeating structures and shapes, enabling a faster training process with sparsely labeled data. On the other hand, ResNet-50 is a CNN design that is 50 layers deep. The model has over 23 million trainable parameters, a substantial architecture that makes it well suited for image segmentation and recognition; the core CNN is also used as a baseline against the hybrid CNN strategies to observe the customization efficiency.

    3.4 Selecting Loss Function

    Aside from architecture, one of the basic components of any deep learning strategy is choosing the loss function. A common, familiar choice is the cross-entropy loss. However, this loss function is not ideal for segmentation tasks due to heavy class imbalance (there are usually not many positive regions). Therefore, we chose the Dice Similarity Coefficient (DSC) to measure similarity in a single class, and its multi-class version with the soft dice loss.
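
    A generic multi-class soft dice loss can be sketched as below; this is a common formulation of the loss named above, not necessarily the exact variant used in this work:

```python
import numpy as np

def soft_dice_loss(pred, target, eps=1e-6):
    """Multi-class soft dice loss.

    pred: (C, H, W, D) soft predictions in [0, 1];
    target: (C, H, W, D) one-hot ground truth.
    Returns 1 - mean-over-classes dice, so perfect overlap gives 0.
    """
    axes = tuple(range(1, pred.ndim))          # sum over spatial dims
    intersection = (pred * target).sum(axis=axes)
    denom = (pred ** 2).sum(axis=axes) + (target ** 2).sum(axis=axes)
    dice_per_class = (2 * intersection + eps) / (denom + eps)
    return 1.0 - dice_per_class.mean()
```

    The epsilon keeps the ratio well defined when a class is absent from both prediction and ground truth, which also gives such empty classes a dice of 1 rather than penalizing them.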

    3.5 Image Fusion Approach

    Images are the most common data source in healthcare, but they are also difficult to analyze. Clinicians must currently depend heavily on medical image analysis performed by overworked radiologists. Computer vision software based on the latest deep learning algorithms enables automated analysis that delivers accurate results far faster than the manual process. Multimodal medical imaging has changed how we diagnose diseases from MRI and CT scans. Multimodal imaging aims to supply a richer picture that gives more accurate and dependable measurements than any single image, preserving the most significant features of the scans for medical testing, diagnosis, and treatment of disorders. The two modalities considered for the image fusion are CT and MRI.

    3.5.1 Image Registration

    Image registration is the process of converting pictures into a shared coordinate system in which corresponding pixels represent homogeneous biological areas. Registration can be used to supply an anatomically normalized reference frame for comparing brain regions from different patients. A fundamental method in which several points (landmarks) are defined at identical places in two volumes is known as landmark-based image registration. The volumes are registered once an algorithm matches the landmarks. The CT scan image is used as the reference (fixed) image, and the MRI scan image is aligned according to the user's points. Fig. 2 clarifies the comprehensive architecture of the image fusion.
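
    Once corresponding landmarks are marked in both volumes, an aligning transform can be estimated by least squares. The sketch below assumes a 2D affine model for simplicity (the idea extends directly to 3D); `estimate_affine` is an illustrative helper, not the registration code used in this study:

```python
import numpy as np

def estimate_affine(src, dst):
    """Least-squares affine transform mapping src landmarks onto dst.

    src, dst: (N, 2) arrays of corresponding landmark coordinates
    (N >= 3).  Returns A (2x2) and t (2,) such that dst ~ src @ A.T + t.
    """
    n = src.shape[0]
    # Design matrix [x, y, 1]; solve one linear system per output coord.
    X = np.hstack([src, np.ones((n, 1))])
    params, *_ = np.linalg.lstsq(X, dst, rcond=None)
    A = params[:2].T   # linear part (rotation/scale/shear)
    t = params[2]      # translation
    return A, t
```

    With the transform in hand, the moving MRI is resampled into the fixed CT grid before the fusion step.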

    Figure 2: Comprehensive architecture of the image fusion. The MRI and CT scans are received first. The wavelet decomposition is then fed to the transfer learning algorithms to make fused images

    3.5.2 Image Fusion

    STEP 01: Apply the wavelet decomposition to the CT picture to produce the approximate LL1 coefficient and the detailed LH1, LV1, and LD1 coefficients.

    STEP 02: Apply the wavelet decomposition to the MRI to produce the approximate LL2 coefficient and the detailed LH2, LV2, and LD2 coefficients.

    STEP 03: Apply the fusion based on the VGG-19 network on the 4 pairs (LL1, LL2), (LH1, LH2), (LV1, LV2), and (LD1, LD2) to create the LL, LH, LV, and LD bands.

    STEP 04: Apply the inverse wavelet transform to the 4 bands produced in step 3 to get the fused image.
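
    The four steps above can be sketched with a single-level Haar wavelet. Note that this sketch replaces the VGG-19-based band fusion of STEP 03 with a simple max-magnitude rule purely for illustration; the wavelet analysis/synthesis structure is the same:

```python
import numpy as np

def haar_dwt2(img):
    """Single-level 2D Haar decomposition (even-sized image)."""
    a, b = img[0::2, 0::2], img[0::2, 1::2]
    c, d = img[1::2, 0::2], img[1::2, 1::2]
    return ((a + b + c + d) / 2, (a - b + c - d) / 2,
            (a + b - c - d) / 2, (a - b - c + d) / 2)

def haar_idwt2(ll, lh, hl, hh):
    """Exact inverse of haar_dwt2."""
    h, w = ll.shape
    out = np.empty((2 * h, 2 * w))
    out[0::2, 0::2] = (ll + lh + hl + hh) / 2
    out[0::2, 1::2] = (ll - lh + hl - hh) / 2
    out[1::2, 0::2] = (ll + lh - hl - hh) / 2
    out[1::2, 1::2] = (ll - lh - hl + hh) / 2
    return out

def fuse(ct, mri):
    """Wavelet fusion: average the approximation bands, keep the
    detail coefficient with the larger magnitude."""
    bands_ct, bands_mri = haar_dwt2(ct), haar_dwt2(mri)
    fused = [(bands_ct[0] + bands_mri[0]) / 2]   # LL band: average
    for dc, dm in zip(bands_ct[1:], bands_mri[1:]):
        fused.append(np.where(np.abs(dc) >= np.abs(dm), dc, dm))
    return haar_idwt2(*fused)
```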

    3.5.3 Image Segmentation

    Watershed segmentation is a region-based strategy that employs image morphology. It demands the identification of at least one marker inside each image object, counting the background as a separate object. The markers are chosen by an operator or provided by an automatic method that considers application-specific knowledge of the objects. Once the objects are marked, they are grown using a morphological watershed transform.
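
    A minimal marker-based watershed can be implemented as a priority flood over an elevation image (e.g. a gradient magnitude): pixels are flooded in order of increasing elevation, each inheriting the label of the basin that reaches it first. This is a toy sketch of the technique, not the library implementation used in practice:

```python
import heapq
import numpy as np

def watershed(image, markers):
    """Marker-based watershed by priority flooding.

    image: 2D array of 'elevation' values (e.g. a gradient magnitude).
    markers: 2D int array; positive labels are seeds, 0 is unlabeled.
    Returns a label image where every pixel joins the basin of the
    marker reachable through the lowest-elevation path.
    """
    labels = markers.copy()
    rows, cols = image.shape
    heap = []
    # Seed the queue with every marker pixel.
    for r in range(rows):
        for c in range(cols):
            if markers[r, c] > 0:
                heapq.heappush(heap, (image[r, c], r, c))
    while heap:
        _, r, c = heapq.heappop(heap)
        lab = labels[r, c]
        for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and labels[nr, nc] == 0:
                labels[nr, nc] = lab  # flood into the unlabeled neighbour
                heapq.heappush(heap, (image[nr, nc], nr, nc))
    return labels
```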

    3.6 Software Integration Pipeline

    From model storage, the updated model is chosen for prediction, and the predicted model file is served using the Flask framework to test the model on user input; typically, this test is executed on the local server. To use this model from web or mobile applications, we used the gunicorn server and transferred all our updated files to Heroku from the git repository. Then, through the Heroku API, we used this model from web or mobile applications to detect brain cancer. Fig. 3 illustrates the model deployment pipeline, which contains seven interconnected steps: Updating Model, Model Prediction, using Flask, WSGI HTTP Server, Accessing Git Repository, Passing to Heroku, and finally, the user's application. The detailed explanation and illustration are shown in Fig. 3.
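
    A minimal sketch of such a Flask prediction endpoint is shown below. The route name, payload format, and the placeholder `predict` function are assumptions for illustration, not the authors' actual service:

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

def predict(volume):
    """Stand-in for the trained segmentation model; a real deployment
    would load the saved 3D U-Net / ResNet50 weights here."""
    return {"tumor_detected": bool(sum(volume) > 0)}

@app.route("/predict", methods=["POST"])
def predict_route():
    # The client (web or mobile app) posts a JSON body such as
    # {"volume": [...]} and receives the prediction back as JSON.
    data = request.get_json(force=True)
    return jsonify(predict(data["volume"]))
```

    Locally this app can be served with `gunicorn app:app`; pushing the repository to Heroku with a Procfile containing that command exposes the same route through the Heroku API, matching the pipeline in Fig. 3.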

    Figure 3: Highlighting the software integration pipeline for identifying brain cancer and segmenting the infected region through a real-life software system, e.g., a web or mobile application

    4 Result and Discussions

    4.1 Dice Measurement Interpretation

    Tab. 1 presents the dice measurement report for the dice coefficient. The Dice similarity coefficient (DSC) was used as a statistical validation metric to assess both the reproducibility of manual segmentations and the spatial overlap accuracy of the automated probabilistic fractional segmentation of MRI images. We consider the best test case for all categories in each test class and expose each test case as three different test classes. The average DSC is 0.9053, as expected; it is close to the standard DSC matrix and does not exceed the margin of 1. In addition, Tab. 1 also reports the dice measurement for the soft dice loss.

    The soft dice loss was calculated to address the data imbalance problem and to obtain smoother gradients. Class imbalance can be handled by assigning a different loss weight to each class, so the network can deal with the frequent occurrence of a specific category; it remains to be explored whether this is suitable for every category-imbalance scenario. The cases below expose the per-class losses, whose average mean is 0.0980.

    Table 1: Dice measurement report for the dice coefficient and the soft dice loss

    4.2 Model Classification Report(MCP)

    The classification problem may seem to be a kind of multiclass classification problem, but it is an instance segmentation task where each tumor in the image is independent. The main objective of this classification is to identify the positive and negative labels from the segmented image data. In the experiments, three deep learning models, namely ResNet-50, U-Net, and CNN, have been trained and tested on the chosen and segmented dataset. From Tab. 2, we found that each classification method performs quite well. The topmost is ResNet-50, which achieves 98.96% accuracy with high precision, recall, and F1 score; U-Net and CNN took the second and last positions, respectively. These classification methods give performance metrics based on the images segmented using U-Net. Tab. 3 clarifies the patch-level prediction in identifying the sensitivity and specificity. The model covers a few important regions, but it is certainly not perfect.
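
    The precision, recall, and F1 figures reported in Tab. 2 follow directly from the confusion matrix. As a reminder of how they are derived, a small sketch (our own helper, not the evaluation code of this study):

```python
import numpy as np

def classification_report(cm):
    """Per-class precision, recall and F1 from a confusion matrix
    cm[i, j] = count of true class i predicted as class j."""
    cm = np.asarray(cm, dtype=float)
    tp = np.diag(cm)
    precision = tp / np.maximum(cm.sum(axis=0), 1e-12)  # column sums
    recall = tp / np.maximum(cm.sum(axis=1), 1e-12)     # row sums
    f1 = 2 * precision * recall / np.maximum(precision + recall, 1e-12)
    accuracy = tp.sum() / cm.sum()
    return precision, recall, f1, accuracy
```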

    Table 2: Experimental result for the concentrated algorithms

    Table 3: Patch level prediction in terms of identifying the sensitivity and specificity

    Fig. 4 shows the fused pictures produced using the image fusion algorithms. This strategy aims to make images more usable and comprehensible for human and machine perception, not merely to minimize the amount of data. Multisensor image fusion is a computer vision procedure that combines relevant data from two or more pictures into a single image. In addition, Fig. 4d shows the segmentation of the infected region for the sagittal, transversal, and coronal views. The red mark indicates the regional segmentation towards automated measurements.

    Figure 4: (a) Illustration of the normal images (b) demonstrating the thresholded images (c) visualization of the watershed-segmented image (d) segmenting the infected region for the sagittal, transversal, and coronal views. The red mark indicates the regional segmentation towards automated measurements

    4.3 Model Evaluation

    After the assessment, several indicators have been used to evaluate our concentrated model, e.g., the Confusion Matrix, the ROC-AUC curve, and loss/accuracy estimation. A confusion matrix is a powerful instrument for evaluating classification models [19]. It gives a clear picture of how well the model distinguished the classes based on the supplied data and how the classes were misclassified. The Receiver Operating Characteristic (ROC) curve is a binary classification evaluation metric. It is a probability curve that plots the TPR against the FPR at different threshold levels, thereby separating the signal from the 'noise'. The Area Under the Curve (AUC) summarizes the ROC curve and measures a classifier's capacity to distinguish between classes [20]. The AUC shows how well the model distinguishes between positive and negative categories: the higher the AUC, the better the performance of the model. When AUC = 1, the classifier can perfectly separate all positive and negative class points. If the AUC were 0, the classifier would predict all negatives as positives and all positives as negatives, while at 0.5 the classifier fails to distinguish between positive and negative class points and predicts either a random or a constant class for all of the data points. As a result, the higher a classifier's AUC score, the better it can distinguish between positive and negative classes, as shown in Fig. 5.
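
    The AUC described above has an equivalent probabilistic form: it is the probability that a randomly chosen positive example scores higher than a randomly chosen negative one (ties counting half). That view gives a compact reference implementation, shown here as an illustrative sketch:

```python
import numpy as np

def roc_auc(labels, scores):
    """AUC as the probability that a random positive scores higher
    than a random negative (ties count half)."""
    labels = np.asarray(labels)
    scores = np.asarray(scores, dtype=float)
    pos, neg = scores[labels == 1], scores[labels == 0]
    # Compare every positive score against every negative score.
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))
```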

    Figure 5: (a) ROC curve of the multiclass problem (b) representation of the confusion matrix on the trained dataset (c) loss over the epochs (d) training and validation accuracy over the epochs

    4.4 Comparative Analysis

    It should be mentioned that U-Net is a promising approach for image segmentation; accordingly, we segmented all our data using U-Net. We used U-Net, ResNet50, and CNN to verify the classification method and obtained the satisfactory performance shown in Tabs. 4 and 5. Our comparison has two parts. On the one hand, we execute all the methods with fused segmented images, where the image datasets are preprocessed by the image fusion technique and then segmented through U-Net. On the other hand, we use only segmented images, without the image fusion technique. It can be observed that the methods executed with fused images give better output than those with the non-fused images. In the image fusion strategy, important data from different sensors is combined using various mathematical models to create a single composite picture. Image fusion is used to join complementary multitemporal, multi-view, and multi-sensor data into a single picture with improved image quality while keeping the integrity of the essential features. Therefore, it is considered an important preprocessing step. As we can see, the image fusion strategy affects the image segmentation and, consequently, the classification results.

    Table 4: Comparative analysis over the various parameters in terms of identifying the brain cancer region and classification using the multimodal imaging modalities

    Table 5: Comparative analysis over the state-of-the-art techniques and identifying the software integration in the previous studies


    4.5 Observations and Discussions

    After reviewing the literature, it can be observed that a few state-of-the-art strategies were examined previously for segmenting and classifying brain cancer, but comprehensive pipelines are still missing. Despite the strengths and accuracy of certain papers, including the models' performance over different features, several drawbacks were noticed during the reviewing phase. It is noticeable that the Capsule Neural Network (CAPNN) was applied for classifying brain cancer, but CAPNN has difficulty differentiating very close objects. We have also observed that SVM-RFE was used to classify brain cancer from multimodal imaging modalities and achieved good accuracy. The results demonstrate consistent overall accuracy throughout a wide range of feature counts, implying that the same accuracy may be obtained with a minimal number of features. Some enhancements that might be explored include standardizing the data, utilizing different features, and using other ways to convert multiclass issues into binary problems. Apart from these, the previous research only concentrated on making a robust model instead of constituting clinical tools, which is crucial because a powerful clinical application is required to complete the diagnosis faster. We used image fusion techniques and software integration to overcome these drawbacks and to create "patches" of our data. CT scan and MRI are the two modalities considered in this research. We also optimized the loss function, removed overfitting, and reduced the complexity of the ResNet50 and 3D U-Net algorithms, for which we have achieved better accuracy in the case of fused images. Nonetheless, we have shown a pipeline for deploying the clinical model on the web server. Following these procedures, the clinical models can be deployed as real-time software to detect the brain cancer region and classify the cancer label.

    5 Conclusions and Future Work

According to statistics from CBTRUS, it is estimated that 18,600 individuals will die from brain cancer in 2021. However, the mortality rate can be reduced by diagnosing brain tumors from multimodal images as early as possible. By contrast, manual detection of brain tumors from MRI and CT scans by professionals is time-consuming and prone to human error. Machine learning approaches can reduce the time required to detect brain tumors and produce more accurate results than the manual detection process. Our proposed system will help doctors and patients detect brain tumors from multimodal images and take the necessary steps to help patients recover quickly. In this study, the image fusion technique has been used for image preprocessing, making feature extraction more accessible and less time-consuming. Further, the U-Net architecture has been used for image segmentation. Finally, U-Net, ResNet50, and CNN have been used for image classification; in this case, ResNet50 gave the highest accuracy at 98.96%. Therefore, the proposed system can be utilized as a decision-making tool to identify brain tumors. Furthermore, the proposed approach can help in the remote triaging of patients with brain cancer, minimizing hospital workload. This research will present a common platform for both doctors and patients in identifying brain tumors in the future. One drawback of this research is that the system is based on a public dataset, as the pandemic made collecting real-time data nearly impossible; however, it is planned to train the models on real-time data in the future. This approach will provide significant clinical assistance and aid in reducing brain cancer mortality rates.
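The segmentation quality reported throughout this work rests on dice-based measures (dice coefficient and soft dice loss, per the abstract). As a reference for how those quantities relate, here is a minimal NumPy sketch of the soft dice loss; it is a standard formulation and not necessarily the authors' exact variant, and the smoothing constant `eps` is an assumption.

```python
import numpy as np

def soft_dice_loss(pred, target, eps=1e-6):
    """Soft dice loss between a predicted probability map and a binary mask.

    Returns 1 - dice, so perfect overlap gives a loss near 0 and
    no overlap gives a loss near 1. `eps` avoids division by zero
    when both volumes are empty.
    """
    pred = pred.ravel().astype(np.float64)
    target = target.ravel().astype(np.float64)
    intersection = np.sum(pred * target)
    dice = (2.0 * intersection + eps) / (np.sum(pred) + np.sum(target) + eps)
    return 1.0 - dice
```

In a training loop the same expression would be written with the framework's tensor ops (e.g., TensorFlow or PyTorch) so gradients flow through the predicted probabilities.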

    Acknowledgement:The authors would like to thank the Deanship of Scientific Research, Taif University Researchers Supporting Project number (TURSP-2020/348), Taif University, Taif, Saudi Arabia for supporting this research work.

    Funding Statement:This study was funded by the Deanship of Scientific Research, Taif University Researchers Supporting Project number(TURSP-2020/348),Taif University,Taif,Saudi Arabia.

    Conflicts of Interest:The authors declare that they have no conflicts of interest to report regarding the present study.
