
    Nuclei Segmentation in Histopathology Images Using Structure-Preserving Color Normalization Based Ensemble Deep Learning Frameworks

Computers, Materials & Continua, December 2023

    Manas Ranjan Prusty, Rishi Dinesh, Hariket Sukesh Kumar Sheth, Alapati Lakshmi Viswanath and Sandeep Kumar Satapathy3,*

    1 Centre for Cyber-Physical Systems, Vellore Institute of Technology, Chennai, 600127, India

    2 School of Computer Science and Engineering, Vellore Institute of Technology, Chennai, 600127, India

    3 Centre for Advanced Data Science, Vellore Institute of Technology, Chennai, 600127, India

    ABSTRACT This paper presents a novel computerized technique for the segmentation of nuclei in hematoxylin and eosin (H&E) stained histopathology images. The purpose of this study is to overcome the challenges faced in automated nuclei segmentation due to the diversity of nuclei structures that arise from differences in tissue types and staining protocols, as well as the segmentation of variable-sized and overlapping nuclei. To this end, the approach proposed in this study uses an ensemble of the U-Net architecture with various Convolutional Neural Network (CNN) architectures as encoder backbones, along with stain normalization and test-time augmentation, to improve segmentation accuracy. Additionally, this paper employs a Structure-Preserving Color Normalization (SPCN) technique as a preprocessing step for stain normalization. The proposed model was trained and tested on both single-organ and multi-organ datasets, yielding an F1 score of 84.11%, mean Intersection over Union (IoU) of 81.67%, dice score of 84.11%, accuracy of 92.58% and precision of 83.78% on the multi-organ dataset, and an F1 score of 87.04%, mean IoU of 86.66%, dice score of 87.04%, accuracy of 96.69% and precision of 87.57% on the single-organ dataset. These findings demonstrate that the proposed model ensemble, coupled with the right pre-processing and post-processing techniques, enhances nuclei segmentation capabilities.

    KEYWORDS Nuclei segmentation; image segmentation; ensemble U-Net; deep learning; histopathology image; convolutional neural networks

    1 Introduction

    Latest advancements in microscopy cell analysis and big data have revolutionized the detection of illnesses with the aid of computer systems [1]. In particular, the accurate detection and segmentation of cell nuclei, which harbor a wealth of pathogenic information, has become critical for automated diagnosis and evaluation of cellular physiological states. This has raised the need for a precise and automated system for nuclei detection and segmentation that can significantly expedite the discovery of treatments for crucial ailments such as cancer. The nucleus of a cell serves as the starting point for various analyses, enabling researchers to gain insight into the cell's response to different treatments and unravel the underlying biological processes [2]. By streamlining therapy and drug development processes, this method holds immense potential for enhancing patient care [3]. For over half a century, segmenting the nucleus from histopathological images has been a focal point in clinical practice and scientific research.

    Automated nucleus segmentation is indispensable for various applications such as cell counting, movement monitoring, and morphological studies [4]. It provides vital information about cell characteristics and activities, facilitating early detection of diseases such as breast cancer and brain tumors. Initially, approaches like watershed and active contours were employed for nucleus segmentation. However, with sufficient training data, neural networks have emerged as the clear winner, surpassing traditional methods by a significant margin [5]. These networks have now become practical tools in laboratory settings.

    Although convolutional neural networks (CNNs) present a promising solution to this problem, the existence of several competing frameworks makes it challenging to choose the most suitable one for the job [6]. Two commonly used frameworks for object identification and segmentation, U-Net and Mask Region-Based Convolutional Neural Networks (Mask-RCNN), have exhibited remarkable performance in nucleus segmentation.

    The authors acknowledge the benefits of ensembling different competing candidates for a given task to achieve better results by leveraging each candidate's strengths and capabilities for improved robustness and accuracy [7]. In the case of nuclei segmentation, the authors believe that ensembling U-Nets constructed using different CNN architectures as encoder backbones can offer several advantages over existing approaches. Since each encoder backbone learns image representations differently due to architectural variations and design choices, combining them could enable the ensemble to capture a more diverse set of image features, which can in turn improve the model's ability to handle variations in nuclei appearance, size, shape, and texture, leading to more robust and accurate segmentation results.

    Inspired by this, this study proposes an ensemble of U-Nets constructed with different CNN architectures as encoder backbones, combined with stain normalization and test-time augmentation. This approach produces competitive results when trained and tested on both single-organ and multi-organ datasets of histopathology images.

    The novelty of this approach lies mainly in ensembling U-Nets trained with different CNN architectures as encoder backbones, namely ResNet101, InceptionResNetV2, and DenseNet121. While stain normalization (for pre-processing) and test-time augmentation (for post-processing) are pre-existing and common steps in nuclei segmentation tasks, the authors' main contribution in this paper is to explore the effects of combining these pre-processing and post-processing steps with the proposed ensemble model on the accuracy and robustness of the nuclei segmentation results.

    The subsequent sections of this paper are organized as follows: Section 2 presents a comprehensive literature review on nuclei segmentation. Section 3 discusses the datasets used in our study and their properties, while Section 4 introduces the proposed method in detail. In Section 5, the authors present the results obtained from the proposed model, along with the derived inferences and observations. Section 6 discusses limitations and future work, and Section 7 concludes this paper by summarizing the key findings and contributions.

    The following are the authors' contributions to automated nuclei segmentation in histopathology images:

    i) The proposed model is an ensemble of three U-Net models, each constructed with a different CNN architecture as the encoder backbone, namely ResNet101, InceptionResNetV2, and DenseNet121.

    ii) The pre-processing step employs a data-driven clustering technique to find the most appropriate reference image for stain normalization.

    iii) The post-processing approach involves applying Test-Time Augmentation (TTA) with various transformations to generate multiple prediction masks per model, which are then uniquely combined using a weighted average and pixel-wise majority voting to produce the final prediction.

    2 Related Works

    This section surveys existing research in the field of nuclei segmentation, specifically focusing on various proposed segmentation models, as well as different pre-processing and post-processing techniques. Threshold-based approaches, such as the watershed algorithm [8], and other similar methodologies are standard techniques employed for nucleus segmentation. However, these methods often require human intervention for feature extraction, making them tedious and time-consuming. With the advancement of deep learning, researchers have begun employing CNN-based approaches to tackle the task of nucleus segmentation, achieving several successful attempts [9,10] at developing robust models that can work out of the box and perform automated nuclei segmentation with high accuracy regardless of the variations in staining protocols.

    Classic nuclei segmentation methods generally comprise two steps: first, recognizing the nuclei, and then delineating the contours of each nucleus. During the detection stage, the region or seed of each nucleus must be generated. Unsupervised learning approaches typically group unlabeled data into homogeneous clusters based on criteria like intra-cluster distance [11], with K-Means and Fuzzy C-Means being two common algorithms [12,13]. However, these methods have drawbacks, including sensitivity to initial parameter values, returning local optimum solutions, and requiring prior knowledge of cluster numbers. Nature-inspired algorithms have been proposed as an efficient way to overcome these issues [14]. U-Net, a significant contribution cited in [15], has been a remarkable advancement in biomedical image segmentation and is the primary inspiration for this paper.

    The task of nuclei segmentation is usually preceded by a pre-processing step that improves the model's training, and followed by a post-processing step that improves the trained model's predictions. Several pre-processing techniques have been proven to improve the model's performance. For example, the authors of [16] combined the stain normalization proposed by [17] with a Nucleus Boundary model for improved results, while others have used various other color normalization methods [18,19].

    Some researchers, including those cited in [20], have incorporated deep learning into the pre-processing stage by employing a Deep Convolutional Gaussian Mixture Model (DCGMM). This model learns stain variations using the pixel-color dispersion of the nucleus, surrounding tissues, and background tissue types, subsequently utilizing this information to perform stain normalization. In contrast, the authors of [21] have utilized color contrast methods for a lightweight U-Net architecture, specifically by modifying the encoder branch, to achieve impressive results. Remarkably, some studies, such as those referenced in [22,23], have entirely bypassed the pre-processing step and yet still managed to attain good results.

    In combination with these different pre-processing techniques, several researchers have also proposed novel segmentation algorithms that produce cutting-edge outcomes. The authors of [24] proposed deep interval markers, whereas the authors of [25] have modified Mask-RCNN to produce state-of-the-art results. The authors of [7] advanced this field by combining Mask-RCNN and U-Net, utilizing the watershed algorithm as a post-processing step. In a similar vein, the authors of [19,26] ensembled variations of U-Net, such as R2U-Net and stacked U-Nets, to enhance accuracy and F1 score. Various studies, such as [27], have employed different ensembles with U-Net and its derivatives, outperforming the base U-Net in terms of efficiency. Conversely, other research papers, such as [28] and [29], have focused on successful modifications to the U-Net architecture itself to boost performance.

    In addition to this, various authors have incorporated several post-processing methods in their proposed models, leading to noticeable improvements in the final results. These methods include mask expansion, lateral bleed compensation [30,31], Condition Erosion based Watershed (CEW), Morphological Dynamics based Watershed (MDW), and conventional watershed algorithms [32,33]. These techniques aim to differentiate the central areas of the images from the background and surrounding elements. They strive to isolate every potential nucleus area, regardless of whether it is single or multiple layers [34], but often struggle to distinguish between adjacent cells. It is worth noting that the effectiveness of these techniques is assessed based on how accurately the segmented pixels align with those in the manually drawn ground truth images [35]. When calculating the nuclei detection rate, each connected region in the segmentation data is counted as one nucleus, irrespective of the number of nuclei present within the area. The evaluation methods for segmentation precision and nuclei recognition rate do not take duplication into account. Such limitations might affect the final decision in a Computer-Aided Diagnosis (CAD) system, especially due to errors in the under-segmentation of adjacent cells. Therefore, future research will likely concentrate on developing techniques to extract interregional barriers and isolate shared nuclei. The methods under examination might also be effectively applied to overlapped separation techniques, helping to distinguish between overlapping nuclei.

    The existing literature illustrates extensive research in the field of histopathology image segmentation utilizing a variety of deep learning models. However, there have been relatively few studies specifically addressing the problem of overlapping nuclei. The model proposed in this paper employs an ensemble approach enhanced with test-time augmentation to tackle this challenge. This methodology contributes to the model's robustness, providing a promising solution to this issue.

    3 Dataset

    The primary dataset used in this paper is the multiple-organ stained H&E image dataset (MOSID) [36,37], which contains annotated tissue images of several patients with tumors of different organs who were diagnosed at multiple hospitals. This dataset contains a diverse set of 30 H&E-stained images from different organs such as the breast, liver, kidney, prostate, bladder, colon, and stomach. Some sample images from the MOSID dataset are shown in Fig. 1. Each of these images is 1000 × 1000 pixels in size, and the dataset contains more than 20000 annotated nuclei. This training dataset will enable the creation of robust and generalizable nuclei segmentation pipelines that can operate right out of the box, given the diversity of nuclei structures across numerous organs and patients, as well as the differences in staining protocols used at multiple hospitals.

    In addition to this, the model was also trained and tested on a secondary single-organ dataset called the Triple-Negative Breast Cancer dataset (TNBC) [38], which contains a number of annotated breast tissue images. Some sample images from the TNBC dataset are shown in Fig. 2. This dataset consists of 50 images, each 512 × 512 pixels in size, with a total of 4022 annotated nuclei. The purpose of using this secondary dataset was to test whether the proposed approach worked equally well when presented with both single-organ and multi-organ datasets.

    Figure 2: Sample images from the TNBC dataset

    4 Proposed Methodology

    This section presents an overview of the proposed method and breaks down its various components in detail. Fig. 3 shows the high-level working of the proposed approach. The proposed methodology employs a combination of various techniques, including stain normalization, patch-based processing, test-time augmentation, and an ensemble model consisting of three U-Net architectures with different encoder backbones. By leveraging the strengths of these components, the authors aim to improve the accuracy and robustness of nuclei segmentation.

    The process of nuclei segmentation of a given histopathology image using the proposed model entails a sequential execution of steps, as described below:

    1. Stain Normalization: The histopathology image is first subjected to stain normalization using a stain normalizer. Stain normalization helps to remove variations in staining intensity and colour, making the images consistent and suitable for further analysis.

    2. Patch Extraction: Patches of size 256 × 256 are extracted from the stain-normalized image. This is done to break down the large image into smaller regions for processing. Each patch serves as input to the segmentation model.

    3. U-Net Ensemble Model: The ensemble model is composed of three U-Net architectures, each using a different encoder backbone. The encoder backbones used in this approach are ResNet101, InceptionResNetV2, and DenseNet121. By ensembling these U-Net architectures with different encoder backbones, the authors hope to leverage their complementary strengths and diverse feature extraction capabilities.

    4. Test-Time Augmentation: Before feeding the patches into the ensemble model for prediction, a series of augmentations are applied to each patch. Test-time augmentation involves generating multiple versions of each patch with different augmentations, such as rotations, flips, and scaling. This helps to increase the robustness and accuracy of the predictions.

    5. Ensemble Prediction: Each augmented patch is individually fed into the ensemble model, and the model returns a prediction mask for each patch. These masks are then merged to obtain the final prediction for each patch.

    6. Patch Mask Fusion: After obtaining the prediction masks for all patches, the masks are merged to reconstruct the final nuclei-segmented mask for the entire histopathology image. The merging process combines the predicted masks in a way that ensures consistency across the patches.

    Figure 3: Overview of the proposed method
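The patch mask fusion step (step 6 above) can be sketched as follows. This is a minimal illustration, assuming non-overlapping patches laid out on a regular grid in row-major order; `fuse_patch_masks` is a hypothetical helper name, not the authors' implementation.

```python
import numpy as np

def fuse_patch_masks(patch_masks, image_shape, patch_size=256):
    """Stitch per-patch prediction masks back into one full-image mask.
    Assumes non-overlapping, row-major tiling (an illustrative
    simplification of the fusion described in step 6)."""
    h, w = image_shape
    full = np.zeros((h, w), dtype=patch_masks[0].dtype)
    idx = 0
    for top in range(0, h, patch_size):
        for left in range(0, w, patch_size):
            ph = min(patch_size, h - top)    # clip at image border
            pw = min(patch_size, w - left)
            full[top:top + ph, left:left + pw] = patch_masks[idx][:ph, :pw]
            idx += 1
    return full
```

An overlapping tiling would instead average the predictions in the overlap regions; the non-overlapping case shown here keeps the sketch short.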

    The following subsections provide a detailed overview of the pre-processing, modeling, and post-processing steps in our approach, highlighting the rationale and methodology behind each one.

    4.1 Pre-Processing

    To reduce the color variations that arise from differences in staining protocols, a Structure-Preserving Color Normalization (SPCN) technique [17] was applied to the histopathology images prior to training the U-Net models. Given a source image s and a target image t, the SPCN technique first estimates the stain color appearance matrix (also called the stain matrix W) and the stain density map matrix (also called the concentration matrix H) by factorizing Vs into WsHs and Vt into WtHt using the proposed Sparse Non-negative Matrix Factorization (SNMF) approach.

    Here, the stain matrix W is a 2 × 3 matrix where the first row represents the hematoxylin stain color in RGB format, and the second row represents the eosin stain color in the same format. The concentration matrix, on the other hand, is an N × 2 array (N being the number of pixels) whose columns give the pixel concentrations of hematoxylin and eosin, respectively. V is the optical density array of the given image, given by Eq. (1):

    V = log10(I0/I)  (1)

    where I is the given RGB matrix of intensities of the image, and I0 is the illumination light intensity on the sample (255 for 8-bit images). The relationship between the optical density V, the stain matrix, and the concentration matrix can be obtained via the Beer-Lambert law (Eq. (2)):

    I = I0 · 10^(−WH)  (2)

    Combining Eqs. (1) and (2), we get the following relationship (Eq. (3)):

    V = WH  (3)

    The normalized source image is then created by combining the target's stain matrix Wt (instead of the source's Ws) with a scaled version of the source's concentration matrix Hs. Since the stain density H is preserved and only the stain appearance W changes, the structure of the image remains unchanged.
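The optical density conversion of Eq. (1) and the recombination step can be sketched in NumPy as follows. The function names are illustrative, the SNMF factorization itself is omitted, and the concentration matrix H is used with the paper's N × 2 convention (so V is recovered as H @ W):

```python
import numpy as np

def rgb_to_optical_density(img_rgb, I0=255.0):
    """Eq. (1): V = log10(I0 / I). A floor on I guards against
    log of zero for fully dark pixels."""
    I = np.maximum(img_rgb.astype(np.float64), 1e-6)
    return np.log10(I0 / I)

def normalize_stains(W_target, H_source):
    """Recombine the target stain matrix with the source concentration
    map (the scaling of H_source is omitted here), then invert Eq. (1)
    to get RGB intensities back. Sketch only, not the SPCN reference code."""
    V_norm = H_source @ W_target               # (N, 2) @ (2, 3) -> (N, 3)
    I_norm = 255.0 * np.power(10.0, -V_norm)   # invert Eq. (1)
    return np.clip(I_norm, 0, 255)
```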

    To find the most appropriate target image for stain normalization, a simple data-driven clustering technique was employed. Specifically, the stain matrices of all the training images were first extracted using the method described above. Then, K-means clustering with K = 1 was applied to all the stain matrices to find the representative stain template at the cluster center. The image closest to this cluster center was chosen as the target image for stain normalization. Fig. 4 shows the target images for the MOSID and TNBC datasets, respectively.
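Since K-means with K = 1 reduces to taking the mean, the reference-image selection can be sketched without a clustering library; the Euclidean distance on flattened 2 × 3 stain matrices is an assumption:

```python
import numpy as np

def pick_reference_image(stain_matrices):
    """With K = 1, the K-means cluster centre is simply the mean stain
    matrix; the reference image is the one whose stain matrix lies
    closest to that centre."""
    flat = np.stack([W.ravel() for W in stain_matrices])  # (n_images, 6)
    centre = flat.mean(axis=0)                            # K = 1 centroid
    dists = np.linalg.norm(flat - centre, axis=1)
    return int(np.argmin(dists))                          # index of target image
```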

    Figure 4: Stain templates for stain normalization (left: MOSID; right: TNBC)

    Once the images were normalized, they were extracted into patches of 256 × 256 using a sliding window for training. The reasons for doing this were two-fold: firstly, extracting patches provided a means of data augmentation by increasing the number of training images available. Secondly, histopathology images can sometimes be large images like Whole Slide Images (WSI), which make model training and prediction very slow. Dividing an image into smaller fixed-size patches can improve the model's training and prediction speed.

    The main reason for choosing 256 × 256 as the patch size for extraction was to ensure the right balance between information density and computational feasibility: choosing too small a patch size may result in losing important details and context necessary for accurate segmentation. On the other hand, larger patch sizes could lead to increased computational requirements and potential memory limitations. Selecting 256 × 256 patches strikes a balance between capturing sufficient information for nuclei segmentation and ensuring computational feasibility within the available resources. This also helps mitigate class imbalance by increasing the chances of capturing a more balanced distribution of positive (nuclei) and negative (background) examples within each patch.
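The sliding-window extraction can be sketched as below. The stride equal to the patch size (non-overlapping tiles) and the dropping of incomplete border tiles are assumptions; the paper does not state either detail:

```python
import numpy as np

def extract_patches(image, patch_size=256, stride=256):
    """Slide a fixed-size window over the normalized image and collect
    patch_size x patch_size patches. stride == patch_size yields
    non-overlapping tiles; incomplete border tiles are skipped."""
    h, w = image.shape[:2]
    patches = []
    for top in range(0, h - patch_size + 1, stride):
        for left in range(0, w - patch_size + 1, stride):
            patches.append(image[top:top + patch_size, left:left + patch_size])
    return patches
```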

    4.2 Modeling

    The proposed model is a weighted average ensemble made up of three U-Nets built with different backbones (encoders), namely ResNet101, InceptionResNetV2, and DenseNet121, pre-trained on ImageNet. U-Nets, as shown in Fig. 5, are fully convolutional neural networks developed especially for the task of biomedical image segmentation. The architecture of the U-Net is made up of two parts: an encoder path (contracting path) that is responsible for extracting features from the input image, and a decoder path (expanding path) that is responsible for constructing and upsampling an output image from the feature representations formed by the encoder. The U-Net also contains skip connections that concatenate feature representations from the encoder directly to the corresponding block in the decoder, thereby providing localization information and enabling accurate semantic segmentation.

    Figure 5: U-Net architecture. Adapted with permission from [15], Copyright © 2015 Springer International Publishing Switzerland

    Given the limited number of nuclei histopathology images and the subsequent lack of data to effectively train deep segmentation models from scratch, we can leverage the power of transfer learning by using pre-trained architectures such as ResNet and DenseNet as the encoder backbone for the U-Net. This can also improve the model's training speed and accuracy. Hence, we make use of three U-Nets with different backbones, namely ResNet101, InceptionResNetV2, and DenseNet121.

    ResNet [39] (or residual networks) overcame the problem of vanishing gradients by introducing skip connections (or residual connections) that transfer the results of a few layers to deeper layers below, thereby skipping the layers in between. Inception-ResNets [40], on the other hand, combine the Inception architecture with residual connections. DenseNets [41] also solve the problem of the vanishing gradient, and like ResNets, they do so by adding shortcuts among layers. But unlike in ResNets, a layer in a DenseNet receives the outputs of all previous layers and concatenates them in the depth dimension.

    We combine the advantages of all three types of convolutional neural networks using a weighted average ensemble technique. Each of the three U-Net models was trained on both normalized (pre-processed) and unnormalized MOSID and TNBC datasets. Random augmentations such as rotations, flips, and shifts were applied to the training set to increase the number of data points available for training. These augmentations create new instances of the data by modifying the spatial orientation, mirroring, or position of the images, effectively increasing the diversity of the dataset. This helps to balance the representation of different classes, including nuclei and background, by providing a more balanced distribution of augmented data points during training.
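The weighted-average combination of the three models' outputs can be sketched as follows; the weights shown in the test are illustrative, not the tuned values from Table 2, and the 0.5 threshold for binarizing the averaged probability map is an assumption:

```python
import numpy as np

def weighted_ensemble(prob_maps, weights, threshold=0.5):
    """Weighted average of per-model probability maps, followed by
    thresholding to produce a binary segmentation mask."""
    weights = np.asarray(weights, dtype=np.float64)
    weights = weights / weights.sum()           # normalize to sum to 1
    stacked = np.stack(prob_maps)               # (n_models, H, W)
    avg = np.tensordot(weights, stacked, axes=1)
    return (avg >= threshold).astype(np.uint8)
```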

    The training parameters were kept constant to perform a comparative analysis of the models. The optimal values for the hyperparameters, including the number of epochs, batch size, and learning rate, were determined using a standard grid search approach. The choice of Adam as the optimizer and binary cross-entropy (BCE) Jaccard loss as the loss function was motivated by their established effectiveness in various segmentation tasks. Table 1 depicts the various hyperparameters used.

    Table 1:Model training hyperparameters

    To obtain the optimal weights for the weighted average of the ensemble model, a grid search was performed with different weighted averages of the models' predictions on the test set. The weights corresponding to the prediction with the highest IoU score were chosen as the optimal weights. Table 2 shows the optimal weights for the U-Net models trained on both normalized and unnormalized MOSID and TNBC datasets:

    Table 2: Weights for the ensemble model, in the order (ResNet101, InceptionResNetV2, DenseNet121)
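The weight search described above can be sketched as an exhaustive grid over weight triples summing to 1, keeping the triple whose averaged prediction scores the highest IoU. The 0.1 step size is an assumption; the paper does not state the grid resolution:

```python
import numpy as np
from itertools import product

def iou(pred, gt):
    """Binary IoU between a predicted mask and the ground truth."""
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return inter / union if union else 1.0

def grid_search_weights(prob_maps, gt, step=0.1):
    """Try all (w1, w2, w3) with w1 + w2 + w3 = 1 on a grid and keep
    the triple whose weighted-average prediction maximizes IoU."""
    best_w, best_iou = None, -1.0
    vals = np.arange(0.0, 1.0 + 1e-9, step)
    for w1, w2 in product(vals, vals):
        w3 = 1.0 - w1 - w2
        if w3 < -1e-9:
            continue                     # weights must be non-negative
        w3 = max(w3, 0.0)
        avg = w1 * prob_maps[0] + w2 * prob_maps[1] + w3 * prob_maps[2]
        score = iou(avg >= 0.5, gt)
        if score > best_iou:
            best_iou, best_w = score, (w1, w2, w3)
    return best_w, best_iou
```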

    4.3 Post-Processing

    To boost the model's performance after training, Test-Time Augmentation (TTA) was applied while making predictions. Here, a given histopathology image is augmented by rotating (90°, 180° and 270°), flipping horizontally, flipping vertically, and flipping both horizontally and vertically. This yields 7 images, including the original, which are then fed as input to each of the three U-Net models of the ensemble. Thus, each histopathology image produces 21 prediction masks (7 predictions per model × 3 models). These 21 masks are first ensembled model-wise using the weighted average to produce 7 augmented masks. The augmentations are then undone, and the 7 masks are ensembled using a pixel-wise majority voting approach to produce the final prediction. Fig. 6 illustrates this process.
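The 21-masks-to-1 TTA scheme above can be sketched as follows. The `models` argument abstracts the trained U-Nets as callables mapping a patch to a probability map, and the 0.5 binarization threshold is an assumption:

```python
import numpy as np

def tta_transforms():
    """The seven test-time variants: identity, 90/180/270 degree
    rotations, horizontal, vertical and double flips. Each entry is
    an (apply, undo) pair."""
    return [
        (lambda x: x,                       lambda x: x),
        (lambda x: np.rot90(x, 1),          lambda x: np.rot90(x, -1)),
        (lambda x: np.rot90(x, 2),          lambda x: np.rot90(x, -2)),
        (lambda x: np.rot90(x, 3),          lambda x: np.rot90(x, -3)),
        (lambda x: np.fliplr(x),            lambda x: np.fliplr(x)),
        (lambda x: np.flipud(x),            lambda x: np.flipud(x)),
        (lambda x: np.flipud(np.fliplr(x)), lambda x: np.fliplr(np.flipud(x))),
    ]

def tta_predict(patch, models, weights):
    """For each of the 7 variants, weighted-average the 3 model outputs
    (21 masks -> 7), undo the augmentation, then take a pixel-wise
    majority vote over the 7 aligned masks."""
    weights = np.asarray(weights, dtype=np.float64) / np.sum(weights)
    undone = []
    for apply_t, undo_t in tta_transforms():
        aug = apply_t(patch)
        avg = sum(w * m(aug) for w, m in zip(weights, models))
        undone.append(undo_t(avg >= 0.5))        # model-wise ensemble, binarized
    votes = np.sum(undone, axis=0)
    return (votes >= 4).astype(np.uint8)         # majority of 7
```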

    Figure 6:Overview of model prediction with test time augmentation(TTA)

    5 Results and Discussions

    The following section presents the results of the proposed approach and draws inferences from them. We employed several metrics to evaluate the performance of individual models and their ensembles.

    These metrics can be classified into object-level metrics such as the Dice coefficient and mean Intersection over Union (IoU), and pixel-level metrics such as Accuracy, Precision, Recall, and F1 score [42–44].

    The IoU (or Jaccard Index) of class c is the percentage of overlap between the predicted class c in the segmentation mask and that in the ground truth, and is defined by Eq. (4). The mean IoU, on the other hand, gives the mean IoU of all classes in the segmentation mask, as shown in Eq. (5). The dice score coefficient (DSC) can also be used to gauge model performance and is positively correlated with the IoU value; it is given by Eq. (6). In the case of instance segmentation, the value of the dice score can be numerically equal to that of the F1 score. The F1 score, Precision, Recall, and Accuracy were employed as pixel-level metrics to get a better understanding of the model's performance (Eqs. (7)–(10)):

    IoU_c = TP_c / (TP_c + FP_c + FN_c)  (4)

    mean IoU = (1/C) · Σ_c IoU_c  (5)

    DSC = 2·TP / (2·TP + FP + FN)  (6)

    F1 = 2 · (Precision · Recall) / (Precision + Recall)  (7)

    Precision = TP / (TP + FP)  (8)

    Recall = TP / (TP + FN)  (9)

    Accuracy = (TP + TN) / (TP + TN + FP + FN)  (10)
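These pixel-level metrics can be computed directly from the confusion counts of a binary mask against the ground truth, as in the sketch below (the function name is illustrative):

```python
import numpy as np

def segmentation_metrics(pred, gt):
    """Pixel-level metrics for a binary segmentation mask. For binary
    masks the Dice score coincides with the F1 score, as noted above."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    tp = np.sum(pred & gt)
    fp = np.sum(pred & ~gt)
    fn = np.sum(~pred & gt)
    tn = np.sum(~pred & ~gt)
    iou = tp / (tp + fp + fn) if (tp + fp + fn) else 1.0
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if (precision + recall) else 0.0)
    dice = 2 * tp / (2 * tp + fp + fn) if (2 * tp + fp + fn) else 1.0
    accuracy = (tp + tn) / pred.size
    return {"iou": iou, "dice": dice, "f1": f1,
            "precision": precision, "recall": recall, "accuracy": accuracy}
```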

    The results of the models and their ensembles trained and tested on both the MOSID and TNBC datasets are presented in Tables 3 and 4, respectively. For each dataset, we conducted experiments with and without stain normalization to assess the impact of this pre-processing technique on performance. Additionally, we compared results obtained with and without test-time augmentation to evaluate the influence of the post-processing technique.

    Table 3 reveals that, for the MOSID dataset, the individual models trained and tested on the stain-normalized dataset achieved significantly better results compared to those trained and tested on the raw dataset. Stain normalization mitigates the effects of high color variations, thereby facilitating the learning of underlying feature representations. These findings validate the effectiveness of the pre-processing technique.

    Table 3:Results obtained with MOSID dataset

    Furthermore, irrespective of the application of pre-processing or post-processing techniques, the ensemble of the three models consistently outperformed the individual models. The ensemble method leverages the strengths of each model by assigning weights based on their individual performance and computing an average, leading to better output. Moreover, we observed that employing post-processing techniques consistently yielded improved results, thus establishing the contribution of test-time augmentation in enhancing performance. Similar to data augmentation during training, augmentation at test time can enhance a model's predictive capability.

    Similar conclusions can be drawn from the results obtained on the TNBC dataset, as presented in Table 4. Notably, InceptionResNetV2 marginally outperformed the ensemble when applied to the stain-normalized dataset. However, in the absence of stain normalization, the ensemble capitalized on the model with the best individual performance, thereby achieving improved overall performance.

    Table 4:Results obtained with TNBC dataset

    Overall, our findings demonstrate the efficacy of stain normalization in enhancing performance and highlight the advantages of employing ensemble models and test-time augmentation for nuclei segmentation in histopathology images.

    Table 5 shows a comparative analysis of the proposed approach with different segmentation methods. We can see that the proposed model outperforms its standard counterpart, the U-Net, and its variants such as the atrous spatial pyramid pooling U-Net (ASPPU-Net). It also fares better than other standard deep learning architectures such as DeepLab and Fully Convolutional Networks (FCN), as well as non-deep-learning methods such as Otsu thresholding and the watershed algorithm.

    Table 5:Comparative analysis of the proposed approach with different segmentation methods

    The proposed model exhibits higher efficiency in nuclei segmentation for several reasons. Firstly, the application of stain normalization as a pre-processing technique reduces color variations in histopathology images, allowing the model to learn meaningful features more effectively. This normalization enhances the model's robustness to variations in color intensity, leading to improved generalization to unseen data.

    Additionally, the ensemble model, consisting of three U-Net models with different encoder backbones, further enhances efficiency. By weighting the predictions of each model based on individual performance and averaging them, the ensemble leverages the strengths of each model, reducing the impact of individual limitations and improving overall accuracy and robustness. This also helps tackle overlapping nuclei.

    Moreover, the incorporation of test-time augmentation during the prediction phase contributes to higher efficiency. By applying a series of augmentations to each patch and considering the ensemble predictions from the augmented patches, the model captures diverse variations in the data, resulting in more accurate and robust predictions. The use of pre-trained encoders and the principles of transfer learning further enhance efficiency by leveraging learned representations from large-scale datasets. Collectively, these strategies optimize the model's efficiency and performance in nuclei segmentation tasks, effectively addressing challenges posed by variations and maximizing accuracy.

    6 Limitations and Future Work

    The proposed nuclei segmentation method in histopathology images exhibits potential but requires further investigation and improvement.The generalization of nuclei segmentation models to diverse histopathology images with varying staining protocols,tissue types,and image qualities remains an open challenge.While the authors have attempted to tackle this generalization issue by ensembling UNets with different encoder backbones(thereby leveraging diverse learning capabilities)and combining this with stain normalization and test-time augmentation,future research can explore training on diverse datasets,employing different domain adaptation techniques,and investigating alternative encoder backbones or architectures for enhanced performance.Additionally,balancing the accuracy-computational efficiency trade-off in test-time augmentation is crucial and an area for further investigation.Furthermore,considering adaptive patch sizes or multi-scale strategies during patch extraction can improve the model’s ability to handle nuclei of different sizes during segmentation.

    Evaluation metrics play a significant role in nuclei segmentation, and there is a need to develop novel metrics that capture the specific challenges of histopathology image analysis, including nuclear shape, size, and proximity. Moreover, it is vital to consider the input of medical professionals for the clinical application and acceptance of the proposed method. The discrepancies between the generated and real histopathology images should be addressed through collaboration, validation studies, and the development of standardized interfaces and integration frameworks. This will ensure the practical usability of the nuclei segmentation algorithm in routine clinical practice.

    Implementation challenges such as managing computational resources, ensuring dataset availability and diversity, addressing stain normalization accuracy, and achieving real-time processing present additional hurdles in practical deployment. Acquiring diverse, annotated histopathology datasets that encompass various staining protocols and tissue types is crucial but challenging. The accuracy of stain normalization plays a vital role in reducing variations, and ensuring its effectiveness is necessary for reliable segmentation. Moreover, optimizing real-time processing and seamless integration into clinical workflows are essential considerations, necessitating the application of optimization techniques and careful deployment strategies to overcome these challenges.

    7 Conclusion

    In this research, we proposed an ensemble of the U-Net encoder-decoder architecture with different popular convolutional neural networks as encoder backbones, combined with stain normalization and test-time augmentation as pre-processing and post-processing techniques, respectively. The number of training samples was increased by extracting patches of fixed size from the original images and applying various data augmentation techniques to them. The proposed model was trained and tested on both single-organ (TNBC) and multi-organ (MOSID) datasets, exposing it to nuclei of various morphological shapes and staining intensities. The proposed model's nuclei identification and segmentation capabilities were tested and compared using several metrics. We inferred that the proposed model ensemble performed better than the individual models used as backbones. Results also showed that the model's performance was boosted by the application of the proposed pre-processing and post-processing techniques. Additionally, we compared our proposed method with methods reported in other papers, and the comparison showed that the proposed method outperformed them in terms of the metrics considered.
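For illustration, the fixed-size patch extraction mentioned above can be sketched as follows; the 256-pixel patch size and non-overlapping stride are assumptions for the example, not the paper's reported settings:

```python
import numpy as np

def extract_patches(image, patch_size=256, stride=256):
    """Extract fixed-size square patches from an H x W x C image (sketch)."""
    h, w = image.shape[:2]
    patches = []
    for y in range(0, h - patch_size + 1, stride):
        for x in range(0, w - patch_size + 1, stride):
            patches.append(image[y:y + patch_size, x:x + patch_size])
    return patches
```

With a stride equal to the patch size, the patches tile the image without overlap; a smaller stride would yield overlapping patches and further increase the number of training samples.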

    Acknowledgement: The authors would like to thank the School of Computer Science and Engineering, and the Centre for Cyber-Physical Systems, Vellore Institute of Technology, Chennai for their constant support and motivation to carry out this research.

    Funding Statement:No funding is associated with this research.

    Author Contributions: Study conception and design: Rishi Dinesh, Manas Ranjan Prusty, Hariket Sukesh Kumar Sheth, Alapati Lakshmi Viswanath, Sandeep Kumar Satapathy; data collection: Rishi Dinesh, Manas Ranjan Prusty, Hariket Sukesh Kumar Sheth, Alapati Lakshmi Viswanath, Sandeep Kumar Satapathy; analysis and interpretation of results: Rishi Dinesh, Manas Ranjan Prusty, Hariket Sukesh Kumar Sheth, Alapati Lakshmi Viswanath, Sandeep Kumar Satapathy; draft manuscript preparation: Rishi Dinesh, Manas Ranjan Prusty, Hariket Sukesh Kumar Sheth, Alapati Lakshmi Viswanath, Sandeep Kumar Satapathy. All authors reviewed the results and approved the final version of the manuscript.

    Availability of Data and Materials: The MOSID dataset used in this paper is available in the repository at https://monuseg.grand-challenge.org/Data/. The TNBC dataset used in this paper is available in the repository at https://zenodo.org/record/1175282.

    Conflicts of Interest: The authors declare that they have no conflicts of interest to report regarding the present study.
