
    MAAUNet: Exploration of U-shaped encoding and decoding structure for semantic segmentation of medical image

2022-11-28 02:09:36

    SHAO Shuo, GE Hongwei

    (1. Jiangsu Provincial Engineering Laboratory of Pattern Recognition and Computational Intelligence, Jiangnan University, Wuxi 214122, China; 2. School of Artificial Intelligence and Computer Science, Jiangnan University, Wuxi 214122, China)

    Abstract: In view of the problems of multi-scale variation of segmentation targets, noise interference, coarse segmentation results and slow training faced by medical image semantic segmentation, a multi-scale residual aggregation U-shaped attention network, MAAUNet (MultiRes aggregation attention UNet), is proposed based on MultiResUNet. Firstly, aggregate connection is introduced in place of the original same-level feature aggregation. The skip connections are redesigned to aggregate features of different semantic scales in the decoder subnet, further reducing the semantic gap that may exist across skip connections. Secondly, after the multi-scale convolution module, a convolutional block attention module is added to focus and integrate features along the two attention directions of channel and space, adaptively optimizing the intermediate feature maps. Finally, the original convolution block is improved: the convolution channels are expanded with a serial convolution structure so that the channels complement each other and extract richer spatial features, while the residual connections are retained, turning the convolution block into a multi-channel convolution block that extracts multi-scale spatial features. The experimental results show that MAAUNet is strongly competitive on challenging datasets, and exhibits good segmentation performance and stability when dealing with multi-scale input and noise interference.

    Key words: U-shaped attention network structure of MAAUNet; convolutional neural network; encoding-decoding structure; attention mechanism; medical image; semantic segmentation

    0 Introduction

    With the development of computer vision, image segmentation has achieved superior performance in the fields of natural and biomedical images. In medical images, the parts that need to be segmented are often only specific regions, such as tumor regions, organ tissues and diseased regions. Unlike natural images, medical images vary in scale, their datasets are difficult to collect, and they contain more noise interference. Manual inspection requires considerable expertise and depends on subjective judgment. Therefore, the development of semantic segmentation technology for medical images is an important research topic.

    Early image segmentation methods commonly include region segmentation, boundary segmentation, thresholding and feature-based clustering. Although traditional segmentation methods achieve certain improvements in segmentation accuracy, they require prior knowledge, are not applicable to challenging tasks, and cannot maintain robustness. For example, the ISIC-2018[1] dataset contains skin lesion images of different scales. Fig.1 demonstrates that the scale, shape and color of skin lesions can vary greatly in dermoscopy images. Images with complex shapes or unclear boundaries yield unsatisfactory results with traditional segmentation methods.

    Fig.1 Variation of scale in medical images

    Relying on the popularity of deep convolutional neural networks (CNN[2]) in computer vision, CNNs were quickly applied to medical image segmentation tasks[3]. Networks such as fully convolutional networks (FCN[4]), SegNet[5], U-Net[6], V-Net[7], ResNet[8], DDANet[9], PSPNet[10], DenseNet[11], MultiResUNet[12], U-Net++[13], DC-UNet[14] and DoubleUNet[15] are used for image and voxel segmentation across various medical imaging modalities. These methods have achieved good performance on many complex datasets, proving the effectiveness of CNNs in learning and identifying features to segment organs or diseased tissues from medical images.

    A fully convolutional network (FCN) structure[4] is proposed to perform end-to-end image segmentation, which was superior to the existing algorithms at the time. FCN is improved into the new SegNet[5] architecture, which includes a 13-layer deep encoder to extract spatial features and a corresponding 13-layer deep decoder to produce segmentation results. DeepLab[16] is proposed, in which a deep CNN with a fully connected conditional random field (CRF) is used to refine the segmentation result. DeepLabV2[17] then employs atrous convolution to reduce the degree of signal down-sampling. An atrous spatial pyramid pooling (ASPP) module is employed to capture long-range context in DeepLabV3[18]. DeepLabV3+[19] adopts an encoder-decoder structure for semantic segmentation. The U-Net[6] architecture is proposed, which includes a contracting path for acquiring context and a symmetric expanding path for precise localization. A skip connection is added to the encoder-decoder image segmentation network (such as SegNet) to improve the accuracy of the model and alleviate the problem of vanishing gradients. The similar V-Net[7] architecture adds residual connections and replaces 2D operations with 3D operations to process 3D voxel images; direct optimization of Dice, a widely used segmentation metric, is also proposed. Some studies have developed a segmentation version of the densely connected network architecture DenseNet[11], which uses an encoder-decoder framework like U-Net.

    However, these models still face the problems of variable segmentation target scale, noise interference, rough segmentation results, slow training and insufficient robustness. In response to these problems, a multi-scale residual aggregation U-shaped attention network structure, MAAUNet (MultiRes aggregation attention U-Net), is proposed. Through extensive experiments on different medical image datasets, it is found that MAAUNet is better than the classic U-Net model and the recent MultiResUNet model in most cases. The contributions of this article can be summarized as follows.

    1) Aggregate connection is introduced, which differs from the original single feature aggregation at the same level. Skip connections are redesigned to merge features of different semantic scales at multiple levels and multiple scales, thereby further reducing the semantic gap across skip connections.

    2) After the multi-scale convolution module, a convolution block attention module is added to focus and integrate features in the two attention directions of channel and space, and optimize the intermediate feature map.

    3) The residual connection is improved in the original convolution block. The convolution channel is expanded with a series convolution structure. The residual connection is retained, and the multi-scale convolution block is turned into a multi-channel convolution block.

    1 Prior knowledge

    1.1 U-Net architecture

    Fig.2 shows the U-Net network architecture, which consists of an encoder and a decoder. The encoder follows the typical structure of a convolutional network. It includes repeated application of two 3×3 convolutions, each followed by a rectified linear unit (ReLU), and a 2×2 max pooling operation with a stride of 2 for down-sampling. In each down-sampling step, the number of feature channels is doubled. This operation is repeated four times. Each step in the decoder includes up-sampling of the feature map, followed by a 2×2 deconvolution that halves the number of feature channels, and concatenation with the corresponding feature map from the skip connection in the encoder. Two 3×3 convolutions are then applied, each followed by a ReLU. Since boundary pixels are lost in each convolution, cropping is required. In the last layer, a 1×1 convolution is used to map each component feature vector to the required number of classes.
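The channel doubling and spatial halving described above can be tracked with a short sketch. The input size and base channel count are illustrative (assuming "same" padding, so the convolutions preserve spatial size):

```python
# Track (height, width, channels) through the U-Net encoder described above:
# each stage applies two 3x3 convolutions ("same" padding assumed), then a
# 2x2 max pool with stride 2 halves H and W while the channel count doubles.

def unet_encoder_shapes(h, w, base_channels=64, depth=4):
    shapes = []
    c = base_channels
    for _ in range(depth + 1):      # depth pooling steps give depth+1 stages
        shapes.append((h, w, c))
        h, w, c = h // 2, w // 2, c * 2
    return shapes

print(unet_encoder_shapes(256, 256))
# (256,256,64) -> (128,128,128) -> (64,64,256) -> (32,32,512) -> (16,16,1024)
```

The decoder runs the same bookkeeping in reverse, halving channels while doubling the spatial size at each up-sampling step.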

    Fig.2 U-Net architecture

    In addition, skip connections are introduced in the U-Net architecture to transmit the output of the encoder to the decoder. These feature maps are concatenated with the output of the up-sampling operation, and the concatenated feature maps are propagated to subsequent layers. Skip connections allow the network to retrieve spatial features lost due to pooling operations.
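A minimal NumPy sketch of this skip-connection fusion, with illustrative shapes (N, H, W, C layout) and nearest-neighbour up-sampling standing in for the learned deconvolution:

```python
import numpy as np

# Sketch of a U-Net skip connection: the encoder feature map saved before
# pooling is concatenated channel-wise with the upsampled decoder feature map.

encoder_features = np.random.rand(1, 64, 64, 128)   # saved before pooling
decoder_features = np.random.rand(1, 32, 32, 128)   # deeper, coarser map

# 2x nearest-neighbour upsampling of the decoder map
upsampled = decoder_features.repeat(2, axis=1).repeat(2, axis=2)

# channel-wise concatenation reintroduces spatial detail lost to pooling
merged = np.concatenate([encoder_features, upsampled], axis=-1)
print(merged.shape)  # (1, 64, 64, 256)
```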

    1.2 MultiResUNet architecture

    MultiResUNet reconsiders and redesigns the U-Net model. Aiming at the scale diversity of medical images, the original convolutional layer is replaced by an Inception-like block[20], which better handles images of different scales, as shown in Fig.3.

    Fig.3 MultiRes block improvement process

    The parallel structure of the left diagram in Fig.3 is converted into the serial structure of the middle diagram through reconstruction. Based on the serial structure, a residual connection is added to form the structure of the right diagram in Fig.3. Thus, a series of smaller 3×3 convolutional layers replaces the larger 5×5 and 7×7 convolutional layers, and a 1×1 convolutional layer called a residual connection[8] is added, which provides some additional spatial features. This structure is called the MultiRes block.
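The replacement of 5×5 and 7×7 convolutions by chains of 3×3 convolutions rests on receptive-field arithmetic: stacking k stride-1 3×3 convolutions yields a (2k+1)×(2k+1) receptive field with fewer parameters. A small sketch:

```python
def stacked_receptive_field(kernel=3, n_layers=1):
    """Receptive field of n stacked stride-1 convolutions of equal kernel size."""
    rf = 1
    for _ in range(n_layers):
        rf += kernel - 1
    return rf

# one 3x3 conv sees 3x3; two stacked see 5x5 (like a single 5x5 conv);
# three stacked see 7x7 (like a single 7x7 conv)
print([stacked_receptive_field(3, n) for n in (1, 2, 3)])  # [3, 5, 7]
```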

    There may be a semantic gap between the corresponding levels of the encoding-decoding architecture: the feature map produced by the encoder cannot be directly concatenated with the feature map output by the decoder. Therefore, some convolutional layers are added to the skip-connection path, forming what is called ResPath. Instead of simply connecting the feature maps from the encoder stage to the decoder stage, they are first passed through a chain of 3×3 convolutional layers with residual connections, and then concatenated with the decoder features. The ResPath structure is shown in Fig.4.

    The MultiRes block and ResPath are added to the U-shaped structure to form the MultiResUNet model, as shown in Fig.5.

    Fig.4 ResPath structure diagram

    Fig.5 MultiResUNet architecture

    2 MAAUNet model

    The MAAUNet model builds on MultiResUNet: it introduces aggregate connections to reduce the semantic gap, integrates an attention mechanism to optimize the intermediate feature maps, and proposes a multi-channel convolution block to deal with interference at different scales. Based on these improvements, the model can effectively deal with scale transformations and background interference, and provide more effective segmentation.

    2.1 Aggregate connection

    Although MultiResUNet reduces the semantic gap between encoder and decoder by adding ResPath at the corresponding levels, an aggregate connection strategy is recommended, on top of the retained ResPath, to further bridge the semantic gap between the start of the encoder and the end of the decoder. The deeper feature maps are up-sampled and fused with the low-level feature maps of the skip connection at each layer, which better handles images of different scales and helps further reduce the semantic gap.

    This is because the deep-level feature map carries more accurate semantic information but is coarse-grained and not conducive to recovering details, while the low-level feature map carries less accurate semantics but is fine-grained and helps restore segmentation details. Therefore, the upward aggregation connection not only makes full use of the semantic information but also restores fine segmentation results. The skip connection is redesigned to aggregate the features of different semantic scales in the decoder sub-network, forming a highly flexible feature fusion scheme. In this way, features from different levels can be merged and integrated through feature superposition.

    The specific method is to fill the center of the U-shaped structure with nodes while retaining the original ResPath. Each node is formed by concatenating the ResPath result of the previous node at the same depth with the up-sampling result of the node at the next depth; together these constitute the restoration information of the node. The specific structure is shown in Fig.6.
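A hedged NumPy sketch of one such interior node: the ResPath output at the same depth is concatenated with the 2×-upsampled output of the node one level deeper. Channel counts are illustrative, not the paper's exact values:

```python
import numpy as np

# One aggregation node of the filled U-shape: concatenate the ResPath output
# at this depth with the nearest-neighbour-upsampled node from the next depth.

def aggregation_node(same_depth_feat, deeper_feat):
    """Fuse features for one interior node of the aggregated U-shape."""
    up = deeper_feat.repeat(2, axis=1).repeat(2, axis=2)  # 2x upsampling
    return np.concatenate([same_depth_feat, up], axis=-1)

same_depth = np.random.rand(1, 64, 64, 32)   # ResPath output at this depth
deeper = np.random.rand(1, 32, 32, 64)       # node at the next depth down
node = aggregation_node(same_depth, deeper)
print(node.shape)  # (1, 64, 64, 96)
```

In the full model, each fused node would then pass through a convolution block before feeding the next node up.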

    Receptive fields of different sizes have different sensitivities to objects of different sizes. For example, features with large receptive fields easily identify large objects. However, in medical image segmentation, the edge information of large objects and the small objects themselves are easily lost by the down-sampling and up-sampling of a deep network; at that point, features with small receptive fields are needed. Therefore, aggregating features from different depths helps deal with scale changes.

    Fig.6 Diagram of aggregation connection

    2.2 Convolution block attention module

    The existing U-shaped structure treats all image features equally, but in fact the information in an image is not evenly distributed. To pay more attention to areas rich in information, the CNN model is combined with an attention mechanism to improve the segmentation performance of the model. For the channel attention mechanism, the squeeze-and-excitation module[21] was previously proposed, which can distinguish the importance of different channels. For the spatial attention mechanism, a spatial attention module is introduced, as in SA-UNet[22]. The module derives an attention map along the spatial dimension and multiplies it with the input feature map to adaptively refine the features.

    The convolutional block attention module (CBAM[23]) is introduced, which can refine the feature map along the channel and spatial dimensions and integrate the two. Given an intermediate feature map F ∈ R^(C×H×W), CBAM sequentially infers a one-dimensional channel attention map Mc ∈ R^(C×1×1) and a two-dimensional spatial attention map Ms ∈ R^(1×H×W). The overall attention process can be summarized as

    F′ = Mc(F) ⊗ F,

    F″ = Ms(F′) ⊗ F′,

    where ⊗ denotes element-wise multiplication. During multiplication, the attention values are broadcast accordingly: channel attention values are broadcast along the spatial dimension, and vice versa. F″ is the final refined output. Fig.7 shows the calculation process of each attention map.
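A hedged NumPy sketch of this two-step refinement with random weights. To keep the sketch short, the 7×7 convolution of the spatial sub-module is reduced to a fixed 1×1 combination of the two pooled descriptors; the channel sub-module's shared two-layer MLP is kept:

```python
import numpy as np

# Sketch of F' = Mc(F) * F followed by F'' = Ms(F') * F' (element-wise).

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(F, W1, W2):
    """Mc(F): shared two-layer MLP over global average- and max-pooled vectors."""
    avg = F.mean(axis=(1, 2))                # (N, C)
    mx = F.max(axis=(1, 2))                  # (N, C)
    mlp = lambda v: v @ W1 @ W2              # shared weights for both branches
    return sigmoid(mlp(avg) + mlp(mx))[:, None, None, :]   # (N, 1, 1, C)

def spatial_attention(F, w_avg=0.5, w_max=0.5):
    """Ms(F): channel-axis average/max pooling, then a simplified combination."""
    avg = F.mean(axis=-1, keepdims=True)     # (N, H, W, 1)
    mx = F.max(axis=-1, keepdims=True)       # (N, H, W, 1)
    return sigmoid(w_avg * avg + w_max * mx)

rng = np.random.default_rng(0)
N, H, W, C, r = 1, 8, 8, 16, 4
F = rng.standard_normal((N, H, W, C))
W1 = rng.standard_normal((C, C // r))        # channel-reduction layer
W2 = rng.standard_normal((C // r, C))        # channel-restoration layer

F1 = channel_attention(F, W1, W2) * F        # F'  = Mc(F) * F  (broadcast)
F2 = spatial_attention(F1) * F1              # F'' = Ms(F') * F'
print(F2.shape)  # (1, 8, 8, 16)
```

Broadcasting performs the propagation described above: the (N, 1, 1, C) channel map spreads over all spatial positions, and the (N, H, W, 1) spatial map spreads over all channels.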

    Fig.7 CBAM module structure

    The channel attention sub-module compresses the feature map into a one-dimensional vector along the spatial dimension. Global max pooling and global average pooling are used to aggregate the spatial feature information, and the results are passed through a shared fully connected layer and added element-wise. Using global average pooling and max pooling together extracts richer high-level features and provides more accurate information. The spatial attention sub-module then applies average pooling and max pooling operations along the channel axis to the output of the channel sub-module, concatenates the resulting feature descriptors, and uses a convolutional layer to generate the spatial attention map.

    To efficiently compute channel attention, the spatial dimension of the input feature map is compressed. The most commonly used aggregation method is average pooling, but max pooling also collects important clues about distinctive object features, which can be used to infer finer channel attention. The max-pooled features, which encode the most salient parts, compensate for the average-pooled features, which encode global statistics. Therefore, both average-pooled and max-pooled features are used.

    Fig.8 CBAM sub-module structure diagram

    The spatial relationship between features is used to generate a spatial attention map, which is complementary to channel attention. To compute spatial attention, average pooling and max pooling operations are first applied along the channel axis and concatenated to generate an effective feature descriptor; applying pooling along the channel axis effectively highlights informative regions. A convolutional layer is then applied to the concatenated feature descriptor to generate the spatial attention map Ms(F) ∈ R^(1×H×W), which encodes the positions to be enhanced or suppressed.

    After each convolution module of the network, the convolutional attention module is introduced to adaptively refine the generated feature map, focusing on feature-rich channels and spatial locations before the next layer of convolution, which yields more accurate and finer intermediate feature maps.

    2.3 Multi-channel block

    Note that there is only a simple residual connection in the MultiRes block. This residual connection provides only some additional spatial features, which may not be enough for challenging tasks. Features of different scales have shown great potential in medical image segmentation. Therefore, to overcome the problem of insufficient spatial features, another sequence of three concatenated 3×3 convolutional layers is used to expand the convolution channels of the MultiRes block, so that the two serial convolution channels complement each other and provide richer spatial features. To prevent the neural network from degenerating and to improve convergence speed during training, the symmetry of the convolutional structure should be broken, so the original residual connections are kept. This block is called the Multi-channel block, and its structure is shown in Fig.9.
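A hedged NumPy sketch of the block's data flow: two serial chains of three 3×3 convolutions run side by side, their outputs are concatenated, and a 1×1 residual connection is added. Weights are random, the convolution is a naive "same"-padded implementation, and channel counts are illustrative rather than the paper's exact values:

```python
import numpy as np

def conv2d_same(x, w):
    """Naive 3x3 'same' convolution: x (H, W, Cin), w (3, 3, Cin, Cout)."""
    H, W, _ = x.shape
    xp = np.pad(x, ((1, 1), (1, 1), (0, 0)))
    out = np.zeros((H, W, w.shape[-1]))
    for i in range(H):
        for j in range(W):
            out[i, j] = np.tensordot(xp[i:i + 3, j:j + 3], w, axes=3)
    return out

def multi_channel_block(x, chain1, chain2, w_res):
    b1 = x
    for w in chain1:                       # first chain of three 3x3 convs
        b1 = np.maximum(conv2d_same(b1, w), 0)   # ReLU
    b2 = x
    for w in chain2:                       # second, complementary chain
        b2 = np.maximum(conv2d_same(b2, w), 0)
    merged = np.concatenate([b1, b2], axis=-1)
    residual = np.tensordot(x, w_res, axes=1)    # 1x1 conv residual connection
    return merged + residual

rng = np.random.default_rng(0)
x = rng.standard_normal((8, 8, 4))
chain1 = [rng.standard_normal((3, 3, 4, 4)) * 0.1 for _ in range(3)]
chain2 = [rng.standard_normal((3, 3, 4, 4)) * 0.1 for _ in range(3)]
w_res = rng.standard_normal((4, 8)) * 0.1        # 1x1 conv: 4 -> 8 channels
y = multi_channel_block(x, chain1, chain2, w_res)
print(y.shape)  # (8, 8, 8)
```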

    Fig.9 Multi-channel block structure

    Thus, the basic architecture of MAAUNet is obtained: based on MultiResUNet, aggregate connections are added to reduce the semantic gap, the convolutional block attention module optimizes the intermediate feature maps, and the multi-channel block deals with scale changes. Its structure is shown in Fig.10.

    Fig.10 MAAUNet structure diagram

    3 Experiments

    3.1 Experimental setup

    The model in this article is implemented in Python, using Keras with a TensorFlow backend. The operating system of the experimental platform is Linux 4.4.0, and the GPU is a GeForce RTX 2080Ti.

    In order to verify the effectiveness and segmentation performance of MAAUNet model, comparative experiments are carried out on the ISIC-2018, Murphy lab[24], CVC-ClinicDB[25]and ISBI-2012 datasets by using U-Net, MultiResUNet and DC-UNet.

    3.1.1 Datasets

    Four public datasets are selected to test the performance of the four U-Net based models. The nuclei in the Murphy lab dataset are irregular in brightness, and the images often contain noticeable debris. Some images in the ISBI-2012 electron microscopy dataset contain many interferences, such as noise, and other parts of the cells affect the model's boundary identification. The ISIC-2018 dataset contains skin lesion images of different scales, where the shape, size and colour of the lesion areas all differ. In the colonoscopy images of CVC-ClinicDB, the boundaries of polyps are very blurred and difficult to distinguish, and the shape, size, structure and location of polyps also vary; these factors make it the most challenging dataset. Table 1 briefly describes the datasets used in the experiments.

    The fluorescence microscopy image dataset is collected by Murphy lab. It contains fluorescence microscopy images in which cell nuclei are manually segmented by experts. The brightness of the nuclei is irregular, and the images usually contain bright fragments, making it a challenging microscopy dataset.

    Electron microscopy images are segmented using the ISBI-2012 2D EM challenge dataset. The images suffer from slight alignment errors and are corrupted by noise.

    ISIC-2018 is a dermoscopy image dataset. It includes a total of 2 594 images of different types of skin lesions with expert annotations. The original and input resolutions are shown in Table 1. CVC-ClinicDB is a colonoscopy image database used in the endoscopy experiments. The images are extracted from frames of 29 colonoscopy video sequences, giving a total of 612 images.

    3.1.2 Pre-processing/post-processing

    The purpose is to study the performance of the proposed MAAUNet architecture compared with the original U-Net and MultiResUNet, so no task-specific pre-processing is applied. The only pre-processing is to resize the input images to fit GPU memory and to divide pixel values by 255 to bring them into the range [0, 1].
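This pre-processing can be sketched in a few lines. Nearest-neighbour resizing is used here to stay dependency-free, and the target size is illustrative:

```python
import numpy as np

# Resize to a fixed input size and scale pixel values to [0, 1].

def preprocess(image, target_hw=(256, 256)):
    h, w = image.shape[:2]
    th, tw = target_hw
    rows = np.arange(th) * h // th          # nearest-neighbour row indices
    cols = np.arange(tw) * w // tw          # nearest-neighbour column indices
    resized = image[rows][:, cols]
    return resized.astype(np.float32) / 255.0   # bring pixels to [0, 1]

img = np.random.randint(0, 256, (300, 400, 3), dtype=np.uint8)
out = preprocess(img)
print(out.shape, out.min() >= 0.0, out.max() <= 1.0)
```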

    3.1.3 Training

    For a batch containing n images, the loss function J is

    The Adam optimizer is used to train these models, with parameters β1 = 0.9 and β2 = 0.999. The number of training epochs varies with the size of the dataset.
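As a hedged sketch of such a batch loss, pixel-wise binary cross-entropy averaged over the n images of a batch is assumed here, since these are all binary segmentation tasks; the paper's exact formulation of J may differ:

```python
import numpy as np

# Batch loss sketch: pixel-wise binary cross-entropy, averaged over all
# pixels and all n images of the batch (an assumed, common formulation).

def batch_bce_loss(y_true, y_pred, eps=1e-7):
    """y_true, y_pred: (n, H, W) arrays of {0,1} labels and probabilities."""
    p = np.clip(y_pred, eps, 1.0 - eps)     # avoid log(0)
    per_pixel = -(y_true * np.log(p) + (1.0 - y_true) * np.log(1.0 - p))
    return per_pixel.mean()                 # mean over pixels and the batch

y_true = np.array([[[1.0, 0.0], [0.0, 1.0]]])
y_pred = np.array([[[0.9, 0.1], [0.2, 0.8]]])
print(round(batch_bce_loss(y_true, y_pred), 4))  # 0.1643
```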

    3.1.4 Evaluation metric

    In semantic segmentation, the target regions occupy different proportions of the entire image. Indicators such as accuracy and recall are therefore insufficient and may show exaggerated segmentation performance that varies with the proportion of background. Hence, the Jaccard index is used to evaluate the image segmentation models. The Jaccard index of two sets A and B is defined as the ratio of the intersection to the union of the two sets: J(A, B) = |A ∩ B| / |A ∪ B|.
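On binary masks this reduces to counting overlapping and combined foreground pixels:

```python
import numpy as np

# Jaccard index |A ∩ B| / |A ∪ B| for binary segmentation masks.

def jaccard_index(pred, truth):
    pred, truth = pred.astype(bool), truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    return intersection / union if union else 1.0   # both empty: perfect match

pred = np.array([[1, 1, 0], [0, 1, 0]])
truth = np.array([[1, 0, 0], [0, 1, 1]])
print(jaccard_index(pred, truth))  # 2 / 4 = 0.5
```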

    3.2 Results and discussion

    On the four datasets Murphy lab, ISBI-2012, ISIC-2018 and CVC-ClinicDB, the proposed MAAUNet model is compared with U-Net, MultiResUNet and DC-UNet, and analysed from both quantitative and qualitative perspectives to validate its segmentation performance. Comparative experiment results are shown in Table 2. For better readability, the fractional values of the Jaccard index have been converted to percentages (%), and bold values in Table 2 represent the best performance for each dataset.

    Table 2 Comparison of experimental results of different models

    3.2.1 Quantitative analysis

    It can be seen that the proposed model achieves improvements of 2.979 7%, 7.958 0%, 5.072 8% and 6.392 9% in the Jaccard index on the Murphy lab, ISBI-2012, ISIC-2018 and CVC-ClinicDB datasets, respectively, compared with the classic U-Net model. Among them, the results on the ISBI-2012 and CVC-ClinicDB datasets are significantly improved. For the Murphy lab and ISIC-2018 datasets, the proposed model still achieves improvements over U-Net. These improvements are evident.

    Compared with the MultiResUNet model, the Jaccard index of the proposed model improves by 0.628 7%, 7.419 5% and 1.201 7% on the Murphy lab, ISBI-2012 and ISIC-2018 datasets. Only on the CVC-ClinicDB dataset does MAAUNet appear equivalent to MultiResUNet. Compared with the DC-UNet model, the proposed model improves on all datasets.

    Quantitatively, for the Jaccard index, which is widely used in medical image segmentation, the proposed model MAAUNet has achieved considerable performance improvement on the datasets which have multi-scale input and interference of bright noise. The proposed multi-scale model with aggregation connection and attention mechanism has indeed achieved good improvements.

    3.2.2 Qualitative analysis

    This paper selects typical samples from the datasets for qualitative analysis. As shown in Fig.11, the proposed model is more robust to images of different scales: the segmentation of boundaries, fragments and small areas is more refined, and the interference of high-brightness noise is more effectively avoided.

    For example, the first row of Fig.11 shows the experimental results on the ISBI-2012 dataset. Because of the influence of light and dark changes, the original MultiResUNet segmentation results contain many messy lines inside the cells, while these fragments and lines are filtered out in the results of the proposed model, leaving a cleaner interior.

    Experiments on the ISIC-2018 dataset are shown in the second row. The segmentation results obtained by MultiResUNet are too fine and narrow due to blurred boundaries and noise interference, while MAAUNet integrates multi-scale features to obtain lesion segmentation areas that overlap the ground truth more, achieving a relative improvement.

    The third row of Fig.11 shows experimental results on the CVC-ClinicDB dataset. A small target is incorrectly segmented by MultiResUNet due to the influence of other similar tissues, while MAAUNet avoids both the wrong segmentation and the interference of similar tissues.

    Experimental results on the Murphy lab dataset are shown in the fourth row of Fig.11. The lower right corner of the input image contains highlighted fragments. The highlight interference causes MultiResUNet to produce incomplete cell segmentation containing small cell fragments, while the proposed MAAUNet avoids confusion from the highlight noise and obtains clearer and more complete cell segmentation. The qualitative analysis shows that, faced with multi-scale input and noise interference, the MAAUNet model obtains more refined and clear segmentation results, and its effectiveness and robustness are verified on multiple datasets.

    Fig.11 Qualitative analysis. Segmentation results for different models on four datasets. Dataset (row from top to bottom): ISBI-2012, ISIC-2018, CVC-ClinicDB, Murphy Lab. Image (Column from left to right): Original image, Ground truth, MultiResUNet, MAAUNet.

    3.3 Ablation experiment

    To further confirm that the aggregation connection marked as ①, the convolutional block attention mechanism module marked as ② and the multi-channel module marked as ③ do play a positive role, the ablation experiments as shown in Table 3 are performed.

    3.3.1 Aggregate connection

    Comparing original MultiResUNet network and the structure with aggregated connections added, it can be seen that the addition of aggregated connections improves the segmentation performance by 0.237 5%, 4.392 1% and 0.502 9% on Murphy lab dataset, ISBI-2012 dataset and ISIC-2018 dataset. It shows that adding aggregate connections on the basis of the attention mechanism also yields 2.023 7%, 0.131 9% and 1.067 6% improvements on ISBI-2012 dataset, ISIC-2018 dataset and CVC-ClinicDB dataset. It can be seen that aggregate connections based on the multi-channel module still achieve 0.133 6%, 2.051 3% and 0.389 5% performance improvements on Murphy lab dataset, ISBI-2012 dataset and ISIC-2018 dataset.

    Table 3 Ablation experiment results

    The aggregation connection reduces the semantic gap and helps the decoder recover low-level position information. It can be seen from Fig.11 that, with the original same-level skip connections replaced by cross-level flexible aggregation connections, the segmentation results on the electron microscopy dataset, the skin lesion image dataset and the cell nucleus dataset are all improved, and similar results are obtained on the endoscopy dataset. The aggregation connection can fuse feature information at different levels, which is conducive to accurate restoration of the segmented images. The aggregate connection is thus a beneficial extension of the U-shaped structure and is very helpful for medical image segmentation with varying scales and sensitive boundary information.

    3.3.2 Convolution block attention module

    It can be seen that adding the attention module improves the performance of the model by 0.260 6%, 4.351 1% and 0.383 3% on the Murphy lab, ISBI-2012 and ISIC-2018 datasets, respectively. Using the attention module on top of the aggregation connection also improves the model by 1.982 7%, 0.012 3% and 0.271 1% on the ISBI-2012, ISIC-2018 and CVC-ClinicDB datasets, respectively. The attention module on top of the multi-channel module achieves segmentation performance improvements of 1.770 6% and 0.366 8% on the ISBI-2012 and ISIC-2018 datasets.

    The insertion of the convolutional block attention module extracts richer high-level features, provides more refined information, adaptively optimizes the intermediate feature maps, and yields more accurate segmentation results. Improvements are obtained on the fluorescence microscopy, dermoscopy and electron microscopy datasets, while comparable results are achieved on the endoscopy dataset.

    3.3.3 Multi-channel block

    It can be seen that using the multi-channel modules improves the performance of the model by 0.306 0%, 5.435 7% and 0.174 5% on the Murphy lab, ISBI-2012 and ISIC-2018 datasets, respectively. The model with multi-channel modules on top of the aggregation connection obtains comprehensive segmentation performance improvements of 0.202 1%, 3.094 9%, 0.061 1% and 0.606 0% on the Murphy lab, ISBI-2012, ISIC-2018 and CVC-ClinicDB datasets, respectively. The multi-channel module on top of the attention module yields relative improvements of 2.855 2%, 0.158 0% and 1.557 1% on the ISBI-2012, ISIC-2018 and CVC-ClinicDB datasets, respectively.

    The multi-channel convolution block has a positive effect on gradient propagation during training. Improvements are obtained on the fluorescence microscopy, electron microscopy and dermoscopy datasets. Through the multi-channel convolution module, spatial features of different scales are better extracted, complementary feature information is enriched, and better segmentation results are produced.

    Finally, the aggregation connection, attention mechanism module and multi-channel convolution block are merged into the original U-shaped encoding-decoding structure, and the proposed model achieves better segmentation results.

    4 Conclusions

    By analyzing the architectures of the classic U-Net and the recent MultiResUNet with respect to influencing factors such as varying image scales and noise interference, the aggregate connection structure, the convolutional block attention module and the multi-channel convolution block are designed to better capture multi-scale features, optimize intermediate feature maps and reduce the semantic gap. A new U-shaped architecture, MAAUNet, is proposed.

    To verify the segmentation performance of the model, experiments on four public medical datasets are compared with a variety of mainstream models. The efficiency and stability of MAAUNet in medical image segmentation are verified. The qualitative results also show better segmentation fineness: fuzzy boundaries are detected more effectively and noise interference is avoided.

    In summary, the proposed MAAUNet model with aggregation connections and an attention mechanism has indeed achieved good segmentation results. Of course, future research should continue on lightening the model structure and improving its generalization ability.

最近最新免费中文字幕在线| 亚洲熟女毛片儿| 一本久久中文字幕| 黑人巨大精品欧美一区二区mp4| 亚洲五月天丁香| 在线观看免费视频日本深夜| 舔av片在线| 欧美日本视频| 亚洲狠狠婷婷综合久久图片| 香蕉久久夜色| 最近最新中文字幕大全免费视频| 国内精品一区二区在线观看| 国产黄色小视频在线观看| 欧美绝顶高潮抽搐喷水| 欧美日韩乱码在线| 国产伦在线观看视频一区| 中文字幕熟女人妻在线| 色综合婷婷激情| 国产精品自产拍在线观看55亚洲| 精品久久久久久久久久免费视频| 亚洲精品美女久久av网站| 亚洲国产精品久久男人天堂| 一进一出好大好爽视频| 夜夜躁狠狠躁天天躁| 亚洲精品美女久久av网站| 国产99久久九九免费精品| 一个人免费在线观看的高清视频| 韩国av一区二区三区四区| 99久久无色码亚洲精品果冻| av免费在线观看网站| 麻豆国产av国片精品| 2021天堂中文幕一二区在线观| 国产av一区在线观看免费| 老汉色av国产亚洲站长工具| 女人爽到高潮嗷嗷叫在线视频| 男女视频在线观看网站免费 | bbb黄色大片| 在线观看一区二区三区| 亚洲 欧美一区二区三区| 日韩欧美免费精品| 久久精品亚洲精品国产色婷小说| 久久久久国内视频| 男人舔奶头视频| 亚洲国产精品久久男人天堂| 国产三级黄色录像| 欧美日韩精品网址| 狂野欧美白嫩少妇大欣赏| 在线观看一区二区三区| 亚洲 欧美一区二区三区| av视频在线观看入口| 日本黄大片高清| 欧美中文日本在线观看视频| 免费电影在线观看免费观看| 国产伦一二天堂av在线观看| 亚洲aⅴ乱码一区二区在线播放 | 这个男人来自地球电影免费观看| 日本成人三级电影网站| 搡老妇女老女人老熟妇| 人人妻人人看人人澡| 天天躁狠狠躁夜夜躁狠狠躁| 一个人免费在线观看的高清视频| 亚洲国产日韩欧美精品在线观看 | 亚洲人成网站在线播放欧美日韩| 日本 欧美在线| 国产精品98久久久久久宅男小说| 最近最新免费中文字幕在线| 亚洲中文字幕日韩| 久久性视频一级片| 三级国产精品欧美在线观看 | 国产视频内射| 亚洲自拍偷在线| 可以免费在线观看a视频的电影网站| 日韩成人在线观看一区二区三区| 久久精品国产99精品国产亚洲性色| 97人妻精品一区二区三区麻豆| 亚洲免费av在线视频| 欧美在线黄色| 国产高清videossex| 精品国产美女av久久久久小说| 久久人妻av系列| 搡老妇女老女人老熟妇| 亚洲电影在线观看av| 午夜老司机福利片| 1024香蕉在线观看| 国产精品亚洲美女久久久| 男人的好看免费观看在线视频 | 高清在线国产一区| 99国产综合亚洲精品| 精品国产超薄肉色丝袜足j| 免费看日本二区| 久久精品影院6| 成年版毛片免费区| tocl精华| 亚洲国产精品sss在线观看| 中文字幕久久专区| 老鸭窝网址在线观看| 神马国产精品三级电影在线观看 | 中文字幕人妻丝袜一区二区| 757午夜福利合集在线观看| 一进一出抽搐动态| 久久这里只有精品中国| 韩国av一区二区三区四区| 香蕉国产在线看| 国产在线观看jvid| 亚洲av中文字字幕乱码综合| 一级片免费观看大全| 神马国产精品三级电影在线观看 | 亚洲狠狠婷婷综合久久图片| 精品国产亚洲在线| 亚洲av电影不卡..在线观看| 91成年电影在线观看| 亚洲电影在线观看av| 老司机福利观看| 国产三级黄色录像| 午夜两性在线视频| 99久久综合精品五月天人人| 床上黄色一级片| 日本成人三级电影网站| 少妇的丰满在线观看| 国产av不卡久久| 久热爱精品视频在线9| e午夜精品久久久久久久| 亚洲熟妇中文字幕五十中出| 婷婷亚洲欧美| 午夜激情福利司机影院| 18禁国产床啪视频网站| 欧美日韩瑟瑟在线播放| 亚洲人成伊人成综合网2020| 中文字幕人妻丝袜一区二区| 欧美精品亚洲一区二区| 首页视频小说图片口味搜索| 亚洲精品美女久久久久99蜜臀| 婷婷亚洲欧美| 国产视频内射| 久久久水蜜桃国产精品网| 免费无遮挡裸体视频| xxxwww97欧美| 国产黄a三级三级三级人|