
    DT-Net: Joint Dual-Input Transformer and CNN for Retinal Vessel Segmentation

    Computers, Materials & Continua, 2023, Issue 9

    Wenran Jia, Simin Ma, Peng Geng and Yan Sun

    1 School of Information Science and Technology, Shijiazhuang Tiedao University, Shijiazhuang, China

    2 School of Mathematics and Information Science, Zhangjiakou University, Zhangjiakou, China

    ABSTRACT Retinal vessel segmentation in fundus images plays an essential role in the screening, diagnosis, and treatment of many diseases. Acquired fundus images generally suffer from uneven illumination, high noise, and complex structure, which makes vessel segmentation very challenging. Previous methods for retinal vessel segmentation are mainly convolutional neural networks based on the U Network (U-Net) model, and they have many limitations, such as losing microvascular details at the ends of vessels. We address the limitations of convolution by introducing the transformer into retinal vessel segmentation, and propose a hybrid segmentation method based on modulated deformable convolution and the transformer, named DT-Net. Firstly, multi-scale image features are extracted by deformable convolution and multi-head self-attention (MHSA). Secondly, image information is recovered and vessel morphology is refined by the proposed transformer decoder block. Finally, local prediction results are obtained by the side output layer, and the accuracy of vessel segmentation is further improved by a hybrid loss function. Experimental results show that our method obtains good segmentation performance in terms of Specificity (SP), Sensitivity (SE), Accuracy (ACC), Area Under the Curve (AUC), and F1-score on three publicly available fundus datasets: DRIVE, STARE, and CHASE_DB1.

    KEYWORDS Retinal vessel segmentation; deformable convolution; multi-scale; transformer; hybrid loss function

    1 Introduction

    The vessels in fundus images are currently the only part of the microvascular system that can be visualized directly, non-invasively, and painlessly. The pathological characteristics of related diseases can be observed through the morphology of retinal vessels and their changes. For example, diabetic patients are prone to retinopathy, macular degeneration, and blindness [1–3]. The retinal vessels of hypertensive patients show higher curvature and narrowing, which can easily lead to retinal hemorrhage [4]. Therefore, visualizing the distribution and details of retinal vessels can help doctors diagnose diseases more efficiently [5]. However, retinal vessels pose the following problems: complex and diverse structures, tiny vessels, low contrast, and easy confusion with the background. Segmenting the vessels manually requires a significant amount of time and effort, so an automatic retinal vessel segmentation method is essential to assist doctors in diagnosing diseases quickly.

    Artificial intelligence has brought computing closer to everyday life, and deep learning-based methods are now applied to a wide range of tasks. For example, Sultan et al. used deep learning for the segmentation of high-resolution aerial images [6,7]. Liu et al. [8] and Qin et al. [9] applied deep learning to image fusion. Jin et al. [10] applied deep learning to classification tasks to provide accurate automatic ERM grading for clinical practice. Deep learning is also widely used in medical image segmentation [11,12] and other fields [13,14]. Among these methods, convolutional neural networks (CNN) have made great progress in location-sensitive tasks [15–17]. In recent years, U-Net [18], based on the fully convolutional network (FCN) [19], has been widely used in medical image segmentation [20–24]. However, U-Net has difficulty handling irregular and tiny vessels. M-Net [25] is an improved U-Net framework that uses an image pyramid mechanism to realize multi-level receptive fields and can learn image features at different scales; however, its skip connections do not perform feature filtering. ResU-Net [26] is derived from the U-Net architecture: it replaces convolutional layers with residual blocks and increases model depth to capture more vessel features, but its contrast-limited adaptive histogram equalization (CLAHE) operation increases image noise. UNet++ [27] redesigns the skip connections of U-Net to aggregate features of different semantic scales in the decoder, but it consumes a lot of memory and takes considerable time even on small datasets. IterNet [28] is an encoder-decoder model that adopts U-Net as its basic module and improves the connectivity of vessel segmentation results by expanding the depth of the model through multiple iterations.

    Based on U-Net, Deformable U-Net (DUNet) [29] adds deformable convolution [30] to adaptively adjust the receptive field according to the size and shape of vessels, improving segmentation accuracy and noise immunity. MAU-Net [31] uses modulated deformable convolution [32] as its encoding and decoding unit and uses position and channel attention blocks for vessel segmentation. Recently, the transformer has been successfully applied to computer vision. Inspired by this, TransUNet uses a hybrid of CNN and transformer as an encoder, with skip connections and a decoder, for medical image segmentation [33]. The encoder, bottleneck, and decoder of Swin-Unet [34] use the Swin-transformer block [35] for medical image segmentation. FAT-Net [36] implements a dual encoder, with both CNN and transformer branches, for skin lesion segmentation. Although these transformer-based models achieve good performance, they are complicated and time-consuming, which limits their practicality to some extent.

    These segmentation methods have the following problems: (1) they extract only local information from the image and cannot model global features; (2) segmentation accuracy is low; (3) structural information in the vessel image is not captured well. Given these problems, we use deformable convolution to extract complex and variable structural information, since it has better learning ability than ordinary convolution. In addition, we use the transformer to capture long-range dependencies through a self-attention mechanism, helping the CNN overcome its inherent spatial inductive biases [37]. We therefore propose a segmentation network based on a combination of deformable convolution [32] and the transformer to address the challenging task of retinal vessel segmentation. The proposed network uses convolution to extract local features and the transformer to construct long-range dependencies. It does not require pre-training on large-scale datasets and achieves good results on small-scale datasets. Our main contributions are summarized as follows:

    (1) We propose an end-to-end deep learning network named DT-Net, which is very effective for retinal vessel segmentation. The network takes into account multi-scale input, structural information, and long-range dependencies, and provides powerful technical support for clinical diagnosis.

    (2) We combine deformable convolution with the transformer. Deformable convolution extracts structural information from retinal vessels, while the transformer compensates for the fact that CNNs capture only local information, enhancing feature extraction and achieving a better segmentation effect.

    (3) We propose a dual-input MHSA block to extract multi-scale information from fundus vessel images at different resolutions. The multi-scale outputs are fused through skip connections to compensate for information lost during feature extraction, and a hybrid loss function is used to improve the accuracy of retinal vessel segmentation.

    (4) We conducted experiments on DRIVE, STARE, and CHASE_DB1, achieving accuracy rates of 96.31%, 97.03%, and 97.37%, respectively. The experimental results show that our segmentation performance is superior to that of other methods.

    The remainder of this paper is organized as follows: Section 2 describes our proposed approach in detail. Section 3 presents the fundus datasets, preprocessing methods, and experimental results. Section 4 discusses our findings, and Section 5 concludes with a summary and outlook.

    2 Method

    2.1 Network Architecture

    The architecture of the proposed DT-Net is shown in Fig. 1. It consists of four main parts: encoder, decoder, multi-scale input, and side output. We build on U-Net, one of the simplest and most popular architectures in medical image segmentation. Firstly, because the information obtained from U-Net's single-scale input is limited, we use multi-scale layers to construct an image-pyramid input: average pooling is applied to the H × W retinal image to obtain multi-scale image information, enhancing the feature information of each layer. Secondly, a hybrid block is used in the encoder to extract vessel features of irregular shapes and sizes. The encoder stages are connected through max pooling, which halves the size of the feature map and generates hierarchical features at different levels. Except for the first layer, each layer takes as input the max-pooled map of the layer above and the multi-scale feature map of its own level. High-level features correctly identify coarse vessel information, while low-level features accurately capture tiny vessel information. The hybrid block combines modulated deformable convolution and MHSA: deformable convolution performs local feature extraction and MHSA learns global features, improving the segmentation effect. In the self-attention, relative position encoding is used to learn the content-position relationship in images.
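    As a concrete illustration of the image-pyramid input, the following minimal PyTorch sketch builds the downscaled copies by repeated average pooling; the number of pyramid levels is our assumption, chosen to match a typical four-stage encoder:

```python
import torch
import torch.nn as nn

def multi_scale_inputs(x: torch.Tensor, levels: int = 3) -> list:
    """Build the image-pyramid input: average pooling repeatedly halves the
    H x W retinal image, yielding one scaled copy per encoder stage."""
    pyramid = [x]
    pool = nn.AvgPool2d(kernel_size=2, stride=2)
    for _ in range(levels):
        pyramid.append(pool(pyramid[-1]))  # halve the spatial size each level
    return pyramid
```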

    This paper uses a novel decoder structure to fuse high-resolution and low-resolution information. The decoder uses dual-input MHSA to combine low-resolution and high-resolution features and then passes the result through a residual block [38] to achieve feature reuse, alleviate gradient vanishing, prevent overfitting, and improve segmentation capability. The blue-shaded part at the bottom of Fig. 1 shows the structure of the residual block. Finally, multi-scale features are fused using image information at different scales. This structure of first down-sampling and then up-sampling reduces the risk of overfitting to a certain extent. In the side output path, the feature map is spatially up-sampled, and a 1 × 1 convolution then compresses the number of channels to 2, which allows direct comparison with the ground truth and outputs the corresponding probability value for each pixel of the image.
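    A minimal sketch of the side output path just described; the bilinear up-sampling mode is an assumption, while the 1 × 1 convolution down to 2 channels follows the text:

```python
import torch.nn as nn

class SideOutput(nn.Module):
    """Side output: spatially up-sample a decoder feature map, then compress
    the channels to 2 with a 1x1 convolution for comparison with ground truth."""
    def __init__(self, in_channels: int, scale: int):
        super().__init__()
        self.up = nn.Upsample(scale_factor=scale, mode='bilinear',
                              align_corners=False)
        self.conv = nn.Conv2d(in_channels, 2, kernel_size=1)

    def forward(self, x):
        return self.conv(self.up(x))  # per-pixel class scores
```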

    Figure 1: DT-Net network architecture diagram

    2.2 Deformable Convolution

    Most original CNNs extract feature information at fixed positions in an image with a fixed receptive field structure, and cannot adaptively adjust the receptive field and convolution kernel shape according to different image features [39]. However, vessel structure is irregular and complex, and introducing deformable convolution enhances the network's ability to model the geometric deformation of retinal vessels. On top of traditional convolution, deformable convolution adds direction vectors to the convolution kernel so that its shape fits the feature more closely. A learnable offset is introduced into the deformable convolution; the offsets are learned through back-propagation using an interpolation algorithm. The effective receptive field can then cover the actual shape of the vessel more accurately and learn more features. Therefore, this paper uses deformable convolution to enhance the network's adaptability to positional information in the image and its mapping ability during convolution. The deformable convolution formula is as follows:

    $$y(p) = \sum_{i=1}^{N} w_i \cdot x\left(p + p_i + \Delta p_i\right) \cdot \Delta m_i$$

    Let N denote the number of sampling positions of a given standard convolution kernel, and let w_i and p_i denote the weight and the preset offset of the i-th position, respectively. x(p) and y(p) denote the features at position p on the input and output feature maps x and y, respectively. Δp_i and Δm_i are the learnable offset and modulation factor at the i-th position, where Δm_i ∈ [0, 1] and Δp_i is an arbitrary value. Deformable convolution thus learns both the offsets and the weights of the sampling points, which effectively captures the structural details of tiny vessels and achieves more accurate feature extraction.
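    A minimal sketch of modulated deformable convolution built on torchvision's DeformConv2d: a plain convolution predicts the offsets Δp_i and modulation factors Δm_i at each kernel position; the zero initialization (so the layer starts out behaving like a standard convolution) is our assumption:

```python
import torch
import torch.nn as nn
from torchvision.ops import DeformConv2d

class ModulatedDeformBlock(nn.Module):
    def __init__(self, in_ch: int, out_ch: int, k: int = 3):
        super().__init__()
        # 2 offset values plus 1 modulation scalar per kernel position
        self.offset_mask = nn.Conv2d(in_ch, 3 * k * k, k, padding=k // 2)
        self.deform = DeformConv2d(in_ch, out_ch, k, padding=k // 2)
        self.k2 = k * k
        nn.init.zeros_(self.offset_mask.weight)  # start as a regular conv
        nn.init.zeros_(self.offset_mask.bias)

    def forward(self, x):
        om = self.offset_mask(x)
        offset = om[:, :2 * self.k2]               # learnable offsets (Delta p_i)
        mask = torch.sigmoid(om[:, 2 * self.k2:])  # modulation (Delta m_i) in [0, 1]
        return self.deform(x, offset, mask)
```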

    2.3 Multi-Head Self-Attention Mechanism

    MHSA is an attention mechanism that focuses on internal structure, inherently has a global receptive field, and is good at capturing long-distance dependencies. The input feature map can be expressed as X ∈ R^{H×W×C}, where H, W, and C are the height, width, and number of channels, respectively. The self-attention calculation formula is as follows:

    $$\mathrm{Attention}(Q, K, V) = \mathrm{softmax}\left(\frac{QK^{T}}{\sqrt{d}}\right)V$$

    where three 1 × 1 convolutions are used to project X into the query, key, and value embeddings Q, K, V ∈ R^{H×W×d}, and d is the embedding dimension of each head. The attention matrix A = softmax(QK^T/√d) works well for feature aggregation, where each row corresponds to the similarity of a given element in Q relative to all elements in K.

    Because images are highly structured data, most pixels in high-resolution local regions share similar features, except in boundary areas. Computing attention among all pixels is therefore inefficient and redundant, so we propose an efficient self-attention for the task of vessel segmentation, as shown in Fig. 2. The proposed self-attention decoder recovers detailed information from the skip connections of the encoder: x is the feature of the previous layer in the decoder, to which a 1 × 1 convolution is applied to obtain a low-resolution feature of size H_l × W_l × d, and y is the feature from the same layer in the encoder, from which a 1 × 1 convolution produces a high-resolution feature of size H_h × W_h × d. The dot product and softmax are then computed to obtain the pairwise attention matrix between the input units, and finally image features of size H_h × W_h × d are produced. Regarding positional encoding, standard self-attention blocks discard positional information and are ineffective for modeling highly structured image content [40]. The sinusoidal embedding used with convolution layers in previous research lacks translation equivariance, so 2-dimensional relative position encoding is used, adding relative height R_h and relative width R_w information. Relative position encoding is applied before the softmax operation, and the attention logit is qk^T + qr^T.
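    The following single-head sketch captures the dual-input attention just described; we assume the queries come from the high-resolution encoder feature so the output keeps size H_h × W_h, and relative position encoding is omitted for brevity:

```python
import torch.nn as nn
import torch.nn.functional as F

class DualInputAttention(nn.Module):
    def __init__(self, enc_ch: int, dec_ch: int, d: int):
        super().__init__()
        self.q = nn.Conv2d(enc_ch, d, 1)       # high-res encoder feature -> Q
        self.kv = nn.Conv2d(dec_ch, 2 * d, 1)  # low-res decoder feature -> K, V
        self.scale = d ** -0.5

    def forward(self, y_enc, x_dec):
        b, _, hh, wh = y_enc.shape
        q = self.q(y_enc).flatten(2).transpose(1, 2)      # (b, Hh*Wh, d)
        k, v = self.kv(x_dec).flatten(2).chunk(2, dim=1)  # each (b, d, Hl*Wl)
        attn = F.softmax(q @ k * self.scale, dim=-1)      # pairwise attention matrix
        out = attn @ v.transpose(1, 2)                    # (b, Hh*Wh, d)
        return out.transpose(1, 2).reshape(b, -1, hh, wh)
```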

    Figure 2: MHSA decoder

    2.4 Loss Function

    The loss function has a significant influence on deep learning training. Most existing methods use only a single loss function to evaluate network performance. Image segmentation tasks usually use cross-entropy as the loss function, but the ratio of foreground to background pixels in retinal images is severely imbalanced, so the features of retinal vessels cannot be learned effectively by the model. In binary segmentation, the Dice loss can alleviate this problem; its essence is to measure the degree of overlap between two samples. However, adjusting the network weights according to a single loss function can easily lose the feature information of the middle and lower layers of the network, whereas a mixed loss can effectively help model training and enhance segmentation quality. Therefore, the network is trained with a hybrid loss function: the output is compared with the ophthalmologist's annotation and the loss between them is calculated as

    $$L = \omega L_{BCE} + (1 - \omega) L_{Dice}$$

    where ω is the weighting factor balancing the different losses. The binary cross-entropy (BCE) loss encourages the segmentation model to independently predict the correct class label at each pixel position, while the Dice loss alleviates the class imbalance to some extent. The BCE loss function and Dice loss function are defined as follows:

    $$L_{BCE} = -\frac{1}{K}\sum_{i=1}^{K}\left[g_i \log p_i + (1 - g_i)\log(1 - p_i)\right]$$

    $$L_{Dice} = 1 - \frac{2\sum_{i=1}^{K} p_i g_i + \varepsilon}{\sum_{i=1}^{K} p_i + \sum_{i=1}^{K} g_i + \varepsilon}$$

    where K represents the number of pixels in a given image, and p_i ∈ [0, 1] and g_i ∈ [0, 1] represent the predicted probability and the label probability of the i-th pixel, respectively. The parameter ε is a Laplace smoothing factor, which avoids numerical problems and speeds up the convergence of training.
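    A minimal sketch of the hybrid loss under the formulation above; the default weighting ω = 0.5 and smoothing ε = 1 are illustrative assumptions:

```python
import torch.nn as nn

class HybridLoss(nn.Module):
    """Weighted combination of BCE and Dice losses for binary segmentation."""
    def __init__(self, omega: float = 0.5, eps: float = 1.0):
        super().__init__()
        self.omega, self.eps = omega, eps
        self.bce = nn.BCELoss()

    def forward(self, pred, target):
        # pred and target hold per-pixel probabilities/labels in [0, 1]
        l_bce = self.bce(pred, target)
        inter = (pred * target).sum()
        l_dice = 1 - (2 * inter + self.eps) / (pred.sum() + target.sum() + self.eps)
        return self.omega * l_bce + (1 - self.omega) * l_dice
```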

    3 Experiments

    3.1 Experimental Details

    All experiments are run on Windows 10 with an Intel Core i5-10400F CPU and a GeForce GTX 1080ti GPU, implemented in Python 3.7 with the PyTorch deep learning framework. The network parameters are optimized using the Adam optimizer with an initial learning rate of 0.0005 and a weight decay of 0.001. To dynamically adjust the training process, a cosine annealing strategy is used to update the learning rate. The proposed DT-Net framework is trained for 200 epochs with a batch size of 2. Fig. 3 shows the training and validation loss curves of the proposed method versus training epochs on the three datasets: DRIVE [41], STARE [42], and CHASE_DB1 [43]. The horizontal axis is the training epoch and the vertical axis is the loss value; the legends "training" and "validation" denote the training and validation sets, respectively. On all three datasets, the training and validation losses converge rapidly within 50 epochs, flatten out within 150 epochs, and then reach a stable value.
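    The reported optimizer settings translate directly into PyTorch; in this sketch, model, train_loader, and criterion are placeholders assumed to be defined elsewhere:

```python
from torch.optim import Adam
from torch.optim.lr_scheduler import CosineAnnealingLR

optimizer = Adam(model.parameters(), lr=5e-4, weight_decay=1e-3)
scheduler = CosineAnnealingLR(optimizer, T_max=200)  # cosine annealing over 200 epochs

for epoch in range(200):
    for images, labels in train_loader:  # batch size 2
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
    scheduler.step()  # update the learning rate once per epoch
```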

    Figure 3: Loss function curves vs. iterations for the training and validation datasets

    3.2 Data and Preprocessing

    We use the public and widely used standard datasets DRIVE, STARE, and CHASE_DB1 for training and testing the proposed network. Sample images from the three datasets are shown in Fig. 4, along with their ground truth vessel segmentation masks and field-of-view (FOV) masks. The DRIVE dataset consists of 40 fundus images with a resolution of 584 × 565; its official split assigns 20 images each to the training and test sets. The STARE dataset consists of 20 fundus images with a resolution of 700 × 605, including 10 retinal images with pathological features, so it can evaluate a model's ability to segment abnormal fundus images. The CHASE_DB1 dataset consists of 28 retinal images of 14 children with a resolution of 960 × 999. Since the STARE and CHASE_DB1 datasets have no official training/test split, we follow DUNet [29] and use the first 10 images of STARE for model training and the remaining 10 for performance evaluation. For CHASE_DB1, we follow a common protocol [44], selecting the first 20 images for training and the remaining 8 for evaluation. All three datasets contain manual segmentations of the retinal images by two experienced ophthalmologists. We use the segmentations of the first ophthalmologist as the ground truth for network training [45] and as the standard segmentation for model and algorithm evaluation.

    Figure 4: Sample images from DRIVE, STARE, and CHASE_DB1. (A) Original image; (B) ground truth; (C) field-of-view masks

    Retinal images often contain noise and uneven illumination, so all images from the three datasets undergo four preprocessing steps for image enhancement before being used for training and testing. The preprocessing pipeline is shown in Fig. 5. Firstly, the color images are converted to grayscale, simplifying the subsequent steps and reducing computation during training. Secondly, each pixel of the grayscale image is normalized to reduce the data range and speed up convergence. Then, the CLAHE method [46] is used to suppress image noise and enhance vessel details and the contrast between vessels and background. Finally, nonlinear transformation and gamma correction are applied to address image quality problems caused by the brightness of the input images, enhancing contrast and making vessels in darker areas clearer. After this processing, the distinction between retinal vessels and background is significantly improved, which aids feature extraction during training and enhances the segmentation quality of the retinal vessels.
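    An OpenCV sketch of the four preprocessing steps; the CLAHE parameters and the gamma value are assumptions, since the paper does not list them:

```python
import cv2
import numpy as np

def preprocess(img_bgr: np.ndarray) -> np.ndarray:
    """Fundus preprocessing: grayscale -> normalization -> CLAHE -> gamma."""
    gray = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2GRAY)
    norm = cv2.normalize(gray, None, 0, 255, cv2.NORM_MINMAX)
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))  # assumed params
    enhanced = clahe.apply(norm)
    gamma = 1.2  # assumed value; exponent < 1 brightens darker vessel regions
    table = ((np.arange(256) / 255.0) ** (1.0 / gamma) * 255).astype(np.uint8)
    return cv2.LUT(enhanced, table)
```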

    Figure 5: Image preprocessing of fundus images; from left to right: original image, image graying, image normalization, histogram equalization (CLAHE), gamma correction, original image patches, and ground truth patches

    Due to the limited number of images in the fundus datasets, patch processing is adopted to expand the data and reduce overfitting. During training, each image of the preprocessed datasets is randomly cropped into 64 × 64 patches for network training. The corresponding patches are also extracted from the ground truth so that the image patches and ground truth patches stay aligned. In the experiments, 90% of the extracted patches are used for training and the remaining 10% for validation. Fig. 5 shows some image patches and their corresponding ground truth.
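    A sketch of the random patch extraction, cropping the image and its ground truth with the same window so the pairs stay aligned:

```python
import numpy as np

def random_patches(image, mask, n_patches: int, size: int = 64, seed=None):
    """Randomly crop aligned size x size patches from a preprocessed image
    and its ground truth mask."""
    rng = np.random.default_rng(seed)
    h, w = image.shape[:2]
    imgs, masks = [], []
    for _ in range(n_patches):
        y = int(rng.integers(0, h - size + 1))
        x = int(rng.integers(0, w - size + 1))
        imgs.append(image[y:y + size, x:x + size])
        masks.append(mask[y:y + size, x:x + size])  # identical crop window
    return np.stack(imgs), np.stack(masks)
```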

    3.3 Evaluation Index

    As in most retinal image segmentation studies, we compare the proposed DT-Net with other algorithms and evaluate it using the following indicators: Accuracy (ACC), Specificity (SP), Sensitivity (SE), F1-score, and Area Under the Receiver Operating Characteristic (ROC) Curve (AUC). ACC evaluates the overall segmentation performance of the model; the larger the ACC, the more accurate the segmentation. The specific mathematical expression is as follows:

    $$ACC = \frac{TP + TN}{TP + TN + FP + FN}$$

    SP is an important metric for retinal vessel segmentation. It is the ratio of correctly identified background pixels to all actual background pixels, and it mainly evaluates the ability to recognize the background in retinal images. The higher the SP value, the lower the false positive rate (FPR). The specific mathematical expression is as follows:

    $$SP = \frac{TN}{TN + FP}$$

    SE mainly evaluates the ability to recognize retinal vessels (positives) in retinal images. It is the ratio of correctly identified vessel pixels to all actual vessel pixels. The specific mathematical expression is as follows:

    $$SE = \frac{TP}{TP + FN}$$


    The F1-score evaluates the similarity between the segmentation result and the ophthalmologist's gold standard. The larger the value, the closer the algorithm's segmentation result is to the gold standard and the better the segmentation effect. The specific mathematical expression is as follows:

    $$F1 = \frac{2\,TP}{2\,TP + FP + FN}$$

    Here, true positive (TP) denotes correctly identified vessel pixels, true negative (TN) correctly identified non-vessel pixels, false positive (FP) non-vessel pixels identified as vessels, and false negative (FN) vessel pixels identified as non-vessel.
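    Given the TP/TN/FP/FN counts defined above, the four threshold-based metrics follow directly (a NumPy sketch for binarized predictions):

```python
import numpy as np

def segmentation_metrics(pred: np.ndarray, gt: np.ndarray):
    """Compute ACC, SP, SE, and F1 from a binary prediction and ground truth."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    tp = np.sum(pred & gt)    # vessel pixels correctly identified
    tn = np.sum(~pred & ~gt)  # background pixels correctly identified
    fp = np.sum(pred & ~gt)   # background identified as vessel
    fn = np.sum(~pred & gt)   # vessel identified as background
    acc = (tp + tn) / (tp + tn + fp + fn)
    sp = tn / (tn + fp)
    se = tp / (tp + fn)
    f1 = 2 * tp / (2 * tp + fp + fn)
    return acc, sp, se, f1
```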

    In addition, we introduce the AUC to evaluate segmentation performance. The AUC is a standard metric for retinal vessel segmentation: the ROC curve describes the relationship between the true positive rate and the false positive rate under different classification thresholds, and the closer the area under this curve is to 1, the better the algorithm performs and the more robust it is.

    3.4 Ablation Experiment

    To further verify the effectiveness of the proposed network for vessel segmentation, we conduct ablation experiments on the DRIVE dataset. The prediction results are compared in terms of five performance metrics: ACC, SE, SP, AUC, and F1-score. To show clearly how each proposed module improves retinal vessel segmentation accuracy, the segmentation performance of the different variants is listed in Table 1. M0 is U-Net trained with the hybrid loss function. M1 adds multi-scale inputs and side outputs to M0. M2 adds the encoder hybrid block to M1. M3 adds the transformer decoder block to M1. M4 adds both the encoder hybrid block and the transformer decoder block to M1.

    Table 1: Ablation experimental results of vessel segmentation on the DRIVE dataset

    As shown in Table 1, when multi-scale input and side output are added to M0, every index improves significantly, confirming that they benefit the segmentation performance of the network. After adding the hybrid block, the AUC and F1-score of M2 are 0.51% and 2.01% higher than those of M1, respectively, which verifies the effectiveness of the hybrid block. In M3, SE, AUC, and F1-score are 4.25%, 0.3%, and 2.3% higher than in M1, respectively, showing that the proposed MHSA decoder block is effective for retinal vessel segmentation and enhances its performance.

    The last row of Table 1 shows that the SE, AUC, and F1-score of the proposed network increase from 74.68%, 96.70%, and 81.13% for M0 to 86.36%, 98.43%, and 84.88%, respectively. The experiments show that both the hybrid block in the encoder and the attention block in the decoder improve segmentation performance, demonstrating their rationality and effectiveness. The proposed method therefore has clear advantages for retinal vessel segmentation.

    Ablation experiments with different loss functions are performed to verify which loss is most suitable for the proposed method. The effects of the different loss functions on the performance indexes are shown in Table 2. "DT+BCE" trains the network with the BCE loss alone, "DT+Dice" uses the Dice loss alone, and "DT+BCE+Dice" combines the BCE and Dice losses. The results in Table 2 show that almost all metrics improve with the hybrid loss, which proves that the hybrid loss contributes to the accuracy of the model.

    Table 2: Loss function ablation experiment

    The learning rate, an essential parameter in model training, controls the learning progress of the network. To explore the influence of different learning rates on segmentation, Table 3 shows the results for learning rates of 0.0001, 0.0003, 0.0005, 0.0007, 0.0009, and 0.0011. The best performance on all metrics is achieved with a learning rate of 0.0005; when the learning rate increases or decreases from this value, both the F1-score and the AUC drop.

    Table 3: Segmentation results with different learning rates

    3.5 Comparison with Other Methods

    Tables 4–6 evaluate different vessel segmentation methods on the DRIVE, STARE, and CHASE_DB1 datasets. Because fundus images contain far more background pixels than vessel pixels, the AUC and F1-score metrics are the most suitable for evaluating vessel segmentation methods. In Table 4, compared with the best values of the existing methods, our method performs better on the DRIVE dataset, with increases of 2.83% in SE, 0.2% in SP, 0.22% in AUC, and 1.22% in F1-score. The highest SE and SP of the proposed model mean that retinal vessels are identified more accurately and noise is suppressed, because the MHSA mechanism focuses on capturing global vessel details. Table 5 shows that the proposed method achieves the best ACC, SP, AUC, and F1-score on the STARE dataset compared with the other methods, indicating that the framework is effective for vessel segmentation; since the STARE dataset contains many lesion images, the SE is not optimal. On the CHASE_DB1 dataset (Table 6), all metrics of our method are the highest except SE and ACC. On the whole, our method performs well. As seen from Tables 4–6, the proposed method attains the highest F1-score on all three datasets, exceeding the best value of the other methods by 1.85%, 3.61%, and 0.26%, respectively. This indicates that the proposed method distinguishes retinal vessel pixels from background pixels effectively and accurately. In general, compared with these methods, the proposed method segments retinal vessels more accurately and has good prospects for application in clinical medical imaging diagnosis.

    Table 4: Comparison of the proposed method with existing methods on the DRIVE dataset

    To further observe the segmentation results of the models, partial segmentation results on the three datasets are given for visual comparison in Figs. 6–8. The DT-Net model recovers more details of the vessels. Compared with U-Net, DT-Net detects more vessels; compared with DUNet and GT U-Net, DT-Net detects details of vessels that they miss and thus completes segmentation more effectively; compared with AReN-UNet and Multistage DPIRef-Net, DT-Net produces better vessel continuity. Overall, DT-Net is superior to the other five methods on all three datasets. As seen in Figs. 6–8, the segmentation of the proposed method is very close to the ophthalmologist's manual segmentation and yields more continuous vessels. It successfully segments continuous tiny vessels and generalizes well across datasets, demonstrating that the network reduces background noise, enhances contrast, and preserves irregular vessels well. The visualizations further illustrate the importance of multi-scale contextual information and long-range dependencies in retinal vessel segmentation, and suggest that the proposed method can help specialized physicians with disease diagnosis and reduce the workload of clinical specialists. In addition, we evaluate the model with ROC curves, shown in Fig. 9; the closer the ROC curve is to the upper-left boundary, the more accurate the network. Fig. 10 shows locally enlarged views of tiny vessels in the segmentation results. Because tiny vessels differ little from the image background in retinal images, we use the MHSA mechanism in both the encoder and the decoder to help the network attend to essential features and suppress unnecessary ones. As Fig. 10 shows, the proposed algorithm is robust at vessel intersections and in low-contrast tiny-vessel areas, maintains the continuity and connectivity of both thick and thin vessels, and produces segmentations of lesion regions that are close to the standard segmentation. This verifies the reliability and robustness of the algorithm for retinal vessel segmentation. These experimental results demonstrate that the proposed model performs better overall: it identifies vessels and background more accurately and segments tiny vessels better.

    Figure 6: Segmentation results of different models on the DRIVE dataset. (A) Original images; (B) ground truth images; (C) U-Net; (D) DUNet; (E) GT U-Net; (F) AReN-UNet; (G) Multistage DPIRef-Net; (H) ours

    Figure 7: Segmentation results of different models on the STARE dataset. (A) Original images; (B) ground truth images; (C) U-Net; (D) DUNet; (E) GT U-Net; (F) AReN-UNet; (G) Multistage DPIRef-Net; (H) ours

    Figure 8: Segmentation results of different models on the CHASE_DB1 dataset. (A) Original images; (B) ground truth images; (C) U-Net; (D) DUNet; (E) GT U-Net; (F) AReN-UNet; (G) Multistage DPIRef-Net; (H) ours

    Figure 9: ROC curves of the DT-Net model on different datasets. (A) DRIVE dataset; (B) STARE dataset; (C) CHASE_DB1 dataset

    Figure 10: Partial enlarged views of different models on different datasets. From top to bottom: fundus images from the DRIVE, STARE, and CHASE_DB1 datasets. From left to right: (A) original images; (B) partial views; (C) ground truth; (D) U-Net; (E) DUNet; (F) GT U-Net; (G) AReN-UNet; (H) Multistage DPIRef-Net; (I) ours

    4 Discussion

    In this work, we propose a hybrid convolution and transformer network evolved from the classical U-Net model, which aggregates multi-scale feature information at different resolutions to achieve accurate and efficient vessel segmentation. Fundus images are noisy and have low contrast, so we first preprocess them to improve contrast and suppress background noise. To make full use of multi-scale information, DT-Net takes multi-scale images as input; we then introduce deformable convolution, which adapts the convolution kernel to the actual shape of the vessels to obtain more accurate structural information. Meanwhile, the MHSA mechanism captures long-range relationships in the fundus image, compensating for the CNN's inability to extract global features.

    In addition, the proposed network is validated through ablation experiments. The ACC and AUC of the network improve significantly after adding the hybrid block to the encoder, and the SE and F1-score improve significantly after adding the transformer decoder block. The current DT-Net still has the following shortcomings: (1) due to the similarity between background and vessels in the datasets, our method cannot achieve the best performance on every index; (2) repeated up-sampling inevitably loses some vessel details. In the future, we will introduce more advanced techniques, such as the encoding pattern in Swin-Unet, to preserve more details of the original image and make the model perform better on all metrics.

    5 Conclusion

    We propose a network named DT-Net for fundus vessel segmentation. Its performance gains come mainly from the introduction of deformable convolution and multi-head self-attention, which extract the structural information that is easily overlooked in fundus vessel images and effectively capture information at different scales. DT-Net achieves significant improvements on the DRIVE, STARE, and CHASE_DB1 datasets. Experimental results show that the method handles different fundus datasets well, has good generalization ability, and provides more accurate segmentation results for medical diagnosis and treatment. In terms of segmentation quality, our model recovers more vessel details and produces better connectivity.

    Acknowledgement: We thank the High-Performance Computing Center of Shijiazhuang Tiedao University for its support.

    Funding Statement: This work was supported in part by the National Natural Science Foundation of China under Grant 61972267; the Natural Science Foundation of Hebei Province under Grant F2018210148; and the University Science Research Project of Hebei Province under Grant ZD2021334.

    Author Contributions: WJ and SM designed the study and analyzed the data. WJ and SM conducted the experiments and drafted the manuscript. PG and SM revised and edited the manuscript. YS and PG polished the manuscript. All authors reviewed the results and approved the final version of the manuscript.

    Availability of Data and Materials: Publicly available datasets were analyzed in this study. The data can be found here: DRIVE: https://drive.grand-challenge.org; STARE: https://cecas.clemson.edu/~ahoover/stare/; CHASE_DB1: https://blogs.kingston.ac.uk/retinal/chasedb1.

    Conflicts of Interest: The authors declare that they have no conflicts of interest to report regarding the present study.
