
    Weakly Supervised Network with Scribble-Supervised and Edge-Mask for Road Extraction from High-Resolution Remote Sensing Images

    2024-05-25 14:40:04  Supeng Yu, Fen Huang and Chengcheng Fan
    Computers, Materials & Continua, 2024, Issue 4

    Supeng Yu, Fen Huang* and Chengcheng Fan

    1College of Artificial Intelligence, Nanjing Agricultural University, Nanjing, 210095, China

    2Innovation Academy for Microsatellites of CAS, Shanghai, 201210, China

    3Shanghai Engineering Center for Microsatellites, Shanghai, 201210, China

    4Key Laboratory of Satellite Digital Technology, Shanghai, 201210, China

    ABSTRACT Significant advancements have been achieved in road surface extraction based on high-resolution remote sensing image processing. Most current methods rely on fully supervised learning, which necessitates enormous human effort to label the images. Within this field, other research endeavors utilize weakly supervised methods. These approaches aim to reduce the expense of annotation by leveraging sparsely annotated data, such as scribbles. This paper presents a novel technique called a weakly supervised network using scribble supervision and edge masks (WSSE-net). This network is a three-branch architecture in which each branch is equipped with a distinct decoder module dedicated to road extraction tasks. One branch generates edge masks using edge detection algorithms and optimizes road edge details. The other two branches supervise the model's training by employing scribble labels and spreading scribble information throughout the image. To address the long-standing flaw that generated pseudo-labels are not updated as the network trains, we use mixup to blend prediction results dynamically and continually update new pseudo-labels to steer network training. Our solution operates efficiently by simultaneously considering both edge-mask assistance and dynamic pseudo-label support. The studies are conducted on three separate road datasets, which consist primarily of high-resolution remote-sensing satellite photos and drone images. The experimental findings suggest that our methodology performs better than advanced scribble-supervised approaches and certain traditional fully supervised methods.

    KEYWORDS Semantic segmentation; road extraction; weakly supervised learning; scribble supervision; remote sensing image

    1 Introduction

    The process of road extraction, often referred to as road detection or road segmentation, is a vital undertaking in the fields of computer vision and remote sensing. Autonomous driving, urban planning, and environmental monitoring are among the fields in which it plays a pivotal role. Road extraction aims to precisely detect and outline road areas in aerial or satellite images. This task can be difficult due to the intricate and varied road configurations in real-life environments. Recently, road extraction tasks have witnessed significant achievements using deep learning-based methods. These methods exploit the capabilities of convolutional neural networks (CNNs) to acquire highly distinctive features from unprocessed visual data [1]. For example, Khan et al. [2] proposed an encoder-decoder network with an integrated attention unit to cope with the road segmentation task in high spatial resolution satellite images, which can automatically analyze such images and extract road networks. Adopting these technologies has significantly improved road extraction precision and efficacy compared to traditional image processing approaches. The expansion of satellite remote sensing technology has dramatically increased the amount of open-source road data accessible on OpenStreetMap (OSM). Nonetheless, some regions remain unexplored and undocumented on the global map. Although fully supervised learning can accurately extract road information from remote sensing data, it requires pixel-level labeling, which involves much human effort.

    In contrast, weakly supervised learning refers to the process of acquiring knowledge from sparsely marked labels. Furthermore, weakly supervised learning techniques have the potential to decrease the required amount of annotated data while simultaneously producing improved classification results.

    Standard labeling methods include point annotations [3], scribbles [4], bounding boxes [5], and image-level annotations [6]. Using different types of sparse labels can lead to varying training and classification outcomes. Taking scribble annotations as an example, ScribbleSup [4] employs alternate optimization to combine GrabCut [7] and Fully Convolutional Networks (FCN) [8] to enhance segmentation accuracy. However, this approach also increases model complexity and struggles to segment fine edge details. During network training, as the similarity between the network's predicted results and the pseudo-labels increases, the supervision obtained from pseudo-labels gradually weakens, and the learning process becomes stable. This phenomenon is referred to as Lazy Learning [9]. In Lazy Learning, the network model stores knowledge in the pseudo-labels for the student model to learn from. However, if prediction errors occur, they persist throughout the entire self-learning process. As a result, inaccuracies accumulate, ultimately degrading the quality of the generated pseudo-labels. Lazy Learning reflects the lack of quantitative improvement in pseudo-labels during the learning process. The method provided by Sohn et al. [10] utilizes consistency regularization to construct pseudo-labels. The initial step generates pseudo-labels for unlabeled images that have undergone weak augmentation. For a given image, pseudo-labels are preserved only when the model's predictions surpass a pre-determined threshold. After the same image is strongly augmented, the predictions are adjusted by computing cross-entropy loss until they match the preserved pseudo-labels. This method successfully reduces pseudo-label imprecision and ensures the integrity of the generated pseudo-labels. Zhang et al. [11] introduced a novel algorithm (Mixup) for image mixing augmentation within the domain of computer vision. The method combines two images from different classes using a random blending strategy to increase the size of the training dataset. Expanding the dataset in this way can significantly improve classification accuracy with only a slight increase in Central Processing Unit (CPU) resources.
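A minimal NumPy sketch of this mixing step (the function and parameter names below are our own, not from the paper): draw a random blending coefficient and form convex combinations of two samples and their one-hot labels.

```python
import numpy as np

def mixup(x_i, x_j, y_i, y_j, alpha=1.0, rng=None):
    """Blend two samples and their one-hot labels with a random weight,
    as in Zhang et al.'s Mixup augmentation."""
    rng = rng or np.random.default_rng()
    lam = rng.beta(alpha, alpha)           # blending coefficient in (0, 1)
    x_mix = lam * x_i + (1.0 - lam) * x_j  # blended image
    y_mix = lam * y_i + (1.0 - lam) * y_j  # blended (soft) label
    return x_mix, y_mix, lam
```

Because the blended label is a convex combination, the number of distinct training pairs grows without any additional annotation cost.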

    The application of weakly supervised learning in the domain of remote sensing imagery shows significant potential. Nonetheless, the utilization of weakly supervised deep learning approaches for road surface extraction is still in its early stages due to the numerous and complex characteristics of remote sensing images. As a result, exploring methods for efficiently incorporating weak-label information into standard weakly supervised learning algorithms has become a crucial topic that requires careful consideration. Therefore, we propose a novel multi-branch network, known as WSSE-net, that utilizes weak supervision in the form of scribbles and edge masks [12]. This strategy aims to effectively address the challenges related to model complexity and the precise segmentation of complicated edge features.

    The research conducted has yielded the following primary contributions:

    • A novel weakly supervised deep learning strategy has been proposed in this study, which utilizes weak supervision in the form of scribbles and edge masks to extract road information from remote sensing images.

    • A convolutional neural network has been developed to produce high-quality pseudo-labels for propagating scribble annotations. This model integrates edge masks to guide the optimization of road edge information. Mixup dynamically blends prediction results and continually updates new pseudo-labels to steer network training.

    • Extensive experimentation has been undertaken using widely recognized public datasets to showcase the efficacy of the strategy described in our study. The system demonstrates exceptional performance and outperforms many widely used scribble-supervised segmentation techniques.

    2 Methods

    This study presents an innovative approach for road extraction using a weakly supervised convolutional network. Our method incorporates edge masks and scribble information to enhance the accuracy and efficiency of the road extraction process. Fig. 1 illustrates the model's composition, which comprises a single encoder and three decoders: the primary segmentation decoder, the auxiliary segmentation decoder, and the edge-mask auxiliary decoder. The proposed model utilizes the U-shaped network architecture as the foundational segmentation network and expands it into a three-branch segmentation network by integrating an auxiliary segmentation decoder.

    2.1 Edge Mask Auxiliary Decoding Branch

    Drawing on the work of Wei et al. [12], it is imperative to integrate high-resolution predictions that exhibit distinct edges with more resilient, lower-resolution features. This amalgamation is crucial for achieving enhanced edge details and minimizing false positives in the segmentation process. Consequently, the lower-resolution information is up-sampled to align with the resolution of the high-resolution information. Subsequently, the Holistically-Nested Edge Detection (HED) [13] technique is employed to merge the aforementioned components, facilitating boundary prediction. The HED model utilized in this study was pre-trained on the Berkeley Segmentation Dataset (BSDS500) [14]. The methodology is as follows: Initially, HED is employed to generate edge masks. HED is mainly built on the VGG network designed by the Visual Geometry Group (VGG): the pooling layer after the fifth convolutional layer and all fully connected layers are deleted, and the remaining part is used as the primary network. HED employs a stacked structure and a globally integrated perspective to enable simultaneous learning and integration of information at multiple scales. In a road scene, road edge elements typically include objects of varying sizes, such as junctions. HED's multi-scale architecture allows the algorithm to collect more information on the road edge and enhances sensitivity to edges at various scales. HED trains multi-scale edge response maps simultaneously, allowing the network to better interpret the semantic information in images. In road scenes, the road boundary frequently has semantic properties. HED increases the perception of semantic information through collaborative training, allowing more accurate detection and extraction of features from road edges. The Edge Mask Auxiliary Decoding Branch also extracts multi-scale features from the Primary Encoder-Decoder Branch.

    Figure 1: The overall structure of the WSSE-net

    The structure of the Edge Mask Auxiliary Decoding Branch: In the Primary Encoder-Decoder Branch, the decoder takes a 32 × 32 pixel feature and performs two separate 2 × 2 up-sampling operations followed by a 3 × 3 convolution operation. The resulting features are then merged with the 128 × 128 pixel encoder features. The merged result is then subjected to two additional 2 × 2 up-sampling operations and one 3 × 3 convolution operation to obtain the segmentation prediction edgemask. The boundary pseudo-label is evaluated against the predicted edgemask to compute the loss. This process aids in the refinement and guidance of the primary segmentation branch.

    2.2 Auxiliary Decoder Branch

    The Auxiliary Decoder Branch predominantly uses the U-net encoder as its foundation and introduces a newly defined auxiliary decoder. The decoder comprises four up-sampling layers, and a dropout [15] layer is incorporated before each convolutional section in the auxiliary segmentation decoder. Including dropout layers introduces perturbations and enhances the robustness and generalization power of the model. The segmentation pseudo-label, denoted as PLseg, is created by randomly combining the final generated segmentation prediction, referred to as Yaux, with the primary segmentation prediction, denoted as Yseg.
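The perturbation that dropout injects can be sketched as follows. This is a generic inverted-dropout implementation in NumPy for illustration, not the paper's code; PyTorch's `nn.Dropout` behaves the same way at training time.

```python
import numpy as np

def dropout(x, p=0.5, rng=None):
    """Inverted dropout: zero each activation with probability p and
    rescale survivors by 1/(1-p) so the expected activation is unchanged.
    The random zeroing is the perturbation the auxiliary decoder uses
    to diversify its predictions."""
    rng = rng or np.random.default_rng()
    mask = rng.random(x.shape) >= p      # keep-mask, True with prob 1-p
    return x * mask / (1.0 - p)
```

Because the surviving activations are rescaled, no adjustment is needed at inference time; the layer is simply disabled.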

    2.3 Primary Encoder-Decoder Branch

    In this paper, we propose a novel method based on Mixup that generates pseudo-labels through random mixing to enhance image segmentation. The method comprises two distinct and independent branches: the Primary Encoder-Decoder Branch and the Auxiliary Decoder Branch. The primary segmentation branch employs the U-net [16] architecture as the segmentation network's foundational framework. The particular procedure is as follows: The initial input is a 512 × 512 pixel image. The features are then passed through the four down-sampling layers of the principal segmentation branch's encoder. At each down-sampling stage, the number of feature channels is doubled while the spatial dimensions are reduced, improving feature extraction.

    The Atrous Spatial Pyramid Pooling (ASPP) module [17] is employed to connect the encoder and decoder. The ASPP module consists of three atrous convolutions with a size of 3 × 3 and one 1 × 1 convolution. This technique facilitates the acquisition of a broader receptive field while maintaining a substantial level of resolution. In the primary segmentation decoder, the features undergo four up-sampling layers and are merged with edgemask from the auxiliary segmentation decoding branch to produce Yseg. The resultant Yseg, the final output, is combined with Yaux, created from the auxiliary segmentation branch, through random mixup. This process yields pseudo-labels that are utilized for segmentation purposes. The pseudo-labels provide additional supervision and training for both the primary segmentation branch and the auxiliary segmentation branch to improve the performance of image segmentation.
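Why atrous convolutions widen the receptive field without shrinking resolution follows from the standard effective-kernel-size identity k_eff = k + (k − 1)(d − 1) for a k × k convolution with dilation rate d. A small arithmetic sketch (the paper does not state its dilation rates; 6/12/18 below are the common DeepLabV3+ defaults, used purely for illustration):

```python
def effective_kernel(k, d):
    """Effective spatial extent of a k x k atrous (dilated) convolution
    with dilation rate d: the taps span k + (k-1)*(d-1) pixels."""
    return k + (k - 1) * (d - 1)

# A 3x3 conv at increasing (hypothetical) dilation rates covers an ever
# wider window while still computing only 9 multiplications per pixel.
for rate in (1, 6, 12, 18):
    print(f"rate {rate:2d}: effective extent {effective_kernel(3, rate)}")
```

Stacking several such rates in parallel, as ASPP does, lets the module aggregate context at multiple scales at full feature resolution.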

    Mixup is a mixed-class augmentation algorithm used in computer vision, which can mix images of different classes to expand the training dataset. The core formula of Mixup is as follows:

    x̃ = λxi + (1 − λ)xj,  ỹ = λyi + (1 − λ)yj

    where xi and xj are raw input vectors and yi and yj are one-hot label encodings. In this paper, we adopted the core idea of mixup as described above and applied it to the model with the following specific formula:

    PLseg = argmax(λYseg + (1 − λ)Yaux)

    where PLseg is the mixed-generated pseudo-label, Yseg is the segmentation prediction from the primary segmentation branch, Yaux is the segmentation prediction from the auxiliary segmentation branch, and λ is a random number between 0 and 1, generated at each iteration.

    The argmax function is utilized to get the class ID that corresponds to the highest probability value in the model’s prediction.This ensures that each mixing operation yields a more favorable outcome.
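The dynamic pseudo-label step described above can be sketched in NumPy as follows (names are our own; Yseg and Yaux are taken to be per-class probability maps of shape C × H × W):

```python
import numpy as np

def make_pseudo_label(y_seg, y_aux, rng=None):
    """Blend the two branch predictions with a fresh random lambda each
    iteration, then take the per-pixel argmax over classes to obtain a
    hard pseudo-label map (shape H x W)."""
    rng = rng or np.random.default_rng()
    lam = rng.random()                        # lambda in [0, 1)
    blended = lam * y_seg + (1.0 - lam) * y_aux
    return np.argmax(blended, axis=0)         # class-ID per pixel
```

Because λ is redrawn every iteration, the pseudo-labels track the current predictions of both branches instead of being frozen at generation time.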

    In the meantime, the scribbles train the segmentation network directly by minimizing the partial cross-entropy loss. This method overcomes the inherent flaw of previous pseudo-label generation, in which the generated labels were not updated in conjunction with network training. It accomplishes this by eliminating the gradient between the primary segmentation decoding branch and the auxiliary segmentation decoding branch, thereby preserving their independence instead of enforcing direct consistency. This approach extends the supervisory signal from a limited number of pixels to the entire image. Consequently, the pixels labeled through scribbles can effectively spread across the image by dynamically combining pseudo-labels with the unlabeled pixels [13].

    2.4 Optimization Method

    As a functional metric that measures the difference between the model outputs and the expected labels, the loss function is essential to deep network learning and affects the overall efficacy of the process. The loss function makes the network self-optimize by communicating the computed loss value back to the model. Usually, the application's context and specific criteria influence the choice of loss function.

    Based on the two types of labels provided by the network in this paper, namely segmentation pseudo-labels (PLseg) and boundary pseudo-labels (PLboun), the following loss functions are proposed:

    The equation includes the boundary loss, where w and h represent the width and height of the boundary pseudo-labels (PLboun) at the pixel level, and edgemask represents the prediction of the edge mask auxiliary decoding branch. The generated segmentation pseudo-labels (PLseg) are utilized to compute the loss function independently for the segmentation predictions of both the primary decoder branch (Yseg) and the auxiliary decoder branch (Yaux). The specific loss function can be expressed as follows:

    The segmentation prediction Yseg is only used to calculate the loss function for the scribble-labeled regions, rather than the entire image. In the scribble regions, the local loss function is defined as follows:

    In the equation, m represents the one-hot encoded scribble labels, which indicate the probability of pixel i belonging to the road, and Wm denotes the set of scribble-labeled pixels. Based on the above information, the overall loss function can be formulated as follows:
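A NumPy sketch of the partial cross-entropy over the scribble-labeled pixels follows. This is our own illustrative implementation of the standard partial cross-entropy formulation, not the authors' code; `pred` is the predicted road probability per pixel and `labeled_mask` marks the set Wm.

```python
import numpy as np

def partial_cross_entropy(pred, scribble, labeled_mask, eps=1e-7):
    """Average binary cross-entropy computed only over scribble-labeled
    pixels (labeled_mask == True); unlabeled pixels contribute nothing."""
    p = np.clip(pred[labeled_mask], eps, 1.0 - eps)   # avoid log(0)
    m = scribble[labeled_mask]                        # 0/1 road labels
    return float(np.mean(-(m * np.log(p) + (1 - m) * np.log(1 - p))))
```

Restricting the mean to Wm is what lets training proceed from sparse scribbles without penalizing predictions on unlabeled pixels.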

    2.5 Dataset Descriptions

    To evaluate the performance of the proposed method, we validated it on three road datasets: the CHN6-CUG dataset [18,19], the Berlin Road dataset [20], and the Massachusetts Road dataset. The CHN6-CUG road dataset primarily covers urban areas in China, including six Chinese cities, such as Chaoyang District in Beijing, Yangpu District in Shanghai, and the city center of Wuhan. It is a pixel-level dataset containing road and non-road data. CHN6-CUG contains 4511 annotated images with a size of 512 × 512 pixels, divided into 3608 images for model training and 903 for testing and evaluation. Kaiser created the Berlin Road dataset from high-resolution satellite imagery. The collection primarily covers the urban areas of Berlin. It is composed of aerial images from Google Earth, as well as pixel-level building, road, and background labels from OpenStreetMap. We performed post-processing on the images and labels of this dataset, preserving road centerlines as scribble annotations, resulting in the Berlin Road dataset. The Massachusetts Road dataset consists of 1171 aerial images of Massachusetts. Each image is 1500 × 1500 pixels, covering an area of 2.25 square kilometers. We randomly divided the data into a training set of 1080 images, a validation set of 21 images, and a test set of 70 images. All three datasets have undergone post-processing for scribble annotation. The three datasets are related: all are road datasets based on satellite remote sensing images, and all cover various roads in different scenes, such as cities, rural areas, and mountain areas. However, there are some differences: the Berlin dataset focuses more on road occlusion in the city, the CHN6-CUG dataset includes many mountain, rural, and field roads, focusing on the edge details of the road, while the Massachusetts dataset focuses on road continuity. The first dataset uses line annotations from LabelMe as scribbles (Fig. 2), while the second dataset uses road centerlines from OpenStreetMap (OSM) as scribbles.

    Figure 2: Examples of scribble annotations

    2.6 Experimental Details

    The proposed WSSE-net was implemented using PyTorch 11.6 on an NVIDIA RTX 2070 GPU. All our experiments are based on the same hardware and software platform; different hardware and software configurations may affect the experimental results. The network performance of WSSE-net is evaluated on the test set using multiple evaluation metrics. We standardized the datasets to a size of 512 × 512 for the experiments and then restored them to their original sizes at the end of the experiment.

    2.7 Evaluation Metrics

    Precision, recall, F1 Score (also known as F1-Score or F1), and intersection over union (IoU) are widely used for assessing pixel-level segmentation. Precision quantifies the ratio of accurately predicted pixels to the total number of predicted pixels. These four measures were chosen to assess the performance of the proposed WSSE-net.

    First, we calculate the confusion matrix for road predictions and road ground truth, including True-Positive (TP), False-Positive (FP), False-Negative (FN) and True-Negative (TN). Precision, recall, F1 Score, and IoU can be calculated as follows:
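These four metrics follow directly from the confusion counts; a minimal sketch (the function name is ours):

```python
def segmentation_metrics(tp, fp, fn):
    """Precision, recall, F1 and IoU from pixel-level confusion counts.
    TN is not needed for any of these four metrics."""
    precision = tp / (tp + fp)                       # correct among predicted road
    recall = tp / (tp + fn)                          # correct among true road
    f1 = 2 * precision * recall / (precision + recall)
    iou = tp / (tp + fp + fn)                        # intersection over union
    return precision, recall, f1, iou
```

Note that IoU is always at most F1 for the same counts, which is why IoU figures in the tables below read lower than the corresponding F1 scores.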

    3 Results

    A series of experiments were undertaken using identical datasets and network settings to assess the validity of the model presented in this study. A comparative analysis was conducted between the proposed WSSE-net and well-recognized scribble-based weakly supervised segmentation models. Specifically, the comparison was performed with the ScribbleSup [4], ScRoadExtractor [12], BoxSup [21], Boundary Perception Guidance (BPG) [22], Weakly-Supervised Salient Object Detection via Scribble Annotations (WSOD) [23], and Weakly labeled OpenStreetMap Centerline (WeaklyOSM) [24] models. Additionally, we assess the efficacy of our model by comparing it with established fully supervised networks, namely the U-net, DeepLabV3+ [17], SegNet [25] and D-LinkNet [26] series. A visual analysis of the segmentation results and quantitative assessment metrics were used to evaluate each network's segmentation performance.

    3.1 Results on the CHN6-CUG Dataset

    The experimental findings obtained from training samples on the CHN6-CUG dataset are shown in Table 1. The first column of Table 1 presents the other weakly supervised techniques compared to WSSE-net. The following four columns provide the outcomes for the four assessment measures, namely Precision, Recall, F1 Score, and Intersection over Union (IoU). The performance of several approaches on the test dataset is shown in Fig. 3, which exhibits a random selection of three samples from the test dataset for visualization purposes.

    Table 1: Quantitative results for WSSE-net and the comparison methods on the CHN6-CUG dataset

    3.2 Results on the Berlin Road Dataset

    Similarly, Table 2 presents the experimental results using training samples on the Berlin Road dataset. Fig. 4 demonstrates the efficacy of various methods by randomly selecting three samples from the test dataset using the same process.

    Table 2: Quantitative results for WSSE-net and the comparison methods on the Berlin Road dataset

    3.3 Results on the Massachusetts Dataset

    In this investigation, a different set of comparative methods was utilized than in the previous two experiments: we used established, fully supervised approaches. Table 3 presents the results of experiments conducted using training samples on the Massachusetts Road dataset. In particular, we replaced the weakly supervised comparative methods in the first column with fully supervised ones while maintaining the evaluation metrics from the previous experiments. Fig. 5 depicts the efficacy of the various experimental methods.

    Table 3: Quantitative results for WSSE-net and the other fully supervised methods on the Massachusetts Road dataset

    Figure 3: Qualitative results of road segmentation using different methods on the CHN6-CUG dataset. (a) Image. (b) Scribble annotation. (c) ScribbleSup. (d) BPG. (e) WSOD. (f) WeaklyOSM. (g) ScRoadExtractor. (h) Ours. (i) Ground truth

    3.4 Ablation Studies

    This section conducts a series of ablation experiments using the CHN6-CUG dataset to assess the performance of the different modules inside the WSSE-net architecture. In the ablation experiments on the loss function, we performed three comparison experiments. The first comparative experiment used only the segmentation loss (LPLseg). In the subsequent comparative experiment, we included both the segmentation loss (LPLseg) and the scribble label loss (Lscri). The third comparative experiment included the edge loss (Lboun), building upon the setting of the second experiment. We conducted additional ablation trials on the edge detection section while preserving the integrity of the overall loss function. In these trials, the edge detection techniques used were HED, Canny [27], and DeepEdge [28], each applied individually. The detailed findings may be seen in Tables 4 and 5.

    Figure 4: Qualitative results of road segmentation using different methods on the Berlin Road dataset. (a) Image. (b) Scribble annotation. (c) ScribbleSup. (d) BPG. (e) WSOD. (f) WeaklyOSM. (g) ScRoadExtractor. (h) Ours. (i) Ground truth

    Figure 5: Qualitative results of road segmentation using different methods on the Massachusetts Dataset. (a) Image. (b) U-Net. (c) SegNet. (d) DeepLabV3+. (e) D-LinkNet34. (f) D-LinkNet50. (g) Ours. (h) Ground truth

    4 Discussion

    The performance of our method surpasses that of current advanced weakly supervised methods. Upon careful examination of the experimental findings, a comprehensive evaluation of several weakly supervised models is shown in Table 1 and Fig. 3. The model with the highest performance is emphasized in bold. The results indicate that ScribbleSup showed subpar performance, with notably worse results than the BPG, WSOD, WeaklyOSM, ScRoadExtractor, and our WSSE-net approaches. One limitation of BPG is the limited interaction between the segmentation branch and the boundary branch, which occurs only at the level of the loss function. This approach does not consider the inherent connection between the two sub-branches. Based on the findings shown in Table 1, it can be inferred that the proposed WSSE-net model has superior performance in terms of recall (0.6754), IoU (0.5526), and F1 Score (0.7094) when evaluated on the CHN6-CUG dataset. Regarding recall, WSSE-net exhibits a 4.72% enhancement compared to WeaklyOSM, suggesting that our model can extract more comprehensive road network topologies. Compared to ScRoadExtractor, our Intersection over Union (IoU) metric rises significantly by 1.96%. This finding implies that the road labels retrieved by WSSE-net align more closely with the ground truth data. In the second experiment, as shown in Table 2, WeaklyOSM exhibits an accuracy marginally greater than WSSE-net by 0.66%. However, our model demonstrates a notable enhancement of around 3% in terms of IoU and F1 Score, suggesting superior quality. There is no significant difference in the assessment metrics between ScRoadExtractor and our method. However, when analyzing Fig. 4, it becomes evident that the segmentation performance of ScRoadExtractor is comparatively worse than that of our technique. Our technique outperforms ScRoadExtractor in terms of road connectivity and completeness. Our technique also performs well when compared to existing popular fully supervised methods. In the third experiment, a series of comparison tests were done on the Massachusetts Road dataset. The objective was to compare the performance of our technique with various well-established, fully supervised semantic segmentation methods, including U-Net, SegNet, DeepLabv3+, and the D-LinkNet family of approaches. The backbone of D-LinkNet34 is a ResNet34 pre-trained on ImageNet, while the backbones of DeepLabv3+ and D-LinkNet50 are ResNet50 pre-trained on ImageNet. Based on the data shown in Table 3, it is apparent that our technique demonstrates significant superiority over U-Net and SegNet in terms of IoU while falling somewhat behind DeepLabv3+ and D-LinkNet50. Regarding the F1 Score metric, our approach shows a marginal difference of just 0.1% compared to D-LinkNet. Furthermore, our technique exhibits superior accuracy compared to the other methodologies. As seen in Fig. 5, the outcomes of our segmentation analysis indicate that our scribble-based weakly supervised segmentation method has comparable efficacy to some conventional fully supervised models.

    In the ablation experiments on the loss functions, while maintaining the other network configurations constant, the loss functions are configured as follows: using only the segmentation loss (LPLseg); utilizing both the segmentation loss (LPLseg) and the scribble label loss (Lscri); and employing a composite loss function that integrates the segmentation loss (LPLseg), scribble label loss (Lscri), and edge loss (Lboun). The experimental outcomes, as illustrated in Table 4, demonstrate that the model's composite loss function achieves an Intersection over Union (IoU) of 54.76%, an accuracy of 72.15%, an F1 Score of 68.71%, and a recall rate of 65.58%. The assessment results for the proposed integrated loss function demonstrate its superiority over any individual loss function. This finding suggests that the composite loss function outperforms a single loss function when applied to a complicated dataset. The ablation tests in the edge mask section show that including the edge branch can effectively improve the precision of edge details. This improvement is achieved by integrating high-resolution predictions of distinct edges with more resilient, lower-resolution features, correcting false positive detections. In this study, we compared the edge pseudo-label generation component, denoted as HED, with two standard edge detection algorithms, namely Canny and DeepEdge. The outcomes of this comparison are shown in Table 5. The detection performance of the HED algorithm exhibits a modest boost compared to the DeepEdge and Canny algorithms.

    Table 4: Comparison of segmentation accuracy with different loss functions

    Table 5: Comparison of segmentation accuracy with different edge mask methods

    5 Conclusion

    This paper introduced a weakly supervised image segmentation method based on scribble-assisted supervision and edge-mask assistance. The methodology involves implementing a multi-branch convolutional neural network for end-to-end training. Expanding upon the multi-branch network, we proposed strategies for edge-mask guided assistance and random dynamic mixing to generate pseudo-labels that aid the training process. Experiments were done on the CHN6-CUG and Berlin Road datasets to validate the efficacy of the suggested methodology. Furthermore, our technique outperformed five traditional fully supervised segmentation algorithms when evaluated on the public Massachusetts Road dataset. In this study, road surface extraction from high-resolution remote sensing images is realized, which effectively reduces the cost of manual labeling and is ahead of other weakly supervised learning methods. Compared with fully supervised learning, this algorithm can improve the automation of road extraction, but its performance still lags behind that of popular fully supervised learning. Weak supervision algorithms may face challenges when processing complex objects or images containing fine-grained structures. Therefore, combining deep learning with traditional methods to mine potential supervision signals from unlabeled image data, without requiring extensive manual annotation, will be an essential step in the research of intelligent road extraction from remote sensing images.

    In the future, we will continue our research on road extraction under weak supervision and keep refining our methodology. In addition, one of our goals is to apply and evaluate our proposed technique on additional complex remote sensing image segmentation tasks.

    Acknowledgement: The authors want to thank the School of Artificial Intelligence at Nanjing Agricultural University (Nanjing, China) for providing computing resources to develop this study.

    Funding Statement: This work was partly supported by the National Natural Science Foundation of China (42001408, 61806097).

    Author Contributions: The authors confirm contribution to the paper as follows: study conception and design: S. Yu and F. Huang; data collection: S. Yu; analysis and interpretation of results: S. Yu and C. Fan; draft manuscript preparation: S. Yu and C. Fan. All authors reviewed the results and approved the final version of the manuscript.

    Availability of Data and Materials: The related datasets in this article are open source. The dataset with scribble labels processed in the experiment can be obtained by contacting the corresponding author.

    Conflicts of Interest: The authors declare no conflicts of interest to report regarding the present study.
