
    Lightweight Network Ensemble Architecture for Environmental Perception on the Autonomous System

2023-01-22

Yingpeng Dai, Junzheng Wang, Jing Li★, Lingfeng Meng and Songfeng Wang★

1 School of Automation, Beijing Institute of Technology, Beijing, 100081, China

2 Institute of Tobacco Research of CAAS, Qingdao, 266000, China

ABSTRACT It is important for an autonomous system to understand environmental information. Such a system should generalize well to varied and complex environments while achieving high accuracy and fast inference. A network ensemble architecture is a good choice for improving network performance, but it is unsuitable for real-time applications on an autonomous system. To tackle this problem, a new neural network ensemble named the partial-shared ensemble network (PSENet) is presented. PSENet changes the network ensemble architecture from a parallel architecture to a scatter architecture and merges multiple component networks together to accelerate inference. To keep the component networks independent of each other, a dedicated training method is designed for the ensemble architecture. Experiments on Camvid and CIFAR-10 reveal that PSENet achieves fast inference while maintaining the benefits of ensemble learning. In the real world, PSENet is deployed on an unmanned system and handles vision tasks such as semantic segmentation and environmental prediction in different fields.

KEYWORDS Neural network ensemble; real-time application; classification; semantic segmentation

    1 Introduction

Ensemble learning is widely considered a good way to strengthen generalization ability. It has wide application in many fields such as visual tracking [1], object detection [2,3], data classification and recognition [4,5], and context processing [6]. A neural network ensemble [7–9] stacks a finite number of neural networks and trains them for the same task, as shown in Fig. 1. Although this parallel ensemble architecture can accurately extract environmental information in complex environments, it is unsuitable for real-time applications on an autonomous system. On the one hand, the ensemble architecture does not meet real-time requirements: in the parallel architecture, the same input is fed into each component neural network one by one during prediction, which costs a great deal of extra time. On the other hand, the ensemble architecture dramatically increases model complexity and requires a lot of memory to run, which poses a great challenge for embedded devices with limited computing resources. Therefore, designing an effective network ensemble architecture that needs less computing time while maintaining the benefits of ensemble learning is a challenging problem.

Figure 1: Structure of ensemble learning

To apply network ensembles to real-time tasks on autonomous systems, a diffusive network ensemble architecture named PSENet is designed to quickly extract environmental information while maintaining good generalization ability and few parameters. PSENet uses a fully shared module, partially shared modules, and independent modules to fuse all component neural networks into one larger network. A single image passes through these three kinds of modules in turn and simultaneously yields the prediction of every component network. PSENet thus decreases model complexity and accelerates inference. Moreover, to maintain generalization ability, a training method is designed that trains the ensemble architecture and resolves the mismatch between a single input and multiple outputs trained on different sub-training sets. Section 3 introduces PSENet in detail.
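To make this scatter architecture concrete, the following minimal PyTorch sketch (our own illustration rather than the authors' released code; the layer sizes and the 1→3→9 branch fan-out are assumptions) shows how a single input can flow through a fully shared stem, partially shared branches, and independent heads, producing every component network's prediction in one forward pass:

```python
import torch
import torch.nn as nn

class PSENetSketch(nn.Module):
    """One input -> all component outputs in a single forward pass.
    The 1 -> 3 -> 9 fan-out and channel sizes are illustrative assumptions."""

    def __init__(self, in_ch=3, n_partial=3, heads_per_branch=3, n_classes=10):
        super().__init__()
        # Fully shared module: a single stem used by every component network.
        self.shared = nn.Sequential(
            nn.Conv2d(in_ch, 32, 3, stride=2, padding=1), nn.ReLU())
        # Partially shared module: each branch is shared by a subset of heads.
        self.partial = nn.ModuleList(
            nn.Sequential(nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU())
            for _ in range(n_partial))
        # Independent module: one classifier head per component network.
        self.heads = nn.ModuleList(
            nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                          nn.Linear(64, n_classes))
            for _ in range(n_partial * heads_per_branch))
        self.heads_per_branch = heads_per_branch

    def forward(self, x):
        f = self.shared(x)                  # computed once for all components
        outs = []
        for i, branch in enumerate(self.partial):
            g = branch(f)                   # reused by heads_per_branch heads
            for j in range(self.heads_per_branch):
                outs.append(self.heads[i * self.heads_per_branch + j](g))
        return outs                         # one prediction per component network

# Usage: nine component predictions from a single forward pass.
model = PSENetSketch()
preds = model(torch.randn(2, 3, 64, 64))
assert len(preds) == 9 and preds[0].shape == (2, 10)
```

The key point is that the shared stem runs once per image, so the marginal cost of each extra component network is only its partial branch and head.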

    The main innovations of this paper are:

1) A lightweight network ensemble architecture named PSENet is designed and applied to real-time vision tasks on the autonomous system. Compared with the parallel ensemble structure, PSENet compresses the model scale and accelerates inference while maintaining generalization ability.

2) As an extensible network ensemble structure, PSENet allows many lightweight neural networks to be combined into a single lightweight ensemble for real-time applications.

    2 Related Works

Ensemble learning combines several weak classifiers into a strong classifier to improve generalization ability. At present, ensemble learning is mainly divided into two categories: (1) ensembles of traditional machine learning algorithms such as random forests (RF) [10], and (2) neural network ensembles. For traditional ensembles, component learners were typically generated by decision-tree algorithms such as ID3 [11], C4.5 [12], and CART. These methods achieved good performance on simple vision tasks, but the extracted features were limited by manual design, so they could not handle complex tasks such as semantic segmentation under complex conditions. Neural networks can learn and adjust their weights to adapt to different conditions, so for most complex tasks like object detection and semantic segmentation, neural network ensembles tend to show better performance.

The neural network ensemble originates from [13], which demonstrated that the generalization ability of a neural network system can be significantly improved by ensembling a number of neural networks. Because ensemble structures generalize well, they have been widely adopted in many fields [14–17] and can be classified into three categories. The first category combines neural networks with traditional ensemble algorithms: neural networks serve as feature extractors for multi-scale features, and those features are fed into classifiers composed of traditional ensemble algorithms [18–21]. The second category uses neural networks as component learners: new neural networks were designed to improve individual performance, and a number of them were stacked as parallel component learners [9,22–24]; this is a common way to improve ensemble performance by improving the individual learners. To expand the differences among component networks, incorporating two-, two-and-a-half-, and three-dimensional architectures strengthens generalization ability [25]. However, the methods in this second category produce many parameters and slow down inference. BENN [26], a neural network ensemble of Binarized Neural Networks [27,28], has few parameters and low inference time while maintaining high accuracy, because Binarized Neural Networks offer a high model compression rate and fast calculation; still, compared with a single Binarized Neural Network, BENN adds many parameters and extra inference time. The third category concerns how to train the component neural networks. The most prevailing approaches are Bagging [29], based on Bootstrap [30], and Boosting [31,32]. In addition, data disturbance, such as sample disturbance and parameter disturbance, is usually used to increase the diversity of component networks [33–35]. There is no doubt that most research on neural network ensembles has focused on improving generalization ability and accuracy. For complex tasks under complex conditions, these methods achieve good and stable results, but they need a lot of time to predict: after ensembling m component neural networks, the inference time is m times that of a single component network. A lightweight neural network ensemble is therefore no longer lightweight and is unsuitable for real-time applications on the autonomous system. Although ensemble pruning [36,37] can remove some component networks to reduce storage and inference time, it is quite limited in many situations and may degrade ensemble performance. Beyond research on ensemble strategies and the generation of component learners, designing a neural network ensemble architecture with few parameters and little computing time therefore has wide applicability and far-reaching significance.

    3 Proposed Algorithm

    3.1 Neural Network Ensemble

In this section, we describe the proposed neural network ensemble architecture in detail. In all experiments, we use bagging to train the component neural networks and plurality voting as the combination strategy. To our knowledge, a neural network ensemble usually stacks a number of neural networks that run in parallel with each other, as shown in Fig. 2.
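For reference, plurality voting over the component outputs can be implemented in a few lines; this is a self-contained sketch of the combination strategy (our illustration, not the paper's code):

```python
import torch

def plurality_vote(logits_list):
    """Combine component outputs by plurality voting.
    logits_list: list of (batch, n_classes) tensors, one per component net."""
    votes = torch.stack([l.argmax(dim=1) for l in logits_list])  # (m, batch)
    # torch.mode returns the most frequent label per sample
    # (tie-breaking behavior is implementation-defined).
    return torch.mode(votes, dim=0).values                       # (batch,)

# Example: three components voting on a batch of two samples.
a = torch.tensor([[2.0, 0.1], [0.2, 1.0]])
b = torch.tensor([[1.5, 0.3], [0.9, 0.1]])
c = torch.tensor([[0.1, 2.2], [0.3, 1.4]])
print(plurality_vote([a, b, c]))  # tensor([0, 1])
```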

Figure 2: Neural network ensemble architecture

The many parallel neural networks make the ensemble architecture very wide, which leads to two problems. The first is the number of parameters: when the ensemble consists of n component neural networks, it has n times the parameters of a single component network, which challenges the storage of embedded AI computing devices. The second is inference time: the component networks are independent of each other and are run one by one, which wastes a great deal of time. With n component networks, predicting a result requires computing every component network in turn, so the inference time is n times that of a single component network. Slow inference limits the application of the ensemble architecture to real-time tasks.

Three conditions are considered: (1) the ability of ensemble learning requires the architecture to consist of a number of parallel component neural networks; (2) low computing time requires all component networks to infer their results simultaneously; and (3) few parameters require the component networks to share a finite number of layers. Strong generalization ability is the main advantage of the ensemble architecture, so the main research question here is how to accelerate inference while maintaining that ability. Starting from generalization ability, we explored the disagreement between different component networks and found that when two component networks share some layers to extract initial features, they still maintain large disagreement. Based on this discovery, a diffusive network ensemble architecture named PSENet is proposed. PSENet consists of a fully shared module, partially shared modules, and independent modules.

Shared modules, which include the fully shared module and the partially shared modules, reduce the parameters and connect the input to each component neural network. The fully shared module is a common module shared by all component networks and is directly connected to the input; as a connecting hub, it provides all component networks with the same input. A partially shared module, shared by a subset of the component networks, reduces the parameters and mitigates the relevance among component networks. The independent module, usually placed at the end of the encoder, contains many parallel branches and plays an important role in keeping the component networks independent of each other. For semantic segmentation, the module following the independent layers is the upsampling [38–40]; for classification, it is the classifier.

The input image is first fed into the fully shared module to extract initial features. The fully shared module, consisting of several consecutive layers named fully shared layers, has only one branch; the component networks share this initial feature extraction module to decrease model complexity, computing time, and parameters. The initial features are then fed into the partially shared module, which consists of several parallel branches connected to the branch of the fully shared module. Finally, the features extracted by the partially shared module are fed into the independent module, which has as many branches as there are component neural networks and is usually placed at the end of the encoder. Its main role is to extract different features for each component network and to keep the component networks independent of each other so as to maintain generalization ability. From input to output, the three modules are connected in turn, fusing all parallel component networks into one network, as shown in Figs. 3–5.

The ensemble architecture called full-full-full, shown in Fig. 3, consists of three fully shared modules followed by one classifier layer; the three fully shared modules extract spatial features that are fed into the component networks. Compared with the traditional ensemble architecture, it has fewer parameters and faster inference, but it significantly weakens generalization ability. The architecture called full-full-partial, shown in Fig. 4, consists of two fully shared modules and one partially shared module. Compared with full-full-full, it adds a partially shared module that enhances the diversity of the extracted features to a certain extent; it therefore strengthens the independence among component networks through this intermediate module and generalizes better than full-full-full. The architecture called full-partial-independent, shown in Fig. 5, consists of one fully shared module, one partially shared module, and one independent module. Each branch of the independent module is parallel to the others and is trained independently, which strengthens the diversity of the component networks and maintains their independence. However, the many parallel branches produce more parameters, and full-partial-independent, mixing multiple kinds of modules distributed across different stages, is more difficult to train.
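Since the three variants differ only in how early the branches fan out, one way to see them side by side is to parameterize the number of branches per stage. In the hypothetical sketch below (channel sizes and layer contents are our assumptions), (1, 1, 1) corresponds to full-full-full, (1, 1, 3) to full-full-partial, and (1, 3, 9) to full-partial-independent for a nine-component ensemble:

```python
import torch.nn as nn

def make_stage(n_branches, in_ch, out_ch):
    """A stage is a list of parallel branches; one branch == fully shared."""
    return nn.ModuleList(
        nn.Sequential(nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU())
        for _ in range(n_branches))

class ThreeStageEnsemble(nn.Module):
    """branches=(1, 1, 1): full-full-full; (1, 1, 3): full-full-partial;
    (1, 3, 9): full-partial-independent. Branch counts must divide evenly."""

    def __init__(self, branches=(1, 3, 9), chs=(3, 16, 32, 64)):
        super().__init__()
        self.branch_counts = branches
        self.stages = nn.ModuleList(
            make_stage(b, chs[i], chs[i + 1]) for i, b in enumerate(branches))

    def forward(self, x):
        feats = [x]
        for stage, b in zip(self.stages, self.branch_counts):
            fan = b // len(feats)               # new branches per parent branch
            feats = [branch(feats[i // fan]) for i, branch in enumerate(stage)]
        return feats  # final-stage branches; classifiers/decoders attach here
```

A stage with one branch is fully shared, a stage with fewer branches than component networks is partially shared, and a stage with one branch per component network is independent; the final branches may still feed several classifier heads each, as in full-full-partial.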

Figure 3: Proposed neural network ensemble architecture with three fully shared modules. This architecture is called full-full-full

Figure 4: Proposed neural network ensemble architecture with two fully shared modules and one partially shared module. This architecture is called full-full-partial

Figure 5: Proposed neural network ensemble architecture with one fully shared module, several partially shared modules and several independent modules. This architecture is called full-partial-independent

    3.2 Training Method

To maintain disagreement among the component networks, each component network needs to be trained on a different sub-training dataset so as to obtain different feature representations. PSENet has only one input connected to multiple component networks; if it is trained as a single whole, the component networks will learn similar features and lose the ensemble ability. The learning of neural networks includes forward propagation and back propagation [41]; generally, one combination of a forward pass and a backward pass completes one parameter update. Different from the traditional ensemble architecture, the proposed architecture fuses a number of component networks into one network, and all or some of the component networks have shared layers. As a result, the shared parameters cannot be updated from a single component training dataset alone. Besides, to strengthen generalization ability, the component networks are trained on different training subsets. So multiple forward propagations followed by one back propagation complete an update of the ensemble. Each forward propagation passes only through the corresponding component network, and its path is controlled by the connections among fully shared layers, partially shared layers, and independent layers. Through forward propagation, we obtain the predicted result of each component learner:

$$H_j = F(D_j) \tag{1}$$

where $H_j$ is the prediction result on the $j$th training subset, $F$ is the function of the component neural network, and $D_j$ is the training subset produced by bootstrap sampling from the training set $D$. The loss of each component neural network can be expressed by:

$$L_j = F_{loss}(H_j, T_j) \tag{2}$$

where $F_{loss}$ is the loss function and $T_j$ is the corresponding labels. Because all component neural networks have the same structure and task, we take the average of the component losses as the final loss for back propagation:

$$L = \frac{1}{m}\sum_{j=1}^{m} F_{loss}(H_j, T_j) \tag{3}$$
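The "several forward passes, one backward pass" update can then be written compactly. The sketch below is our illustration: it assumes a model that, like the earlier PSENetSketch, returns one prediction per component network, and one batch iterator per bootstrap subset:

```python
import torch
import torch.nn as nn

def ensemble_update(model, batches, optimizer, loss_fn=nn.CrossEntropyLoss()):
    """One ensemble update: forward pass j uses subset D_j's batch (Eq. (1)),
    the component losses (Eq. (2)) are averaged into the final loss (Eq. (3)),
    and a single backward pass updates all trainable weights."""
    losses = []
    for j, (x, y) in enumerate(batches):   # one (input, label) batch per D_j
        h_j = model(x)[j]                  # H_j: component j's prediction
        losses.append(loss_fn(h_j, y))     # F_loss(H_j, T_j)
    loss = torch.stack(losses).mean()      # averaged final loss
    optimizer.zero_grad()
    loss.backward()                        # one backward pass; shared layers
    optimizer.step()                       # receive gradients from every j
    return loss.item()
```

Because the loss uses only component $j$'s output in pass $j$, gradients flow through that component's path alone, while the shared layers accumulate gradients from all components.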

The proposed ensemble architecture mixes fully shared layers, partially shared layers, and independent layers, so it is inappropriate to train the encoder as a single whole. Here, we divide the encoder into several stages, and the fully shared layers, partially shared layers, and independent layers are trained one by one. Eq. (1) can then be expanded as:

$$H_j = F_{\omega_i}(F_{\omega_p}(F_{\omega_f}(D_j))) \tag{4}$$

where $F_{\omega_f}$, $F_{\omega_p}$, and $F_{\omega_i}$ are the non-linear functions of the fully shared layers, partially shared layers, and independent layers, respectively. Three stages are used to train the weights. The first stage trains the fully shared layers: all training data are fed into a single component neural network. After this training, the weights of the fully shared layers in that component learner are transplanted directly into the fully shared layers of the network ensemble architecture.
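In framework terms, the transplant amounts to copying the trained stem's weights and freezing them. A hypothetical sketch (reusing the PSENetSketch class from the earlier example, and assuming the component network and the ensemble share an identical stem structure):

```python
# Stage 1 (sketch): train a single component network on the full training
# set, then transplant its fully shared weights into the ensemble.
component = PSENetSketch()   # stand-in for one trained component network
# ... train `component` on the whole training set D here ...

ensemble = PSENetSketch()
# Transplant: copy the trained fully shared layers' weights verbatim.
ensemble.shared.load_state_dict(component.shared.state_dict())

# Freeze the transplanted weights for the later training stages.
for p in ensemble.shared.parameters():
    p.requires_grad = False
```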

The second step trains the partially shared modules. In this step, the weights of the fully shared layers are fixed and the other weights are initialized randomly. Several training subsets are obtained by the Bootstrap method, and Eq. (4) specializes to:

$$H_I = F_{I,\omega_p}(F_f(D_I)) \tag{5}$$

where $F_f(\cdot)$ is the fully shared layers with the fixed weights from the component learner, $D_I$ is the $I$th training subset, and $F_{I,\omega_p}$ is the partially shared layers of the $I$th partially shared module, which are trained on the $I$th training subset. The training process of the partially shared modules is shown in Algorithm 1.

Algorithm 1: The training process of the partially shared modules
Require: trained weights $F_f$, $I \geq 0$
Ensure: $F_{I,\omega_p}$
  $F_{\omega_f} \leftarrow F_f$
  while $I \leq N$ do
    randomly initialize $F_{\omega_i}$ and $F_{\omega_p}$
    calculate the output $H_j$ with Eq. (5)
    calculate and update the weights of the partially shared layers $F_{\omega_p}$
    set $F_{I,\omega_p} \leftarrow F_{\omega_p}$
  end while

The third step trains the independent modules. In this step, the weights of the fully shared layers and partially shared layers are fixed and the other weights are initialized randomly. Following the Bootstrap method, the training dataset is divided into as many subsets as there are component neural networks, and Eq. (4) specializes to:

$$H_J = F_{J,I,\omega_i}(F_{I,p}(F_f(D_J))) \tag{6}$$

where $F_{I,p}$ is the $I$th partially shared module with fixed weights and $F_{J,I,\omega_i}$ is the independent layers of the $J$th independent module directly connected to the $I$th partially shared module, which are trained on the $J$th training subset. The training process of the independent modules is shown in Algorithm 2.

Algorithm 2: The training process of the independent modules
Require: trained weights $F_f$ and $F_{I,p}$, $I \geq 0$, $J \geq 0$
Ensure: $F_{J,I,\omega_i}$
  $F_{\omega_f} \leftarrow F_f$
  $F_{\omega_p} \leftarrow F_{I,p}$
  while $J \leq M$ do
    randomly initialize $F_{\omega_i}$
    while $I \leq N$ do
      if the $J$th independent module is connected to the $I$th partially shared module then
        calculate the output $H_j$ with Eq. (6)
        set $F_{J,I,\omega_i} \leftarrow F_{\omega_i}$
        break
      else
        continue
      end if
    end while
  end while

    3.3 Diversity Measure of Component Neural Networks

The disagreement measure is used to evaluate the diversity of the component neural networks. For a given dataset $D = \{(x_1, y_1), (x_2, y_2), \cdots, (x_n, y_n)\}$ with a multi-class task, $y_i \in \{0, 1, \cdots, m\}$, and we can build the contingency table between any two component neural networks, denoted $M$ and $N$, as shown in Table 1.

Table 1: Contingency table between any two component neural networks

Here, $b_{i,j}$ is the number of samples for which $M = i$ and $N = j$. The disagreement measure between any two component neural networks can be expressed by:

$$dis(M, N) = \frac{\sum_{i \neq j} b_{i,j}}{\sum_{i,j} b_{i,j}} \tag{7}$$

A neural network ensemble consists of many component neural networks, so we take the average of the pairwise values as the disagreement measure of the whole ensemble:

$$\overline{dis} = \frac{2}{k(k-1)} \sum_{u=1}^{k-1} \sum_{v=u+1}^{k} dis(M_u, M_v) \tag{8}$$

where $k$ is the number of component neural networks.
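Computed directly from predicted labels, the pairwise measure is simply the fraction of samples on which two components disagree, i.e., the off-diagonal mass of Table 1. A small NumPy sketch (our illustration) of Eqs. (7) and (8) under that reading:

```python
import numpy as np

def disagreement(pred_m, pred_n):
    """Fraction of samples on which two component networks disagree
    (sum of off-diagonal contingency-table entries over all samples)."""
    pred_m, pred_n = np.asarray(pred_m), np.asarray(pred_n)
    return float(np.mean(pred_m != pred_n))

def ensemble_disagreement(all_preds):
    """Average pairwise disagreement over all component networks."""
    k = len(all_preds)
    pairs = [(u, v) for u in range(k) for v in range(u + 1, k)]
    return sum(disagreement(all_preds[u], all_preds[v])
               for u, v in pairs) / len(pairs)

# Example: three components' labels on five samples.
preds = [[0, 1, 2, 1, 0], [0, 1, 1, 1, 0], [2, 1, 2, 0, 0]]
print(ensemble_disagreement(preds))  # 0.4
```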

    4 Experimental Results

We evaluate the performance of PSENet on Camvid [42,43] and CIFAR-10 [44] against the traditional parallel neural network ensemble architecture in terms of accuracy, parameters, inference speed, and disagreement measure. In this section, several experiments examine the roles of and relationships between the different stages. We also compare the proposed architecture with the parallel neural network ensemble to show its advantages.

    4.1 Performance Evaluation on the Camvid Dataset

The Camvid dataset consists of 701 color road images collected at different locations. For an easy and fair comparison with prior work, we adopt the common split [45]: the training dataset includes 367 images, the validation dataset 101 images, and the testing dataset 233 images. Segmenting 11 classes on the Camvid dataset is used to verify performance.

We divide the training dataset into a number of component training datasets, each containing 220 images randomly selected from the 367 training images, which are used to train the independent layers. Besides, we build some special component training datasets, each containing 220 images randomly selected from the corresponding component training datasets, to train the shared modules. A randomly self-designed neural network is used as the component network for the ablation experiment, as shown in Fig. 6.
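For illustration, the subset construction described above might look as follows (a sketch; the paper does not state whether the 220 images are drawn with or without replacement, so sampling without replacement is assumed here):

```python
import random

def make_component_subsets(image_ids, n_subsets, subset_size=220, seed=0):
    """One training subset per component network, each a random
    subset_size-image sample drawn from the full training list."""
    rng = random.Random(seed)
    return [rng.sample(image_ids, subset_size) for _ in range(n_subsets)]

# Camvid: 367 training images, one 220-image subset per independent branch.
subsets = make_component_subsets(list(range(367)), n_subsets=9)
```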

On the basis of this component neural network, we stack many component networks to build a traditional neural network ensemble architecture as well as several variants of the proposed architecture. To compare the different ensemble architectures, we design an ablation experiment whose results are shown in Table 2. Table 2 covers 7 experiments on different network structures: one serves as the parallel neural network ensemble, and the remaining structures are different variants of the proposed architecture. We measure performance by Mean Intersection over Union (MIoU), disagreement measure, FPS, and parameters. The labels "full", "partial", and "-" denote the fully shared module, the partially shared module, and the independent module, respectively. "partial3" means the partially shared module includes 3 shared branches, and "independent3-9" means the first half of the independent module includes 3 shared branches while the second half includes 9 independent branches.

Figure 6: Random self-designed neural network

Table 2: Ablation experiment

The parallel neural network ensemble significantly improves the generalization ability and yields an accuracy 3.1 MIoU higher than that of a single component network, but it produces many parameters because it stacks many component networks. Besides, the component networks are independent of each other and run one by one, so the same input must be fed repeatedly into the different component networks at inference time. The fully shared module is directly connected to the input and shared by all component networks, so all component networks can be run simultaneously from a single input, which accelerates inference and saves much time. As Table 2 shows, compared with the traditional ensemble, an ensemble with a fully shared module completes the same task in less time. The encoder of the proposed architecture includes three kinds of modules: fully shared, partially shared, and independent. The fully shared module decreases the parameters and accelerates inference, but it makes all component networks similar to each other, so too many fully shared modules (fully shared layers) make the ensemble lose its ensemble ability (generalization ability). The independent module keeps the component networks independent of each other, but it produces many parameters and slows inference. The partially shared module lies between the two and trades off inference speed against accuracy. Full-independent-independent has accuracy, disagreement measure, and parameters similar to the traditional ensemble, yet its inference is 1.89 times faster. This demonstrates that combining independent layers with fully shared layers accelerates inference while maintaining the excellent performance of the traditional ensemble. When we replace an independent module with a partially shared module, inference is further accelerated because some branches are removed, but accuracy decreases accordingly. As the number of fully shared modules increases, the inference speed and parameter count improve, but the accuracy and disagreement measure drop considerably. When all three stages consist of fully shared modules, the ensemble essentially loses its ensemble ability.

Table 3 shows the disagreement measure between any two component neural networks in full-full-partial. Here, the partially shared module consists of three branches, and each branch is directly connected to three classifiers. When one branch is shared by several classifiers in stage 3, the corresponding component networks are similar. From Table 3, each group of three component networks behaves similarly and essentially loses ensemble ability. As a result, some component networks are redundant and contribute little to the ensemble.

Table 3: Disagreement measure between any two component neural networks

Table 3 (continued)

Component learners   1     2     3     4     5     6     7     8     9
8                    n/a   n/a   n/a   n/a   n/a   n/a   n/a   n/a   0.0100
9                    n/a   n/a   n/a   n/a   n/a   n/a   n/a   n/a   n/a

Other neural networks, namely FCN [46], ENet [47], DABNet [48], and BiSeNet [49], are used as component neural networks, and we stack each of them to build different traditional ensemble architectures. We then compare the corresponding proposed ensemble architectures with the parallel ones in terms of ensemble accuracy, inference speed, and parameters; the results are shown in Table 4. As the number of shared layers increases, the ensemble architecture has fewer parameters and faster inference (FPS), but its diversity also decreases. So a finite number of shared layers followed by enough independent layers yields fast inference and few parameters while maintaining ensemble accuracy.

Table 4: The ensemble results between different component neural networks

Ensemble accuracy. An ensemble architecture has better generalization ability and tends to produce high accuracy. In the parallel ensemble architecture, the component networks are parallel and hence independent of each other, which benefits the ensemble ability. Building on the parallel architecture, the proposed architecture fuses the parallel component networks into one large ensemble in which all component networks predict their results from a single input. From Table 4, compared with the parallel architecture, the full-partial-independent architecture achieves similar accuracy with lower model complexity.

Inference speed. Component networks in the parallel ensemble predict results one by one, and each component's forecast requires feeding in the same data again, which costs much extra time. The full-partial-independent architecture changes the inference procedure: all component networks are connected to the fully shared module, so when one input is fed in, all component networks predict simultaneously. From Table 4, full-partial-independent has an obvious advantage over the parallel architecture in inference speed. For example, with FCN as the component network, full-partial-independent infers 3.2 times faster than the parallel ensemble while maintaining the same high ensemble accuracy. With ENet as the component network, the speed advantage weakens to 1.5 times faster than the traditional ensemble because of the many independent layers.

Parameters. Component networks in the parallel ensemble are independent of each other, whereas in the full-partial-independent architecture all or some of the component networks share some layers. Compared with the parallel ensemble, full-partial-independent therefore has fewer parameters.

    4.2 Performance Evaluation on the CIFAR-10 Dataset

The CIFAR-10 dataset consists of 60,000 32×32 color images: 50,000 training images and 10,000 testing images. The dataset has 10 classes of 6,000 images each, with 5,000 training images and 1,000 testing images per class.

Similarly, we divide the training dataset into component training datasets, each containing 30,000 images randomly selected from the 50,000 training images, which are used to train the independent layers. Neural networks such as MobileNet [50], Xception [51], and SqueezeNet [52] are used as component neural networks. On the basis of these component networks, we compare the parallel ensemble architecture with the proposed architecture in terms of ensemble accuracy, inference speed, disagreement measure, and parameters; the results are shown in Table 5.

Table 5: The results on CIFAR-10

Ensemble accuracy, inference speed, and parameters. From Table 5, the parallel neural network ensemble improves accuracy, but it significantly sacrifices inference speed and produces many parameters. When we introduce the fully shared module and fuse the parallel component networks into one network, named full-independent-independent, all component networks can be run simultaneously from a single input, which saves much time and accelerates inference. On the basis of full-independent-independent, replacing an independent module with a partially shared module further accelerates inference. At the same time, the partially shared module only slightly decreases the disagreement measure, and full-partial-independent keeps the ensemble accuracy essentially consistent with full-independent-independent. In general, an ensemble architecture significantly improves accuracy but introduces many parameters and slows inference; the proposed architecture compresses the ensemble and accelerates inference while keeping the ensemble accuracy essentially consistent with the traditional ensemble.

Disagreement measure. Disagreement measures the difference between any two component neural networks. A small disagreement means that two component networks extract similar features, which is detrimental to generalization ability. The parallel architecture has good disagreement, but it brings problems such as slow inference and many parameters. The proposed full-partial-independent architecture mitigates these problems: shared modules reduce the parameters and accelerate inference, while independent modules keep the component networks independent of each other. From Table 5, compared with the parallel ensemble, full-partial-independent produces similar disagreement. This reveals that the proposed architecture retains good generalization ability while delivering fast inference and few parameters.

    4.3 Performance in the Real World

    4.3.1 Environment Understanding of Unmanned Robot

In the real world, an unmanned robot needs to perform various tasks in different environments. Here, two vision tasks are tested on the unmanned robot to show PSENet's performance. One is semantic segmentation, a key technology for the robot to understand environmental information: 11 classes such as road, sky, car, building, and tree are segmented from the image. The other is a classification task: according to the targets in the image, the scene is divided into 4 classes, namely experimental area, garden, parking lot, and main road. The environmental perception system is shown in Fig. 7 and the unmanned robot in Fig. 8.

Figure 7: Environmental perception system

Figure 8: Unmanned system

While the unmanned system is moving, the camera captures images, which are transmitted to the embedded AI device. PSENet then processes these images in real time, and the results are fed back to the unmanned system. Fig. 9 shows some results of semantic segmentation and scene recognition on continuous road scenes.
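A minimal sketch of that capture-infer-feedback loop (our illustration: the camera index, the preprocessing, and the PSENetSketch stand-in model are all assumptions, not the deployed system):

```python
import cv2
import torch

model = PSENetSketch().eval()      # stand-in for the deployed PSENet
cap = cv2.VideoCapture(0)          # camera index is an assumption

with torch.no_grad():
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        # BGR -> RGB, HWC -> CHW, [0, 255] -> [0, 1]; sizes are illustrative.
        x = torch.from_numpy(frame[..., ::-1].copy()).permute(2, 0, 1).float() / 255
        preds = model(x.unsqueeze(0))   # one pass yields all component outputs
        # ... vote over `preds` and feed the result back to the vehicle controller
cap.release()
```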

Figure 9: Results in the real world

Semantic segmentation. Across the road scenes, a single neural network segments large classes such as road, sky, tree, building, and car reasonably well, but considerable noise remains within each large class. PSENet synthesizes the results of multiple networks and effectively mitigates this problem. For small classes such as bicyclist, fence, and column, both the single network and PSENet produce coarse segmentation results. Overall, compared with a single neural network, PSENet achieves smoother boundaries and higher accuracy.

Classification. The road scenes are divided into 4 categories: experimental area, garden, parking lot, and main road. The unmanned system recognizes the different scenes in order to perform different operations. For example, when passing through the parking lot, it can perform a parking operation; when passing along the main road, it needs to keep to the right and increase its speed appropriately. In these road scenes, PSENet recognizes the category of each scene well.

    4.3.2 Classification of Tobacco Leaf State during Curing

Intelligent baking requires identifying the drying degree of the tobacco leaves and adjusting the temperature accordingly, so accurately identifying the current stage of the tobacco leaf is a key technology. The intelligent baking system is shown in Fig. 10. A CCD camera captures images of the tobacco leaves, the images are transmitted to the processor, and the results are fed back to the controller to adjust the temperature. State discrimination results for tobacco leaves are shown in Table 6 and Fig. 11.

Figure 10: Intelligent baking system

Table 6: Classification results of tobacco leaf state

Figure 11: State discrimination results of tobacco leaves

The temperature must be adjusted for tobacco leaves under different baking conditions. The leaves are divided into 10 states, and each state corresponds to a different baking temperature; intelligent baking control is realized by judging the current leaf state and adaptively adjusting the temperature. PSENet is used for state recognition over 2217 continuous images, and results in the real world show that it can distinguish each state of the tobacco well. As an early method, AlexNet reaches 95.2% accuracy. ResNet introduces the residual structure and greatly deepens the network, obtaining better performance and improving the accuracy from 97.2% to 98.7%. Based on Inception V3, Xception simplifies the convolution computation by replacing standard convolution with a combination of 1×1 convolution and separable convolution; for tobacco leaf state classification it produces 98.3%. MobileNet, a lightweight neural network, produces accuracy similar to Xception. Xception and MobileNet are less accurate than ResNet but infer faster. As an ensemble architecture, PSENet achieves 99.6% accuracy, outperforming the other algorithms. The last four stages are difficult to classify because of their similar appearance; PSENet overcomes this problem and achieves stable, accurate classification.

    5 Conclusions

We present a new lightweight neural network ensemble architecture that compresses the parallel ensemble architecture. It divides the parallel structure into a fully shared module, partially shared modules, and independent modules. The fully shared module is shared by all component neural networks and lets them all run simultaneously from a single input, while the independent module keeps the component networks independent of each other and preserves the ensemble ability. Tests on Camvid and CIFAR-10 show that the proposed architecture not only decreases the parameters but also significantly accelerates inference while keeping an ensemble ability similar to the parallel architecture. This reveals that partially shared layers can also maintain the independence of the component networks and offer a clear advantage over the parallel ensemble structure. In the real world, PSENet handles semantic segmentation and scene recognition well. Future work will mainly focus on how to determine the relationships between the various modules, such as the number of shared components.

Funding Statement: This work is supported by the National Key Research and Development Program of China under Grant 2019YFC1511401, the National Natural Science Foundation of China under Grants 62173038 and 61103157, the Science Foundation for Young Scholars of the Tobacco Research Institute of the Chinese Academy of Agricultural Sciences under Grant 2021B05, and the Key Scientific and Technological Research and Development Project of China National Tobacco Corporation under Grant 110202102007.

Conflicts of Interest: The authors declare that they have no conflicts of interest to report regarding the present study.
