
    Asymmetric Loss Based on Image Properties for Deep Learning-Based Image Restoration

    Computers, Materials & Continua, December 2023

    Linlin Zhu, Yu Han, Xiaoqi Xi, Zhicun Zhang, Mengnan Liu, Lei Li, Siyu Tan and Bin Yan

    Henan Key Laboratory of Imaging and Intelligent Processing, PLA Strategic Support Force Information Engineering University, Zhengzhou, 450000, China

    ABSTRACT Deep learning techniques have significantly improved image restoration tasks in recent years. As a crucial component of deep learning, the loss function plays a key role in network optimization and performance enhancement. However, the currently prevalent loss functions assign equal weight to each pixel during loss calculation, which hampers the ability to reflect the roles of different pixels and fails to fully exploit the image's characteristics. To address this issue, this study proposes an asymmetric loss function based on the image and data characteristics of the image restoration task. This novel loss function adjusts the weight of the reconstruction loss based on the grey value of each pixel, thereby effectively optimizing network training by differentially utilizing the grey information from the original image. Specifically, we calculate a weight factor for each pixel based on its grey value and combine it with the reconstruction loss to create a new loss function. This ensures that pixels with smaller grey values receive greater attention, improving network recovery. To verify the effectiveness of the proposed asymmetric loss function, we conducted experiments on the image super-resolution task. The experimental results show that the model with asymmetric loss weights improves all metrics of the processing results without increasing the training time. In the typical super-resolution network SRCNN, introducing asymmetric weights improves the peak signal-to-noise ratio (PSNR) by up to about 0.5% and the structural similarity index (SSIM) by up to about 0.3%, and reduces the root-mean-square error (RMSE) by up to about 1.7%, with essentially no increase in training time. In addition, we further tested the performance of the proposed method on the denoising task to verify its potential applicability to image restoration in general.

    KEYWORDS Deep learning; image restoration; loss function; image properties; super resolution; image denoising

    1 Introduction

    Artificial Intelligence (AI) technology has developed significantly in recent decades and achieved success in many fields [1,2] (e.g., robotics, regression analysis, pattern recognition). Deep learning, as one of the representative techniques of AI, has developed rapidly in the field of computer vision with the improvement of computational resources, especially in image processing tasks (e.g., denoising [3], super-resolution [4], segmentation [5], and style transfer [6]), where it has demonstrated good processing results. To further improve the accuracy of processing results and the effectiveness of information expression in deep learning methods, a large number of promising network processing models have been proposed. To optimize designs for specific problems, researchers have developed new network architectures [7,8]. Meanwhile, to enhance the interpretability of neural networks, many studies have explored the internal mechanisms of neural networks and their inherent limitations, for example by developing reverse processing networks [9] or trying to fool networks with specific inputs [10].

    The essential components of deep neural networks include forward propagation, backpropagation, optimization, the activation function and the loss function [11–14]. Forward propagation passes inputs from one layer to the next until an output is produced. Backpropagation is an iterative process that determines the contribution of each neuron to the output error via the chain rule and adjusts the weights of each neuron throughout the network. Optimization techniques reduce the errors measured during backpropagation; algorithms such as gradient descent and stochastic gradient descent can be used to optimize the network. The activation function transforms inputs into outputs that the neural network can process. The loss function measures the neural network's performance after backpropagation and optimization. Combining these components allows deep learning to accept complex inputs and generate accurate predictions for various tasks. The loss function measures the predictive power of the network model based on the network's predictions. It is a crucial component of deep learning models, as it quantifies the difference between the model's predictions and the actual values. The correct choice of loss function is vital for the effective optimization of deep learning models, since it directly impacts the effectiveness of model training [15].

    Image recovery is a crucial area of research in computer vision. It involves restoring the original image information from a damaged image, which is essential in various practical applications such as medical image processing, image enhancement, and video compression [16]. Traditional image recovery methods rely on mathematical models but often struggle with noise and distortion in complex scenes. In contrast, deep learning techniques can automatically learn advanced feature representations through end-to-end models and have significantly improved image restoration tasks. However, for delicate image recovery tasks, such as medical image processing, the credibility of deep learning recovery results still limits their popularization and application [17]. When utilizing deep networks for image restoration, a loss function quantifies the difference between a low-quality or corrupted image and the original labelled image. A suitable loss function is vital for improving the network's ability to recover high-quality images from low-quality inputs [18]. Numerous studies have explored different loss functions for image recovery using neural networks. Commonly used loss functions include the Mean Square Error (MSE) and the Mean Absolute Error (MAE), which have shown good performance in various image-processing tasks. For instance, Wang et al. [19] and Zhang et al. [20] employed MSE and MAE, respectively, for image super-resolution. However, these loss functions also have limitations. For example, the MSE loss function may diverge significantly from human-perceived image quality in tasks related to image quality [21]. This is because the MSE loss function makes several assumptions, such as the independence of noise from local image characteristics. The human visual system (HVS), however, is sensitive to noise depending on local brightness, contrast, and structure [22]. In general, selecting an appropriate loss function for a specific deep learning task is challenging, and there is no universal selection scheme. The choice depends on the nature of the task and the type of model being used. Using a conventional loss function for training imposes equal weight on each pixel, making it difficult to distinguish edge regions. Consequently, suppressing visual artifacts in the network's output image without compromising the true details becomes a key concern.

    With the development of deep learning, many researchers have constructed a variety of super-resolution and denoising network models with different frameworks, and these models have achieved relatively good results on super-resolution and denoising problems. However, during network training and processing, in addition to improving the effective utilization of features extracted in the middle of the network, the training process can be optimized towards the objective so that the information contained in the labels is fully utilized, improving the processing capability of the trained network. Especially for different imaging modes, in which the physical principles that cause image degradation differ, it is difficult to find a single image recovery mode applicable to all scenarios, particularly for low-contrast images.

    The use of asymmetric loss functions has been proposed in multi-label classification tasks, especially when the data in each category are unbalanced [23,24]. In 2023, Tang et al. [25] constructed a triple representation for the clustering problem, which was further enhanced by different feature constraints for unbalanced data. The asymmetric loss function solves this problem by giving different weights to the losses of different categories. Vogels et al. [26] proposed a modular convolutional architecture for denoising rendered images, extending the functionality of the kernel prediction network by using a set of asymmetric losses. Liu et al. [27] proposed an asymmetric exponential loss function to address sample bias and dataset bias in the crack segmentation task. Depending on the needs of the task, the weights of the loss function can be adjusted to balance the importance of different categories. An asymmetric loss function can overcome the drawback of the original network processing all information equally and improve the network's results.

    In this paper, we design an asymmetric loss function based on the characteristics of image processing tasks. The loss function considers the greyscale information of each pixel in the image and balances the pixel information of different greyscale values by applying weights to the original loss. Our method focuses on improving the learning process for pixel locations with poor prediction results in the deep learning model. This is achieved by applying dynamic weights to individual pixels.

    This paper uses the models obtained with the MAE and MSE loss functions as comparison models. During processing, the loss function is further optimized according to the data characteristics and features of the original image. We change the asymmetry of the loss function by imposing different weight values. We tested the proposed loss in different image processing tasks, and the experimental results show that the proposed asymmetric loss can improve image processing quality. At the same time, the proposed loss can be efficiently fused into other network models.

    The main innovations of this paper are as follows:

    1) This paper designs an asymmetric loss function based on the characteristics of the image itself. Assigning different weights to pixels effectively improves the network's processing effect.

    2) The method proposed in this paper allows for quick implementation of the asymmetric loss function without complicated configuration and adjustment. It has the advantage of being plug-and-play, making it more convenient and efficient in practical applications.

    3) The method presented in this paper has been tested on various image-processing tasks. It has consistently shown improvement in the network's processing effect and demonstrates good robustness and applicability.

    2 Method

    The different levels of greyscale information in an image reflect the brightness of various areas within the image. In a greyscale image, each pixel has a unique greyscale value that indicates the brightness level of that pixel's location. A higher greyscale value corresponds to a lighter pixel in the image. Therefore, analyzing the greyscale values in an image provides insight into its overall brightness distribution and the differences in brightness between regions. When training a deep network with a conventional loss function, each pixel carries the same weight, leading to similar absolute deviations across pixels of different greyscale values. However, it is essential to note that equal deviations are perceived very differently at greyscale values of different magnitudes. The optimization effect of individual pixels can be adjusted by assigning weights to address this issue. In this study, we propose an asymmetric loss function that considers the different levels of greyscale. By increasing the loss weights for pixels with small greyscale values, we aim to enhance the image processing effect. These weight adjustments optimize the network parameters and improve the overall performance of the image processing system.

    Traditional losses treat each pixel equally, i.e., each pixel has the same weight of 1. The loss currently applied in image processing thus treats every pixel identically. The total loss Err is the sum of the deviation values err_i of all pixels, Err = Σ_i err_i. Here, the deviation value err_i is the product of the loss of each pixel and the weight (in this case, 1) of that pixel. Under this assumption, the difference between the final output of the network optimized with this loss function and the label should be the minimum total loss Err_min; therefore, the average deviation err per pixel can be calculated by dividing this minimum total loss by the total number of pixels N, i.e., err = Err_min/N. However, in an actual image, the greyscale values of different pixels are not consistent. This leads to a significant difference in how these pixels appear on the image after incurring the same deviation err. Specifically, this is because equal absolute changes correspond to different proportional changes in pixel grey values, which can alter the visual perception of the image, especially in low-contrast regions. In these regions, small changes in grey scale can have a significant impact on the overall visual appearance of the image.
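    The uniform weighting described above can be sketched as follows (a minimal illustration of our own, not the paper's released code): every per-pixel deviation err_i enters the total loss Err with weight 1, so the average deviation is simply Err/N regardless of each pixel's grey value.

```python
import numpy as np

# Minimal sketch (not the paper's code): a conventional loss gives every
# pixel the same weight of 1, so the total loss Err is the plain sum of
# the per-pixel deviations err_i, and the average deviation is Err / N.
def uniform_loss(pred, label):
    err = np.abs(pred - label)       # per-pixel deviation (MAE-style)
    total = err.sum()                # Err = sum_i err_i, all weights = 1
    return total, total / err.size   # (Err, average deviation Err / N)

pred = np.array([[0.1, 0.5], [0.9, 0.3]])
label = np.array([[0.0, 0.5], [1.0, 0.5]])
total_err, mean_err = uniform_loss(pred, label)
# A 0.1 deviation at a dark pixel and at a bright pixel contribute equally.
```

Note that a deviation of 0.1 contributes identically whether it occurs at a dark or a bright pixel, which is precisely the behaviour the asymmetric weighting below is designed to change.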

    To enhance the reliability of information in low-contrast regions, asymmetric weights can be used. During the loss calculation, additional weight is given to the low-contrast regions, so that even a tiny deviation produces a more significant loss. As a result, in the final output, the magnitude of the information difference at pixels with small grey values decreases, enhancing the reliability of the information in low-contrast regions. Specifically, introducing asymmetric weights changes the degree of deviation of each pixel, making the relative deviations of the grey values of different pixels closer to each other instead of merely being numerically similar. In this way, the whole image can be optimized more effectively. Fig. 1 illustrates the flowchart for constructing the asymmetric loss weights.

    To solve this problem, an asymmetric loss function can be introduced to adjust the optimization effect for pixels with different grey values. The asymmetric loss function can be designed based on the degree of deviation between the output value and the actual value. For pixels with smaller grey values, the corresponding loss weight can be increased; conversely, for pixels with larger grey values, the corresponding loss weight can be decreased. This approach better reflects the different importance of pixels with different grey values and thus improves the optimization of the network parameters. Fig. 2 shows the flowchart of asymmetric loss weight computation based on a specific image.

    Figure 1: The flowchart of the proposed scheme

    First, for the given images I_input and I_label with resolution H×W×3, grey value normalization is performed to map the grey range into [0,1].

    We designed an asymmetric weight mask based on the normalized label image I_l-n to fully utilize the valid information in the ground-truth labels. This weight mask converts the original loss into an asymmetrically weighted loss by multiplying it with a weighting function M. This pixel-level mapping, M ∈ R^(H×W×1), is constructed from the greyscale information of the ground-truth image. Each element M(i,j) represents the weight applied to the corresponding pixel coordinate. In calculating the weight mask for the normalized label image, we specifically focus on pixels with smaller grey values. These pixels tend to have a larger impact on the results, so we assign them higher loss weights. Conversely, for pixels with larger grey values, we reduce their loss weights. This modification allows the network to prioritize the pixels that have a greater impact on the results.

    We use the reciprocal of each pixel of I_l-n as the weighting benchmark. Since some pixels have zero grey values, directly taking the reciprocal may result in a divide-by-zero error. To avoid this, we add a minimal value (1×10⁻¹⁰) to each pixel of the labelled image before calculating its loss weight. This tiny increment ensures that no divide-by-zero error occurs and maintains the accuracy and stability of the computation. Because grey values close to zero become very large after taking the reciprocal, directly using this value as a weighting factor may cause the network to pay too much attention to these pixels. To solve this problem, we apply the hyperbolic tangent function (Tanh) to correct the reciprocal values. The Tanh function is a commonly used nonlinear function centred at the origin that compresses larger values into the range between -1 and 1. By passing the reciprocals through the Tanh function, we make these values numerically more compact, thus better reflecting the relative importance of the pixels.
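    A minimal sketch of this weight construction (our reading of the description above; the function and variable names are ours, not from the paper): take the reciprocal of the normalized label with a 1×10⁻¹⁰ guard against division by zero, then compress the result with Tanh.

```python
import numpy as np

def asymmetric_weight_mask(label_norm, eps=1e-10):
    """Sketch of the weight mask described above (names are ours).

    label_norm: label image normalized to [0, 1].
    Returns per-pixel weights in (0, 1]: near 1 for dark pixels,
    smaller for bright ones.
    """
    recip = 1.0 / (label_norm + eps)  # large where grey values are small
    return np.tanh(recip)             # compress extreme reciprocals

label = np.array([0.0, 0.05, 0.5, 1.0])
mask = asymmetric_weight_mask(label)
# Dark pixels (grey ~ 0) get weight ~ 1; the brightest pixel gets ~ tanh(1).
```

Without the Tanh compression, a zero-valued pixel would receive a weight of 10¹⁰ and dominate the entire loss; with it, the darkest pixels saturate at a weight of 1.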

    We input the normalized I_i-n into the model to get the output F(I_i-n) after network processing. Next, we calculate the MSE or MAE between the output and the label as the basic loss function.

    The expression for the MSE loss function is L_MSE = (1/N) Σ_j [F(I_i-n)_j − (I_l-n)_j]², where j indexes the N pixels.

    The expression for the MAE loss function is L_MAE = (1/N) Σ_j |F(I_i-n)_j − (I_l-n)_j|.

    Finally, the corrected weight mask is applied element-wise to the original loss to obtain the asymmetrically weighted loss.
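    Putting the steps together, an end-to-end sketch of the weighted loss (our own composition of the pieces described above, not the authors' implementation): build the mask from the normalized label, compute the per-pixel MSE or MAE, and average their element-wise product.

```python
import numpy as np

def asymmetric_loss(pred, label_norm, base="mse", eps=1e-10):
    # Weight mask from the normalized label: higher weight for darker pixels.
    mask = np.tanh(1.0 / (label_norm + eps))
    if base == "mse":
        per_pixel = (pred - label_norm) ** 2
    else:  # "mae"
        per_pixel = np.abs(pred - label_norm)
    return float((mask * per_pixel).mean())

label = np.array([0.1, 0.9])
err_dark = asymmetric_loss(label + np.array([0.1, 0.0]), label)
err_bright = asymmetric_loss(label + np.array([0.0, 0.1]), label)
# The same 0.1 error is penalized more at the dark pixel than the bright one.
```

Because the mask depends only on the label, it can be precomputed once per image and multiplied into any base loss, which is what makes the scheme plug-and-play.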

    Based on the above description, Table 1 gives the pseudo-code of the method in this paper.

    Table 1: Pseudo-code of the method

    3 Results

    3.1 Evaluation Metrics

    In this paper, the numerical difference between the processed image and the ideal reference image is used to evaluate image quality quantitatively. To quantitatively analyze the quality of images corrected by the proposed method, the Peak Signal-to-Noise Ratio (PSNR), the Structural Similarity Index Measure (SSIM), and the Root Mean Square Error (RMSE) are used to measure the accuracy of the output results. The expressions for PSNR, SSIM and RMSE are given below.

    Here f_Ref denotes the ideal reference image, f denotes the result image after processing by the different methods, i denotes the index of each pixel in the image, and N denotes the total number of pixels in the image. α>0, β>0, and γ>0 represent the weights of the luminance l, contrast c and structural information s terms in the SSIM calculation, respectively. u_fRef and u_f are the mean values of the ideal reference image f_Ref and the result image f, respectively, which reflect the brightness information of the images; σ_fRef and σ_f are the standard deviations of f_Ref and f, respectively, which reflect the contrast information of the images; σ_fRef,f is the correlation coefficient of f_Ref and f, which reflects the similarity of their structural information. C1, C2, and C3 are constants greater than zero, used to prevent instability when a denominator is very small or zero.

    The closer the RMSE is to zero, the smaller the numerical difference between the processing result and the ideal labelled image. The higher the quality of the processing result, the larger the PSNR and SSIM values.
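    For reference, the standard definitions of RMSE and PSNR used above can be sketched as follows (these are the textbook formulas, not code from the paper; SSIM is omitted because it requires local image statistics):

```python
import numpy as np

def rmse(ref, img):
    # Root mean square error between reference and processed image.
    return float(np.sqrt(np.mean((ref - img) ** 2)))

def psnr(ref, img, max_val=1.0):
    # Peak signal-to-noise ratio in dB for images normalized to [0, max_val].
    mse = np.mean((ref - img) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return float(10.0 * np.log10(max_val ** 2 / mse))

ref = np.zeros(4)
img = np.full(4, 0.1)
# rmse(ref, img) is 0.1 and psnr(ref, img) is 20 dB.
```
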

    3.2 Super-Resolution Task

    First, we verify the effectiveness and applicability of the proposed method on the super-resolution problem in deep learning-based image restoration. SRCNN [28] and EDSR [29] are chosen as the baseline models for the super-resolution task. MAE and MSE are the baseline losses during network training. Network training is completed and tested by introducing the asymmetric loss weights designed in this paper on top of the MAE and MSE losses, respectively. The test dataset is shown in Fig. 3. None of the images involved in the test are included in the training dataset.

    Figure 3: The ground truth of the images used for super-resolution task testing

    3.2.1 SRCNN

    Referring to the parameter settings in the literature [28], we set up the network and selected the BSD300 [30] dataset as the training dataset. The super-resolution results for the test dataset were obtained from the four sets of trained networks. We evaluated the test results using the PSNR, SSIM and RMSE metrics and calculated the corresponding values. The results are shown in Table 2. The metric values show that adding the asymmetric loss weights proposed in this paper improves the super-resolution performance of the network.

    Table 2: Quantitative results (PSNR/SSIM/RMSE) of the test dataset processing results based on SRCNN with different losses

    To comprehensively assess the effectiveness of the proposed asymmetric loss weights in enhancing the network's processing results, we processed the calculated metric values. First, we used the metric values of the network outputs based on the MAE and MSE losses as the benchmark. We plotted a radar chart to visualize the numerical results for the 10 test images. In Fig. 4, the light blue and orange lines represent the evaluation values of the super-resolution results of the network models trained with the MAE and MSE losses, respectively, for the 10 images in the test set. The red line represents the improvement in the evaluation values of the network model trained with the asymmetric weighted MAE loss compared to the model trained with the MAE loss. The green line represents the improvement in the evaluation values of the network model trained with the asymmetric weighted MSE loss compared to the model trained with the MSE loss.

    From Fig. 4, we can see that the model with asymmetric weights improves the metric values of the super-resolved test images. By using asymmetric weights, different weights can be assigned to different regions during the super-resolution process, better preserving the image's detail and texture features. Compared with traditional uniform weighting, asymmetric weighting captures important details in the image more effectively and reduces noise and distortion, resulting in better metric values for the super-resolved image.

    Fig. 5 shows the output results after training is completed. To facilitate comparison of the outputs with and without the asymmetric weights, we place the results based on the same base loss in neighbouring columns. Comparing the whole images, we find that the super-resolution results of the network with asymmetric weights are visually closer to the label image. To examine this result more closely, we enlarged the region of interest marked in red in Fig. 6. In Fig. 6, red arrows mark the locations where the different results are inconsistent. Through this comparison, we find that after adding the asymmetric weights, the network improves the edge clarity of the super-resolved images. This means the network better captures the detailed information in the image, thereby improving the contrast of the restored image.

    By introducing asymmetric weights during network training, we successfully improved the super-resolution performance of the network. This makes the output visually closer to the label image and improves the edge clarity and contrast of the image. These improvements are significant for image processing tasks, as they help improve image quality and application value.

    Figure 5: The output images generated by SRCNN models based on different loss functions

    Figure 6: The corresponding magnified images of the regions of interest marked in red in Fig. 5

    3.2.2 EDSR

    Referring to the parameter settings in the literature [29], we set up the network and selected the BSD300 dataset as the training dataset. The super-resolution results for the test dataset were obtained from the four sets of trained networks. We evaluated the test results using the PSNR, SSIM and RMSE metrics and calculated the corresponding values. The results are shown in Table 3. To better analyze and interpret the impact of the proposed method on network performance, we visualized the data in Table 3.

    Table 3: Quantitative results (PSNR/SSIM/RMSE) of the test dataset processing results based on EDSR with different losses

    In Fig. 7, the light blue bars represent the metric values of the results of the 10 images after super-resolution by the network model trained with the MAE loss, and the orange bars represent those of the model trained with the MSE loss. The red line represents the difference between the metric values of the super-resolution results of the model with asymmetric weights on the MAE loss and those obtained with the plain MAE loss over the 10 test images. The green line represents the corresponding difference for the MSE loss with and without asymmetric weights.

    In the PSNR, SSIM and RMSE subplots of Fig. 7, we can see that introducing asymmetric weights improves the evaluated output metrics, i.e., the obtained PSNR and SSIM values are higher than the baseline values, while the RMSE values are lower. This result shows that asymmetric loss weights guide the model to pay more attention to greyscale information, which helps improve its performance.

    3.3 Denoising Task

    To verify the applicability of the proposed method to the image restoration problem more broadly, the method is further tested on deep learning-based denoising models. The denoising problem was addressed using the DnCNN [31] and DPHSIR [32] models as baselines. MAE and MSE were used as the baseline losses during network training. Network training was completed and tested by incorporating the asymmetric loss weights introduced in this paper. The test dataset is shown in Fig. 8. None of the images involved in the test are included in the training dataset.

    3.3.1 DnCNN

    Referring to the parameter settings in the literature [31], we set up the network and selected the DIV2K [33] dataset as the training dataset. The DIV2K dataset is a recently proposed high-quality (2K resolution) image dataset for image restoration tasks.

    Figure 7: Quantitative difference of the results of the test dataset processed by EDSR based on different losses. The light blue and orange bar charts represent the metric values of the results of the 10 images after super-resolution by the MAE loss-based and MSE loss-based network models, respectively. The red and green line graphs represent the difference between the metric values of the super-resolution results of the model after testing the 10 images with asymmetric weights introduced for the MAE loss and for the MSE loss, respectively

    The denoising results for the test dataset were obtained from the four sets of trained networks. We evaluated the test results using the PSNR, SSIM and RMSE metrics and calculated the corresponding values. The results are shown in Table 4.

    Table 4: Quantitative results (PSNR/SSIM/RMSE) of the test dataset processing results based on DnCNN with different losses

    For convenience of observation and analysis, we further processed and visualized the obtained metric values, plotting a radar chart to display the numerical results for the 10 images. In Fig. 9, the light blue and orange lines represent the evaluation values of the denoising results of the network models trained with the MAE and MSE losses, respectively, for the 10 images in the test set. The red line represents the improvement in the evaluation values of the model trained with the asymmetric weighted MAE loss compared to the model trained with the MAE loss. The green line represents the improvement in the evaluation values of the model trained with the asymmetric weighted MSE loss compared to the model trained with the MSE loss.

    Analyzing the radar charts shows that introducing the proposed asymmetric loss weights significantly improves the metric values of the processing results compared to the plain MAE and MSE results. This indicates that asymmetric loss weighting significantly improves the network's processing effect.

    3.3.2 DPHSIR

    Referring to the parameter settings in the literature [32], we set up the network and selected the DIV2K dataset as the training dataset. We performed the same processing on the test results of the DPHSIR network as on the outputs of the DnCNN network in Section 3.3.1. The calculated PSNR, SSIM, and RMSE results are shown in Table 5.

    Table 5: Quantitative results (PSNR/SSIM/RMSE) of the test dataset processing results based on DPHSIR with different losses

    Figure 9: Quantitative result enhancement values of the test dataset processing results based on DnCNN with different losses. The light blue and orange bar charts represent the metric values of the results of the 10 images after denoising by the MAE loss-based and MSE loss-based network models, respectively. The red and green line graphs represent the difference between the metric values of the denoising results of the model after testing the 10 images with asymmetric weights introduced for the MAE loss and for the MSE loss, respectively

    Fig. 10 gives the radar plots of the PSNR, SSIM and RMSE metrics for the test data results. In Fig. 10, the light blue and orange lines represent the evaluation values of the denoising results of the network models trained with the MAE and MSE losses, respectively, for the 10 images in the test set. The red line represents the improvement in the evaluation values of the model trained with the asymmetric weighted MAE loss compared to the model trained with the MAE loss. The green line represents the improvement in the evaluation values of the model trained with the asymmetric weighted MSE loss compared to the model trained with the MSE loss.

    In the PSNR, SSIM and RMSE subgraphs of Fig. 10, we observed that the results of the networks trained with the asymmetric-weighted losses are better than those of the networks trained without asymmetric weights. This indicates that the asymmetric loss weight proposed in this paper has a positive impact on the loss calculation and network optimization processes.

    Asymmetric weights can also help the network better account for the importance of pixels with different greyscale values. Traditional loss functions implicitly treat every pixel as equally important regardless of its greyscale value, which is rarely optimal in practice. Introducing asymmetric weights therefore helps the network distinguish pixels with different greyscale values, achieving better optimization during training.

    More precisely, introducing asymmetric weights adjusts how much attention the network pays to different pixel points during training, enabling it to better focus on the essential areas of the image and thereby improving its learning efficiency. Moreover, thanks to the design of the asymmetric weights, the network can better adapt to various image processing tasks, including super-resolution and denoising.


    4 Discussion

    The asymmetric loss function in this paper is designed for deep network-based image regression problems. Image regression, which predicts corresponding output values from an input image, is widely used in fields such as image super-resolution and image denoising. The asymmetric loss function improves the network's processing results by assigning different weight values to different pixel points. Traditional uniform loss functions give all pixel points the same weight and therefore often fail to exploit the image information fully: the pixels of an image carry unequal amounts of information, and those carrying more critical information should receive greater weight.

    A deep network is a multilayer neural network with strong expressive ability and generalization performance, and it is increasingly widely used in image processing. Deep networks can recover image information well, but subsequent processing tasks place ever higher demands on recovery accuracy and detail. The rich information contained in the ground-truth labels can improve network training, yet the widely used loss functions assign the same weight to every pixel point, making it difficult to reflect the different roles of different pixel points.

    To solve this problem, this paper proposes an asymmetric loss function. The asymmetric loss function differentiates the weights of different pixel points according to the characteristics of the ground-truth labels and the needs of the processing task, thereby better optimizing network training. Specifically, the asymmetric loss function weights the loss of each pixel point so that pixel points carrying more important information occupy a larger share of the loss calculation. This encourages the network to pay more attention to the pixel points with more critical information during training, improving the processing results.
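    The per-pixel weighting described above can be sketched as follows. This is a minimal illustration only: the weighting formula and the parameter `alpha` are assumptions for demonstration, not the paper's exact formulation; it merely shows how pixels with smaller grey values (normalized to [0, 1]) can receive larger weights in an MAE-style loss.

```python
import numpy as np

def asymmetric_weighted_mae(pred, target, alpha=0.5):
    """Illustrative asymmetric per-pixel MAE (not the paper's exact formula).

    Pixels with smaller grey values in the ground-truth image receive
    larger weights; `alpha` controls how strongly dark pixels are favored.
    Grey values are assumed normalized to the range [0, 1].
    """
    pred = np.asarray(pred, dtype=np.float64)
    target = np.asarray(target, dtype=np.float64)
    # Weight factor: 1 for the brightest pixels, up to (1 + alpha) for the darkest.
    weights = 1.0 + alpha * (1.0 - target)
    # Weighted mean absolute error over all pixels.
    return float(np.mean(weights * np.abs(pred - target)))
```

    With this sketch, an identical absolute error of 0.1 costs more at a dark ground-truth pixel (weight 1.5 when the target grey value is 0) than at a bright one (weight 1.0 when the target grey value is 1), which is the intended asymmetry.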

    5 Conclusion

    This paper uses an asymmetric loss function to solve the deep network-based image regression problem.In image processing,the image regression problem is an important task that aims to predict the corresponding output values based on a given image.However,the traditional loss function has some limitations in dealing with such problems and cannot fully explore the image information.

    The asymmetric loss function provides a new way to address these problems. It differentiates pixel points by assigning them different weight values: in the implementation, each pixel point receives a weight according to its importance. During training, the asymmetric loss function can therefore make better use of the image information, improving the network's processing results and training optimization.

    In order to verify the effectiveness of the asymmetric loss function, we conducted experiments on image super-resolution and image denoising tasks. The results show that deep network models based on the asymmetric loss function exhibit significant advantages in these tasks: they better preserve the image's detailed information, improve its visual quality, and make subsequent analysis and processing easier.

    In addition, the asymmetric loss function proposed in this paper is realized mainly by setting weight values. This design makes it easy to combine with other loss functions to form more powerful losses, and the idea can be extended to the design of loss functions for other image-processing or non-image-processing tasks.

    In the future, we can further study more complex and diverse image properties and explore other effective designs for asymmetric loss functions. In computer vision, for example, more feature extraction methods and deep learning models could be introduced to improve the quality of image recovery, and different asymmetric loss functions could be designed and optimized for different tasks and datasets to suit the needs of different scenarios. Beyond computer vision, asymmetric loss functions may also be applied to other fields, such as natural language processing and speech recognition. Of course, applying them elsewhere requires considering their applicability and feasibility: the appropriate asymmetric loss function must be chosen according to the specific task and the characteristics of the dataset, and validated through corresponding experiments. At the same time, attention must be paid to issues such as the interpretability and robustness of the model to ensure its stability and reliability.

    Acknowledgement:None.

    Funding Statement:This work was supported by the National Natural Science Foundation of China(62201618).

    Author Contributions: Conceptualization, L.Z. and Y.H.; methodology, Y.H.; software, L.Z.; validation, L.Z., Y.H. and X.X.; formal analysis, L.Z. and Y.H.; investigation, L.Z.; resources, B.Y.; data curation, Z.Z., M.L. and X.X.; writing-original draft preparation, L.Z. and Y.H.; writing-review and editing, S.T., L.L., B.Y. and X.X.; visualization, L.Z., Z.Z. and M.L.; supervision, B.Y.; project administration, X.X.; funding acquisition, Y.H. and L.L. All authors reviewed the results and approved the final version of the manuscript.

    Availability of Data and Materials:The data and the code used for the manuscript are available for researchers on request from the corresponding author.

    Conflicts of Interest:The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
