
    Defocus Blur Segmentation Using Genetic Programming and Adaptive Threshold

2022-03-14 09:24:18 Muhammad Tariq Mahmood
Computers, Materials & Continua, March 2022

    Muhammad Tariq Mahmood

Future Convergence Engineering, School of Computer Science and Engineering, Korea University of Technology and Education, Cheonan, 31253, Byeongcheon-myeon, Korea

Abstract: Detection and classification of the blurred and non-blurred regions in images is a challenging task due to the limited available information about the blur type, scenario, and level of blurriness. In this paper, we propose an effective method for blur detection and segmentation based on the transfer learning concept. The proposed method consists of two separate phases. In the first phase, a genetic programming (GP) model is developed that quantifies the amount of blur for each pixel in the image. The GP model uses multi-resolution features of the image and provides an improved blur map. In the second phase, the blur map is segmented into blurred and non-blurred regions by using an adaptive threshold. A model based on a support vector machine (SVM) is developed to compute the adaptive threshold for the input blur map. The performance of the proposed method is evaluated using two different datasets and compared with various state-of-the-art methods. The comparative analysis reveals that the proposed method performs better than the state-of-the-art techniques.

    Keywords: Blur measure; blur segmentation; sharpness measure; genetic programming; support vector machine

    1 Introduction

Generally, blur compromises the visual quality of images, but sometimes it is induced deliberately to give an aesthetic impression or a graphical effect. Blur can be caused by the limited depth of field of the lens, wrong focus, and/or relative movement of the object and camera. Unintentional defocus blur is considered an undesirable effect because it not only decreases the quality of the image but also leads to the loss of necessary information. Hence, automatic blur detection and segmentation play a crucial role in many image processing and computer vision applications, including forgery detection, image segmentation, object detection and scene classification, medical image processing, and video surveillance systems [1-3].

In the literature, various blur measure operators have been proposed for blur detection and segmentation. A comprehensive study and comparative analysis of a variety of blur measures is presented in [4]. Elder et al. [5] proposed a method to estimate the blur map by calculating first and second order image gradients. Lin et al. [6] suggested a closed-form matting formulation for blur detection and classification, where the regularization term is computed through the local 1D motion of the blurred object and gradient statistics. Zhang et al. [7] suggested the double discrete wavelet transform to obtain the blur kernels and to process the blurred images. Zhu et al. [8] suggested the local Fourier spectrum to calculate the blur probability for each pixel; the blur map is then estimated by solving a constrained energy function. Oliveira et al. [9] proposed a blur estimation technique based on the Radon transform and the sinc-like structure of the motion blur kernel, and then applied a non-blind deblurring algorithm to restore blurry and noisy images. Shi et al. [10] proposed a set of blur features in multiple domains. Among them, they observed that kurtosis varies between blurred and sharp regions. They also suggested the average power spectrum in the frequency domain as an eminent feature for blur detection. Finally, they proposed a multi-scale solution to fuse the features. In another work, Peng et al. [11] suggested a method to measure pixel blurriness based on the difference between the original and multi-scale Gaussian-filtered images. The blur map is then utilized to estimate the depth map. Tang et al. [12] proposed a coarse-to-fine technique for blur map estimation. First, a coarse blur map is calculated using the log-averaged spectrum of the image, and then it is updated iteratively to achieve a fine blur map using the relevant neighboring regions in the local image. Golestaneh et al. [13] exploited the variations in the frequency domain to distinguish blurred and non-blurred regions in the image. They computed the spatially varying blur by applying multi-scale fusion of the high-frequency discrete cosine transform (DCT) coefficients (HiFST). In another work, Takayama et al. [14] generated the blur map by evaluating the local blur feature ANGHS (amplitude-normalized gradient histogram span). Su et al. [15] suggested the design of a blur metric by observing the connection between image blur and the singular value distribution of a single image (SVD). Vu et al. [16] measured blur with a block-based algorithm that uses a spectral measure based on the slope of the local magnitude spectrum and a spatial measure based on maximization of local total variation (TV).

Once the blur map is generated, the next step is to segment the blurred and non-blurred regions in the input image. Elder et al. [5] applied a local scale control technique, in which the zero crossings of the second and third derivatives in the gradient image are calculated and used for segmentation. Lin et al. [6] calculated features from the local 1D motion of the blurred object and used them for regularization to segment motion and blur from the images. In another method, Zhang et al. [7] computed double discrete wavelet transform (DDWT) coefficient-based blur kernels to decouple the blurred regions from the input image. Shi et al. [10] used a graph-cut technique to segment the blurry and non-blurry regions from the blur map. Tang et al. [12] generated superpixels for segmentation using the simple linear iterative clustering (SLIC) technique, which adapts k-means clustering. Yi et al. [17] proposed a new monotonic sharpness metric based on local binary patterns, which relies on the observation that non-uniform patterns are more discriminative towards blurred regions. The segmentation is done using multi-scale alpha maps obtained through the multi-scale blur maps. Golestaneh et al. [13] set a fixed threshold empirically for the segmentation of in-focus and out-of-focus regions in the image. Takayama et al. [14] used Otsu's method [18] to obtain a threshold for each map, which is then used to segment the blurred and non-blurred regions of the image. Su et al. [15] extracted the blurred regions of the image using singular value-based blur maps. They also applied a fixed threshold to divide the in-focus and out-of-focus regions in the blurred images.

Recently, a large number of deep learning-based methods have been used for blur detection [19-23]. In [22], a convolutional neural network (CNN) based feature learning method automatically obtains a local metric map for defocus blur detection. In [20], a fully convolutional network (FCN) model utilizes high-level semantic information to learn image-to-image local blur mapping. In [23], a bottom-top-bottom network (BTBNet) effectively merges high-level semantic information encoded in the bottom-top stream and low-level features encoded in the top-bottom stream. In [21], a bidirectional residual refining network (BR2Net) is proposed that encodes high-level semantic information and low-level spatial details by embedding multiple residual learning and refining modules (RLRMs) into two branches for recurrently combining and refining the residual features. In [19], a layer-output guided network exploits both high-level and low-level information to simultaneously detect in-focus and out-of-focus pixels.

The performance of the blur segmentation phase depends heavily on the capability of the blur detection phase. Among the various blur detection methods, some perform better than others under certain conditions. Several of the most popular and effective methods use multiple resolutions of the image in their algorithms. For example, the LBP based defocus blur method [17] uses three scales with window sizes 11×11, 15×15, and 21×21 to produce three different blur maps and then integrates the three maps to get the final blur map. Similarly, HiFST [13] uses four scales to generate initial blur maps, and the final improved blur map is obtained by fusing these initial maps. However, it is very difficult to find an appropriate scale range at which a method gives the best results for an arbitrary input image. The performance of a specific blur measure also varies from image to image [4]. This means that there is no single blur measure that can perform consistently for all images taken under varying conditions.

In this paper, we propose a method for blur detection and segmentation based on machine learning approaches. The block diagram of the proposed method is shown in Fig. 1. The proposed method is divided into two phases. In the first phase, a robust GP based blur detector is developed that captures blur insight at different scales. The multi-scale resolution property is encoded into the blur measure to generate an improved blur map by fusing information at different scales through the GP technique. In the second phase, the blur map is segmented by an adaptive threshold obtained through an SVM model. The performance of the proposed method is evaluated using two different datasets, and the results are compared with five state-of-the-art methods. The comparative analysis reveals that the proposed method performs better than the state-of-the-art methods.

    Figure 1: Block diagram for the proposed method for blur detection and segmentation

The rest of the paper is organized as follows. Section 2 discusses the basics of genetic programming. Section 3 presents the details of the proposed method, including the development of the models. In Section 4, the experimental setup, results, and comparative analysis are presented. Finally, Section 5 concludes the study and provides future directions.

    2 Genetic Programming

Multi-Gene Genetic Programming (MGGP) is a variant of GP which provides a model as a linear combination of bias coefficients and multiple genes [24]. Traditional GP, in contrast, gives a model with a single gene expression. In MGGP, bias coefficients are used to scale each gene and hence play a vital role in improving the efficacy of the overall model. In MGGP symbolic regression, every prediction of the output variable is a weighted sum of the genes plus a bias term. The structure of the multi-gene symbolic regression model is shown in Fig. 2. Mathematically, the prediction of the training data is written as:

ŷ = b0 + b1 G1 + b2 G2 + … + bM GM    (1)

where b0 represents the bias term, b1, …, bM are the weights for the genes G1, …, GM, and M is the total number of genes. Let Gi be the output vector of the ith tree, of size N×1. We define T as the gene response matrix of size N×(M+1) as follows:

T = [1 G1 G2 … GM]

where 1 refers to an (N×1) column of ones used as the offset input. Eq. (1) can then be written in matrix form as:

ŷ = T b

where b represents the weights vector [b0, b1, …, bM]. The optimal weights for the initial models participating in the multi-gene individual are determined by applying the least squares method.
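As a concrete illustration, the prediction of Eq. (1) and the least-squares estimate of the weight vector b can be sketched in a few lines. This is a pure-Python sketch (the authors worked in Matlab/GPTIPS); the gene outputs G1 and G2 below are hypothetical stand-ins for the outputs of evolved GP trees.

```python
# Sketch of the multi-gene prediction y_hat = T*b (Eq. 1) and the
# least-squares weights obtained from the normal equations (T'T) b = T'y.
# G1, G2 are hypothetical gene (tree) output vectors.

def transpose(A):
    return [list(col) for col in zip(*A)]

def matmul(A, B):
    cols = list(zip(*B))
    return [[sum(x * y for x, y in zip(row, col)) for col in cols] for row in A]

def solve(A, rhs):
    # Gauss-Jordan elimination with partial pivoting (small dense systems).
    n = len(A)
    M = [row[:] + [rhs[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(n):
            if r != c:
                f = M[r][c] / M[c][c]
                M[r] = [x - f * y for x, y in zip(M[r], M[c])]
    return [M[i][-1] / M[i][i] for i in range(n)]

G1 = [1.0, 2.0, 3.0, 4.0]            # output of gene 1 on N = 4 samples
G2 = [1.0, 4.0, 9.0, 16.0]           # output of gene 2
y  = [3.0, 9.0, 19.0, 33.0]          # targets (generated as y = 1 + 2*G2)

# Gene response matrix T = [1 G1 G2], size N x (M+1).
T = [[1.0, g1, g2] for g1, g2 in zip(G1, G2)]
Tt = transpose(T)
b = solve(matmul(Tt, T), [sum(t * yi for t, yi in zip(row, y)) for row in Tt])
y_hat = [sum(t * w for t, w in zip(row, b)) for row in T]
print([round(w, 6) for w in b])      # bias and gene weights
```

Since the targets here were generated as y = 1 + 2·G2, the recovered weights come out as b ≈ [1, 0, 2].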

    Figure 2: Example of multi-gene regression model

In the experiments, individuals in the population have a gene restriction of between 1 and Gmax and an individual tree depth restriction of up to Dmax. These parameters are set to control the complexity of the evolved models. The initial population is created by generating random GP trees subject to the Gmax and Dmax constraints. During the MGGP run, individuals are probabilistically selected in each generation, and the genes in each individual are updated through crossover and mutation operations. In MGGP, a rate-based high-level crossover operator is applied, which accommodates the exchange of genetic information between individuals. It can be described through the following example. In a crossover between a parent individual consisting of 3 genes, labelled (G1 G2 G3), and a second parent individual consisting of 5 genes, labelled (G4 G5 G6 G7 G8), crossover points are randomly selected in each parent, as indicated below.

Parent 1: (G1 [G2] G3), Parent 2: (G4 [G5] G6 [G7] G8)

    The selected genes are then exchanged to produce two children for the next generation as expressed below.

Offspring 1: (G1 G3 G5 G7), Offspring 2: (G4 G6 G8 G2)
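The exchange above can be sketched as follows. This is a hedged Python illustration in which the selected gene indices are fixed, rather than drawn at random, precisely so that it reproduces the example offspring.

```python
# Sketch of the rate-based high-level crossover: a subset of genes is
# selected in each parent and the subsets are exchanged. The selected
# indices are fixed here (not random) to reproduce the example above.
import random

def high_level_crossover(p1, p2, idx1, idx2, g_max=6):
    picked1 = [p1[i] for i in idx1]
    picked2 = [p2[i] for i in idx2]
    child1 = [g for i, g in enumerate(p1) if i not in idx1] + picked2
    child2 = [g for i, g in enumerate(p2) if i not in idx2] + picked1
    # Enforce the G_max constraint by randomly discarding excess genes.
    for child in (child1, child2):
        while len(child) > g_max:
            child.pop(random.randrange(len(child)))
    return child1, child2

p1 = ["G1", "G2", "G3"]
p2 = ["G4", "G5", "G6", "G7", "G8"]
c1, c2 = high_level_crossover(p1, p2, idx1=(1,), idx2=(1, 3))
print(c1)  # ['G1', 'G3', 'G5', 'G7']
print(c2)  # ['G4', 'G6', 'G8', 'G2']
```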

The sizes of the created offspring are governed by the Gmax and Dmax constraints. If a resultant individual contains more genes than Gmax, the additional genes are randomly discarded. In order to achieve higher accuracy for a model, a robust classifier is required. In this paper, we have used logistic regression as a binary classifier. Logistic regression produces a logistic curve, which is bounded between 0 and 1 and hence can be used to predict the probability of an outcome. Mathematically, the logistic regression function is defined as:

Pq = 1 / (1 + e^(-y))

where Pq is the score for prediction and y is the output of the individual when the training data feature vector DT is fed to the individual as the input vector. The fitness function gives a measurement of the accuracy of a particular model, i.e., how well the model can solve the given problem. The fitness function plays a significant role in improving the performance of the system and hence in learning the best classifier. The fitness measure used in this paper is the area under the receiver operating characteristic (ROC) curve.
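A minimal sketch of the scoring and fitness computation, assuming the rank-statistic (Mann-Whitney) formulation of AUC; the raw individual outputs and labels below are hypothetical.

```python
# Sketch of the logistic score P_q = 1 / (1 + exp(-y)) and the AUC
# fitness, computed via the rank (Mann-Whitney) formulation.
import math

def logistic(y):
    return 1.0 / (1.0 + math.exp(-y))

def auc(scores, labels):
    # Probability that a random positive outscores a random negative.
    pos = [s for s, l in zip(scores, labels) if l == 1]
    neg = [s for s, l in zip(scores, labels) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

raw = [-2.0, -0.5, 0.5, 2.0]         # hypothetical individual outputs
labels = [0, 0, 1, 1]                # ground-truth blur labels
scores = [logistic(v) for v in raw]
print(auc(scores, labels))           # 1.0: positives fully separated
```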

    3 Proposed Method

In the first phase, a GP based model ĝ(f) is learned to detect the blurriness level of input image pixels, which generates a blur map. In the second phase, an SVM based classifier is developed to predict the best threshold for the blur map. Finally, the segmented map Mclfsd(x, y) is computed by applying the threshold.

    3.1 Blur Detection

In this section, a GP based blur detection model is developed that generates a blur map for a partially blurred image. This section consists of two parts: (a) preparation of training data for GP, and (b) learning the best model from GP.

    3.1.1 Data Preparation for GP Model

We prepare the training data from a random image I(x, y) and its ground truth image Igt(x, y). A feature vector f = (f1, f2, f3, f4, f5, f6, f7, f8) is constructed with eight features, where each feature of f is a blur map of LBP or HiFST calculated on a different window. f1 is generated when LBP is applied to the image I(x, y) with fixed window size w = 11. Similarly, f2, f3, f4 are LBP blur maps using window sizes w = 15, w = 21, and w = 27, respectively, and features f5 to f8 are HiFST blur maps using window sizes w = 11, w = 15, w = 21, and w = 27, respectively. It is important to note that there are a number of possibilities for constructing the feature vector. For example, a few more blur measures could be included. Moreover, blur maps can be computed using different sized windows. Different sized windows are normally used to capture multi-scale information. In the case of blur detection measures, a single window size is usually not capable of capturing enough information about diverse types of blurred pixels. Therefore, in the proposed method, features are computed through the LBP and HiFST measures using different window sizes. In this way, the GP based method encodes multi-scale information for blur detection. The target value t for each feature vector is calculated from the ground truth image Igt(x, y). The training data DT is used in the GP process to evolve a classifier. Mathematically, the training data for GP can be represented as DT = {f, t}.
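The assembly of the eight-feature vector can be sketched as below. The actual LBP and HiFST operators are not reproduced here; a simple local-variance measure stands in for both, purely to show the multi-scale construction, so the numbers it produces are not the paper's features.

```python
# Sketch of assembling the eight-feature vector f per pixel. The real
# features come from LBP and HiFST blur maps at windows 11/15/21/27;
# those operators are not reproduced here, so a local-variance measure
# stands in for both halves, purely to show the multi-scale layout.

def local_measure(img, x, y, w):
    # Stand-in sharpness measure: variance over a w x w window at (x, y).
    h = w // 2
    vals = [img[i][j]
            for i in range(max(0, y - h), min(len(img), y + h + 1))
            for j in range(max(0, x - h), min(len(img[0]), x + h + 1))]
    m = sum(vals) / len(vals)
    return sum((v - m) ** 2 for v in vals) / len(vals)

def feature_vector(img, x, y, windows=(11, 15, 21, 27)):
    # f1..f4 would be LBP maps, f5..f8 HiFST maps; the stand-in measure
    # is simply reused for both halves here.
    half = [local_measure(img, x, y, w) for w in windows]
    return half + half

img = [[(i * j) % 7 for j in range(32)] for i in range(32)]
f = feature_vector(img, 16, 16)
print(len(f))   # eight features per pixel
```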

    3.1.2 Learning GP Model

In this module, the first phase is to construct an initial population of predefined size. Each individual in the population is constructed as a linear combination of bias coefficients and a set of genes. The bias coefficients are determined by the least squares method for each multi-gene individual. A gene of multi-gene GP is a tree-based GP model where the terminal nodes are taken from the feature set f and all non-terminal nodes are arithmetic operators, called the function set. The terminal set consists of eight nodes, f1 to f8, and the function set is made up of five nodes. Four of the nodes are regular mathematical operators, while mult3 is the multiplication of three numbers. Times, minus, and plus take two input arguments each, sqrt takes one, and mult3 takes three; all operators return a single output. All input and output types are floating point values, and therefore the output of one node can be an input of other nodes. A few important parameters for GP, with their values, are listed in Tab. 1.

    Table 1: Parameters for GP-based model learning

The accuracy of the individuals in the population is then evaluated with the fitness function. The best individuals are then ranked and selected for the next generation by the selection method. In our experiment, we have used a tournament-based selection method to acquire individuals for the next generation. The crossover and mutation operators are applied to the selected individuals to produce the population for the next generation. At the end of the evolutionary process, the system returns an evolved program ĝ(f). The performance of the evolved model is then evaluated on the test data. The fitness function used in this paper is AUC, the area under the receiver operating characteristic (ROC) curve. Once the GP model is developed, we can compute the blur map for every image using bm = ĝ(f). The GP incorporates multi-scale resolution information in the enhanced blur map BM. Several GP simulations are carried out using the GPTIPS toolbox [24] to achieve an optimal solution.
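The tournament-based selection step can be sketched as follows; a Python illustration in which the population and fitness values are hypothetical, with fitness standing for each individual's AUC.

```python
# Sketch of tournament selection: k individuals are sampled at random and
# the one with the best fitness (AUC here) wins a slot in the next
# generation.
import random

def tournament_select(population, fitness, k, rng=random):
    candidates = rng.sample(range(len(population)), k)
    return population[max(candidates, key=lambda i: fitness[i])]

population = ["ind%d" % i for i in range(10)]
fitness = [0.50, 0.90, 0.30, 0.70, 0.60, 0.95, 0.20, 0.80, 0.40, 0.10]
winner = tournament_select(population, fitness, k=10)
print(winner)   # 'ind5' -- with k = 10 every individual competes
```

Smaller tournament sizes k keep selection pressure moderate, so weaker individuals still occasionally reproduce and diversity is preserved.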

    3.2 Segmentation

In this section, a model for computing the adaptive threshold is developed, which is applied to the blur map to segment blurred and non-blurred pixels. This section again consists of two parts: (a) preparation of training data for SVM, and (b) learning the best model from SVM. The following subsections explain the preparation of the training and testing data, the learning of the SVM model, and the model evaluation.

    3.2.1 Data Preparation for SVM Model

First, we create a set of useful features. We compute a feature vector k with ten features, named (k1, k2, ..., k10). Tab. 2 shows the feature set we have used in our experiment; however, the model accuracy may vary if different features are chosen for learning. The mean of all the pixels of an image gives insight into the total brightness. The standard deviation measures the spread of the data about the mean value. The median is the intensity level that separates the higher-intensity pixels from the lower-intensity pixels. The covariance of an image is a measure of the directional relationship of pixels. The correlation coefficient calculates the strength of the relationship between pixels. The entropy measure calculates the randomness among the pixels of an image. The skewness of the image contains information about the probability distribution of the pixels: negative skew indicates that the bulk of the values lie to the right of the mean, whereas positive skew indicates that the bulk of the values lie to the left of the mean. Kurtosis gives information about noise and resolution together; a high kurtosis value indicates that noise and resolution are low. Contrast captures the distinguishability of the objects in an image and is calculated by taking the difference between the maximum and minimum pixel intensities. Energy gives information on directional changes in the intensity of the image.
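Most of the statistics above can be sketched directly from a flattened blur map, as below. The covariance and correlation-coefficient features are omitted because they depend on the 2D pixel layout, and the paper's exact definitions of contrast and energy may differ from the common histogram-based versions used here.

```python
# Sketch of blur-map statistics along the lines of Tab. 2, computed from
# a flattened map. Covariance/correlation are omitted (they need the 2D
# layout); contrast/energy follow common definitions that may differ
# from the paper's.
import math
from collections import Counter

def blur_map_features(vals):
    n = len(vals)
    mean = sum(vals) / n
    var = sum((v - mean) ** 2 for v in vals) / n
    std = math.sqrt(var)
    median = sorted(vals)[n // 2]              # upper-middle element
    skew = sum((v - mean) ** 3 for v in vals) / (n * std ** 3) if std else 0.0
    kurt = sum((v - mean) ** 4 for v in vals) / (n * std ** 4) if std else 0.0
    probs = [c / n for c in Counter(vals).values()]
    entropy = -sum(p * math.log2(p) for p in probs)
    energy = sum(p ** 2 for p in probs)
    contrast = max(vals) - min(vals)
    return {"mean": mean, "std": std, "median": median, "skewness": skew,
            "kurtosis": kurt, "entropy": entropy, "energy": energy,
            "contrast": contrast}

feats = blur_map_features([0.1, 0.2, 0.2, 0.8, 0.9, 0.9])
print(round(feats["contrast"], 3))   # 0.8
```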

    Table 2: Components of feature vector from blur map

The blur map generated from the GP model is used to generate the features for the training data, i.e., for each blur map a 10×1 dimensional feature vector k = (k1, k2, ..., k10) is computed. Here, the best threshold for each image is the target value for each feature vector. The best threshold d is computed empirically by segmenting the blur maps and comparing them with the ground truth images: the LBP based blur maps with different window sizes were segmented against a set of candidate thresholds, and the best threshold d was chosen as the one giving the best Accuracy. The training data set for learning the adaptive threshold is represented as:

DAT = {(ki, di)}, i = 1, ..., N1

Here, N1 is the sample size of the training data. In our experiment, the total size of the training and testing data is N = N1 + N2, where N2 is the sample size of the test data.
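The empirical search for the best threshold d can be sketched as a sweep over candidate thresholds, scoring each binary segmentation against the ground truth with the Accuracy metric; the blur map, ground truth, and candidate set below are hypothetical.

```python
# Sketch of the empirical best-threshold search: segment the blur map at
# each candidate threshold and keep the one whose binary mask best
# matches the ground truth under the Accuracy metric.

def accuracy(mask, gt):
    return sum(m == g for m, g in zip(mask, gt)) / len(gt)

def best_threshold(blur_map, gt, candidates):
    scores = {d: accuracy([v >= d for v in blur_map], gt) for d in candidates}
    return max(scores, key=scores.get)

blur_map = [0.1, 0.3, 0.35, 0.7, 0.8, 0.9]      # per-pixel blur scores
gt       = [False, False, True, True, True, True]
d = best_threshold(blur_map, gt, [i / 16 for i in range(1, 16)])
print(d)   # 0.3125 -- first candidate that separates the map perfectly
```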


    3.2.2 Learning SVM Model

Once the training data DAT is ready, a multi-class classifier is trained using SVM. Multiple binary classifiers can be used to construct a multi-class classifier by decomposing the prediction into multiple binary decisions [25]. To perform this decomposition, we have used the 'one-vs-all' coding type. Each class in the class set is individually separated from all the other classes: for each binary learner, one class is taken as positive and the rest are taken as negative. This design uses every class in turn as the positive class for a binary learner. Non-linearity in the features is handled by a kernel function, which transforms non-linear spaces into linear spaces. All necessary parameters and their appropriate values are listed in Tab. 3. In our experiment, the evolved classifier takes the value of the feature vector k as input and classifies it into one of sixteen classes. These sixteen numeric values are the adaptive thresholds for the GP retrieved blur map.

    Table 3: Parameters for SVM-based model learning

To evaluate the performance of the classifier, we compute the classification loss (L). It is the weighted sum of misclassified observations and can be represented by the formula:

L = Σi wi · I{tvar,i ≠ ti}

here, tvar is the threshold predicted by the classifier, t is the pre-known target value for the test data, wi is the weight of observation i, and I{·} is the indicator function. In our experiment, the model accuracy achieved with the training data is 98.4%, and with the test data the model performs with an accuracy of 88%.
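A minimal sketch of this loss, assuming uniform observation weights; the predicted and target thresholds below are hypothetical.

```python
# Sketch of the weighted classification loss L: each test observation i
# carries a weight w_i (uniform here, summing to 1) and I{...} flags a
# mismatch between the predicted and target thresholds.

def classification_loss(predicted, target, weights=None):
    n = len(target)
    w = weights if weights is not None else [1.0 / n] * n
    return sum(wi * (p != t) for wi, p, t in zip(w, predicted, target))

t_var = [0.25, 0.50, 0.50, 0.75]     # thresholds predicted by the model
t     = [0.25, 0.50, 0.75, 0.75]     # pre-known targets
print(classification_loss(t_var, t)) # 0.25: one of four misclassified
```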

    4 Results and Discussions

    4.1 Experimental Setup

In our experiment, we have used two datasets, named dataset A and dataset B. Dataset A [10] is a publicly available dataset consisting of 704 partially defocus-blurred images. This dataset contains a variety of images covering numerous attributes and scenarios, such as nature, vehicles, humans, and other living and non-living beings, with different magnitudes of defocus blur and resolution. Each image of this dataset is provided with a hand-segmented ground truth image separating the blurred and non-blurred regions. Dataset B is a synthetic dataset consisting of 280 out-of-focus images from the dataset used in [26]. Each image of dataset B is synthetically created by mixing the blurred and focused parts of other images of dataset A. We have generated the ground truth images by segmenting the defocus-blurred and non-blurred regions of each image. There is a possibility that a particular choice of images (i.e., scenario and degree of blurriness) is biased towards certain blur measure operators, because the evaluation performance of the methods may differ for different input images. Therefore, quantitative analysis on one dataset alone would not suffice to compare the performance of blur measure operators. There is also the possibility of model over-fitting for ĝ(f), since the model is trained on dataset A. In order to mitigate these issues and limitations, we run our quantitative and qualitative analysis on the two different datasets A and B. Four quantitative metrics are utilized for evaluating the performance of the developed classifier. These well-known metrics are Accuracy, Precision, Recall, and F-measure [4,27]. Accuracy measures the closeness of the predictions to the true values. It is defined as:

Accuracy = (TP + TN) / (TP + TN + FP + FN)

where TP, TN, FP, and FN denote true positives, true negatives, false positives, and false negatives, respectively. If a pixel is blurred and is detected as blurred, it is counted as a true positive (TP); if it is not detected, it is regarded as a false negative (FN). However, if a sharp pixel is detected as a blurred pixel, it is counted as a false positive (FP); otherwise it is a true negative (TN). Precision is the fraction of correct positive predictions out of all positive predictions, and Recall is the fraction of blurred pixels that are correctly detected. They are given by:

Precision = TP / (TP + FP), Recall = TP / (TP + FN)

F-measure is the weighted harmonic mean of Precision and Recall. It is defined as:

Fβ = ((1 + β^2) · Precision · Recall) / (β^2 · Precision + Recall)

With this definition, Recall receives β times the importance of Precision. In our experiments, we set β = 0.5, which gives more weight to Precision than to Recall. Since we observed that the proposed method provides better Recall than Precision, assigning the smaller weight to Recall is the better choice.
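The four metrics can be sketched from binary pixel masks as follows, using the weighted F-measure with β = 0.5 as described above; the masks below are hypothetical.

```python
# Sketch of the four evaluation metrics from binary pixel masks, with
# the weighted F-measure F_beta (beta = 0.5 favors Precision).

def metrics(pred, gt, beta=0.5):
    tp = sum(p and g for p, g in zip(pred, gt))
    tn = sum(not p and not g for p, g in zip(pred, gt))
    fp = sum(p and not g for p, g in zip(pred, gt))
    fn = sum(not p and g for p, g in zip(pred, gt))
    acc = (tp + tn) / (tp + tn + fp + fn)
    prec = tp / (tp + fp)
    rec = tp / (tp + fn)
    f = (1 + beta ** 2) * prec * rec / (beta ** 2 * prec + rec)
    return acc, prec, rec, f

pred = [True, True, True, False, False, False]   # detected blurred pixels
gt   = [True, True, False, False, False, True]   # ground-truth mask
acc, prec, rec, f = metrics(pred, gt)
```

With these masks there are 2 TP, 2 TN, 1 FP, and 1 FN, so all four metrics come out to 2/3.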

    4.2 Comparative Analysis

In order to perform the comparative analysis, the proposed method is compared with five state-of-the-art methods: (a) LBP based segmentation of defocus blur [17], (b) the high-frequency discrete cosine transform coefficients based method (HiFST) [13], (c) discriminative blur detection features using local kurtosis (LK) [10], (d) blurred image region detection and classification through singular value decomposition (SVD) [15], and (e) a spectral and spatial sharpness measure based on total variation (TV) [16]. All experiments were conducted on a computer system with an Intel(R) Core(TM) i5-9400F processor @2.90 GHz running Windows 10. The software for the proposed method was developed using Matlab 2020a. The Matlab codes provided by the authors for LBP [17] (https://github.com/xinario/defocus_segmentation), HiFST [13] (https://github.com/isalirezag/HiFST), LK [10] (http://www.cse.cuhk.edu.hk/leojia/projects/dblurdetect/index.html), SVD [15] (https://github.com/fled/blur_detection), and TV [16] (http://vision.eng.shizuoka.ac.jp/s3/) are used for the comparative analysis. The performance of all five methods is compared with our proposed method qualitatively and quantitatively using dataset A and dataset B. During the computation of the blur maps, the multi-scale resolution windows for LBP and HiFST are the same as mentioned in their respective works and codes. However, the LK, SVD, and TV methods do not use multi-scale windows, so we have used a single window of size w = 15×15; for these methods this window size is the most appropriate and provides the best results. Binarization is the final step in the blur segmentation process, which is achieved using the threshold computed through the SVM based model.

Fig. 3 shows the quantitative comparison of the proposed method and the five state-of-the-art methods LBP, HiFST, LK, SVD, and TV. All methods are applied to each image of the two datasets A and B, and four measures are computed: Accuracy, Precision, Recall, and F-measure. First, the four metrics are computed for each image in the dataset; then the average value of each metric over the whole dataset is computed. The average measures for dataset A are presented in Fig. 3a and those for dataset B in Fig. 3b. From the resulting measures, it is clearly visible that the performance of the proposed method is better than the state-of-the-art methods. Among the various methods, LK provides the poorest values for all metrics, whereas the LBP and HiFST methods provide results comparable to the proposed method. The SVD and TV methods provide average performance on all measures. It is important to note that no image from dataset B was used in the training process. The results in Fig. 3b therefore show the generalization ability of the developed model and indicate that the prospect of model over-fitting is reduced. The noteworthy difference in the Recall value for dataset B between the proposed and the other methods signifies the robustness of the proposed method.

For qualitative comparison, we evaluate our method on randomly picked images with different scenarios as well as different degrees of blur from both datasets A and B. We compare the performance of the proposed method with the five state-of-the-art methods LBP [17], HiFST [13], LK [10], SVD [15], and TV [16]. First, the blur maps are generated by all methods, and the ground truths are also presented for visual comparison. Fig. 4 compares the visual results for the blur maps. The blur maps are presented as grayscale images in which sharp regions contain higher-intensity pixels and blurred regions contain lower-intensity pixels. It can be observed that the blur maps produced by the proposed method are closer to their ground truths.

Figure 3: Comparison of the proposed method with state-of-the-art methods based on Accuracy, Precision, Recall, and F-measure using images of (a) dataset A with 704 images and (b) dataset B with 280 images

In the blur maps produced by the LK, SVD, and TV methods, however, the degree of blurriness is not correctly estimated. The performances of the LBP and HiFST methods are comparable with the proposed method. It is clear that the proposed method has the ability to estimate the degree of blurriness accurately.

The blur maps provided by all the above-mentioned methods are segmented using the SVM based classifier, and the results are presented in Fig. 5. It can be observed that the proposed method segments the blurred and unblurred regions with higher accuracy than the other methods, regardless of the blur type and scenario. The segmented results produced for the LK, SVD, and TV methods have inaccuracies due to inaccurate computation of the degree of blurriness. The results produced for LBP and HiFST are comparable with the proposed method; however, inaccuracies are visible in a few segmented parts, whereas the proposed method provides better segmented maps.

The proposed method has the ability to capture multi-scale information. Here, we analyze the multi-scale performance of the LBP-based defocus blur segmentation [17] at two sets of scale ranges and compare it with our proposed method. In this experiment, we have chosen the scale range S1 = 11, S2 = 15, S3 = 21 as set-1 and S1 = 23, S2 = 25, S3 = 27 as set-2. Fig. 6b clearly shows the performance of sets 1 and 2 in their respective classified maps, which varies with the type of image. We observe that choosing the appropriate scale for a particular type of image is a challenging task. Fig. 6a shows the blur map and the segmented map of the proposed method. Our algorithm not only resolves the scale issue but also improves the segmentation results significantly.

Figure 4: Blur maps computed for a few selected images from datasets A and B. (row 1) input images; (row 2) ground truths; (rows 3-8) blur maps computed through the proposed, LBP [17], HiFST [13], LK [10], SVD [15], and TV [16] methods, respectively

Figure 5: Segmented maps computed for a few selected images from datasets A and B. (row 1) input images; (row 2) ground truths; (rows 3-8) segmented maps computed through the proposed, LBP [17], HiFST [13], LK [10], SVD [15], and TV [16] methods, respectively

    4.3 Limitations

The response of a blur measure operator varies across images; some operators perform better than others on the same image due to different blur types, scenarios, or levels of blurriness. Since the proposed method inherits the blur information of two methods, HiFST [13] and LBP based defocus blur [17], we could not address the problem of noise propagation in this study. As shown in Fig. 7, on a particular image the performance of HiFST [13] is good and it generates a better blur map, while the blur map for LBP [17] contains noise. The noise of the lower-performing method gets propagated during the learning process of GP, and hence the performance of the proposed method is reduced on such images. Another limitation of the proposed method is that it takes more time compared to the other methods.

    Figure 6: (a) The blur map and the classified map of the proposed method; (b) LBP blur maps at two sets of scales (S1-S3) and (S4-S6) and their respective classified maps after multi-scale inference

    Figure 7: The blur maps of the HiFST, LBP and proposed methods for an image that confirms the propagation of noise

    5 Conclusion and Future Work

    In this article, a robust method for blur detection and segmentation is proposed. Blur detection is achieved by the GP-based model, which produces a blur map. For segmentation, we first trained a model using SVM that predicts a threshold from features of the retrieved blur map; the predicted thresholds are then used to acquire the classified maps of the images. We evaluated the performance of the proposed method in terms of accuracy, precision, recall and F-measure using two benchmark datasets. The results show that the proposed method achieves good performance over a wide range of images and outperforms the state-of-the-art defocus segmentation methods. In the future, we would like to expand our investigation toward different types of blur. We wish to examine the effectiveness of the proposed approach by learning on a combination of motion and defocus blur.
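The adaptive-threshold step summarized above (an SVM regressor maps blur-map features to a per-image threshold, which then binarizes the map) can be sketched as follows. The feature set (simple summary statistics), the synthetic training maps, and the choice of the map's median as the regression target are all illustrative assumptions, not the paper's actual setup.

```python
import numpy as np
from sklearn.svm import SVR

def blur_map_features(bmap):
    """Summary statistics of a blur map (hypothetical feature set; the
    paper's actual blur-map features may differ)."""
    return np.array([bmap.mean(), bmap.std(),
                     np.percentile(bmap, 25),
                     np.percentile(bmap, 50),
                     np.percentile(bmap, 75)])

# Synthetic training data: random blur maps whose "ground-truth" threshold
# is taken to be the map's median value (an illustrative target only).
rng = np.random.default_rng(1)
train_maps = [rng.beta(a, 5.0 - a, size=(32, 32))
              for a in rng.uniform(1.0, 4.0, 40)]
X = np.stack([blur_map_features(m) for m in train_maps])
y = np.array([np.median(m) for m in train_maps])

# SVM regressor: blur-map features -> adaptive threshold.
model = SVR(kernel="rbf", C=10.0).fit(X, y)

# Predict a threshold for an unseen map and segment it.
test_map = rng.beta(2.0, 3.0, size=(32, 32))
t = float(model.predict(blur_map_features(test_map)[None, :])[0])
segmented = (test_map >= t).astype(np.uint8)  # 1 = sharp, 0 = blurred
```

Because the threshold is predicted per image rather than fixed globally, the binarization adapts to each blur map's value distribution, which is the motivation for the SVM stage.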

    Funding Statement:This work was supported by the BK-21 FOUR program through the National Research Foundation of Korea (NRF) under the Ministry of Education.

    Conflicts of Interest:The authors declare that they have no conflicts of interest to report regarding the present study.
