
    Real-time stereo matching on CUDA using Fourier descriptors and dynamic programming

    2019-05-14 13:26:40
    Computational Visual Media, 2019, Issue 1

    Mohamed Hallek, Fethi Smach, and Mohamed Atri

    Abstract Computation of stereoscopic depth and extraction of the disparity map are active research topics. A large variety of algorithms has been developed, among which we cite feature matching, moment extraction, and image representation using descriptors to determine a disparity map. This paper proposes a new method for stereo matching based on Fourier descriptors. The robustness of these descriptors under photometric and geometric transformations provides a better representation of a template or a local region in the image. In our work, we specifically use generalized Fourier descriptors to compute a robust cost function. Then, a box filter is applied for cost aggregation to enforce a smoothness constraint between neighboring pixels. Optimization and disparity calculation are done using dynamic programming, with a cost based on the similarity between generalized Fourier descriptors under the Euclidean distance. This local cost function is used to optimize correspondences. Our stereo matching algorithm is evaluated on the Middlebury stereo benchmark; our approach has been implemented on parallel high-performance graphics hardware using CUDA to accelerate our algorithm, giving a real-time implementation.

    Keywords: generalized Fourier descriptors; stereo matching; dynamic programming; CUDA

    1 Introduction

    Due to technological advances and improvements in digital cameras, stereo vision has become an important research area. However, dense correspondence and 3D reconstruction remain key problems for computer vision researchers. The main objective of a stereo system is to provide an efficient algorithm for reconstructing 3D information from a stereo image pair of the same scene taken from distinct viewpoints. A stereo system must follow three basic steps: calibration, correspondence, and reconstruction.

    In this work, we focus on the correspondence step. The main aims of a stereo matching algorithm are to correctly identify corresponding pixels in the rectified stereo images and to fill the disparity map [1, 2]. Stereo matching algorithms must overcome various problems, the most commonly encountered being noise, occlusion, and repetitive textures. The researcher must also respect various constraints, including epipolar geometry, ordering constraints, and smoothness. Many stereo matching algorithms have been developed to solve the correspondence problem using patch-based image synthesis methods [3, 4]. Analysing the state of the art, stereo matching methods may be divided into local and global categories [5, 6]. The most popular local methods are based on block matching and feature matching [7, 8]. Generally, these methods analyse local light intensities around each pixel or within some regions of the image. In contrast, all pixels in the image are involved in global methods, such as graph cuts, belief propagation, and dynamic programming [5].

    In 2002, Scharstein and Szeliski [1] defined a taxonomy to categorize dense correspondence algorithms. It shows that most existing stereo matching methods contain four steps:

    • Cost function calculation: a matching process for each pixel at all possible disparity levels.

    • Cost aggregation: aggregating the cost over the support region.

    • Disparity computation and optimization: selecting the disparity value that optimizes the function and filling the disparity map.

    • Disparity refinement: post-processing to improve the results.

    Different techniques are used to realize each step. For example, block matching and box filters are the most popular techniques for cost calculation and cost aggregation, respectively. Usually, block matching is done on gray images or on the intensity channel of color images. In this work, we apply a mathematical transformation before calculating the cost function: the first step computes generalized Fourier descriptors and then the Euclidean distance between descriptors. Next, we apply a box filter to aggregate the cost function. Optimization and disparity computation are done using dynamic programming, while the last step uses a stereo consistency check and a median filter to improve the final disparity map.
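    The four-step pipeline described above can be sketched as a minimal baseline. This is an illustration only: absolute difference stands in for our descriptor cost, and winner-takes-all for dynamic programming; all function names are ours.

```python
import numpy as np

def ad_cost(left, right, d_max):
    """Step 1 - matching cost: absolute difference at each disparity level.
    Positions with no valid match (x < d) are left at infinity."""
    H, W = left.shape
    vol = np.full((H, W, d_max), np.inf)
    for d in range(d_max):
        vol[:, d:, d] = np.abs(left[:, d:] - right[:, :W - d])
    return vol

def aggregate(vol):
    """Step 2 - cost aggregation: 3x3 box filter on every disparity slice."""
    H, W, _ = vol.shape
    p = np.pad(vol, ((1, 1), (1, 1), (0, 0)), mode='edge')
    return sum(p[i:i + H, j:j + W] for i in range(3) for j in range(3)) / 9.0

def wta(vol):
    """Step 3 - disparity selection: winner takes all along the cost axis."""
    return np.argmin(vol, axis=2)
```

    In the full method, the absolute-difference cost is replaced by the descriptor-based cost of Section 3, WTA by dynamic programming, and a refinement step (Step 4) follows.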

    Many stereo matching algorithms, especially the global methods, are computationally expensive. For this reason, much research has focused on runtime reduction and real-time implementation. In this paper, we present a new approach to stereo image matching based on generalized Fourier descriptors. This approach is detailed in Section 3. In Section 4, we evaluate our algorithm and give experimental results. In Section 5, we present a CUDA-based real-time implementation of our approach on a GPU. Finally, conclusions are presented in Section 6.

    2 Related work

    As already indicated, the stereo matching process follows four fundamental steps. It starts by defining a cost function and calculating the cost volume for each pixel at all disparity levels, then aggregating the matching cost. Next, the energy function is optimized and the disparity map filled. Finally, the obtained disparity map is post-processed. In the literature, there are several techniques for each step. The most common cost functions are absolute difference and block matching. These two functions are characterized by linear computational complexity, simplicity of implementation, and fast runtime. However, they fail in discontinuous areas and are sensitive to the window size used. A simple comparison of light intensities is not always enough, hence the use of mathematical transformations such as the census or rank transforms. These nonparametric transformations provide standard metrics and are more robust to radiometric distortion and occlusion [9]. In our work, we use a mathematical transformation based on the Fourier transform to extract robust descriptors, whose similarity provides our cost function. We note that many stereo matching methods are based on feature extraction and interest point detection. In this context we can mention descriptors such as the scale-invariant feature transform (SIFT) [10] and speeded-up robust features (SURF) [11]. Zernike moments have also been used to determine corresponding points [12]. Generally, a mathematical transformation is calculated and robust descriptors are extracted to define an efficient cost function.

    For cost aggregation, many techniques are employed. The simplest solution is a linear image filter such as a box or Gaussian filter. Edge-preserving filters such as the bilateral filter and guided filter can also be good solutions, but they are computationally expensive. In our work, we adopt a simple box filter for cost aggregation.

    For disparity computation, winner-takes-all is the most common solution. This step can be improved using semi-global or global optimization algorithms such as graph cuts [13], belief propagation [14], or dynamic programming [15, 16]. Disparity refinement is done using the same approach as Mattoccia et al. [17] and Kordelas et al. [18] to detect occlusions and depth borders. In this step, three consecutive processes are applied: invalid disparity detection, fill-in of invalid disparity values, and median filtering.

    Many works consider acceleration of these computationally intensive algorithms. Different architectures are used to achieve real-time performance. One is based on field-programmable gate arrays (FPGAs) [19]. A second alternative is based on graphics hardware using the CUDA language, and is used in many real-time algorithms such as the work of Kowalczuk et al. [20] and Congote et al. [16]. Several real-time algorithms reduce the complexity of cost calculation at the expense of accuracy. In our work, we exploit the computing capabilities of NVIDIA's GeForce GTX 960 to produce an accurate disparity map.

    3 GFD for stereo matching

    Figure 1 presents a block diagram of our stereo matching algorithm. The cost function is based on the similarity of generalized Fourier descriptors, denoted SGFD. The cost is aggregated over square windows using a box filter. Then we use dynamic programming for energy optimization and filling the disparity map. At this stage, the obtained disparity map contains some invalid or unwanted pixels, so a post-processing step is required. A left–right consistency check allows us to detect these invalid pixels. Then we perform a fill-in process to replace the invalid pixels with valid minimum values. The refinement step includes median filtering to remove noise and enforce smoothness between neighboring pixels.

    3.1 Representation of GFD

    In pattern recognition, the Fourier transform has been used for many years to extract sets of invariants. In 2002, generic Fourier descriptors [21] were applied to grayscale images. Color descriptors called generalized Fourier descriptors (GFDs), applicable to both grayscale and color images, were defined in 2008 [22]. These descriptors are given by Eq. (1), GFD_f(r) = ∫₀^{2π} |f̂(r, θ)|² dθ, where (r, θ) are the polar coordinates of an input point M and f̂(r, θ) is the Fourier transform of the function f at the point M(r, θ).

    These invariants are functions of a single variable (the radius), which makes them simple to calculate. For an image, the integral in Eq. (1) becomes a discrete sum that yields a set of values forming a vector. All descriptors must satisfy certain invariance and robustness properties, and GFDs respect these properties well. The theoretical properties of GFDs are detailed in the work of Smach et al. [22]; we note their invariance under motion, change of scale, and reflection. In practice, GFDs are obtained for color images using the following steps:

    • decompose the color image into three channels (red, green, and blue);

    • calculate the Fourier transform and its square modulus for each channel;

    • vary the radius r and compute the sum of the pixels located along each ray.

    The final descriptor concatenates the descriptors of the three channels. These descriptors give a complete and robust description of the image which can be used for color object recognition and image classification. Smach et al. [22] evaluated the performance of GFDs on several standard and personal databases. The results obtained using GFDs and support vector machines (SVMs) for classification indicated the robustness of these descriptors: GFDs outperform various families of invariants, such as Zernike moments. See Refs. [22, 23] for more details of GFDs.
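    The three steps above can be sketched as follows. This is a simplified sketch: the integer ring binning and the normalization by the DC ring are our assumptions, not necessarily the exact implementation of Ref. [22].

```python
import numpy as np

def gfd(channel, n_radii):
    """GFD of one channel: sum the squared FFT modulus over rings of constant
    radius, then normalize by the DC ring (assumed normalization)."""
    F = np.fft.fftshift(np.fft.fft2(channel))   # center the spectrum
    P = np.abs(F) ** 2                          # square modulus
    h, w = P.shape
    yy, xx = np.mgrid[0:h, 0:w]
    r = np.hypot(yy - h // 2, xx - w // 2).astype(int)
    desc = np.array([P[r == k].sum() for k in range(n_radii)])
    return desc / (desc[0] + 1e-12)

def gfd_color(img, n_radii=8):
    """Concatenate the per-channel descriptors (R, G, B)."""
    return np.concatenate([gfd(img[..., c], n_radii) for c in range(3)])
```

    Because the descriptor depends only on the modulus of the spectrum, it is unchanged by circular translation of the window contents, which is the invariance the matching cost relies on.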

    3.2 Local cost function

    Invariance under geometric transformations and robustness to noise and lighting changes are important properties of GFDs which allow us to use them for stereo matching. A full and easily accessible description can characterize a region in the reference image and identify it in a target image.

    In stereo matching, the cost function is the main step, and it differs from one algorithm to another. Our energy function, denoted SGFD, is defined by the similarity between GFDs. For a stereo pair, we take the left image as the reference image and characterize a region in this image by the left color descriptors, GFD_lc. Then, we calculate the descriptors for the candidate region in the right image, GFD_rc. The two descriptors can be expressed as in Eqs. (2) and (3), where a_i and b_i are the components of the left and right color descriptors and N is the radius of the window. After computing these descriptors, we compute their similarity SGFD_c using the Euclidean distance, SGFD_c = √(Σ_i (a_i − b_i)²), as in Eq. (4).

    Fig.1 Block diagram of our approach.

    Many algorithms combine matching costs for color and gradient in order to increase robustness against radiometric differences. Yang et al. [24] and Hosni et al. [25] combined absolute differences of color and gradient using a parameter α. In a similar way, our matching cost combines the Euclidean distance between color descriptors (SGFD_c) and the Euclidean distance between gradient descriptors (SGFD_g). The global cost function for a pixel p at disparity value d is denoted SGFD(p, d) and given by Eq. (5): SGFD(p, d) = α·SGFD_c(p, d) + (1 − α)·SGFD_g(p, d).

    The parameter α is used to balance the color and gradient terms, as in Yang et al. [24]. In the above, we need to calculate SGFD_g(p, d) based on the gradient. This is done using the following steps:

    • Calculate the gradient values in the horizontal and vertical directions for the left and right images (I_l, I_r). These values, denoted G_x and G_y, are given by Eqs. (6) and (7).

    • Calculate the gradient magnitude G for both images, as given by Eq. (8).

    • Calculate the generalized Fourier descriptors for the reference and target images.

    • Compute the Euclidean distance between the descriptors. The necessary equations are given below, where A = [1 0 −1] and I is either the left or right image. Here ∗ denotes the convolution operation, and Aᵀ is the transpose of A.
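    Assuming the per-window descriptors are already available, the gradient computation of Eqs. (6)–(8) and the blended cost of Eq. (5) can be sketched as follows (function names are ours):

```python
import numpy as np

def gradient_magnitude(I):
    """Eqs. (6)-(8): Gx = I * A and Gy = I * A^T with A = [1 0 -1] (up to sign,
    since convolution flips the kernel), then G = sqrt(Gx^2 + Gy^2)."""
    Gx = np.zeros_like(I, dtype=float)
    Gy = np.zeros_like(I, dtype=float)
    Gx[:, 1:-1] = I[:, 2:] - I[:, :-2]
    Gy[1:-1, :] = I[2:, :] - I[:-2, :]
    return np.hypot(Gx, Gy)

def sgfd(desc_lc, desc_rc, desc_lg, desc_rg, alpha=0.5):
    """Eq. (5): blend the Euclidean distance between color descriptors with the
    distance between gradient descriptors using the balance parameter alpha."""
    return (alpha * np.linalg.norm(desc_lc - desc_rc)
            + (1.0 - alpha) * np.linalg.norm(desc_lg - desc_rg))
```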

    After calculating SGFD(p, d), this cost function is aggregated using a simple technique: applying a box filter, we aggregate the matching cost over a square window ω. The aggregated cost for a pixel p at disparity value d is given by Eq. (9), following Scharstein and Szeliski [1].
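    Since aggregation dominates the runtime, the box filter is often implemented with summed-area tables so that the cost per pixel is independent of the window size. The O(1)-per-pixel variant below is our suggestion, not necessarily the paper's implementation:

```python
import numpy as np

def box_aggregate(cost_volume, radius):
    """Average each disparity slice of an (H, W, D) cost volume over a
    (2*radius+1)^2 window using a summed-area table per slice."""
    H, W, D = cost_volume.shape
    k = 2 * radius + 1
    out = np.empty_like(cost_volume)
    for d in range(D):
        # pad with edge values so border windows stay inside the image
        p = np.pad(cost_volume[:, :, d], radius, mode='edge')
        s = p.cumsum(0).cumsum(1)
        s = np.pad(s, ((1, 0), (1, 0)))     # zero row/column for inclusion-exclusion
        out[:, :, d] = (s[k:, k:] - s[:-k, k:] - s[k:, :-k] + s[:-k, :-k]) / (k * k)
    return out
```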

    3.3 Optimization and disparity fill approach

    After the matching cost calculation and the use of a box filter for the aggregation step, the third step in Scharstein and Szeliski's taxonomy is performed. The aim of this step is to optimize the cost and fill the disparity map. The most popular method is the winner-takes-all (WTA) technique, which selects the disparity with the minimal aggregated cost for each pixel:

    where d_r defines the set of allowed discrete disparity levels in an image. The use of WTA reduces computational complexity but can produce unmatched pixels and invalid disparity values at the image border and in occluded regions.

    Beyond this local approach, a variety of algorithms can be used to find correctly matching points. These algorithms make explicit smoothness assumptions and optimize a global cost function that combines a matching cost and a smoothness cost, as detailed in Ref. [1]. This global cost function is defined by

    Dynamic programming (DP) is a popular optimization approach. Generally, the aim of DP is to solve a global problem by dividing it into smaller sub-problems whose solutions can easily be obtained; the global solution is the concatenation of the solutions of all sub-problems. This optimization approach was introduced to stereo vision by Ohta and Kanade [26]. DP exploits the ordering and smoothness constraints to optimize the matching cost between two scanlines. The technique has two stages: constructing the cost matrix for all pixels at all possible disparity levels, and selecting pairs of corresponding pixels by searching for the optimal path. Let the aggregated cost function be denoted CA(x, y, d), where x and y represent the position of a pixel p. The matrix extracted from the cost volume CA(x, y, d) for a fixed line is denoted M_h(x, d). The dimensions of M_h are W × D_max, where W and D_max represent the width of the image and the maximal disparity. In our work, we use the DP approach developed by Congote et al. [16]. Each matrix M_h that represents the matching between two scanlines is updated according to the recurrence given in Ref. [16].

    Here λ represents a penalty for a change in disparity value between neighboring pixels. After calculating M_h, we compute the disparity values of the corresponding line using the optimal-path algorithm.
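    A per-scanline sketch of this DP is given below. It uses a single Potts-style penalty λ for any disparity change, which is a simplification of the exact update used in Ref. [16]:

```python
import numpy as np

def dp_scanline(cost, lam):
    """Scanline DP on a (W, D) cost matrix: accumulate the best path cost with
    penalty lam for changing disparity between neighboring pixels, then
    backtrack the optimal disparity path."""
    W, D = cost.shape
    M = np.empty_like(cost, dtype=float)
    M[0] = cost[0]
    for x in range(1, W):
        stay = M[x - 1]                       # keep the same disparity
        jump = M[x - 1].min() + lam           # switch to any other disparity
        M[x] = cost[x] + np.minimum(stay, jump)
    path = np.empty(W, dtype=int)
    path[-1] = np.argmin(M[-1])
    for x in range(W - 2, -1, -1):
        d = path[x + 1]
        # the predecessor stayed at d unless a jump was strictly cheaper
        path[x] = d if M[x, d] <= M[x].min() + lam else np.argmin(M[x])
    return path
```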

    3.4 Disparity refinement

    By this stage, the cost function has been calculated and aggregated, and dynamic programming has been applied to fill the disparity map. The obtained disparity map contains unmatched pixels due to occlusion and repetitive textures, so a post-processing step is required. This step involves three consecutive processes: invalid disparity detection, fill-in of invalid disparity values, and weighted median filtering. Disparity refinement starts by detecting invalid disparity values and unmatched pixels in the depth map. This is done using a left–right consistency check, comparing the left disparity map to the right disparity map. Pixels that are inconsistent between the two disparity maps are marked as having invalid disparities. In our work, we use the approach defined by Mattoccia et al. [17] and Kordelas et al. [18].

    Disparity values are marked as invalid if they do not satisfy the condition below:

    where D_LR and D_RL represent the left (reference) disparity map and the right disparity map, respectively.

    The next step of disparity refinement is the fill-in of invalid disparity values. The disparity of each unmatched pixel is replaced with the nearest valid disparity, taken from the same scanline. This process is given by

    where the disparity value at the location of p is defined by d(p), (p − i) indicates the location of the first valid disparity on the left side, and (p + j) is the location of the first valid disparity on the right side. We finish the refinement step with median filtering in order to reduce noise and enforce smoothness between neighboring pixels.
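    The consistency check of Eq. (13) and the fill-in of Eq. (14) can be sketched as follows. The tolerance of 1 and taking the minimum of the two neighboring valid disparities are our reading of "replace the invalid pixels with valid minimum values":

```python
import numpy as np

def refine(d_lr, d_rl, tol=1):
    """Left-right check: pixel (y, x) is valid only if d_lr(x) agrees with
    d_rl(x - d_lr(x)) within tol; invalid pixels take the smaller of the first
    valid disparities found to their left and right on the same scanline."""
    H, W = d_lr.shape
    xs = np.arange(W)
    match = np.clip(xs[None, :] - d_lr, 0, W - 1)
    valid = np.abs(d_lr - np.take_along_axis(d_rl, match, axis=1)) <= tol
    out = d_lr.copy()
    for y in range(H):
        for x in range(W):
            if not valid[y, x]:
                left = next((d_lr[y, i] for i in range(x, -1, -1) if valid[y, i]), None)
                right = next((d_lr[y, j] for j in range(x, W) if valid[y, j]), None)
                cands = [v for v in (left, right) if v is not None]
                out[y, x] = min(cands) if cands else 0
    return out
```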

    4 Experimental results and analysis

    To evaluate our stereo matching approach, it is necessary to use standard databases. We confronted our algorithm with stereo matching problems involving occlusion and discontinuous regions. The average number of bad pixels in these regions indicates the accuracy of our stereo matching method, and is given by

    where D_x is the obtained disparity map, GT_x is the ground truth, and δ is a disparity tolerance.
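    The bad-pixel metric of Eq. (15) can be sketched directly (the optional mask argument, used to restrict the count to non-occluded or discontinuity regions, is our addition):

```python
import numpy as np

def bad_pixel_rate(disp, gt, delta=1, mask=None):
    """Percentage of pixels whose disparity differs from the ground truth by
    more than delta; mask optionally restricts the evaluated region."""
    if mask is None:
        mask = np.ones_like(gt, dtype=bool)
    bad = np.abs(disp[mask] - gt[mask]) > delta
    return 100.0 * bad.mean()
```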

    To evaluate our approach, we start by evaluating our proposed local cost function: SGFD can be compared with other local energy functions. We study the effect of the window size and give the evaluation errors. The test was done by varying the window size from 3×3 to 25×25, with errors calculated in the non-occluded regions. The images used were ArtL and Teddy from the Middlebury dataset. Results are shown in Fig. 2.

    The first column in Fig. 2 shows the left image of ArtL and the percentage of bad pixels in non-occluded areas (curves (a) and (b)). The second column presents the Teddy image and the corresponding errors (curves (c) and (d)). The results for all cost functions were calculated without any post-processing, and errors were calculated using Eq. (15) with δ = 1. We compare SGFD, ZSAD (zero-mean sum of absolute differences), and NCC (normalized cross-correlation) in curves (a) and (c) for the ArtL and Teddy images respectively. Curves (b) and (d) compare SGFD, ENCC (enhanced normalized cross-correlation) [27], and ZNCC (zero-mean normalized cross-correlation) for the two stereo pairs. For the images used, we note that the lowest error in non-occluded areas is obtained by SGFD using a small window size (3×3 or 5×5). In addition, this error always remains lower than the errors given by the other functions for a large window. Thus, these curves indicate that SGFD is more accurate than other local cost functions and more robust to window size variation.

    After evaluating the local function, we evaluated our stereo matching algorithm on the Middlebury dataset. This database is the de facto standard for comparing stereo matching methods and ranking them according to their performance. We started with version 2 of this database, denoted MV2; it contains only 4 stereo pairs (Tsukuba, Venus, Teddy, Cones). Results are provided in Tables 1 and 2. We denote the average errors in non-occluded regions and the average of absolute errors by Avg non-occ and Avg all respectively, while the average of depth discontinuity errors is denoted Avg disc. The metric Avg indicates the average bad pixel error. These measures are the main metrics used to evaluate the accuracy of stereo matching methods.

    Fig. 2 Effect of window size variation for SGFD and other cost functions.

    Table 1 Comparison of methods on MV2 using δ = 1

    Table 2 Comparison of methods on MV2 using δ = 2

    We note that these metrics are calculated using Eq. (15). To evaluate our approach, we determined the disparity maps for all images in MV2, then calculated the four metrics using δ = 1 and δ = 2, as indicated in Tables 1 and 2 respectively. These two tables show that our approach gives the lowest average errors for both thresholds, and confirm that it outperforms the optimized dynamic programming method of Salmen et al. [28] and the approach of Wang et al. [29] based on joint bilateral filtering and dynamic programming. In addition, our approach is more accurate than other recent works listed in the evaluation table for MV2 (http://vision.middlebury.edu/stereo/eval).

    Furthermore, we evaluated our stereo matching algorithm on version 3 of the Middlebury benchmark (MV3). This dataset contains 30 stereo pairs: 15 for training and 15 for testing. The images in this database have resolution up to 750×500 and a maximal disparity that can reach 200. Evaluation results for MV3 are shown in Table 3. It presents the percentage of bad pixels in non-occluded regions and in all regions (nonocc, all). The errors are calculated for the two regions using thresholds equal to 2 and 4 (Bad2, Bad4). The average absolute error, denoted Avgerr, is indicated in the last column. Table 3 summarizes the evaluation results on the training dataset. The results are detailed in Tables 4 and 5, which respectively show the error for each stereo pair in non-occluded regions and in all regions.

    Table 3 Comparison of methods on MV3

    The evaluation on MV3 indicates that our proposed approach outperforms other algorithms such as DoGGuided [33], which uses a guided filter based on the difference-of-Gaussian response, binary stereo matching (BSM) [34], and other recent approaches. In addition, our stereo matching algorithm is more accurate than ICSG [35] and semi-global matching (SGBM1) [36]. Further details of the evaluation on MV3 are provided by the disparity maps displayed in Fig. 3. The second and third columns of this figure present the left images and ground truths (GT) for the stereo pairs used; the disparity maps in the fourth column are calculated using the proposed approach. Further columns show the disparity maps obtained with other stereo matching algorithms. Errors in both regions (all and non-occluded) for these disparity maps are given in Tables 4 and 5.

    Stereo matching methods are classified according to different measures, essentially disparity map quality and execution time. Global methods are computationally expensive: the search space is of order O(N^N) per scanline, for scanlines of N pixels. The use of dynamic programming reduces this complexity to O(N²), but this still does not offer a fast-running implementation. A software implementation of our method has a long runtime and does not meet real-time needs.

    To accelerate our stereo matching system we propose an implementation based on graphics hardware as detailed in the following section.

    Table 4 Performance comparison of quantitative evaluation results based on nonocc error from MV3

    Table 5 Performance comparison of quantitative evaluation results based on all error from MV3

    Fig. 3 Disparity maps for the proposed algorithm and other methods on MV3.

    5 CUDA implementation

    Real-time stereo matching has become a reality. Until very recently, all real-time implementations made use of GPUs or FPGAs. Our method is based on the calculation of generalized Fourier descriptors. As mentioned previously, the GFD calculation is done for each channel separately, and the final descriptor is obtained by concatenating the results. We can use parallelism to compute the descriptors for the three channels simultaneously and consequently reduce the time required to get the final descriptors. In addition, dynamic programming allows us to optimize the matching between two scanlines. A CPU implementation performs these steps successively, which is why we seek an environment suited to simultaneous processing. We believe that a GPU implementation provides an efficient solution.

    5.1 Approach

    In a few years, GPUs have become powerful tools for massively parallel intensive computing. They are currently used for several applications, including image processing. These applications exploit classical image processing methods implemented on a GPU, typically using a specific language, CUDA, defined by NVIDIA in 2007. There are many predefined functions and libraries for image processing in the CUDA language. As our descriptors are based on the Fourier transform, we can exploit the CUFFT library for fast Fourier transform calculation. Our stereo model relies on CUDA for parallel processing, result visualization, and reduction of data transfer costs between CPU memory and GPU memory. This model contains 4 steps:

    • Loading input images: transferring the stereo pairs from CPU to GPU memory (host to device).

    • Thread allocation: fixing the number of threads in the calculation grid so that each thread processes one pixel template.

    • Parallel processing with CUDA: executing the stereo kernel functions N times using the N threads created in the previous step.

    • Presentation of results: transferring results from GPU to CPU memory (device to host).

    We start by fixing the number of threads and blocks and loading the left and right images into device memory. All processes of our stereo matching algorithm are performed by specific functions, or kernels, that are executed in parallel by multiple threads. For our method, the organization of the kernel functions is presented in Fig. 1. The first step in our algorithm is the calculation of the cost volume V(x, y, d), where x, y indicate the position of the pixel p and d is the disparity value. This volume is obtained for all pixels and all possible disparity values, by matching pixel p to p̄ at position (x + d, y) using the SGFD defined by Eq. (5). Therefore, the aim of the first kernel function is to calculate V(x, y, d) based on our local cost function. The SGFD is built from Fourier transforms and calculated using CUFFT. This library implements several FFT algorithms for varying types of input and output, including C2C (complex input to complex output), R2C (real input to complex output), and C2R (complex input to real output). CUFFT offers highly optimized algorithms to calculate the FFT in different dimensions: 1D, 2D, and 3D. In our approach, we use the 2D FFT to compute the FFT of a square window, following Haythem et al. [37].

    After the descriptors characterizing a region are obtained, the Euclidean distance is employed to determine the matching cost, as indicated in Algorithm 1. We start by loading the input images, fixing the window size, and extracting templates from the original images (I_l, I_r) and gradients (G_l, G_r). Next, we calculate the generalized Fourier descriptors (GFDs) for all templates. We obtain four descriptors: GFD_lc for the left window (Tmp_left), GFD_rc for the right window (Tmp_right), and GFD_lg, GFD_rg for the left and right gradient windows (Tmp_grad_l, Tmp_grad_r). The Euclidean distance is computed between the descriptors and the final cost function is determined. The aggregated cost volume CA(p, d) in Eq. (9) is easy to calculate using a box filter to average the cost. The next kernel function is dedicated to optimization and calculation of the disparity; its goal is defined by Eq. (12). In our work, we follow Congote et al. [16], where the dynamic programming kernel is well described. Before transferring the results to host memory, a last kernel performs post-processing. The goal of this function is to improve the disparity map by detecting invalid disparity values, filling them in, and applying a median filter. We start with a simple comparison between the left disparity and the right disparity to identify the unwanted pixels according to the condition in Eq. (13). We then replace invalid pixels with valid values from the left or right side as indicated in Eq. (14). Finally, a simple median filter is used to reduce noise and impose smoothness between neighboring pixels.

    Algorithm 1 Cost function at disparity value d
    Input: left image I_l, right image I_r, left gradient image G_l, right gradient image G_r, parameter α.
    Output: cost function SGFD(p, d).
    for every pixel p(x, y) do
        for i ← −w/2 to w/2 do
            for j ← −w/2 to w/2 do
                Tmp_left[(i + w/2)·w + (j + w/2)] ← I_l[(y + i)·width + (x + j)];
                Tmp_right[(i + w/2)·w + (j + w/2)] ← I_r[(y + i)·width + (x + j + d)];
                Tmp_grad_l[(i + w/2)·w + (j + w/2)] ← G_l[(y + i)·width + (x + j)];
                Tmp_grad_r[(i + w/2)·w + (j + w/2)] ← G_r[(y + i)·width + (x + j + d)];
            end for
        end for
        SGFD_c ← dist(GFD(Tmp_left), GFD(Tmp_right));
        SGFD_g ← dist(GFD(Tmp_grad_l), GFD(Tmp_grad_r));
        SGFD(p, d) ← α·SGFD_c(p, d) + (1 − α)·SGFD_g(p, d);
    end for

    5.2 Implementation results

    We now discuss the computational complexity of our stereo method and its execution time distribution. In practice, the graphics card used was an NVIDIA GeForce GTX 960 with Maxwell architecture, having 1024 CUDA cores running at 1.2 GHz. It is connected to an Intel Core i7-3770M CPU with a clock speed of 3.4 GHz. We tested our stereo matching implementation on images with resolution 320×240 pixels and 32 disparity levels. Our implementation gives an execution time of 26.2 ms, with the cost calculation and aggregation steps taking 76% of the overall runtime. Optimization and disparity filling take 18% of the total processing time, and the refinement kernel takes 6%.

    To compare our stereo matching method with other real-time algorithms, we used the same stereo pairs with a resolution of 320×240 pixels and 32 disparity levels. We evaluated performance using three important metrics: the number of millions of disparity evaluations per second (MDE/s), the number of frames per second (FPS), and the average percentage of bad pixels across all test images. Results for the accuracy and runtime metrics are given in Table 6, which compares our proposed algorithm with other real-time stereo matching methods. Our implementation achieves 38 frames per second, and is more accurate and faster than DCBGrid [38] and RealTimeGPU [39], the latter based on adaptive cost aggregation and dynamic programming. ReliabilityDP [40], using reliability-based dynamic programming, produces less accurate results and is slower than our proposed algorithm. Moreover, our approach gives almost the same accuracy as ESAW [41], although that method is faster.
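    For reference, the throughput figure can be checked from the reported numbers, assuming the usual definition MDE/s = width × height × disparity levels × FPS / 10⁶:

```python
# reported configuration: 320x240 pixels, 32 disparity levels, 26.2 ms/frame
width, height, disparities = 320, 240, 32
runtime_s = 26.2e-3
fps = 1.0 / runtime_s                     # frames per second
mde_per_s = width * height * disparities * fps / 1e6
```

    This gives roughly 38 FPS and about 94 MDE/s, consistent with the frame rate reported above.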

    Evaluation of our approach on the Middlebury MV3 database requires calculating disparity maps for all stereo pairs. We present some results in Fig. 4: the first row shows the left image of each stereo pair, the second row shows the corresponding ground truths, and the last row presents the disparity maps obtained using our approach.

    The test on MV3 allows us to place our algorithm in an evaluation table (http://vision.middlebury.edu/stereo/eval3/). From it, we extract the most important factors: average absolute errors in non-occluded regions (Avg nonocc), average absolute errors in all regions (Avg all), total time (time), time normalized by the number of pixels (s/megapixel, denoted time/MP), and time normalized by the number of disparity hypotheses (s/(gigapixel·ndisp), denoted time/GD). In Table 7, we compare our approach with other stereo matching algorithms in terms of accuracy and runtime metrics.

    Table 7 Accuracy and speed for MV3

    The results in Table 7 show that our approach gives average absolute errors of 9.76% in non-occluded regions and 17.6% in all regions. The total execution time of our proposed algorithm is 0.27 s. These results indicate that our approach is more accurate and faster than DoGGuided [33], BSM [34], ICSG [35], and SGBM1 [36]. SPS [32] produces more accurate results over all regions (16.6%) but is less accurate in non-occluded regions (10.4%); it is also slower than our stereo matching algorithm, with a total execution time of 22.1 s.

    Fig. 4 Some disparity maps obtained for MV3.

    6 Conclusions

    This paper has presented a new cost function for stereo matching based on generalized Fourier descriptors. The cost function of the proposed stereo matching algorithm is the Euclidean distance between generalized Fourier descriptors computed on color and gradient images. Cost aggregation, disparity calculation, and result refinement are performed using a box filter, dynamic programming, and post-processing, respectively. To evaluate our algorithm, we used the Middlebury stereo benchmark. The experimental results indicate that our proposed method outperforms many stereo matching algorithms, including ones based on joint bilateral filtering and dynamic programming, semi-global matching, and optimized dynamic programming. Our proposed approach is also more accurate than recent work on binary stereo matching and on stereo matching based on sampled photoconsistency computation.
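As a toy illustration of the matching cost summarized here (the descriptor values below are made-up stand-ins, not actual generalized Fourier descriptors):

```python
import math

# Matching cost as the Euclidean distance between two descriptor
# vectors (placeholders for the generalized Fourier descriptors of
# corresponding regions in the left and right images).
def matching_cost(desc_left, desc_right):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(desc_left, desc_right)))

print(matching_cost([1.0, 2.0, 3.0], [1.0, 2.0, 5.0]))  # -> 2.0
```

A low cost indicates that the two local regions have similar descriptor representations, i.e., they are likely to correspond at the tested disparity.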

    Furthermore, we have presented an implementation of our approach on graphics hardware using CUDA. This implementation exploits the CUFFT library to compute the cost function and the CUDA parallel computing architecture to implement the dynamic programming. Results show that this implementation reaches real-time performance, confirming that it outperforms many real-time algorithms in terms of accuracy and runtime metrics.
