
    Far-field super-resolution ghost imaging with a deep neural network constraint

Light: Science & Applications, 2022, Issue 1

Fei Wang, Chenglong Wang, Mingliang Chen, Wenlin Gong, Yu Zhang, Shensheng Han and Guohai Situ

1Shanghai Institute of Optics and Fine Mechanics, Chinese Academy of Sciences, Shanghai 201800, China

2Center of Materials Science and Optoelectronics Engineering, University of Chinese Academy of Sciences, Beijing 100049, China

Abstract Ghost imaging (GI) facilitates image acquisition under low-light conditions by single-pixel measurements and thus has great potential for applications in various fields ranging from biomedical imaging to remote sensing. However, GI usually requires a large number of single-pixel samplings in order to reconstruct a high-resolution image, imposing a practical limit on its applications. Here we propose a far-field super-resolution GI technique that incorporates the physical model of GI image formation into a deep neural network. The resulting hybrid neural network does not need to be pre-trained on any dataset, and allows the reconstruction of a far-field image with a resolution beyond the diffraction limit. Furthermore, the physical model imposes a constraint on the network output, making it effectively interpretable. We experimentally demonstrate the proposed GI technique by imaging a flying drone, and show that it outperforms other widespread GI techniques in terms of both spatial resolution and sampling ratio. We believe that this study provides a new framework for GI and paves the way for its practical applications.

    Introduction

Conventional imaging methods exploit the light reflected or scattered by an object to form its image on a two-dimensional sensor that has millions of pixels. However, ghost imaging (GI), an advanced imaging modality based on the second-order correlation of quantum or classical light, instead uses a single-pixel detector to record the reflected or scattered light, yielding a one-dimensional (1D) bucket signal1-6. In some cases, an additional position-sensitive detector is required to measure the illumination patterns. Although neither detector directly records a resolvable image of the object, one can employ an intuitive linear algorithm to reconstruct the image by spatially correlating the acquired time-varying patterns with the synchronized bucket signal. As it uses a single-pixel detector to collect the photons that interact with the object, GI has significant advantages over conventional imaging modalities in terms of detection sensitivity, dark counts, spectral range, and cost efficiency7,8. In addition, with the aid of some prior information, e.g., sparsity, it is capable of sensing compressively during data acquisition9,10. These advantages are significant for low-light imaging, where the photon counts are very low due to scattering or absorption losses, as in medical imaging or remote sensing, and for non-visible-waveband imaging, where silicon-based sensors become expensive or impractical, as in the infrared or deep-ultraviolet regime.

However, in GI, a large number of single-pixel measurements is necessary because each sampling contains only a little information about the object. Specifically, to obtain an N-pixel image one needs at least M = N measurements so that β = M/N = 100%, where β represents the sampling ratio (the Nyquist sampling criterion). In many applications such as remote sensing10, a rotating ground glass (RGG) is frequently used to generate the speckle illumination patterns, owing to its high power endurance and cost efficiency compared with programmable modulation strategies such as the digital micromirror device11. In this case one needs M ≫ N measurements to improve the signal-to-noise ratio (SNR) of the reconstructed image, because different patterns overlap9. This inevitably leads to a trade-off between the number of pixels occupied by the object and the data acquisition time. In addition, the spatial resolution of GI is physically limited by the grain size of the speckle pattern on the object plane12. This is unfavorable for far-field imaging, as the speckle grain becomes too large to distinguish the detailed structure of the object13,14. Thus, an intuitive and longstanding goal in the study of GI is to decrease β while retaining good resolution, so as to reduce the burden of data acquisition and produce better imaging results. However, the resulting incomplete sampling usually makes the GI reconstruction ill-posed. Thus, suitable prior assumptions are needed to compensate for the missing information.

One popular approach is based on compressive sensing (CS). CS uses sparsity as a general prior assumption and has become a popular signal-reconstruction framework15-17. It has been widely used in various imaging systems such as single-pixel cameras11 and compressive holography18. Specifically, given the measurements y, the CS technique usually reconstructs the object x by iteratively solving a sparsity-regularized optimization problem.

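In its standard form, this problem reads

$$\hat{x}=\arg\min_{x}\ \|y-\Phi x\|_{2}^{2}+\xi\,\|\Psi x\|_{1}$$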
where Φ is the random measurement matrix and Ψ is the transformation matrix that transforms x into a sparse domain such as the discrete cosine transform (DCT) or wavelet domain. Ψx represents the corresponding transform coefficients, regularized by the l1 norm with regularization parameter ξ. Owing to the sparsity of the object image and the randomness of the illumination patterns, CS is also well suited to GI reconstruction. Such GI using a sparsity constraint, or GISC for short, enables the reconstruction of high-quality and high-resolution images when β < 100%7,9,13,19,20 [Fig. 1f]. In the field of GI, CS has been used for resolution enhancement21-23, remote sensing10, 3D imaging24, and many other applications7,8,19. However, it remains challenging for GISC to perform well when β is below the Cramér-Rao bound16,17,22.

An alternative but increasingly important approach is deep learning, which is based on data-driven priors25-27. Specifically, it has been shown to allow robust GI reconstruction of high-quality images even when β < 10%, with high computational efficiency28-31. Such GI based on deep learning (GIDL) uses a deep neural network (DNN) to learn a mapping from a large number of input-output data pairs. The experimental acquisition of such a huge training set is time consuming and laborious, because one needs at least thousands of measurements for each data pair, even for a 64×64 image in a proof-of-principle experiment. Although the neural network can be trained on simulated data30, the trained model only works well for the reconstruction of objects that resemble those in the training set. This generalization problem is one of the major issues that remains to be addressed.

Recently, Ulyanov et al.32 proposed the deep image prior (DIP) framework, which uses an untrained neural network as a constraint for image-processing tasks such as denoising, inpainting, and super-resolution. They demonstrated that a properly designed generator network architecture itself has an implicit bias towards natural images and thus can be used for solving ill-posed inverse problems33. The most significant advantage of DIP is that a generator network can be used without training beforehand, eliminating the need for tens of thousands of labeled data pairs. A similar concept has also been used in computational imaging, such as phase retrieval34,35, CS36,37, and diffraction tomography38.

Inspired by the idea of DIP, here we propose a new GI technique that incorporates the physical model of GI image formation into a DNN. We hypothesize that the image prior introduced by an untrained DNN can be exploited to achieve better GI reconstruction at a much lower β. We term the proposed technique GI using a Deep neural network Constraint (GIDC). It utilizes an untrained DNN to generate high-quality and high-resolution results. The only inputs it requires are a 1D bucket-signal sequence I, from which the image is to be reconstructed, together with the associated stack of illumination patterns H, both of which are readily available in a typical GI system [Fig. 1a]. The proposed GIDC technique proceeds as follows. First, we correlate H [Fig. 1b, top] and I [Fig. 1b, bottom] by differential ghost imaging (DGI)39,40 to obtain a rough reconstruction of the image. Second, we feed the resulting DGI reconstruction into a randomly initialized (untrained) neural network. Third, we take the output of the neural network as an estimate of a high-quality GI image and use the GI image-formation model to calculate the corresponding bucket signal. Finally, we update the weights of the neural network to minimize the error between the measured and estimated bucket signals [Fig. 1c]. As this error decreases [Fig. 1d], the output of the neural network converges to a good-quality image [Fig. 1e]. Compared with conventional DGI and GISC [Fig. 1f], the proposed strategy dramatically increases the quality and resolution of GI at a much lower sampling ratio β. Compared with state-of-the-art deep-learning-based methods, GIDC does not need to be trained on any labeled data and is therefore more flexible and not biased towards a specific data distribution. Specifically, our contributions include:

    Fig.1 Overview of GIDC.

● We demonstrate that GIDC can reconstruct GI images with dramatically higher SNR at a very low sampling ratio β.

● We demonstrate that GIDC can enhance the resolution of the reconstructed image even when the speckle grain size is large, suggesting its potential to break the diffraction limit.

● We perform a comparative study on a number of challenging real-world scenarios, including a flying drone and synthesized datasets, and demonstrate that GIDC outperforms other widespread GI methods, including DGI, GISC, and GIDL.

    Results

    Sampling ratio

We built a typical pseudothermal GI system [Fig. 2a] for data acquisition. Here we show the reconstruction results of different objects using different methods at different sampling ratios. The first group of results is plotted in Fig. 2b. One can clearly see that all the binary objects have been successfully reconstructed by GIDC, with the number of measurements as low as 256 (β = 6.25%). We also use DGI39,40 and GISC13 for comparison. For all the cases (different objects and β settings), GIDC outperforms DGI and GISC both in terms of visual appearance and the quantitative evaluation metric (SSIM). We observe the same behavior when the object is in grayscale [Fig. 2c]. One can clearly see the clean and high-contrast images reconstructed by GIDC, whereas the ones recovered by DGI and GISC are noisy or even severely corrupted, in particular when β is low (see the first two columns in Fig. 2c).

We also conducted an outdoor experiment to demonstrate the effectiveness of GIDC. The data were acquired using a homemade GI LiDAR system41. The imaging target [Fig. 2d, top] is a flying drone (DJI Phantom 4) hovering in the air, 50 m away from the GI LiDAR system. The main results are plotted in Fig. 2d. One can clearly see that GIDC successfully reconstructs the shape of the drone with very high contrast. The size of the reconstructed image is 128×128, so the sampling ratio is β = 9000/16,384 ≈ 55%. The images reconstructed by DGI and GISC are plotted as well for comparison. One can see that they are corrupted by noise and have low contrast.

    Resolution

We also experimentally demonstrated the spatial resolution that GIDC can offer. It is known that, as an imaging method based on the second-order (intensity) correlation of light, the spatial resolution of GI is theoretically limited by the width of the mutual correlation function of the illumination speckle patterns, measured at the object plane42. Accordingly, we first calculated the normalized correlation function [Fig. 3a] of the recorded speckle patterns43.

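A standard definition, consistent with the description here, is

$$g^{(2)}(\Delta x,\Delta y)=\frac{\langle H(x,y)\,H(x+\Delta x,\,y+\Delta y)\rangle}{\langle H(x,y)\rangle\,\langle H(x+\Delta x,\,y+\Delta y)\rangle}$$

where H denotes the recorded speckle intensity and 〈·〉 the average over the recorded patterns.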
Then, we measured the full-width at half-maximum (FWHM) to estimate the speckle grain size on the object plane. We found that it occupies 7 binned pixels in both the horizontal and vertical directions [Fig. 3b, c], suggesting that the diffraction limit of our experimental GI system is 683.59 μm (7 binned pixels of 97.66 μm each). More details about the configuration of the GI system can be found in the section "Methods and Materials."

    Fig.3 Experiment results for USAF resolution target.

A USAF resolution target was used to test the resolution of different GI reconstruction methods. The main results are plotted in Fig. 3d-i. As expected, the image reconstructed by DGI shows that the elements in Group 0, Element 5 are not resolvable, because the linewidth (629.96 μm) is smaller than the diffraction limit (683.59 μm). The result improves somewhat with GISC, where some elements with linewidths smaller than the diffraction limit (Group 1, Element 1, 500 μm) can be distinguished. Evidently, the proposed GIDC has the best performance in terms of both the linewidth and the sharpness exhibited in the reconstructed image. As shown in Fig. 3g-i, the line pairs in Group 1, Element 4, with a linewidth of 353.55 μm, can be successfully reconstructed by GIDC, whereas neither DGI nor GISC achieves the same performance. This suggests that the proposed GIDC is capable of enhancing the resolution by a factor of about 2 (683.59/353.55 = 1.93) with respect to the diffraction limit. More evidence can be found in Fig. 3j. In addition to the resolution advantage, the image reconstructed by GIDC has much higher contrast, as evidenced by its clean background.

    Discussion

In this section, we discuss the performance of GIDC in more depth, in comparison with DGI and GISC. GIDL models trained on two different datasets are also considered. For the sake of quantitative evaluation, we use simulated data in this section.

    Accuracy

Two different β settings were studied here, i.e., β = 12.5% and β = 25%, corresponding to M = 512 and 1024 measurements, respectively. The results are shown in Fig. 4. Apparently, the images reconstructed by GIDC have the best fidelity for all the sample objects studied here. In the case of β = 12.5%, we observed that the reconstructed grayscale images are not as good as the reconstructed binary images, even with GIDC. This is probably because a grayscale image contains too much unknown information to be determined, and it seems infeasible to achieve a good reconstruction at such a small sampling ratio. However, the reconstructed images are much better when β = 25%, which is consistent with the optical experimental results shown in Fig. 2. In order to quantitatively evaluate the results obtained by different methods, we calculated the SSIM value of each reconstructed image with respect to the corresponding ground truth. The SSIM values are listed in Table 1. It is clearly seen that GIDC has the highest values in most of the cases, suggesting that the reconstruction accuracy of GIDC outperforms the others. The performance of GIDL, however, depends strongly on the training set and the task at hand. For instance, relatively good performance can be achieved when GIDL trained on MNIST is used to reconstruct binary characters. In contrast, the reconstructed images are severely corrupted in the grayscale cases owing to limited generalization. Although this can be slightly relieved by training on an alternative dataset such as Cifar1044, doing so degrades the accuracy of the reconstructed binary-character images, as suggested by the results shown in the fifth and tenth columns in Fig. 4. By contrast, GIDC is a general method that can be used to reconstruct different types of objects, usually with high accuracy and at a low β.

Fig. 4 Comparison of different GI reconstruction methods under different sampling ratios.

Table 1 SSIM values of different GI reconstruction methods when β = 512/4096 = 12.5% and β = 1024/4096 = 25%

    Resolution

Here we analyze the experimental result on resolution enhancement shown in the section "Resolution." First, we compare the resolution of the images reconstructed by GIDC and other widespread GI algorithms from the same set of simulated data. We generated five groups of illumination speckles [Fig. 5a1-a5] with the grain size xs = λz/D varying from 3 to 11 μm [Fig. 5b1-b5] to encode the object, which was the triple-slit pattern shown in Fig. 5f. We set β = 410/4096 ≈ 10%. We found that DGI cannot distinguish the slits well when xs > 5 μm. As expected, GISC can enhance the resolution. As evidenced in Fig. 5d3, the slit pattern can still be recognized when xs is as large as 7 μm. However, GISC fails when xs ≥ 9 μm. By contrast, the proposed GIDC reconstructs an almost perfect image under the same conditions. Even when xs is as large as 11 μm, GIDC still provides a very good result [Fig. 5e5]. The cross-sections of the reconstructed images for xs = 7 μm and xs = 11 μm are plotted in Fig. 5g, h, respectively. From these results, one can clearly conclude that the proposed GIDC provides a dramatic resolution enhancement compared with DGI and GISC, highly consistent with the experimental results presented in Fig. 3.

    Fig.5 Comparison of GI resolution using different reconstruction algorithms.

Note that in studies of phase imaging using untrained neural networks34,35, we did not observe such a resolution-enhancement phenomenon, so it must be related to the imaging modality of GI. GI possesses three unique features in comparison with phase imaging. First, the object is illuminated by a random beam. Second, the light scattered from the object is recorded with a bucket detector. Third, GI relies on the second-order correlation of the light field45, whereas phase imaging relies on the first order. Each realization of the random illumination can shift some of the high-spatial-frequency components to lower bands46. This means that the associated information beyond the diffraction limit can be efficiently encoded and transmitted to the detector. A similar concept has been introduced in microscopy to achieve super-resolution as well47. In the case of GI, however, decoding those high-frequency components is not trivial, because they are highly compressed in the 1D bucket signal. Indeed, as shown in Fig. 5, none of the widespread GI algorithms accomplishes this. In contrast, GIDC seeks a feasible solution that can reproduce the acquired bucket signal, and such a solution has to contain the high-frequency components encoded in the bucket signal in order to decrease the loss function.

    Robustness

Robustness is evaluated by examining the effect of detection noise on the reconstructed image. There are different kinds of noise in the detection process48, but, as a whole, the noise can be modeled as additive Gaussian noise with standard deviation δ20,49. Thus, one can define the detection SNR (dSNR)50

to describe the degradation of the detected signal. Two cases were examined in our studies. In the first case, we fixed the dSNR at 26 dB and examined the reconstructed images under different sampling ratios. In the second, we fixed the sampling ratio at 60% and varied the dSNR. In this analysis, eight standard grayscale images (Supplementary Fig. S1) were used as the targets. We again used the SSIM to measure the quality of the images reconstructed from the contaminated bucket signals. The results are plotted in Supplementary Fig. S2; one can clearly see that GIDC has the best performance among the three methods, in particular when the noise level is high. For DGI, the SSIM value of the reconstructed image increases linearly with the sampling ratio β, as the SNR of the reconstructed image is proportional to the number of measurements39,40. In addition, we observed that the average SSIM of DGI stays around 0.46 when β = 60%, regardless of the noise level; this noise independence is highly consistent with the theoretical prediction39,40. On the contrary, GISC is more sensitive to the detection noise15, as its SSIM drops from 0.862 to 0.544 when the dSNR is decreased from 30 to 22 dB. Some visualization results can be found in Supplementary Fig. S3.

    Priors

The effect of priors is also examined here. Two types of priors were used in GIDC: the physical prior, i.e., DGI, and total variation (TV) regularization. Here we analyze the effect of DGI and TV independently and in combination. When either DGI or TV is not used, the associated SSIM values are plotted as the green and turquoise bars, respectively, in Supplementary Fig. S2. One can see that, in all cases, the SSIM values are slightly lower than those associated with the full GIDC (orange). This suggests that the use of these priors does contribute to the quality of the reconstructed image. This can be seen more clearly from the yellow bars, which correspond to the case in which neither prior was used. But even in this case the reconstructed image is still far better than the one obtained from DGI alone, suggesting that the GIDC framework is robust. Some visualization results can be found in Supplementary Fig. S3.

    Computational efficiency

It is necessary to compare the computational time of the different approaches. Different image sizes were considered with β set to 6.25%. Compared with DGI and GISC, GIDC provides the best results in terms of both visual appearance [Supplementary Fig. S4a] and quantitative metrics [Supplementary Fig. S4c] under all pixel-resolution settings. However, as shown in Supplementary Fig. S4b, GIDC needs the longest time to optimize. For a 128×128 image, it needs about 5 min to restore a feasible result, while DGI and GISC need only 0.221 s and 12.29 s, respectively. Thus, neither GISC nor our GIDC is suitable for real-time applications, at least on the current computing platform. Nevertheless, for applications that allow offline post-processing but require fast data acquisition, GIDC yields the highest image fidelity at the lowest sampling ratio: the reconstructed image has an SSIM value of 0.9 even when β is as low as 6.25%. We also noticed that the computational time increases dramatically with the image size. There are two main reasons for this. First, the width of the network increases accordingly to accept the larger image as its input, process it, and produce an output, so each forward pass during an iteration takes more time. Second, the measurement matrix H used to generate the estimated bucket signal becomes larger, so it takes more time to calculate the gradients and update the network parameters.

There are several strategies that one can adopt to improve the computational efficiency of GIDC. These include a better design of the neural network architecture, the use of depth-wise convolutions51, and the employment of better initialization52 and learning53 strategies. In addition, from a practical point of view, implementing GIDC on a faster computing platform, together with hardware speedup using multiple GPUs, will also significantly increase the computational efficiency.

    Methods and materials

    Formation of the reconstruction algorithm

For an object O(xt, yt), the measurements of the pseudothermal GI system are the 1D bucket signal

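$$I_{m}=\iint H_{m}(x_{t},y_{t})\,O(x_{t},y_{t})\,\mathrm{d}x_{t}\,\mathrm{d}y_{t}$$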
measured by a single-pixel detector in the test arm, and the corresponding stack of random illumination patterns Hm(xt, yt), where m = 1, 2, …, M, measured by a high-resolution camera in the reference arm. The conventional GI algorithm reconstructs the object image by computing the intensity correlation between Hm and Im,

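$$O_{\mathrm{GI}}(x_{t},y_{t})=\big\langle\big(I_{m}-\langle I_{m}\rangle\big)\,H_{m}(x_{t},y_{t})\big\rangle$$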
where 〈·〉 denotes the ensemble average, approximated in practice by the mean over the M measurements, e.g., 〈Im〉 ≈ (1/M)∑m Im, and similarly for 〈Hm〉.

For DGI39,40, the bucket signal is normalized using the illumination patterns so as to improve the SNR.

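A common form of the resulting DGI estimator is

$$O_{\mathrm{DGI}}(x_{t},y_{t})=\Big\langle\Big(I_{m}-\frac{\langle I_{m}\rangle}{\langle S_{m}\rangle}\,S_{m}\Big)\,H_{m}(x_{t},y_{t})\Big\rangle,\qquad S_{m}=\iint H_{m}(x_{t},y_{t})\,\mathrm{d}x_{t}\,\mathrm{d}y_{t}$$

where Sm is the total intensity of the m-th illumination pattern.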
For the proposed GIDC, the reconstruction of the object image is formulated as the minimization of a model-based objective function.

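In the notation used here, this objective can be written as

$$\theta^{*}=\arg\min_{\theta\in\Theta}\ \big\|H\,R_{\theta}(O_{\mathrm{DGI}})-I\big\|_{2}^{2}$$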
where Rθ is the DNN defined by a set of weights and biases θ ∈ Θ. The goal of GIDC is to find a good configuration θ* ∈ Θ for the neural network that forces its output OGIDC = Rθ*(ODGI) to produce, according to the GI image-formation physics (Eq. 4), a 1D sequence that resembles the experimentally acquired bucket signal I. As this is an ill-posed problem, especially when M ≪ N, there are in principle an infinite number of configurations that satisfy the objective function. Therefore, it is necessary to add prior information about the object so as to select a feasible solution from all these configurations. For example, in GISC, the prior is the assumption that the object is sparse in a certain domain. Different from GISC, the proposed GIDC is based on an untrained DNN prior. Although the theory for this has yet to be perfected, existing works have empirically suggested that a properly designed DNN with randomly initialized weights has an inherent bias toward natural images32,34-38. We thus hypothesize that the DNN prior can be used to solve the ill-posed problem described by Eq. (7). We also argue that adding a conventional regularization term such as TV38 to the GIDC framework helps improve the reconstruction results. The final objective (loss) function of GIDC is therefore reformulated accordingly.

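With the TV term included, the loss becomes

$$\theta^{*}=\arg\min_{\theta\in\Theta}\ \big\|H\,R_{\theta}(O_{\mathrm{DGI}})-I\big\|_{2}^{2}+\xi\,\mathrm{T}\big(R_{\theta}(O_{\mathrm{DGI}})\big)$$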
where T(·) stands for the TV regularizer and ξ is its strength.

For comparison, it is worth pointing out that GIDL uses a DNN as well, but it attempts to learn the mapping function Rθ from a large number of labeled data pairs in the training set ST = {(OkDGI, Ok) | k = 1, 2, …, K}.

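The corresponding training objective is typically of the form

$$\theta^{*}=\arg\min_{\theta}\ \sum_{k=1}^{K}\big\|R_{\theta}\big(O_{k}^{\mathrm{DGI}}\big)-O_{k}\big\|_{2}^{2}$$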
GIDL learns to map low-quality reconstructed images to high-quality ones from the statistics of the training set ST. Once trained, the neural network can be used directly to reconstruct objects that are similar to those in ST.

By contrast, GIDC learns the mapping function by updating the weights and biases θ of the neural network to minimize the model-based fidelity term, which can be seen as an interplay between the GI physical model H and the DNN Rθ. In this way, one can obtain a feasible solution OGIDC = Rθ*(ODGI) without using any training data. That is to say, GIDC is an untrained method and is not biased toward any particular dataset. We note that the input of the neural network used in GIDC can be a coarse image recovered by any conventional GI algorithm20,34,39,40,46, or even random noise32,35,38; here we use the result of DGI for convenience.

Network architecture and hyperparameters

The network architecture we employed in this work was derived from the U-net54. More details of the network structure are provided in Supplementary Fig. S5. We adopted the Adam optimizer with a learning rate of α = 0.05, β1 = 0.5, β2 = 0.9, and ε = 10^-9 to update the weights of the neural network. We also used an exponential learning-rate decay with a decay rate of 0.9 and decay steps of 100. The momentum and epsilon parameters of the batch normalization were 0.99 and 0.001, respectively. The leak parameter of the Leaky ReLU was 0.2. The regularization parameter of the TV was 10^-10. The code was run on a computer with an Intel Xeon CPU E5-2696 V3, 64 GB RAM, and an NVIDIA Quadro P6000 GPU. The main procedure is illustrated in Algorithm 1. For the sake of comparison, we use the same network model for GIDC and GIDL. We have also released our code at https://github.com/FeiWang0824/GIDC.

[Algorithm 1: the GIDC reconstruction procedure]
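To make the procedure concrete, the following minimal PyTorch sketch shows one possible implementation of the GIDC loop. It is illustrative only: the small convolutional network, the function name gidc_reconstruct, the number of iterations, and the tensor shapes are assumptions made for this example (the actual method uses the U-net of Supplementary Fig. S5 and the schedule described above), whereas the Adam settings (lr = 0.05, β1 = 0.5, β2 = 0.9) and the TV weight ξ = 10^-10 follow the values quoted above. Please refer to the released code for the authors' implementation.

```python
import torch
import torch.nn as nn

def gidc_reconstruct(H, I, O_dgi, n_iter=2000, lr=0.05, xi=1e-10):
    """Illustrative GIDC loop (not the authors' released code).

    H     : (M, N) stack of flattened illumination patterns.
    I     : (M,)   measured bucket signal.
    O_dgi : (1, 1, h, w) coarse DGI reconstruction used as network input,
            with h * w == N.
    """
    # Randomly initialized (untrained) network; a small stand-in for R_theta.
    net = nn.Sequential(
        nn.Conv2d(1, 16, 3, padding=1), nn.LeakyReLU(0.2),
        nn.Conv2d(16, 16, 3, padding=1), nn.LeakyReLU(0.2),
        nn.Conv2d(16, 1, 3, padding=1), nn.Sigmoid(),
    )
    opt = torch.optim.Adam(net.parameters(), lr=lr, betas=(0.5, 0.9))
    for _ in range(n_iter):
        O_est = net(O_dgi)                    # current image estimate R_theta(O_DGI)
        I_est = H @ O_est.flatten()           # GI forward model: estimated bucket signal
        tv = (O_est.diff(dim=-1).abs().sum()  # anisotropic total-variation regularizer
              + O_est.diff(dim=-2).abs().sum())
        loss = ((I_est - I) ** 2).mean() + xi * tv
        opt.zero_grad()
        loss.backward()
        opt.step()
    return net(O_dgi).detach().squeeze()      # reconstructed image O_GIDC
```

The key point is that no training data appear anywhere: the only supervision is the measured bucket signal I, compared against the physical model H applied to the network output.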

    Experimental details

Figure 2a presents the optical system we built for the experimental demonstration. Lt, Lr, and Lc are lenses with focal lengths of 136.8, 30, and 75 mm, respectively. Detector 1 works as a single-pixel detector, whereas detector 2 is a high-resolution camera. The light source is a solid-state pulsed laser with a λ = 532 nm center wavelength and a 10 ns pulse width at a repetition rate of 1 kHz. The pulsed beam emitted from the laser irradiates an RGG to produce pseudothermal light. The beam diameter on the RGG is D and can be adjusted by an optical stop (stop1). The distance between the RGG and the other optical stop (stop2) is about z = 180 mm. Stop2 is a square with a side length of 5 mm. Owing to the RGG, a fully developed speckle field is formed at the plane of stop2. The speckle field is then divided by a beam splitter into a test arm and a reference arm. In the test arm, we use an imaging lens Lt to project the speckle field at the stop2 plane onto the surface of the object. The side length of the objects is about L = 25 mm (the magnification of Lt is 5). The transmitted light is collected by a lens Lc (Nikon AF-S NIKKOR 85 mm f/1.4G) and finally recorded by a single-pixel detector (in our experiment, we actually used an AVT F504B camera to record the transmitted intensity and generated the bucket signal by summing all the pixel values). In the reference arm, we use an image detector (detector 2; AVT F504B, with a pixel size ps of 3.45 μm) mounted with an imaging lens Lr to take a high-resolution image of the speckle pattern at the stop2 plane.

Three different types of objects were used to test the GIDC performance: transparent slides of various binary characters, a grayscale natural scene (a film exposed with the standard test image "house"), and a physical USAF resolution chart. The spatial resolution of our GI system can be adjusted by changing D (with z fixed) through xs = λz/D. In addition, different pixel resolutions N can be obtained by setting a different resize factor q, such that the binned pixel size is q²ps. We set the resize factor to 10.64, 7.52, and 5.32 for the binary characters, the grayscale object, and the USAF resolution chart, giving pixel resolutions of 64, 128, and 256, respectively. The corresponding binned pixel sizes are 390.63, 195.31, and 97.66 μm. For the USAF resolution chart experiment, we set D = 0.70 mm, yielding a spatial resolution of 683.59 μm (683.59/97.66 = 7 binned pixels).
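As a consistency check, the speckle grain size expected on the object plane from these settings, including the 5× magnification of Lt, is

$$x_{s}^{\mathrm{obj}}=5\times\frac{\lambda z}{D}=5\times\frac{532\ \mathrm{nm}\times 180\ \mathrm{mm}}{0.70\ \mathrm{mm}}\approx 5\times 136.8\ \mu\mathrm{m}\approx 684\ \mu\mathrm{m}$$

which agrees with the 683.59 μm (7 binned pixels) quoted above.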

    Acknowledgements

We would like to thank Z. Tong (Shanghai Institute of Optics and Fine Mechanics) for discussion and helpful comments. This work was supported by the National Natural Science Foundation of China (61991452, 62061136005), the Key Research Program of Frontier Sciences of the Chinese Academy of Sciences (QYZDB-SSW-JSC002), and the Sino-German Center (GZ1391).

    Author details

1Shanghai Institute of Optics and Fine Mechanics, Chinese Academy of Sciences, Shanghai 201800, China. 2Center of Materials Science and Optoelectronics Engineering, University of Chinese Academy of Sciences, Beijing 100049, China. 3Hangzhou Institute for Advanced Study, University of Chinese Academy of Sciences, Hangzhou 310024, China. 4CAS Center for Excellence in Ultra-intense Laser Science, Shanghai 201800, China

    Author contributions

G.S. and F.W. conceived the idea. F.W. developed the procedures and designed, performed, and analyzed most of the data. C.W. performed most of the experiments together with F.W.; C.W., M.C., and W.G. conducted the outdoor experiment. F.W. and G.S. wrote the manuscript. All authors discussed and commented on the manuscript. G.S. supervised the project.

    Conflict of interest

    The authors declare no competing interests.

Supplementary information The online version contains supplementary material available at https://doi.org/10.1038/s41377-021-00680-w.
