
    A machine learning framework for low-field NMR data processing

Petroleum Science, 2022, Issue 2

Si-Hui Luo, Li-Zhi Xiao *, Yan Jin, Guang-Zhi Liao, Bin-Sen Xu, Jun Zhou b, Can Liang

a College of Artificial Intelligence and College of Petroleum Engineering, China University of Petroleum, Beijing, 102249, China

b China National Logging Corporation, Xi'an, Shaanxi, 710076, China

c Changzhou Institute of Technology, Changzhou, Jiangsu, 213000, China

Keywords: Dictionary learning; Low-field NMR; Denoising; Data processing; T2 distribution

ABSTRACT Low-field nuclear magnetic resonance (NMR) has been widely used in the petroleum industry, for example in well logging and laboratory rock core analysis. However, the signal-to-noise ratio is low because of the low magnetic field strength of NMR tools and the complex petrophysical properties of the detected samples. Suppressing the noise and highlighting the available NMR signals is very important for subsequent data processing. Most denoising methods are based on fixed mathematical transformations or hand-designed feature selectors to suppress noise characteristics, and they may not perform well because they cannot adapt to different noisy signals. In this paper, we propose a "data processing framework" based on dictionary learning to improve the quality of low-field NMR echo data. Dictionary learning is a machine learning method based on redundancy and sparse representation theory. Available information in noisy NMR echo data can be adaptively extracted and reconstructed by dictionary learning. The advantages and effectiveness of the proposed method were verified with a number of numerical simulations, NMR core data analyses, and NMR logging data processing. The results show that dictionary learning can significantly improve the quality of NMR echo data with high noise levels and effectively improve the accuracy and reliability of the inversion results.

1. Introduction

As a golden tool that can directly detect and reveal the dynamics of fluid molecules in rock porous media, nuclear magnetic resonance (NMR) technology can quantitatively identify fluid components and precisely provide petrophysical parameters such as pore structure, fluid saturation, permeability, wettability and viscosity (Coates et al., 1999; Liang et al., 2019; Liu et al., 2019; Deng et al., 2014; Jia et al., 2016; Xiao et al., 2013). It is therefore very helpful for the evaluation of oil and gas reservoirs. As attention gradually shifts to unconventional and complex reservoirs, NMR wireline, LWD (logging while drilling) and core analysis techniques are becoming increasingly important and practical for the evaluation of complex and tight reservoirs (Xiao et al., 2015; Luo et al., 2019, 2020), such as shale and ultra-deep tight sandstone reservoirs (Song and Kausik, 2019).

However, the signal-to-noise ratio (SNR) of data detected by low-field NMR instruments is normally low, which affects subsequent NMR data processing and interpretation. Two reasons can be identified from recent publications. First, NMR instruments employ the static magnetic field generated by a permanent magnet to polarize the formation at certain depths of investigation (DOIs) (Liao et al., 2021). This magnetic field is extremely weak and varies with the surrounding environment, resulting in low signal amplitude. Second, unconventional reservoirs have low porosity and permeability, which also leads to low acquired signal amplitude (Xie et al., 2015; Song and Kausik, 2019). When the SNR of the data is low, it is necessary to increase the number of averages during acquisition to meet the requirements of NMR data processing and interpretation.

Low-field NMR technology adopts the CPMG pulse sequence, based on relaxation and diffusion mechanisms (Carr and Purcell, 1954; Meiboom and Gill, 1958), to accurately measure the formation. By collecting echo data and inverting it with inverse Laplace transform (ILT) methods, the distributions of one-dimensional (1D) or two-dimensional (2D) parameters of formation fluids, such as T1, T2, T1-T2 and D-T2, can be obtained (Xie and Xiao, 2011; Song et al., 2002; Hürlimann and Venkataramanan, 2002; Sun and Dunn, 2002). The acquisition of these parameters is the premise of NMR interpretation for oil and gas reservoirs. Generally, the signal response equation of 1D or 2D NMR can be cast as a Fredholm integral equation of the first kind, which is an ill-conditioned problem without a referable analytical solution. At present, most inversion research is based on singular value decomposition (SVD) and Butler-Reeds-Dawson (BRD) methods (Prammer, 1994; Butler et al., 1981). However, a small disturbance from noise in the echo data will cause the inversion results to deviate. Therefore, various inversion methods constrained by penalty terms have been developed (Zou et al., 2016); by choosing regularization parameters and their weights, the solutions can be kept smooth and sparse simultaneously, in other words, the stability and resolution of the inversion results can be ensured (Guo et al., 2018, 2019). In addition, using machine learning methods to directly invert the echo data, suppress the uncertainty of the numerical solution and improve the resolution of 1D or 2D distributions is a new research direction in NMR data processing (Parasram et al., 2021; Wang et al., 2017). However, the artificial neural network method (Parasram et al., 2021) needs a large number of labeled data sets (at least 400,000 groups), generated from forward T2 distribution models, to train the model. In addition, the sparse Bayesian learning method (Wang et al., 2017) requires prior knowledge in the form of an initial inversion of the echo data to provide the overlapping information of the 2D distribution.

Although various inversion methods have emerged, the noise in NMR signals still seriously affects the accuracy of the inversion results. How to improve data quality, i.e., suppressing noise while highlighting signals, is the fundamental issue in obtaining more desirable NMR processing results. To this end, it is necessary to effectively suppress the noise introduced by the instrument and the surrounding environment and thereby improve the SNR of the raw echo data. Many meaningful works on low-field NMR data denoising have been published. In the early stage, some researchers adopted time-domain filtering (Edwards and Chen, 1996) and SVD methods (Lin and Hwang, 1993; Chen et al., 1994) to suppress noise, by directly filtering out the noise and by removing the smaller eigenvalues that represent the noise in the eigenvalue matrix, respectively. These methods lead to strong uncertainty in the inversion results. Subsequently, mathematical transformation methods based on multi-scale wavelets (Mallat and Hwang, 1992) began to be used for processing NMR echo data with low signal-to-noise ratio. This is feasible because wavelets can better fit the signal distribution under noise disturbance and extract the signal characteristics of the raw echo data. A number of works based on the discrete wavelet transform (DWT) followed. Zheng and Zhang (2007) proposed a spatial correlation threshold denoising method based on the non-decimated wavelet transform. Wu et al. (2011) used a wavelet-domain threshold method to denoise during digital phase signal detection. Xie et al. (2014) compared three wavelet-based denoising methods for NMR echo data and showed that the wavelet threshold method can yield better denoised results and more accurate formation porosity. Meng et al. (2015) proposed an adaptive wavelet packet threshold denoising algorithm for NMR logging data and verified its effectiveness under low signal-to-noise ratio conditions. Ge et al. (2015) proposed a particle swarm optimization (PSO) based method to improve the performance of a combination of wavelet thresholding and time-domain exponentially weighted moving averaging, which reduced the noise of echo data and achieved good inversion results. Recently, methods based on mathematical morphology (Gao et al., 2020) and on the combination of variable window sampling and the discrete cosine transform (DCT) (Gu et al., 2021) have also been used to denoise raw echo data, and they can effectively suppress the noise amplitude to some degree.

In the past decades, sparse representation theory has been widely used in compression, inversion, feature extraction and image noise reduction (Starck et al., 2010; Ahmed and Fahmy, 2001). The basic idea of sparse representation is to use a dictionary (a mathematical transformation matrix) and a small number of non-zero coefficients to represent the signal. Noise is random and cannot be sparsely represented, so the available signals can be highlighted. For the aforementioned DWT and DCT methods, the noise can be removed directly by eliminating the small coefficients containing noise characteristics. If most of the coefficients of the noise are zero, or only a few values close to zero exist, better denoising results can be achieved by filtering out the smaller coefficients. The coefficients with large absolute values retain the most effective information in the raw echo data. However, fixed mathematical transformations, such as the DCT (Gu et al., 2021) and DWT (Xie et al., 2014; Meng et al., 2015), cannot effectively represent all signals because of their fixed analytic form. For different types of NMR signals, the selection of the threshold is highly subjective. When the noise is greater than the signal, fixed mathematical transformation methods cannot fully represent the signal characteristics. This may result in the mistaken elimination of sparse coefficients that represent the real echo data during the denoising procedure, reducing the accuracy and reliability of the subsequent inversion. If an appropriate transformation matrix or dictionary can be constructed adaptively according to the characteristics of the NMR data, better denoising effects can be produced and the quality of the raw data can be improved by preserving the sparse characteristics of the signal and eliminating redundant noise characteristics.

Dictionary learning (DL) is a machine learning method that can extract signal characteristics from raw echo data through a self-adaptive, adjustable dictionary learned from the noisy signals. It therefore achieves a stronger sparse representation ability than fixed mathematical transformations. Dictionary learning has been widely used for seismic data (Beckouche and Ma, 2014) and noisy image recovery (Xu et al., 2017; Tošić and Frossard, 2011), but rarely applied to low-field NMR data processing. Therefore, in this paper, we adopt dictionary learning and propose a "data processing framework" that adaptively learns the signal and noise characteristics from the raw NMR echo data. Sparse representation and dictionary updating are performed with orthogonal matching pursuit (OMP) and K-SVD, respectively. The difference in sparse characteristics between signal and noise is exploited and the data are finally reconstructed to improve the quality of the NMR echo data. We also applied dictionary learning to NMR core analysis and well logging data, and verified its advantages for low-field NMR data processing. The numerical simulations, core analysis and well logging data processing results show that dictionary learning can further improve the inversion accuracy, reliability and spectral resolution at low SNRs. We believe that dictionary learning can play an important and practical role in NMR core analysis and downhole NMR data processing in the future.

2. Principle and method

2.1. Response equation for NMR measurement

We first consider 1D NMR data and conduct numerical experiments based on dictionary learning. A 1D NMR measurement usually refers to the measurement of the longitudinal relaxation time T1 or the transverse relaxation time T2 of fluid-saturated rocks, from which important reservoir parameters such as pore structure, fluid saturation and permeability can be obtained (Coates et al., 1999). T1 is usually measured by the "inversion recovery" or "saturation recovery" method, and T2 is measured by the CPMG (Carr-Purcell-Meiboom-Gill) pulse sequence. When the polarization time is sufficient, the general response equation of a one-dimensional T1 or T2 measurement can be written as:
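(Reconstructed here from the description of the coefficients c1 and c2 below; Equation (1) is presumably of the form)

$$b(t)=\int f\left(T_i\right)\left[c_1-c_2\,e^{-t/T_i}\right]\mathrm{d}T_i+\varepsilon(t) \qquad (1)$$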

In Equation (1), i = 1, 2. When i = 1, it refers to the measurement of T1: if c1 = 1 and c2 = 1, the "saturation recovery" method is used; if c1 = 1 and c2 = 2, the "inversion recovery" method is used. When i = 2, it refers to the measurement of T2, with c1 = 0 and c2 = -1. The discrete form of Equation (1) is
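(again a reconstruction, consistent with the symbols defined below)

$$b_k=\sum_{j=1}^{n} f\left(T_{i,j}\right)\left[c_1-c_2\,e^{-t_k/T_{i,j}}\right]+\varepsilon_k,\qquad k=1,2,\ldots,m \qquad (2)$$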

In Equation (2), j = 1, 2, …, n, where n is the number of preselected relaxation time components; k = 1, 2, …, m, where m is the number of echoes; tk is the acquisition time (usually an integer multiple of the echo spacing); bk is the echo signal amplitude; Ti,j is the j-th preselected relaxation time component of Ti; εk is the measured noise, including instrument background noise and external electromagnetic noise; and f(Ti,j) is the amplitude of the relaxation time Ti,j.

Since the T2 measurement is mainly used in practical applications, this paper only considers spin echo data constructed from T2 distributions.

2.2. Dictionary learning

Dictionary learning (DL) is a machine learning method based on sparse representation theory that obtains an overcomplete dictionary and a sparse representation of the signal from a given training set. Assuming the signals can be sparsely represented by a few atoms of an overcomplete dictionary (as shown in Fig. 1), DL frees us from the limitation of selecting a fixed mathematical transformation basis (a fixed dictionary) and can adaptively and accurately capture the characteristics of the current data.

Dictionary learning mainly contains two procedures: sparse representation and dictionary atom updating. With sufficient iterations of these two procedures, we can adaptively obtain redundant dictionaries representing the data characteristics. The general procedure is described below (a short code sketch follows the steps):

Fig. 1. Schematic of sparse representation of signals with a dictionary. Signals can be approximately represented by a dictionary and the corresponding sparse coefficients. The procedure for obtaining the coefficients is sparse representation, and the procedure for updating the atoms is the dictionary update. Dictionary learning consists of these two procedures.

Step 1. Input the training signal data x, the initial dictionary D0, the number of iterations N and the error threshold ε.

Step 2. Initialize the residual error r0 = x, give the initial dictionary D0 (composed of signal samples or a fixed transformation basis), and set the iteration counter t = 0.

Step 3. For the given dictionary, obtain the sparse representation αk of the training signal x (sparse representation).

Step 4. For the training signal x and its sparse representation αk, update each atom dk of the dictionary D column by column to obtain the updated dictionary Dt (dictionary updating).

Step 5. Check whether the number of iterations t exceeds N or whether D meets the stopping criteria. If so, stop the iteration and output the final dictionary D; otherwise, return to Step 3.
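As an illustration of the alternating loop above, the following Python sketch learns a redundant dictionary from a matrix of vectorized patches with scikit-learn. Note that scikit-learn's DictionaryLearning updates the dictionary with its built-in coordinate-descent/LARS solver rather than the K-SVD update used in this paper, so it only mirrors the overall procedure; the training matrix and parameter values are hypothetical.

```python
# Minimal sketch of the dictionary-learning loop (sparse coding + dictionary update).
import numpy as np
from sklearn.decomposition import DictionaryLearning

rng = np.random.default_rng(0)
patches = rng.standard_normal((200, 36))    # hypothetical training set: 200 vectorized 6x6 patches

dl = DictionaryLearning(
    n_components=144,                       # redundant dictionary: 144 atoms of length 36
    transform_algorithm="omp",              # the sparse-coding step uses OMP
    transform_n_nonzero_coefs=10,           # sparsity constraint T
    max_iter=30,                            # number of outer iterations N
    random_state=0,
)
codes = dl.fit_transform(patches)           # alternates sparse coding and atom updates
dictionary = dl.components_                 # shape (144, 36): learned atoms as rows
print(dictionary.shape, codes.shape)
```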

2.2.1. Sparse representation

According to sparse representation theory, the sparse representation of a signal can be expressed as a constrained optimization problem (Starck et al., 2010; Elad and Aharon, 2006) as follows:
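(Reconstructed here following the standard formulation of Elad and Aharon (2006); Equations (3) and (4) are presumably)

$$\hat{\alpha}=\arg\min_{\alpha}\ \|\alpha\|_0 \quad \text{s.t.}\quad \|x-D\alpha\|_2^2\le\varepsilon \qquad (3)$$

$$\hat{\alpha}=\arg\min_{\alpha}\ \|x-D\alpha\|_2^2 \quad \text{s.t.}\quad \|\alpha\|_0\le T \qquad (4)$$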

Equation (3) is an optimization problem based on an error constraint, and Equation (4) is an optimization problem based on a sparsity constraint; the two are equivalent to each other. D is the dictionary, α is the sparse coefficient vector, x is the original signal, ε is the error threshold, and T is the sparsity, i.e., the number of non-zero sparse coefficients. Sparse representation is an NP-hard problem, which can be solved approximately by algorithms such as basis pursuit (BP) (Chen et al., 2001) and orthogonal matching pursuit (OMP) (Pati et al., 1993). The OMP algorithm is widely used in sparse representation, data compression and reconstruction because of its good reconstruction quality and fast running speed. OMP is a typical greedy algorithm (Pati et al., 1993); its general procedure is as follows:
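A minimal numpy sketch of the greedy procedure: at each iteration the atom most correlated with the residual is selected, all selected atoms are re-fitted to the signal by least squares (the "Step 5" referred to below), and the residual is updated. The dictionary and signal in the usage example are hypothetical.

```python
import numpy as np

def omp(D, x, n_nonzero, tol=1e-6):
    """Greedy orthogonal matching pursuit with normalized dictionary columns."""
    residual = x.copy()
    support = []
    coef = np.zeros(D.shape[1])
    for _ in range(n_nonzero):
        # Select the atom most correlated with the current residual.
        k = int(np.argmax(np.abs(D.T @ residual)))
        if k not in support:
            support.append(k)
        # "Step 5": least-squares re-fit of x on all selected atoms.
        sub = D[:, support]
        sol, *_ = np.linalg.lstsq(sub, x, rcond=None)
        residual = x - sub @ sol
        if np.linalg.norm(residual) <= tol:   # error-constrained stopping rule
            break
    coef[support] = sol
    return coef

# Hypothetical usage: recover a 3-sparse signal over a random 36 x 144 dictionary.
rng = np.random.default_rng(1)
D = rng.standard_normal((36, 144))
D /= np.linalg.norm(D, axis=0)
x = D[:, [3, 50, 99]] @ np.array([1.0, -0.5, 2.0])
alpha = omp(D, x, n_nonzero=3)
```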

In Step 5, the operation is slow because of the least-squares inversion. Therefore, the "Improved Batch OMP" method (Rubinstein et al., 2008) can be adopted, which replaces the explicit least-squares inversion of the conventional OMP algorithm with a progressive Cholesky decomposition; this greatly accelerates the algorithm while preserving the solution accuracy.

2.2.2. Dictionary updating

Dictionary updating starts after the sparse representation. The sparse vector α or sparse matrix A obtained from the sparse representation is used to update the atoms dk of dictionary D. The objective function of the dictionary update is:
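(Reconstructed here following the standard K-SVD formulation, consistent with the symbols defined below)

$$\|X-DA\|_F^2=\Big\|\Big(X-\sum_{k\neq j} d_k\alpha_k\Big)-d_j\alpha_j\Big\|_F^2=\big\|E_j-d_j\alpha_j\big\|_F^2$$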

where X is the signal vector or matrix to be processed, dj and dk are the j-th and k-th atoms of dictionary D, and αj and αk are the coefficients associated with the j-th and k-th atoms in the sparse coefficient matrix A, respectively. E is the corresponding error matrix, and Ej is the error matrix computed without the contribution of atom dj.

Generally, the SVD method is used to decompose the error matrix, and the corresponding expression is:
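(presumably of the form, following the standard K-SVD derivation, with Ej restricted in practice to the signals whose representations actually use atom dj)

$$E_j = U\,\Lambda\,V^{\mathrm{T}}$$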

The first column of the matrix U obtained from the decomposition is taken as the new atom dj. The first row of V, multiplied by the first (largest) singular value of the non-negative diagonal matrix Λ, gives the updated sparse coefficients αj corresponding to atom dj. One iteration is complete when all atoms have been updated once.

The sparse representation and dictionary update procedures are repeated until the number of iterations reaches K or the iteration error meets the preselected value ε, yielding the final dictionary and sparse coefficients. Because an SVD decomposition is required in each of the K iterations, this method is also called K-SVD dictionary learning (Aharon et al., 2006).
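A minimal numpy sketch of the column-by-column atom update described above (the dictionary D, sparse coefficient matrix A and signal matrix X are assumed given; all names are hypothetical):

```python
import numpy as np

def ksvd_update_atom(D, A, X, j):
    """Rank-one K-SVD update of atom j: build the error matrix without atom j's
    contribution, restrict it to the signals that use atom j, and take the
    leading singular vectors as the new atom and its coefficients."""
    used = np.nonzero(A[j, :])[0]          # signals whose representation uses atom j
    if used.size == 0:
        return D, A                        # unused atom: leave it unchanged
    # E_j = X - sum_{k != j} d_k * alpha_k, restricted to the columns in `used`
    E = X[:, used] - D @ A[:, used] + np.outer(D[:, j], A[j, used])
    U, S, Vt = np.linalg.svd(E, full_matrices=False)
    D[:, j] = U[:, 0]                      # new atom: first left singular vector
    A[j, used] = S[0] * Vt[0, :]           # updated coefficients for atom j
    return D, A

# Hypothetical usage: update every atom once (one K-SVD iteration).
rng = np.random.default_rng(2)
X = rng.standard_normal((36, 200))
D = rng.standard_normal((36, 144)); D /= np.linalg.norm(D, axis=0)
A = rng.standard_normal((144, 200)) * (rng.random((144, 200)) < 0.05)   # sparse codes
for j in range(D.shape[1]):
    D, A = ksvd_update_atom(D, A, X, j)
```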

2.3. Denoising with dictionary learning

The NMR echo data can be regarded as an aperiodic, one-sided time-domain signal that follows a multi-exponential decay law. The energy of the NMR signal is mainly concentrated in the early part of the signal: the faster the early decay, the more short relaxation components are present, while the slowly decaying tail indicates long relaxation components, as shown in Fig. 2. Fig. 2(b) and (c) show the sparse representation of NMR echo data in the DCT domain and the DWT domain (taking the "db4" wavelet as an example). It can be seen that the DCT coefficients of the echo signal are mainly concentrated in the low-frequency range, while the coefficients in the high-frequency range mainly represent noise. The DWT coefficients are calculated and concatenated according to the wavelet decomposition order. The noise can hardly be sparsely represented in either the DCT or the DWT domain, as shown in Fig. 2(d). Therefore, NMR echo data have good sparsity, and the coefficients of the noise can be eliminated by soft and hard thresholding to suppress the noise in the echo data (Gu et al., 2021; Xie et al., 2014; Meng et al., 2015). However, denoising methods based on fixed mathematical transformations cannot adapt to the different types of NMR echo data detected from different samples, because they use a fixed basis matrix for the sparse representation. They can suppress the noise to a certain extent, but their ability to improve the data quality and the corresponding inversion accuracy may be limited.

NMR echo data with random noise can be expressed as y = x + n, where y is the noisy echo data, x is the noiseless echo data, and n is random noise, generally thermal noise with standard deviation σ.

1D NMR echo data are normally acquired with the CPMG pulse sequence, and adjacent echoes are correlated. In order to fully extract the characteristics of the NMR signal and to ensure a high-quality redundant dictionary and a sufficient sparse representation, the noisy data y can be sampled into patches. One can either directly extract 1D patches from y (as shown in Fig. 3(a)), or first arrange the raw data y into an l×k matrix and then use a 2D patch operation (as shown in Fig. 3(b)). After selecting a patch size of n×m (n, m ≪ l, k), the 1D patches can be used directly to construct the training set; the 2D patches are first reshaped into column vectors, like the 1D patches, and then used to construct the training set. Note that the overlap between neighboring patches should be maximized for the best results. Optionally, 70% of the patches, or all of the patches, can be employed as the training set for dictionary learning.
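A small numpy sketch of the two patch-sampling schemes (the echo train, matrix shape and patch size below are hypothetical):

```python
import numpy as np

def patches_1d(y, patch_len, step=1):
    """Sliding 1-D windows over the echo train (maximum overlap when step=1)."""
    idx = np.arange(0, y.size - patch_len + 1, step)
    return np.stack([y[i:i + patch_len] for i in idx])          # (n_patches, patch_len)

def patches_2d(y, shape2d, patch, step=1):
    """Reshape the echo train into an l x k matrix, extract overlapping
    patch x patch blocks, and flatten each block into a vector."""
    M = y.reshape(shape2d)
    l, k = M.shape
    blocks = [M[i:i + patch, j:j + patch].ravel()
              for i in range(0, l - patch + 1, step)
              for j in range(0, k - patch + 1, step)]
    return np.stack(blocks)                                      # (n_patches, patch*patch)

# Hypothetical echo train with 2500 echoes, as in the simulations below.
echoes = np.random.default_rng(3).standard_normal(2500)
train_set = patches_2d(echoes, (50, 50), patch=6)                # 2025 patches of length 36
```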

If the denoised data X can be fully described through the sparse representation of each patch, the denoising problem for a complete natural signal can be expressed as (Elad and Aharon, 2006):
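(Reconstructed here following Elad and Aharon (2006), consistent with the term-by-term description in the next paragraph; Equation (8) and the per-patch sub-problem referred to as Equation (9) are presumably)

$$\{\hat{\alpha}_{ij},\hat{X}\}=\arg\min_{\alpha_{ij},\,X}\ \lambda\|X-Y\|_2^2+\sum_{ij}\mu_{ij}\|\alpha_{ij}\|_0+\sum_{ij}\|D\alpha_{ij}-R_{ij}X\|_2^2 \qquad (8)$$

$$\hat{\alpha}_{ij}=\arg\min_{\alpha}\ \mu_{ij}\|\alpha\|_0+\|D\alpha-R_{ij}X\|_2^2 \qquad (9)$$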

Here, λ is a constraint factor related to the standard deviation σ of the noise. μij is a control factor on the residual error of each local patch; it obeys a constraint of the form ‖Dαij − RijX‖² ≤ T, can be handled implicitly during the sparse representation, and is therefore neglected. Rij denotes the patch-selection operator. The first term of Equation (8) enforces the proximity between the noisy data Y and the denoised data X. The second term ensures the sparsity of the coefficients when solving the objective function. The third term enforces the proximity between each patch RijX and its sparse representation Dαij. The latter two terms jointly ensure that only a limited error is introduced by the sparse representation of each patch RijX in the denoised result.

The sparse representation in Equation (9) can be performed with the orthogonal matching pursuit (OMP) algorithm to obtain the sparse coefficients αij of the local patches.

Finally, when the dictionary D and the sparse coefficients αij of the local patches have been learned and calculated, Equation (8) can be rewritten as Equation (10):
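(A form consistent with the description that follows, again following Elad and Aharon (2006), is presumably)

$$\hat{X}=\arg\min_{X}\ \lambda\|X-Y\|_2^2+\sum_{ij}\|D\hat{\alpha}_{ij}-R_{ij}X\|_2^2 \qquad (10)$$

whose closed-form solution is

$$\hat{X}=\Big(\lambda I+\sum_{ij}R_{ij}^{\mathrm{T}}R_{ij}\Big)^{-1}\Big(\lambda Y+\sum_{ij}R_{ij}^{\mathrm{T}}D\hat{\alpha}_{ij}\Big) \qquad (11)$$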

Fig. 2. Sparsity of NMR echo data in the DCT and DWT domains. (a) Noisy and noiseless echo data acquired by a low-field NMR tool; (b) DWT and DCT coefficient distributions of the noiseless echo data; (c) DWT and DCT coefficient distributions of the noisy echo data; (d) DWT and DCT coefficient distributions of the noise.

Fig. 3. Schematic of the local patch operation for dictionary learning.

The numerical solution of Equation (10) can be written as Equation (11). The purpose of Equation (11) is to average the overlapping data points and finally reconstruct the data to obtain the denoised result (Elad and Aharon, 2006). The scalar λ is a regularization parameter that provides a weighted average, which mainly affects the local patches with less overlap. If λ = 0, no noisy signal is averaged into the denoised result; however, a perfectly noiseless reconstruction is not possible. In our 1D case, Equation (11) represents the average over each point of the NMR signal, because overlapping 1D patches are used for dictionary learning. For the reconstruction of the denoised NMR signal in Equation (11), we select λ = max(echo data)/(10σ), which is self-adaptive for denoising NMR echo data rather than a fixed value. This choice reflects the fact that different types of NMR echo data have different noise levels and signal amplitudes: with increasing noise level, a smaller λ gives better denoised results, and vice versa.
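For the 1-D patch case, Equation (11) reduces to a point-wise weighted average, since λI + Σ RijᵀRij is diagonal. A minimal numpy sketch, assuming the patch coefficients `codes` and their start indices `starts` come from the OMP step above (all names are hypothetical):

```python
import numpy as np

def reconstruct_1d(y, D, codes, starts, patch_len, sigma):
    """Average the overlapping denoised patches back onto the echo train and
    blend in the noisy data with the adaptive weight lam = max(y) / (10 * sigma),
    following the diagonal closed form of Eq. (11) for the 1-D case."""
    lam = np.max(y) / (10.0 * sigma)
    num = lam * y.copy()                   # lam * Y term
    den = lam * np.ones_like(y)            # lam * I term
    for s, a in zip(starts, codes):        # sum over patches of R_ij^T (D alpha_ij)
        num[s:s + patch_len] += D @ a      # D has patch_len rows
        den[s:s + patch_len] += 1.0
    return num / den                       # element-wise solve of the diagonal system
```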

The framework for improving the quality of NMR data by denoising with dictionary learning is shown in Fig. 4.

3. Numerical simulations

3.1. Forward model

Fig. 4. Schematic of the data processing framework for improving NMR data quality using dictionary learning.

Fig. 5. Simulated echo data and corresponding inverted T2 distributions with SNRs of 6, 10, 15 and 20, respectively.

In order to verify the advantages of dictionary learning for improving NMR echo data, we constructed a bimodal T2 distribution, as shown in Fig. 5(a). Micro/nano pores are mainly developed in shale and tight sandstone reservoir rocks, and NMR needs to measure fluid properties and effectively distinguish fluid components such as high-viscosity organic matter, bound water and movable oil. Therefore, the T2 values of the two fluid components are assumed to be 10 ms and 150 ms, respectively. Under the noiseless condition, we set the total porosity to 10 and the proportions of the two components to 6.5 and 3.5, respectively. Considering that the shortest echo spacing of downhole NMR instruments employed in tight and complex shale/sandstone reservoir evaluation is 0.2 ms (Song and Kausik, 2019), the number of echoes is set to 2500 and the total acquisition time to 500 ms to ensure sufficient decay of the NMR echo data.

In the numerical simulation, Gaussian white noise is added to the noiseless echo data of the above model, and the SNR of the noisy echo data is set to 6, 10, 15 and 20, respectively. The BRD inversion method is used to obtain the T2 distribution (Butler et al., 1981), and the S-curve method (Zou et al., 2016) is used to select the regularization factor. For comparison, we fixed the range of the regularization factor to 0.01-10 and adaptively obtained a satisfactory smoothing factor with the S-curve algorithm. Fig. 5 shows the echo data and the corresponding inversion results at different noise levels. The porosity values are 11.62 p.u. (SNR = 6), 10.26 p.u. (SNR = 10), 10.42 p.u. (SNR = 15) and 9.89 p.u. (SNR = 20), respectively. The root mean square errors (RMSE) between the inverted T2 distributions and the forward model are 0.057 p.u. (SNR = 6), 0.039 p.u. (SNR = 10), 0.040 p.u. (SNR = 15) and 0.027 p.u. (SNR = 20), respectively. It can be seen that the inversion results deviate from the desired values under low-SNR conditions. Next, we use the dictionary learning method to process the synthetic noisy echo data. The first step is to adaptively obtain the most suitable dictionary, which can characterize NMR data with different noise levels. Subsequently, the well-learned dictionary is used to suppress noise and improve the resolution of the T2 spectrum and the accuracy of the porosity estimation.
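A minimal sketch of the forward model used here: a bimodal T2 distribution (10 ms and 150 ms, amplitudes 6.5 and 3.5 p.u.), a 0.2 ms echo spacing and 2500 echoes, with Gaussian noise added at a target SNR. The SNR convention (maximum signal amplitude divided by the noise standard deviation) is an assumption, since the exact definition is not stated in the text.

```python
import numpy as np

TE, n_echo = 0.2e-3, 2500                       # echo spacing 0.2 ms, 500 ms echo train
t = TE * np.arange(1, n_echo + 1)
T2 = np.array([10e-3, 150e-3])                  # relaxation times in seconds
f = np.array([6.5, 3.5])                        # component amplitudes, total 10 p.u.

# Multi-exponential CPMG decay, matching Eq. (2) with c1 = 0, c2 = -1.
clean = np.sum(f[None, :] * np.exp(-t[:, None] / T2[None, :]), axis=1)

def add_noise(x, snr, rng=np.random.default_rng(0)):
    """Add white Gaussian noise so that max(signal)/std(noise) equals snr."""
    sigma = np.max(np.abs(x)) / snr
    return x + rng.normal(0.0, sigma, x.size), sigma

noisy, sigma = add_noise(clean, snr=10)
```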

3.2. Parameter settings for DL

Prior to dictionary learning, we need to build a training set. The training set can be composed of noiseless or noisy NMR echo data. However, in practice we cannot know the characteristics of the noiseless NMR data in advance, so we use the noisy NMR echo data directly to train the dictionary. As mentioned above, in order to fully extract the characteristics of the NMR echo data, we extract maximally overlapping 2D patches directly from the NMR echo data. The size of each patch is n×n and the sampling step size is 1. If the NMR signal is arranged into a 2D sampling matrix of size N×K (as shown in Fig. 3(b)), the maximum number of patches is:
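(For an N × K matrix sampled with n × n patches at a step size of 1, Equation (12) is presumably the standard count)

$$P_{\max}=(N-n+1)(K-n+1) \qquad (12)$$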

Fig. 6. Denoised results after dictionary learning processing. (a-1) to (a-5) show the processed echo data with SNR = 6; (b-1) to (b-5) with SNR = 10; (c-1) to (c-5) with SNR = 15; (d-1) to (d-5) with SNR = 20. The panels in the third and fourth rows display the echo data as 50 × 50 images to facilitate comparison.

Fig. 7. SNR of the denoised echo data and RMSE for different patch sizes and different input SNRs. (a) SNR = 9; (b) SNR = 15; (c) SNR = 20.

Fig. 8. Frequency statistics of the calculated porosity and root-mean-square error (RMSE) of the T2 distribution. Echo data with three noise levels were synthesized, inverted and evaluated in 1000 repeated numerical simulations.

The training set can be randomly selected from the extracted patches. According to Equation (12), all samples in the training set are selected for dictionary learning. The initial dictionary (a matrix) can be constructed from any fixed analytical transformation basis, or it can be built from randomly selected training signals. Since the dictionary is redundant, the number of atoms (the number of columns of the dictionary) must be greater than the dimension of an atom (the number of rows of the dictionary), and the atom dimension must equal the dimension of the column vectors converted from the patches. Therefore, the size of the dictionary is n' × (q × n'), where n' is the length of the vectorized patch and q > 1.

The OMP algorithm can be used for both sparse representation and noise suppression, since the noise is not sparse and is filtered out when computing the residual. Therefore, we use sparse representation with a sparsity constraint when learning from noiseless data, with sparsity T = 10 and calculation error ε = 1 × 10⁻⁶. When learning from noisy data, we use the error constraint for the sparse representation, and the calculation error needs to satisfy (Beckouche and Ma, 2014):
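(A form consistent with the description below, following the error-constrained sparse coding of Elad and Aharon (2006), is presumably)

$$\|R_{ij}y - D\alpha_{ij}\|_2 \le \varepsilon = C\,\sigma\sqrt{n_p} \qquad (13)$$

where n_p is the number of samples in each vectorized patch.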

Here C = 1.15 is a noise gain factor, an empirical value derived from a large number of image tests that makes the sparse representation more stable (Elad and Aharon, 2006); σ is the noise standard deviation of the NMR echo data, and the noise standard deviation of each patch is assumed to be approximately the same; n is the dimension of each patch. Equation (13) expresses the sparse representation of each local patch of the noisy signal.

In the simulation work, we directly use the NMR echo data with different noise levels (SNR = 20, 15, 10 and 6) for dictionary learning. For NMR data with different noise levels, different patch dimensions better ensure the effectiveness and reliability of learning. For these SNRs we select patch sizes of 5 × 5, 6 × 6, 6 × 6 and 7 × 7, respectively, and set the dictionary dimensions to 25 × 100, 36 × 144, 36 × 144 and 49 × 196, respectively. The number of dictionary learning iterations is 30. The details are given in the simulation results below.

3.3. Simulation results

Fig. 6 shows the echo data processing results at different SNRs. After dictionary learning and denoising, BRD inversion is performed to obtain the T2 distribution. The panels in the first row of Fig. 6 show the echo data and the residual signal before and after noise suppression (residual signal = y_original − y_denoised, where y_original is the noiseless signal and y_denoised is the denoised signal). The second row shows the dictionaries of different sizes learned from the different noisy data sets; the third row shows the noisy data (converted into 50 × 50 maps for comparison); the fourth row shows the data after denoising; and the fifth row shows the inversion results of the denoised echo data. It can be seen from Fig. 6 that the characteristics of NMR echo data with different noise levels can be extracted adaptively, as indicated by the different atom patterns. After denoising, the SNR of the NMR data is increased by at least a factor of three.

However, the noise is difficult to suppress in the first few echoes, because the energy of the NMR echo data is mainly concentrated there. As can be seen from the first row of Fig. 6, with increasing noise level the fluctuation of the residual signal becomes stronger within the first 100 ms of the NMR echo data, as also described by Gu et al. (2021). Instead of setting the first few echoes to zero to avoid their energy loss during denoising, we directly use the well-trained dictionary to suppress the noise. The T2 distribution of the denoised echo data has higher resolution and accuracy than that of the data before denoising. This is owed to the sufficient learning from local patches of the signal, which provides a more robust dictionary and thus a more reliable noise suppression and signal reconstruction.

Fig. 9. Comparison of T2 distributions at different numbers of repetitions for six tight core samples. Echo data with TR = 256 and 512 are used for the standard T2 inversion as a reference. Dictionary learning is employed on the NMR echo data with TR = 16, 32 and 64.

The selection of the patch size is very important for dictionary learning, as demonstrated in Fig. 7. The blue curves in Fig. 7 show the improved SNRs of the denoised echo data after dictionary learning and denoising. The orange curves show the RMSE (defined in Equation (15)), which measures the inversion accuracy, i.e., the similarity between the inversion results of the denoised echo data and the forward model. Interestingly, a higher SNR of the denoised echo data does not necessarily increase the accuracy of the inversion results: if a higher SNR is pursued, a larger patch size will eliminate part of the available echo signal and produce over-smoothed denoised results. This indicates that a reasonable selection of the patch size gives a more accurate solution. In addition, the optimal patch size is no more than 6 when the SNR is larger than 9. For SNR lower than 9, the optimal patch size fluctuates (roughly from 5 to 13) because of the strong randomness of the noise, which is not shown here. In the later applications, we therefore select a patch size of 7 as a compromise when the SNR is lower than 9.

Next, we consider the effect of the randomness of the noise on the inversion results. Dictionary learning ensures the stability of NMR data processing under different noise levels, which can be demonstrated by obtaining the T2 distribution with BRD inversion. Therefore, we conducted 1000 repeated tests on echo data with random noise, before and after dictionary learning processing, with three steps in each test: dictionary learning, denoising and inversion. The total porosity φ = 10 and the forward T2 distribution are used as references, and the deviation and the RMSE with respect to the standard T2 distribution are evaluated.

The total porosity φ and the RMSE are defined respectively as:
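(Reconstructed here from the symbol definitions below; Equations (14) and (15) are presumably)

$$\phi=\sum_{i=1}^{N} f_{\mathrm{inv}}\left(T_{2i}\right) \qquad (14)$$

$$\mathrm{RMSE}=\sqrt{\frac{1}{N}\sum_{i=1}^{N}\left[f_{\mathrm{inv}}\left(T_{2i}\right)-f_{\mathrm{model}}\left(T_{2i}\right)\right]^2} \qquad (15)$$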

where N is the number of preset distribution points used in the BRD inversion, N = 128; f_inv(T2i) is the inverted amplitude at the i-th T2 distribution point; and f_model(T2i) is the amplitude of the forward bimodal model at the corresponding T2 value.

Fig. 8 shows the inversion results for echo data at different noise levels (blue histograms represent the original noisy echo data, and orange histograms represent the echo data after dictionary learning processing). A total of 1000 repeated tests were conducted and the statistics compiled. The statistical results show that the porosities calculated from the noisy echo data and from the denoised echo data both obey a normal distribution; however, only the porosity distribution of the echo data processed by dictionary learning is centered on the standard porosity (φ = 10). The porosity remains somewhat random because of the noise affecting the first few echoes, and is obtained more accurately as the SNR increases, whereas the T2 distributions after processing are close to the model even at low SNR. This indicates that more effective information can be retained adaptively after dictionary learning. The overall statistical results are shown in Table 1; the porosity values φ and the RMSE values are averages over the 1000 repeated tests.

Table 2. Amplitudes calculated from NMR core analysis data before and after employing DL, compared with results obtained with higher numbers of repetitions.

Table 3. Parameters of dictionary learning for different numbers of repetitions.

4. Application of DL to NMR rock core analysis and well logging data

In practice, the raw NMR echo data contain several types of noise, such as antenna noise, electronic circuit noise and possible external electromagnetic harmonic noise. Generally, it is necessary to increase the number of repetitions to reduce the noise level. We first conducted core experiments to verify the advantages of dictionary learning in noise suppression. A 2 MHz NMR core analyzer (Magritek, NZ) was used to acquire NMR echo data from different types of tight rock cores. The echo spacing is 0.2 ms, the number of echoes is 2500, and the acquisition time is 500 ms for each scan. NMR measurements were conducted on three shale and three tight sandstone samples. As shown in Fig. 9, cores 1-3 are shale samples and cores 4-6 are tight sandstone samples. The shale samples are dry samples with some residual oil in the pores, while the tight sandstone samples were saturated with brine and then centrifuged. In the core analysis experiments we do not focus on the porosity of the cores, but on the improvement of the quality of low-SNR echo data achieved by dictionary learning. We compared the areas of the T2 distributions inverted from the noisy and from the denoised echo data; the unit shown is the amplitude of the measured voltage (μV), as given in Table 2. The parameters of the dictionary learning are listed in Table 3. For all experiments, the SNR at 16 repetitions is larger than 8, so we select a patch size of 6 following the guidance from the simulations.

Fig. 10. Processed results of NMR logging data for Well 1 using the proposed denoising method. The T2 distributions after DL denoising exhibit more obvious bimodal peaks.

Fig. 9 shows the T2 distributions obtained from the core samples by NMR experiments. For each core, CPMG measurements with 16, 32 and 64 repetitions were conducted. Dictionary learning and denoising were performed after each measurement, and the BRD method was then used for inversion. The range of regularization parameters used in the inversion is 0.01-10, and the regularization parameter with the smallest inversion residual norm is selected adaptively. In order to quantify the improvement, we use the echo data acquired with 256 and 512 repetitions as a comparison group to verify the inverted results after dictionary learning; the high number of repetitions produces echo data of high quality and therefore more precise inversion results. It can be seen from Fig. 9 that the echo data of the shale and sandstone samples measured with 16, 32 and 64 repetitions are clearly improved after dictionary learning and denoising, indicating that the characteristics of the echo data are well represented by the trained dictionary and the noise is greatly suppressed. After inversion, the T2 distributions obtained from the reconstructed echo data have better resolution, the peak positions become more accurate, and the estimates of the T2 distribution area are effectively improved, as shown in Table 2 (when the signal amplitude is calibrated into porosity, the area under the T2 distribution is the total porosity). However, it should be noted that, because of the randomness of the noise, the noise level of each measurement is different. Even though the noise can be suppressed by dictionary learning, the sparse representation is still only an approximation of the original signal, and the echo data disturbed by noise cannot be completely restored, which leads to a certain deviation of the inversion results compared with the echo data acquired with higher repetitions. Furthermore, small signal features may be eliminated during the denoising process within the tolerated error, so that certain components with small amplitude vanish. This is an interesting issue that will be studied in the future. In general, dictionary learning has a good ability to improve the quality of echo data under different noise conditions, as well as to improve the accuracy and resolution of the inversion results obtained with commonly used inversion methods.

We also applied dictionary learning to improve the quality of NMR logging data, as shown in Fig. 10. Well 1 is an interval of a tight oil sandstone reservoir, logged with a CMR instrument (Schlumberger). The first track shows the depth; a formation interval of 30 m is selected as an example. The second track includes the gamma ray (GR), spontaneous potential (SP) and caliper (CAL) curves. The borehole is filled with brine-water mud, as indicated by the SP curve. The third track shows the raw echo data of the NMR logging; the echo number is 1800 per depth point and the echo data are calibrated into porosity units. The fourth track shows the denoised echo data processed with dictionary learning. The fifth and sixth tracks show the T2 distributions inverted from the raw data and the denoised data, respectively, using the BRD method. The seventh track shows the porosity calculated by different methods: the black curve is obtained from the raw echo data in the third track; the red curve is obtained from the echo data after dictionary learning and denoising in the fourth track; the bottle-green curve is obtained from conventional neutron-density logging; and the blue dots represent the core porosity from gas measurements. The last track shows the averaged number of atoms used for the sparse representation of the NMR echo data; the larger this number, the more coefficients are needed to represent the echo data. The SNR of the NMR measurements ranges from 6 to 12 even though common-depth-point (CDP) stacking of 7 times is applied. We use a variable scheme for the selection of the patch size to meet the requirements of echo data with different SNRs: a patch size of 7 is selected for NMR data with SNR lower than 9, and a patch size of 6 for data with SNR larger than 9. A sparsity T of 15 and 30 iterations are used as the dictionary learning parameters. Dictionary learning and denoising take only a few seconds for each echo train. In the inversion, the regularization factor is again set within the range 0.01-10. From the T2 distributions it can be seen that the NMR echo data after dictionary learning show good resolution. The porosities calculated from the noisy and the denoised echo data are almost the same, which indicates that no available signal was eliminated during denoising. The energy of the first few echoes is well preserved and the noise is suppressed over the whole echo train. The trend of the NMR porosity curve is similar and close to the neutron-density porosity curve within the tolerated error. The good agreement between the neutron-density porosity curve and the core porosity points indicates the accuracy of the well logging. Finally, the variation of the averaged number of atoms demonstrates the adaptivity of dictionary learning to different types of NMR echo signals; a small number of atoms indicates effective suppression of the noise.

5. Conclusions

In this paper, we explored the feasibility of employing dictionary learning and proposed a "data processing framework" to improve the quality of low-field NMR echo data. Dictionary learning is a machine learning method based on sparse representation theory. With dictionary learning, useful information in noisy NMR echo data can be adaptively extracted and reconstructed, further improving the quality of the raw echo data and the accuracy and stability of the inversion results. We verified the advantages and application effects of the dictionary learning method with numerical simulations and applied it to NMR core analysis data and well logging data. The following conclusions can be drawn:

1) Dictionary learning has good adaptability to echo data at different noise levels, which is reflected by the adaptively learned dictionaries and the varying averaged number of atoms.

2) The quality of raw echo data with low SNR can be improved by dictionary learning, which further improves the accuracy and reliability of the inversion results obtained with common inversion methods. This is meaningful for rapid NMR logging and laboratory analysis, since more accurate petrophysical parameters can be obtained with fewer averages of the raw echo data.

3) The selection of the patch size is very important and its effect is worth further study, since it affects the quality of the signal reconstruction. For NMR echo data, the elimination of small signal components will also reduce the accuracy of the inversion results.

Although the quality of NMR echo data is greatly improved by dictionary learning, the conventional inversion method still operates on a noisy signal, and the uncertainty of the numerical solution still exists and cannot be eliminated completely. How to incorporate the response equation of the NMR echo data into dictionary learning, in order to further reduce the uncertainty of the inversion results caused by noise disturbance, is our next research topic.

    Acknowledgements

This paper is supported by the Science Foundation of China University of Petroleum, Beijing (Grant Number ZX20210024), the Chinese Postdoctoral Science Foundation (Grant Number 2021M700172), the Strategic Cooperation Technology Projects of CNPC and CUP (Grant Number ZLZX2020-03), and the National Natural Science Foundation of China (Grant Number 42004105).
