
    A machine learning framework for low-field NMR data processing

    Petroleum Science, 2022, Issue 2

    Si-Hui Luo, Li-Zhi Xiao *, Yan Jin, Guang-Zhi Liao, Bin-Sen Xu, Jun Zhou b, Can Liang

    a College of Artificial Intelligence and College of Petroleum Engineering,China University of Petroleum,Beijing,102249,China

    b China National Logging Corporation,Xi'an,Shaanxi 710076,China

    c Changzhou Institute of Technology,Changzhou,Jiangsu,213000,China

    Keywords: Dictionary learning; Low-field NMR; Denoising; Data processing; T2 distribution

    ABSTRACT Low-field nuclear magnetic resonance (NMR) has been widely used in the petroleum industry, for example in well logging and laboratory rock core analysis. However, the signal-to-noise ratio is low due to the low magnetic field strength of NMR tools and the complex petrophysical properties of the detected samples. Suppressing the noise while highlighting the available NMR signals is very important for subsequent data processing. Most denoising methods are based on fixed mathematical transformations or hand-designed feature selectors to suppress noise characteristics, and may not perform well because they cannot adapt to different noisy signals. In this paper, we propose a "data processing framework" to improve the quality of low-field NMR echo data based on dictionary learning. Dictionary learning is a machine learning method built on redundancy and sparse representation theory. The available information in noisy NMR echo data can be adaptively extracted and reconstructed by dictionary learning. The advantages and effectiveness of the proposed method were verified with a number of numerical simulations, NMR core data analyses, and NMR logging data processing. The results show that dictionary learning can significantly improve the quality of NMR echo data with high noise levels and effectively improve the accuracy and reliability of inversion results.

    1. Introduction

    As a golden tool that can directly detect and reveal the dynamics of fluid molecules in rock porous media, nuclear magnetic resonance (NMR) technology can quantitatively identify fluid components and precisely provide petrophysical parameters such as pore structure, fluid saturation, permeability, wettability and viscosity (Coates et al., 1999; Liang et al., 2019; Liu et al., 2019; Deng et al., 2014; Jia et al., 2016; Xiao et al., 2013). It is therefore very helpful for the evaluation of oil and gas reservoirs. With attention gradually focusing on unconventional and complex reservoirs, NMR wireline, LWD (logging while drilling) and core analysis techniques have become increasingly important and practical for evaluating complex and tight reservoirs (Xiao et al., 2015; Luo et al., 2019, 2020), such as shale and ultra-deep tight sandstone reservoirs (Song and Kausik, 2019).

    However, the signal-to-noise ratio (SNR) of data detected by low-field NMR instruments is normally low, which affects subsequent NMR data processing and interpretation. Recent publications suggest two main reasons. First, NMR instruments employ the static magnetic field generated by a permanent magnet to polarize the formation at certain depths of investigation (DOIs) (Liao et al., 2021). This magnetic field is extremely weak and varies with the surrounding environment, resulting in low signal amplitude. Second, unconventional reservoirs have low porosity and permeability, which also leads to low acquired signal amplitude (Xie et al., 2015; Song and Kausik, 2019). When the SNR of the data is low, it is necessary to increase the number of averages during acquisition to meet the requirements of NMR data processing and interpretation.

    Low-field NMR technology adopts the CPMG pulse sequence, based on relaxation and diffusion mechanisms (Carr and Purcell, 1954; Meiboom and Gill, 1958), to accurately measure the formation. By collecting echo data and inverting it with Inverse Laplace Transformation (ILT) methods, one-dimensional (1D) or two-dimensional (2D) distributions of formation-fluid parameters such as T1, T2, T1-T2 and D-T2 can be obtained (Xie and Xiao, 2011; Song et al., 2002; Hürlimann and Venkataramanan, 2002; Sun and Dunn, 2002). The acquisition of these parameters is the premise of NMR interpretation for oil and gas reservoirs. Generally, the signal response equation of 1D or 2D NMR can be attributed to a Fredholm integral equation of the first kind, which is an ill-conditioned problem without a usable analytical solution. At present, most inversion research is based on Singular Value Decomposition (SVD) and Butler-Reeds-Dawson (BRD) methods (Prammer, 1994; Butler et al., 1981). However, a small disturbance from noise in the echo data will cause deviations in the inversion results. Therefore, various inversion methods constrained by penalty terms have been developed (Zou et al., 2016), which choose regularization parameters and their weights to keep the solutions simultaneously smooth and sparse, in other words, to ensure the stability and resolution of the inversion results (Guo et al., 2018, 2019). In addition, using machine learning methods to directly invert the echo data, suppress the uncertainty of the numerical solution and improve the resolution of 1D or 2D distributions is a new research direction in NMR data processing (Parasram et al., 2021; Wang et al., 2017). However, the artificial neural network method (Parasram et al., 2021) needs a large number of labeled data sets (at least 400000 groups), generated from forward T2 distribution models, to train the model. In addition, the sparse Bayesian learning method (Wang et al., 2017) requires prior knowledge in the form of an initial inversion result of the echo data to provide the overlapping information of the 2D distribution.

    Although various inversion methods have emerged, the noise in NMR signals still seriously affects the accuracy of the inversion results. How to improve the data quality, i.e., suppressing noise while highlighting signals, is the fundamental issue in obtaining more desirable NMR processing results. To this end, it is necessary to effectively suppress the noise characteristics introduced by the instrument and the surrounding environment and thereby improve the SNR of the raw echo data. Much meaningful work on low-field NMR data denoising methods has been published. In the early stage, some researchers adopted time-domain filtering (Edwards and Chen, 1996) and SVD methods (Lin and Hwang, 1993; Chen et al., 1994) to suppress noise, by directly filtering out the noise or by removing the smaller eigenvalues that represent the noise in the eigenvalue matrix, respectively. These methods lead to strong uncertainty in the inversion results. Subsequently, mathematical transformation methods based on multi-scale wavelets (Mallat and Hwang, 1992) began to be used to process NMR echo data with low signal-to-noise ratio. This is feasible because wavelets can better fit the signal distribution under noise disturbance and extract the signal characteristics of the raw echo data. A lot of subsequent research relied on the discrete wavelet transform (DWT). Zheng and Zhang (2007) proposed a spatial correlation threshold denoising method based on the non-decimated wavelet transform. Wu et al. (2011) used a wavelet-domain threshold method for denoising during digital phase signal detection. Xie et al. (2014) compared three wavelet-based denoising methods for NMR echo data and showed that the wavelet threshold method obtains better denoising results and more accurate formation porosity. Meng et al. (2015) proposed an adaptive wavelet packet threshold denoising algorithm for NMR logging data and verified its effectiveness under low signal-to-noise conditions. Ge et al. (2015) proposed a particle swarm optimization (PSO) algorithm to tune a combined wavelet-threshold and time-domain exponentially weighted moving average method, reducing the noise of echo data and achieving good inversion results. Recently, methods based on mathematical morphology (Gao et al., 2020) and on the combination of variable window sampling and the discrete cosine transform (DCT) (Gu et al., 2021) have also been applied to denoising raw echo data, and can effectively suppress the noise amplitude to some degree.

    In the past decades, sparse representation theory has been widely used in compression, inversion, feature extraction and image noise reduction (Starck et al., 2010; Ahmed and Fahmy, 2001). The basic idea of sparse representation is to use a dictionary (a mathematical transformation matrix) and a few corresponding non-zero coefficients to represent the signal. Noise is random and cannot be sparsely represented, so the available signals can be highlighted. For the aforementioned DWT and DCT methods, the noise can be removed directly by eliminating the small coefficients containing noise characteristics. If most of the noise coefficients are zero, or only a few values close to zero exist, better denoising results can be achieved by filtering out the smaller coefficients. The coefficients with large absolute values retain the most effective information in the raw echo data. However, fixed mathematical transformations such as DCT (Gu et al., 2021) and DWT (Xie et al., 2014; Meng et al., 2015) cannot effectively represent all signals due to their fixed analytic formulas. For different types of NMR signals, the selection of the threshold is highly subjective. When the noise is greater than the signal, fixed mathematical transformation methods cannot fully represent the signal characteristics. This may result in mistaken elimination of sparse coefficients that represent the real echo data during the denoising procedure, reducing the accuracy and reliability of the subsequent inversion. If an appropriate transformation matrix or dictionary could be constructed adaptively according to the characteristics of the NMR data, it would produce better denoising effects and improve the quality of raw data by preserving the sparse characteristics of the signal and eliminating redundant noise characteristics.

    Dictionary learning (DL) is a machine learning method that can extract signal characteristics from raw echo data using a self-adaptive, adjustable dictionary learned from the noisy signals themselves. It therefore achieves a stronger sparse representation ability than fixed mathematical transformations. Dictionary learning has been widely used in seismic data processing (Beckouche and Ma, 2014) and noisy image recovery (Xu et al., 2017; Tošić and Frossard, 2011), but rarely applied to low-field NMR data processing. In this paper, we adopt dictionary learning and propose a "data processing framework" to adaptively learn the signal and noise characteristics from raw NMR echo data. Sparse representation and dictionary updating are performed alternately with Orthogonal Matching Pursuit (OMP) and K-SVD, respectively. The difference in sparse characteristics between signal and noise can then be exploited to reconstruct the data and improve the quality of the NMR echoes. We also applied dictionary learning to NMR core analysis and well logging data, and verified its advantages in low-field NMR data processing. The numerical simulations, core analysis and well logging results show that dictionary learning can further improve inversion accuracy, reliability and spectral resolution at low SNRs. We believe dictionary learning can play an important and practical role in NMR core analysis and downhole NMR data processing in the future.

    2. Principle and method

    2.1. Response equation for NMR measurement

    We first consider 1D NMR data and conduct numerical experiments based on dictionary learning. A 1D NMR measurement usually refers to measuring the longitudinal relaxation time T1 or transverse relaxation time T2 of fluid-saturated rocks, which can be used to obtain important reservoir parameters such as pore structure, fluid saturation and permeability (Coates et al., 1999). T1 is usually measured by the "Inversion Recovery" or "Saturation Recovery" method, and T2 is measured by the CPMG (Carr-Purcell-Meiboom-Gill) pulse sequence. When the polarization time is sufficient, the general response equation of a one-dimensional T1 or T2 measurement can be obtained:

    In Equation (1), i = 1, 2. When i = 1, the equation describes a T1 measurement: c1 = 1, c2 = 1 corresponds to the "Saturation Recovery" method, and c1 = 1, c2 = 2 to the "Inversion Recovery" method. When i = 2, it describes a T2 measurement, with c1 = 0, c2 = -1. The discrete form of Equation (1) is

    In Equation (2), j = 1, 2, …, n, where n is the number of preselected relaxation time components; k = 1, 2, …, m, where m is the number of echoes; t_k is the acquisition time (usually an integer multiple of the echo spacing); b_k is the echo signal amplitude; T_i,j is the j-th preselected relaxation time component of T_i; ε_k is the measured noise, including instrument background noise and external electromagnetic noise; and f(T_i,j) is the amplitude of relaxation time T_i,j.
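Equations (1) and (2) themselves did not survive the extraction. A reconstruction consistent with the surrounding definitions (c1 and c2 select the measurement mode; b_k, t_k and ε_k are the echo amplitude, acquisition time and noise) would be:

```latex
% Eq. (1): continuous 1D response; c1 = 0, c2 = -1 recovers the CPMG decay
b(t) = \int f(T_i)\,\bigl(c_1 - c_2\, e^{-t/T_i}\bigr)\,\mathrm{d}T_i + \varepsilon(t)

% Eq. (2): discrete form with m echoes and n preselected relaxation components
b_k = \sum_{j=1}^{n} f(T_{i,j})\,\bigl(c_1 - c_2\, e^{-t_k/T_{i,j}}\bigr) + \varepsilon_k,
\qquad k = 1, 2, \dots, m
```

Substituting the stated mode constants checks out: c1 = 1, c2 = 1 gives the saturation-recovery build-up 1 - e^{-t/T1}, c1 = 1, c2 = 2 gives inversion recovery, and c1 = 0, c2 = -1 gives the pure T2 decay e^{-t/T2}.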

    Since T2 measurement dominates in practical applications, this paper only considers spin echo data constructed from a T2 distribution.

    2.2. Dictionary learning

    Dictionary learning (DL) is a machine learning method based on sparse representation theory, which obtains an overcomplete dictionary and the sparse representation of a signal from a given training set. Assuming the signals can be sparsely represented by several atoms in an overcomplete dictionary (as shown in Fig. 1), DL frees us from the limitation of choosing a fixed mathematical transform basis matrix (a fixed dictionary), and can adaptively and accurately capture the characteristics of the data at hand.

    Dictionary learning mainly contains two procedures: sparse representation and dictionary atom updating. With sufficient iterations of these two procedures, we can adaptively obtain redundant dictionaries that represent the data characteristics. The general procedure is described below:

    Fig. 1. Schematic of the sparse representation of signals with a dictionary. Signals can be approximately represented using a dictionary and corresponding sparse coefficients. Obtaining the coefficients is the sparse representation step; updating the atoms is the dictionary update step. Dictionary learning consists of these two procedures.

    Step 1. Input the training signal data x, the initial dictionary D0, the number of iterations N, and the error tolerance ε.

    Step 2. Initialize the residual r0 = x, set the initial dictionary D0 (composed of signal patches or a fixed transform basis), and set the iteration counter t = 0.

    Step 3. For the given dictionary, obtain the sparse representation αk of the training signal x (sparse representation).

    Step 4. For the training signal x and its sparse representation αk, update each atom dk of the dictionary D column by column to obtain the updated dictionary Dt (dictionary updating).

    Step 5. Check whether the number of iterations t exceeds N or D meets the stopping conditions. If so, stop and output the final dictionary D; otherwise, return to Step 3.
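The five steps above can be sketched in code. This is a deliberately simplified illustration, not the paper's full OMP/K-SVD pipeline: each training signal is coded with a single best-matching atom (1-sparse coding), and each atom is then refreshed from the SVD of the signals assigned to it. The function name and the fixed iteration count are our own choices.

```python
import numpy as np

def dictionary_learning_sketch(X, n_atoms, n_iter=30, seed=0):
    """Minimal dictionary-learning loop (Steps 1-5), 1-sparse case.
    X: training signals as columns, shape (dim, n_signals)."""
    rng = np.random.default_rng(seed)
    dim, n_sig = X.shape
    # Step 2: initialize the dictionary from randomly chosen training signals
    D = X[:, rng.choice(n_sig, n_atoms, replace=False)].astype(float).copy()
    norms = np.linalg.norm(D, axis=0)
    D = D / np.where(norms > 0, norms, 1.0)
    for _ in range(n_iter):
        # Step 3: sparse representation (here, nearest-atom / 1-sparse coding)
        corr = D.T @ X
        idx = np.argmax(np.abs(corr), axis=0)
        # Step 4: update each atom from the signals it currently codes
        for k in range(n_atoms):
            users = np.nonzero(idx == k)[0]
            if users.size == 0:
                continue  # unused atom: leave as-is
            U, s, Vt = np.linalg.svd(X[:, users], full_matrices=False)
            D[:, k] = U[:, 0]  # best rank-1 direction for these signals
    # Step 5's error test is omitted; a fixed iteration count is used instead.
    corr = D.T @ X
    idx = np.argmax(np.abs(corr), axis=0)
    coef = corr[idx, np.arange(n_sig)]
    return D, D[:, idx] * coef  # dictionary and reconstructed signals
```

Because each reconstruction is an orthogonal projection onto a unit-norm atom, the residual never exceeds the signal energy, which makes the sketch easy to sanity-check.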

    2.2.1. Sparse representation

    According to sparse representation theory, the sparse representation of a signal can be expressed as a constrained optimization problem (Starck et al., 2010; Elad and Aharon, 2006) as follows:

    Equation (3) is an optimization problem based on an error constraint, and Equation (4) is an optimization problem based on a sparsity constraint; the two are equivalent. D is the dictionary, α is the sparse coefficient vector, x is the original signal, ε is the error threshold, and T is the sparsity, i.e., the number of non-zero sparse coefficients. Sparse representation is an NP-hard problem, which can be solved approximately by algorithms such as basis pursuit (BP) (Chen et al., 2001) and orthogonal matching pursuit (OMP) (Pati et al., 1993). The OMP algorithm is widely used in sparse representation, data compression and reconstruction because of its good reconstruction quality and fast running speed. OMP is a typical greedy algorithm (Pati et al., 1993); its general process is as follows:
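Equations (3) and (4) are missing from the extraction; in the standard form used by the cited works, with the l0 pseudo-norm counting non-zero entries, they plausibly read:

```latex
% Eq. (3): error-constrained form
\hat{\alpha} = \arg\min_{\alpha} \|\alpha\|_0
\quad \text{s.t.} \quad \|x - D\alpha\|_2 \le \varepsilon

% Eq. (4): sparsity-constrained form
\hat{\alpha} = \arg\min_{\alpha} \|x - D\alpha\|_2^2
\quad \text{s.t.} \quad \|\alpha\|_0 \le T
```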

    In Step 5, the operation is slow due to the matrix inversion. Therefore, the "Improved Batch OMP" method (Rubinstein et al., 2008) can be adopted, which replaces the least-squares inversion in conventional OMP with a Cholesky decomposition, greatly accelerating the algorithm while preserving solution accuracy.
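A minimal OMP sketch, using a plain least-squares re-fit rather than the Cholesky-based Batch OMP mentioned above (the function name and interface are our own; atoms are assumed l2-normalized columns of D):

```python
import numpy as np

def omp(D, x, T):
    """Orthogonal Matching Pursuit (Pati et al., 1993): greedily pick the
    atom most correlated with the residual, then jointly re-fit all selected
    atoms by least squares and update the residual. Stops after T picks."""
    residual = x.astype(float).copy()
    support = []
    coef = np.zeros(D.shape[1])
    for _ in range(T):
        # greedy selection: atom with the largest correlation to the residual
        k = int(np.argmax(np.abs(D.T @ residual)))
        if k not in support:
            support.append(k)
        # the "orthogonal" step: joint least-squares fit on the support
        sol, *_ = np.linalg.lstsq(D[:, support], x, rcond=None)
        residual = x - D[:, support] @ sol
    coef[support] = sol
    return coef
```

For an orthonormal dictionary, OMP recovers an exactly T-sparse signal in T steps, which is a convenient correctness check.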

    2.2.2. Dictionary updating

    Dictionary updating starts after the sparse representation. The sparse vector α (or sparse matrix A) obtained from the sparse representation is used to update the atoms dk of dictionary D. The objective function of dictionary updating is:

    where X is the signal vector or matrix to be processed, dk and dj are the k-th and j-th atoms of dictionary D, and αk and αj are the k-th and j-th rows of the sparse coefficient matrix A, respectively. E is the corresponding error matrix, and Ej represents the error matrix computed without the contribution of atom dj.
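The update objective and the SVD factorization referenced here are lost in extraction; following the K-SVD formulation and the definitions above, they plausibly read:

```latex
% Eq. (5)/(6): isolate atom d_j and its coefficient row \alpha_j
\|X - DA\|_F^2
= \Bigl\| \Bigl( X - \sum_{k \ne j} d_k \alpha_k \Bigr) - d_j \alpha_j \Bigr\|_F^2
= \| E_j - d_j \alpha_j \|_F^2

% SVD of the restricted error matrix used in the next step
E_j = U \Lambda V^{\mathrm{T}}
```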

    Generally, the SVD method is used to decompose the error matrix; its corresponding expression is:

    The first column of the matrix U obtained from the decomposition is selected as the new atom dj. The first row of matrix V, multiplied by the largest singular value in the diagonal matrix Λ, gives the updated sparse coefficients αj corresponding to the column dj. One iteration is complete once every atom has been updated.

    The sparse representation and dictionary update procedures are repeated until the number of iterations reaches K or the iteration error meets the preselected tolerance ε, yielding the final dictionary and sparse coefficients. Because K iterations of SVD decomposition are required, this method is also called K-SVD dictionary learning (Aharon et al., 2006).
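The single-atom K-SVD update described above can be sketched as follows. In practice the update is restricted to the signals that actually use atom j, so that the sparsity pattern of A is preserved (the function name is ours):

```python
import numpy as np

def ksvd_atom_update(D, A, X, j):
    """Update atom j of dictionary D (atoms in columns) and row j of the
    sparse-coefficient matrix A via a rank-1 SVD of the restricted error."""
    omega = np.nonzero(A[j, :])[0]       # signals that actually use atom j
    if omega.size == 0:
        return D, A                      # unused atom: nothing to update
    A[j, omega] = 0.0                    # remove atom j's contribution
    E = X[:, omega] - D @ A[:, omega]    # error matrix E_j on that support
    U, s, Vt = np.linalg.svd(E, full_matrices=False)
    D[:, j] = U[:, 0]                    # new atom: first left singular vector
    A[j, omega] = s[0] * Vt[0, :]        # updated coefficients for row j
    return D, A
```

If the restricted error is exactly rank 1, one update reproduces the data on the support exactly, which is what the test below checks.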

    2.3. Denoising with dictionary learning

    NMR echo data can be regarded as an aperiodic, unilateral time-domain signal that follows a multi-exponential decay law. The energy of the NMR signal is mainly concentrated in the front part of the signal: the faster the signal decays, the more short relaxation components are present, while the slowly decaying tail indicates long relaxation components, as shown in Fig. 2. Fig. 2(b) and (c) show the sparse representation of NMR echo data in the DCT domain and the DWT domain (taking the "db4" wavelet as an example). The DCT coefficients of the echo signal are mainly concentrated in the low-frequency range, and the high-frequency coefficients mainly represent noise. The DWT coefficients are calculated and concatenated according to the wavelet decomposition order. The noise cannot be sparsely represented in either the DCT or the DWT domain, as shown in Fig. 2(d). NMR echo data therefore have good sparsity, and the noise coefficients can be eliminated using soft or hard thresholds to suppress the noise in the echo data (Gu et al., 2021; Xie et al., 2014; Meng et al., 2015). However, denoising methods based on fixed mathematical transformations cannot adapt to the different types of NMR echo data detected from different samples, because they use a fixed basis matrix for the sparse representation. They can suppress noise to a certain extent, but their ability to improve data quality and the corresponding inversion accuracy may be limited.

    The NMR echo data with random noise can be expressed as:

    where y is the noisy echo data, x is the noiseless echo data, and n is the random noise, generally thermal noise with standard deviation σ.

    1D NMR echo data are normally acquired with the CPMG pulse sequence, and adjacent echoes are correlated. To fully extract the characteristics of the NMR signal and ensure a high-quality redundant dictionary and a sufficient sparse representation, the noisy data y can be sampled into patches. One can either obtain 1D patches directly from y (as shown in Fig. 3(a)), or first arrange the raw data y into an l × k matrix and then use a 2D patch operation (as shown in Fig. 3(b)). After selecting a patch size of n × m (n, m ≪ l, k), the 1D patches can be used directly to construct the training set; 2D patches are first flattened into column vectors and then used to construct the training set. Note that the overlap between patches should be maximized to achieve the best results. Optionally, 70% of the patches or the entire set of patches can be used as the training set for dictionary learning.
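The two patch-extraction schemes of Fig. 3 can be sketched directly (function names, and the stride-1 maximal overlap, follow the description above; both return patches stacked as columns of a training matrix):

```python
import numpy as np

def extract_patches_1d(y, patch_len):
    """Maximally overlapping 1D patches (stride 1) from echo train y,
    stacked as columns of the training matrix (Fig. 3(a))."""
    n = len(y) - patch_len + 1
    return np.stack([y[i:i + patch_len] for i in range(n)], axis=1)

def extract_patches_2d(y, rows, cols, n, m):
    """Reshape y into a rows-by-cols matrix, then take every n-by-m patch
    (stride 1) and flatten it into a column vector (Fig. 3(b))."""
    M = y[:rows * cols].reshape(rows, cols)
    patches = [M[i:i + n, j:j + m].reshape(-1)
               for i in range(rows - n + 1)
               for j in range(cols - m + 1)]
    return np.stack(patches, axis=1)
```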

    If the denoised data X can be fully described through the sparse representation of each patch, the denoising problem of a complete natural signal can be expressed as (Elad and Aharon, 2006):

    Here, λ is a constraint factor related to the noise standard deviation σ. μij is a control factor for the residual errors, related to the local patch data; it enforces the per-patch sparsity constraint, can be handled implicitly during the sparse representation, and can therefore be neglected. Rij denotes the patch-selection operation. The first term of Equation (8) constrains the proximity between the noisy data Y and the denoised data X. The second term ensures the continued optimization of the sparse coefficients when solving the objective function. The third term constrains the proximity between the patch data RijX and its sparse representation Dαij. The latter two terms jointly ensure that only limited error from the sparse representation of each patch RijX remains in the denoised result.
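Equations (8) and (9) did not survive extraction; in the form of Elad and Aharon (2006), matching the term-by-term description above, they plausibly read:

```latex
% Eq. (8): global denoising objective over X, D and all patch codes
\{\hat{D},\hat{\alpha}_{ij},\hat{X}\}
= \arg\min_{D,\,\alpha_{ij},\,X}\;
\lambda \|X - Y\|_2^2
+ \sum_{ij} \mu_{ij} \|\alpha_{ij}\|_0
+ \sum_{ij} \|D\alpha_{ij} - R_{ij}X\|_2^2

% Eq. (9): the per-patch sparse-coding subproblem solved by OMP
\hat{\alpha}_{ij} = \arg\min_{\alpha}\;
\mu_{ij}\|\alpha\|_0 + \|D\alpha - R_{ij}X\|_2^2
```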

    The sparse representation of Equation (9) can be performed with the orthogonal matching pursuit (OMP) algorithm to obtain the sparse coefficients αij of the local patches.

    Finally, once the dictionary D has been learned and the sparse coefficients αij of the local patches have been calculated, Equation (8) can be rewritten as Equation (10).
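Equations (10) and (11) are also missing; with D and αij fixed, the objective reduces to a quadratic in X, whose closed-form minimizer is the weighted patch average described in the text (form follows Elad and Aharon, 2006):

```latex
% Eq. (10): with D and \alpha_{ij} fixed, only X remains
\hat{X} = \arg\min_{X}\;
\lambda \|X - Y\|_2^2
+ \sum_{ij} \|D\hat{\alpha}_{ij} - R_{ij}X\|_2^2

% Eq. (11): closed-form solution, a \lambda-weighted average of patch estimates
\hat{X} = \Bigl( \lambda I + \sum_{ij} R_{ij}^{\mathrm{T}} R_{ij} \Bigr)^{-1}
\Bigl( \lambda Y + \sum_{ij} R_{ij}^{\mathrm{T}} D \hat{\alpha}_{ij} \Bigr)
```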

    Fig. 2. Sparsity of NMR echo data in the DCT and DWT domains. (a) Noisy and noiseless echo data acquired by low-field NMR tools; (b) the DWT and DCT coefficient distributions of the noiseless echo data; (c) the DWT and DCT coefficient distributions of the noisy echo data; (d) the DWT and DCT coefficient distributions of the noise.

    Fig. 3. Schematic of the local patch operation for dictionary learning.

    The numerical solution of Equation (10) can be written as Equation (11). The purpose of Equation (11) is to average the overlapping data points and finally reconstruct the data to obtain the denoised result (Elad and Aharon, 2006). The scalar λ is a regularization parameter that gives a weighted average, which acts mainly on local patches that overlap less. If λ = 0, no noisy signal is averaged into the denoised result; however, a perfectly noiseless reconstruction is not possible. In our 1D case, Equation (11) averages over each point of the NMR signal, because overlapping 1D patches are used for dictionary learning. For the reconstruction of denoised NMR signals in Equation (11), we select λ = Max(echo data)/(10·σ), which is self-adaptive to the NMR echo data rather than a fixed value. This choice is based on the consideration that different types of NMR echo data have different noise levels and signal amplitudes: as the noise level increases, a smaller λ gives better denoised results, and vice versa.

    The framework for improving NMR data quality by denoising with dictionary learning is shown in Fig. 4.

    3. Numerical simulations

    3.1. Forward model

    Fig. 4. Schematic of the data processing framework for improving NMR data quality using dictionary learning.

    Fig. 5. Simulated echo data and corresponding inverted T2 distributions with SNRs of 6, 10, 15 and 20, respectively.

    To verify the advantages of dictionary learning for improving NMR echo data, we constructed a bimodal T2 distribution, as shown in Fig. 5(a). Micro/nano pores are mainly developed in shale or tight sandstone reservoir rocks, and NMR needs to measure fluid properties and effectively distinguish fluid components such as high-viscosity organic matter, bound water and movable oil. Therefore, the T2 values of the two fluid components are assumed to be 10 ms and 150 ms, respectively. Under noiseless conditions, we set the total porosity to 10 p.u. and the proportions of the two components to 6.5 and 3.5 p.u., respectively. Considering that the shortest echo spacing of downhole NMR instruments employed in tight and complex shale/sandstone reservoir evaluation is 0.2 ms (Song and Kausik, 2019), the number of echoes is set to 2500 and the total acquisition time to 500 ms to ensure sufficient decay of the NMR echo data.

    In the numerical simulation, Gaussian white noise is added to the noiseless echo data of the above model, and the SNR of the noisy echo data is set to 6, 10, 15 and 20, respectively. The BRD inversion method is used to obtain the T2 distribution (Butler et al., 1981), and the S-curve method (Zou et al., 2016) is used to select the regularization factor. For comparison, we restricted the regularization factor to the range 0.01-10 and adaptively obtained a satisfactory smoothing factor with the S-curve algorithm. Fig. 5 shows the echo data and corresponding inversion results at different noise levels. The porosity values are 11.62 p.u. (SNR = 6), 10.26 p.u. (SNR = 10), 10.42 p.u. (SNR = 15), and 9.89 p.u. (SNR = 20), respectively. The root mean square errors (RMSE) between the inverted T2 distributions and the forward model are 0.057 p.u. (SNR = 6), 0.039 p.u. (SNR = 10), 0.040 p.u. (SNR = 15), and 0.027 p.u. (SNR = 20), respectively. The inversion results clearly deviate from the desired values under low-SNR conditions. Next, we use the dictionary learning method to process the synthetic noisy echo data. The first step is to adaptively obtain the most suitable dictionary, which can characterize NMR data with different noise levels. The well-learned dictionary is then used to suppress noise and improve the resolution of the T2 spectrum and the accuracy of the porosity estimate.
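The bimodal forward model described above can be synthesized directly from Equation (2) with i = 2. The sketch below uses the stated parameters (T2 components of 10 and 150 ms, porosities 6.5 and 3.5 p.u., TE = 0.2 ms, 2500 echoes); defining the noise level relative to the first-echo amplitude is our own assumption, since the paper does not state its SNR convention:

```python
import numpy as np

def synthesize_cpmg(t2_values, porosities, n_echoes=2500, te=0.2, snr=10, seed=0):
    """Multi-exponential CPMG echo train plus Gaussian white noise.
    t2_values, te and t are in ms; porosities in p.u."""
    t = te * np.arange(1, n_echoes + 1)          # echo acquisition times
    clean = sum(p * np.exp(-t / t2)              # sum of decay components
                for p, t2 in zip(porosities, t2_values))
    sigma = clean[0] / snr                       # assumed SNR convention
    noise = np.random.default_rng(seed).normal(0.0, sigma, n_echoes)
    return t, clean, clean + noise

t, clean, noisy = synthesize_cpmg([10.0, 150.0], [6.5, 3.5], snr=10)
```

With these parameters the echo train spans 0.2-500 ms, matching the stated total acquisition time.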

    3.2. Parameter settings for DL

    Prior to dictionary learning, we need to build a training set. The training set can be composed of noiseless or noisy NMR echo data. In practice, however, the characteristics of noiseless NMR data are not known in advance, so we train the dictionary directly on noisy NMR echo data. As mentioned above, to fully extract the characteristics of the NMR echo data, we extract maximally overlapping 2D patches from it. The size of each patch is n × n and the sampling step size is 1. If the NMR signal is transformed into a 2D sampling matrix of size N × K (as demonstrated in Fig. 3(b)), the maximum number of patches is:
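Equation (12) is missing from the extraction; for n × n patches taken at stride 1 from an N × K matrix, the maximum patch count is presumably:

```latex
P = (N - n + 1)(K - n + 1)
```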

    Fig. 6. Simulation results after dictionary learning processing. (a-1) to (a-5) show processed echo data with SNR = 6; (b-1) to (b-5) with SNR = 10; (c-1) to (c-5) with SNR = 15; (d-1) to (d-5) with SNR = 20. The panels in the third and fourth rows display the echo data as 50 × 50 images for easier comparison.

    Fig. 7. Calculated SNR of denoised echo data and RMSE for different patch sizes and different SNR data. (a) SNR = 9; (b) SNR = 15; (c) SNR = 20.

    Fig. 8. Frequency statistics of the calculated porosity and root-mean-square error (RMSE) from the T2 distribution. Echo data with three noise levels were synthesized, inverted and evaluated in 1000 repeated numerical simulations.

    The training set can be randomly selected from the extracted patches. According to Equation (12), all samples in the training set are selected for dictionary learning. The initial dictionary (a matrix) can be constructed from any fixed analytical transform basis, or built from randomly selected training signals. Since the dictionary is redundant, the number of atoms (the number of columns in the dictionary) must be greater than the atom dimension (the number of rows), and the atom dimension must match the dimension of the column vectors converted from the patches. Therefore, the size of the dictionary is n × (q × n), where q > 1.

    The OMP algorithm can be used for both sparse representation and noise suppression, since noise is not sparse and is filtered out when the residual is calculated. We therefore use a sparsity-constrained sparse representation when learning noiseless data, with sparsity T = 10 and calculation error ε = 1 × 10^-6. When learning noisy data, we use an error-constrained sparse representation, where the calculation error needs to satisfy (Beckouche and Ma, 2014):

    C = 1.15 is the noise gain factor, an empirical value derived from a large number of image tests that makes the sparse representation more stable (Elad and Aharon, 2006); σ is the noise standard deviation of the NMR echo data, with the noise standard deviation of each patch assumed to be approximately the same; n is the dimension of each patch. Equation (13) represents the sparse representation of each local patch of the noisy signal.
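Equation (13) itself is lost; based on the cited works, the per-patch error target plausibly takes the form:

```latex
\| R_{ij}\, y - D \alpha_{ij} \|_2^2 \;\le\; (C\sigma)^2 \, n,
\qquad C = 1.15
```

where n is the number of samples in each patch, so the allowed squared error scales with the expected noise energy per patch.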

    In the simulation work, we directly use the NMR echo data with different noise levels (SNR = 20, 15, 10, 6) for dictionary learning. For NMR data with different noise levels, different patch dimensions better ensure the effectiveness and reliability of learning. For the four SNRs, we select patch dimensions of 5 × 5, 6 × 6, 6 × 6 and 7 × 7, respectively, and set the dictionary dimensions to 25 × 100, 36 × 144, 36 × 144 and 49 × 196, respectively. The number of dictionary learning iterations is 30. The details are introduced in the simulation part.

    3.3. Simulation results

    Fig. 6 shows the echo data processing results at different SNRs. After dictionary learning and denoising, BRD inversion is conducted to obtain the T2 distribution. The panels in the first row of Fig. 6 illustrate the echo data and the residual signal before and after noise suppression (residual_signal = y_original − y_denoised, where y_original is the noiseless signal and y_denoised the denoised signal). The second row shows the dictionaries of different sizes learned from the different noisy data sets; the third row shows the noisy data (converted into 50 × 50 maps for comparison); the fourth row shows the data after denoising; and the fifth row shows the inversion results of the denoised echo data. As Fig. 6 shows, the characteristics of NMR echo data with different noise levels can be extracted adaptively, as indicated by the different atomic components. After denoising, the SNR of the NMR data is increased by at least a factor of 3.

    However, the noise is very difficult to suppress in the first few echoes, because the energy of the NMR echo data is concentrated there. As the first row of Fig. 6 shows, with increasing noise level the fluctuation of the residual signal grows stronger within the first 100 ms of the NMR echo data, as also described in Gu et al. (2021). Instead of setting the first few echoes to zero to avoid their energy loss during denoising, we directly use the well-trained dictionary to suppress noise. The T2 distribution of the denoised echo data has higher resolution and accuracy than before denoising. This is owed to the sufficient learning from local patches of the signal, which provides a more robust dictionary and therefore more reliable noise suppression and signal reconstruction.

    Fig. 9. Comparisons of T2 distributions at different numbers of repetitions for six tight core samples. Echo data with TR = 256 and 512 are adopted for the standard T2 inversion as the comparison. Dictionary learning is employed on the NMR echo data with TR = 16, 32 and 64.

    The selection of the patch size is very important for dictionary learning, as demonstrated in Fig. 7. The blue curve in Fig. 7 represents the improved SNRs of the echo data after dictionary learning and denoising. The orange curve is the RMSE (the expression for the RMSE is given in Equation (15)), representing the inversion accuracy by estimating the similarity between the inversion results from the denoised echo data and the forward model. Interestingly, denoised echo data with a higher SNR do not necessarily yield more accurate inversion results: if a higher SNR is pursued, the larger patch size will eliminate available echo signal and produce over-smoothed denoising results. This indicates that a reasonable selection of patches gives a more accurate solution. In addition, the optimal patch size is not more than 6 when the SNR is larger than 9. The patch size optimized for SNR lower than 9 fluctuates (roughly from 5 to 13) because of the strong randomness of the noise, and is not demonstrated here. In the later applications, we select a patch size of 7 as a compromise when the SNR is lower than 9.
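    The two curves in Fig. 7 can be computed with metrics like the following; one common NMR convention for the SNR (peak signal amplitude over the noise standard deviation) is assumed here, and the echo train is synthetic:

```python
import numpy as np

def nmr_snr(clean, noisy):
    """SNR as peak signal amplitude over noise standard deviation (one common convention)."""
    return np.max(np.abs(clean)) / np.std(noisy - clean)

def rmse(f_inverted, f_model):
    """Root-mean-square error between inverted and forward T2 distributions (Eq. (15))."""
    return np.sqrt(np.mean((f_inverted - f_model) ** 2))

rng = np.random.default_rng(1)
clean = 10.0 * np.exp(-np.arange(2500) * 0.2 / 50.0)  # synthetic echo train, TE = 0.2 ms
noisy = clean + rng.normal(0, 1.0, clean.size)        # noise level set for SNR near 10
print(nmr_snr(clean, noisy))
```

The improved SNR curve compares `nmr_snr` before and after denoising, while the RMSE curve compares the inverted distribution with the forward model.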

    Next, we consider the effect of the randomness of the noise on the inversion results. Dictionary learning ensures the stability of NMR data processing under different noise levels, which can be assessed from the T2 distributions obtained by BRD inversion. Therefore, we conducted 1000 repeated tests on echo data with random noise, before and after dictionary learning processing; each test comprises three steps: dictionary learning, denoising and inversion. We use the total porosity φ = 10 and the forward T2 distribution as the reference, and estimate the deviation and the RMSE with respect to the standard T2 distribution.

    Total porosity φ and RMSE are respectively:

$$\varphi = \sum_{i=1}^{N} \hat{f}(T_{2i}) \tag{14}$$

$$\mathrm{RMSE} = \sqrt{\frac{1}{N}\sum_{i=1}^{N}\left[\hat{f}(T_{2i}) - f(T_{2i})\right]^{2}} \tag{15}$$

    where N is the number of pre-distribution points used in the BRD inversion, N = 128; $\hat{f}(T_{2i})$ is the amplitude at the i-th pre-distribution point obtained by inversion; and $f(T_{2i})$ is the corresponding amplitude of the forward bimodal model.

    Fig. 8 shows the inversion results for echo data at different noise levels (blue histograms represent the original noisy echo data, and orange histograms represent the echo data after dictionary learning processing); a total of 1000 repeated tests were conducted and analyzed statistically. The statistics show that the porosities calculated from both the noisy echo data and the denoised echo data follow a normal distribution; however, only for the echo data processed by dictionary learning is the porosity distribution centered on the standard porosity (φ = 10). The porosity still shows some randomness, owing to the noise effects on the first few echoes, and is obtained more accurately as the SNR increases, whereas the T2 distributions remain close to the model even at low SNR. This indicates that more effective information can be retained adaptively after dictionary learning. The overall statistical results are shown in Table 1, where the porosity φ and RMSE values are averages over the 1000 repeated tests.
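    The bookkeeping of the repeated tests can be sketched at toy scale as follows; the per-test perturbation stands in for a full dictionary learning + BRD pass, and its magnitude (0.005) is an assumption, not a measured residual:

```python
import numpy as np

rng = np.random.default_rng(2)
f_model = np.full(128, 10.0 / 128)   # flat stand-in distribution with total porosity 10

phis = []
for _ in range(1000):                # 1000 repeated tests, as in the text
    # Stand-in for one dictionary-learning + denoising + inversion pass:
    f_inv = f_model + rng.normal(0, 0.005, 128)   # hypothetical inversion residual
    phis.append(f_inv.sum())         # total porosity = area under the T2 distribution

phis = np.array(phis)
# A normal distribution of phis centered on 10 corresponds to the orange
# histograms of Fig. 8; the spread shrinks as the echo-data SNR improves.
print(phis.mean(), phis.std())
```

The histogram of `phis` is what Fig. 8 plots, and its mean and standard deviation are the statistics summarized in Table 1.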

    Table 2. Amplitude calculated from NMR core-analysis data before and after employing DL, compared with the results obtained with higher numbers of repetitions.

    Table 3. Parameters of dictionary learning for different numbers of repetitions.

    4. Application of DL to NMR rock core analysis and well logging data

    In practice, raw NMR echo data contain several types of noise, such as antenna noise, electronic-circuit noise and possible external electromagnetic harmonic noise. Generally, it is necessary to increase the number of repetitions to reduce the noise level. We first conduct core experiments to verify the advantages of dictionary learning in noise suppression. A 2 MHz NMR core analyzer (Magritek, NZ) is adopted to acquire echo data from different types of tight rock cores. The echo spacing is 0.2 ms, the number of echoes is 2500, and the acquisition time is 500 ms for each scan. NMR measurements were conducted on three shale and three tight sandstone samples: as shown in Fig. 9, cores 1-3 are shale samples and cores 4-6 are tight sandstone samples. The shale samples are dry, with some residual oil in the pores, while the tight sandstone samples are saturated with brine and centrifuged. In this experiment we focus not on the core porosity but on the improvement, through dictionary learning, of the quality of echo data with low SNR. We compare the area of the T2 distribution inverted from the noisy echo data with that from the denoised echo data; the unit is the amplitude of the measured voltage (μV), as shown in Table 2. The parameters of dictionary learning are given in Table 3. For all the experiments, the SNR at 16 repetitions is larger than 8, so we select a patch size of 6 following the guidance from the simulations.

    Fig. 10. Processed results of NMR logging data using the proposed denoising method for Well 1. The T2 distributions after DL denoising exhibit more obvious bimodal peaks.

    Fig. 9 shows the T2 distributions obtained from the core samples by NMR experiments. For each core, CPMG measurements with 16, 32 and 64 repetitions are conducted. Dictionary learning and denoising are applied after each measurement, and the BRD method is then used for inversion. The regularization parameter used in the inversion ranges over 0.01-10, and the value with the smallest inversion residual norm is adaptively selected. To reveal the improvement, we use echo data acquired with 256 and 512 repetitions as the comparison group for the inverted results after dictionary learning: higher numbers of repetitions produce echo data of higher quality and hence more precise inversion results. It can be seen from Fig. 9 that the echo data of the shale and sandstone samples measured with 16, 32 and 64 repetitions are well improved after dictionary learning and denoising, indicating that the characteristics of the echo data are represented by the well-trained dictionary while the noise is greatly suppressed. After inversion, the T2 distribution obtained from the reconstructed echo data has better resolution, the peak positions become more accurate, and the estimates of the T2 distribution area are effectively improved, as shown in Table 2 (when the signal amplitude is calibrated into porosity, the area under the T2 distribution is the total porosity). It should be noted, however, that because of the randomness of the noise, the noise level of each measurement is different. Even though the noise can be suppressed by dictionary learning, the sparse representation is still an approximation of the original signal, and echo data disturbed by noise cannot be completely restored; this leads to a certain deviation of the inversion result relative to echo data acquired with higher numbers of repetitions. Furthermore, small signal features could be eliminated during denoising within the tolerated error, causing certain components of small amplitude to vanish; this is an interesting issue that will be studied in the future. In general, dictionary learning has a good ability to improve the quality of echo data under different noise conditions, and to improve the accuracy and resolution of inversion results obtained with commonly used inversion methods.
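    The inversion step can be illustrated with a toy version of the workflow: Tikhonov-regularized least squares stands in for the BRD method (nonnegativity and the BRD specifics are omitted), the regularization factor is scanned over the same 0.01-10 range, and the single-peak forward model and noise level are assumptions for illustration:

```python
import numpy as np

t = np.arange(1, 2501) * 0.2                  # echo times in ms, TE = 0.2 ms
T2 = np.logspace(-1, 4, 128)                  # 128 pre-distribution points
K = np.exp(-t[:, None] / T2[None, :])         # Laplace kernel K_ij = exp(-t_i / T2_j)

rng = np.random.default_rng(3)
f_true = np.exp(-0.5 * ((np.log10(T2) - 1.5) / 0.25) ** 2)   # hypothetical unimodal model
f_true *= 10.0 / f_true.sum()                 # calibrate total porosity to 10
sigma = 0.1
y = K @ f_true + rng.normal(0, sigma, t.size) # synthetic noisy echo train

best_alpha, best_f, best_gap = None, None, np.inf
for alpha in np.logspace(np.log10(0.01), np.log10(10.0), 25):
    # Tikhonov solution: minimize ||K f - y||^2 + alpha ||f||^2
    f = np.linalg.solve(K.T @ K + alpha * np.eye(128), K.T @ y)
    # Discrepancy principle: residual norm should match the expected noise norm.
    gap = abs(np.linalg.norm(K @ f - y) - sigma * np.sqrt(t.size))
    if gap < best_gap:
        best_alpha, best_f, best_gap = alpha, f, gap

print(best_alpha, best_f.sum())               # selected factor and recovered porosity
```

Denoising the echo train first lets a smaller regularization factor be selected, which is why the T2 distributions after dictionary learning show sharper peaks.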

    We also apply dictionary learning to improve the quality of NMR logging data, as demonstrated in Fig. 10. Well 1 is an interval of a tight sandstone oil reservoir, investigated with a CMR instrument (Schlumberger Technology). The first track is the depth; a formation interval of 30 m is selected as an example. The second track includes the gamma-ray (GR), spontaneous-potential (SP) and caliper (CAL) curves. The interval is drilled with brine-water mud, as indicated by the SP curve. The third track shows the raw echo data of the NMR logging; the echo number is 1800 per depth point, and the echo data are calibrated into porosity units. The fourth track is the denoised echo data obtained by dictionary learning. The fifth and sixth tracks are the T2 distributions inverted from the raw data and the denoised data, respectively, using the BRD method. The seventh track presents porosities calculated by different methods: the black curve is obtained from the raw echo data (third track); the red curve from the echo data after dictionary learning and denoising (fourth track); the bottle-green curve from conventional neutron-density logging; and the blue dots represent the core porosity from gas measurements. The last track is the averaged atoms used for the sparse representation of the NMR echo data; the larger the averaged-atoms value, the more coefficients are needed to represent the echo data. The SNR of the NMR measurements ranges from 6 to 12, even though seven-fold common-depth-point (CDP) stacking is conducted. We use a variable scheme for the patch size to meet the requirements of echo data with different SNR: a patch size of 7 is selected for NMR data with SNR lower than 9, and 6 for SNR larger than 9. A sparsity T of 15 and 30 iterations are set as the dictionary learning parameters; only a few seconds are needed for the dictionary learning and denoising of each echo train. During inversion, the regularization factor is again set within the range 0.01-10. From the T2 distributions, it can be seen that the NMR echo data after dictionary learning show good resolution. The porosities calculated from the noisy and denoised echo data are almost the same, which indicates that no available signal is eliminated during denoising; the energy of the first few echoes is well maintained, and the noise is suppressed throughout the echo data. The trend of the NMR porosity curve is similar and close to that of the neutron-density porosity curve within the tolerated error, and the good agreement between the neutron-density porosity curve and the core-porosity points indicates the accuracy of the well logging. Finally, the variation of the averaged atoms demonstrates the adaptivity of dictionary learning to different types of NMR echo signals; small averaged-atoms values indicate effective suppression of the noise.
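    The averaged-atoms track is simply the mean number of dictionary atoms (nonzero sparse coefficients) used per patch at each depth point. A hypothetical coefficient matrix, with sizes matching the 7 × 7-patch case (49 × 196 dictionary, sparsity T = 15), illustrates the bookkeeping:

```python
import numpy as np

rng = np.random.default_rng(4)
n_atoms, n_patches, T = 196, 2025, 15   # 49 x 196 dictionary, sparsity T = 15

# Hypothetical sparse coefficient matrix: each column codes one patch with
# at most T active atoms, as an OMP-style sparse coder would produce.
X = np.zeros((n_atoms, n_patches))
for j in range(n_patches):
    k = rng.integers(1, T + 1)          # number of atoms actually used for this patch
    X[rng.choice(n_atoms, size=k, replace=False), j] = rng.normal(size=k)

averaged_atoms = np.count_nonzero(X, axis=0).mean()
print(averaged_atoms)
```

When the echo train is mostly noise, the coder needs fewer atoms per patch (noise has no sparse representation), so small values of `averaged_atoms` indicate effective noise suppression, as in the last track of Fig. 10.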

    5. Conclusions

    In this paper, we explored the feasibility of employing dictionary learning and proposed a “data processing framework” to improve the quality of low-field NMR echo data. Dictionary learning is a machine learning method based on sparse representation theory. With dictionary learning, useful information in noisy NMR echo data can be adaptively extracted and reconstructed, which further improves the quality of the raw echo data and the accuracy and stability of the inversion results. We verified the advantages and application effects of the dictionary learning method with numerical simulations, and applied it to NMR core-analysis data and well logging data. Some conclusions can be drawn:

    1) Dictionary learning has good adaptability to echo data at different noise levels, which is reflected by the adaptively learned dictionaries and the varied averaged atoms.

    2) The quality of raw echo data with low SNR can be improved by dictionary learning, which further improves the accuracy and reliability of the inversion results obtained with common inversion methods. This is meaningful for rapid NMR logging and laboratory analysis, since more accurate petrophysical parameters can be obtained with fewer averages of the raw echo data.

    3) The selection of the patch size is very important, and its effect is worth further study, since it affects the quality of the signal reconstruction. For NMR echo data, the elimination of small signals will also reduce the accuracy of the inversion results.

    Although the quality of NMR echo data after dictionary learning processing is greatly improved, conventional inversion methods still operate on a noisy signal, and the uncertainty of the numerical solution still exists and cannot be eliminated completely. How to incorporate the response equation of the NMR echo data into dictionary learning, to further reduce the uncertainty of inversion results caused by noise disturbance, is our next research topic.

    Acknowledgements

    This paper is supported by the Science Foundation of China University of Petroleum, Beijing (Grant Number ZX20210024), the Chinese Postdoctoral Science Foundation (Grant Number 2021M700172), the Strategic Cooperation Technology Projects of CNPC and CUP (Grant Number ZLZX2020-03), and the National Natural Science Foundation of China (Grant Number 42004105).
