
    A comparison of deep learning methods for seismic impedance inversion

Petroleum Science, 2022, No. 3

Si-Bo Zhang, Hong-Jie Si, Xin-Ming Wu, Shang-Sheng Yan

a Huawei Cloud EI Product Department, Xi'an, Shaanxi, 710077, China

b School of Earth and Space Sciences, University of Science and Technology of China, Hefei, Anhui, 230026, China


1. Introduction

Seismic impedance inversion has been studied for decades, as it is one of the most effective methods for reservoir characterization in seismic exploration. It aims at reconstructing impedance sequences from their corresponding seismic traces based on the forward model:

s = w ∗ r (1)

where s is a seismic trace, approximated as the convolution of a wavelet w and a reflectivity sequence r. Impedance i and reflectivity r have the following relationship:

r[k] = (i[k+1] − i[k]) / (i[k+1] + i[k]) (2)

where i[k] represents the value of the vertical impedance sequence at depth k. Solving for i from s is an underdetermined problem (Jackson, 1972), so traditional methods (Hu et al., 2009; Zhang and Castagna, 2011; Zhang et al., 2013) use different regularization terms to constrain the solution space, and many structure-guided methods have also been proposed (Ma et al., 2012; Zhang and Revil, 2015; Zhou et al., 2016; Wu, 2017). Although these methods are widely used in industry, some drawbacks remain: the regularization terms must be designed case by case, which may limit the generalization of the model. Besides, these model-driven methods usually need to solve an optimization problem, which is time-consuming and often yields a smooth result. In addition, the wavelet w in Equation (1) is typically unknown and can be hard to estimate, as it often varies in time and space. In practice, the relationship between the recorded seismogram and the true impedance is much more complicated than the simple convolution model described in Equations (1) and (2). Acquisition limitations, potential measurement errors, processing errors, and noise make impedance estimation from seismograms a highly nonlinear problem with large uncertainties. Therefore, a data-driven deep learning method is expected to estimate the complicated and nonlinear relationship between seismic traces and impedance sequences.
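To make the convolution model concrete, the following NumPy sketch (ours, not the authors' code) converts a blocky impedance model into reflectivity via Equation (2) and synthesizes a trace via Equation (1); the Ricker wavelet and its 30 Hz peak frequency are illustrative assumptions.

```python
import numpy as np

def impedance_to_reflectivity(i):
    # r[k] = (i[k+1] - i[k]) / (i[k+1] + i[k]), Equation (2)
    return (i[1:] - i[:-1]) / (i[1:] + i[:-1])

def ricker(f=30.0, dt=0.002, length=0.128):
    # Ricker wavelet with peak frequency f (Hz); all values are illustrative
    t = np.arange(-length / 2, length / 2, dt)
    a = (np.pi * f * t) ** 2
    return (1.0 - 2.0 * a) * np.exp(-a)

def forward_model(i, w):
    # s = w * r, Equation (1)
    return np.convolve(impedance_to_reflectivity(i), w, mode="same")

# toy blocky impedance model (three constant layers)
i = np.concatenate([np.full(100, 5e6), np.full(100, 7e6), np.full(100, 6e6)])
s = forward_model(i, ricker())
```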

In recent years, Deep Learning (DL) has seen explosive development in Computer Vision (CV) (Krizhevsky et al., 2012). Various architectures and techniques (Szegedy et al., 2015; He et al., 2016; Huang et al., 2017) have been proposed to advance the benchmarks in this area. Other fields, such as medicine, meteorology, remote sensing, as well as seismic exploration, also benefit from these developments and have made significant breakthroughs (Zhao, 2018, 2019; Di et al., 2018, 2019; Wu et al., 2019, 2020). Compared with traditional model-driven methods, the advantage of DL mainly lies in the fact that feature learning, extraction, and prediction are all included in an end-to-end process, which avoids tedious manual design and achieves fewer errors by jointly optimizing all parameters. Consequently, many DL-based methods (Das et al., 2018; Wang et al., 2019a, b; Phan and Sen, 2018; Biswas et al., 2019; Alfarraj and AlRegib, 2019a, b; Zheng et al., 2019) have been put forward to solve seismic impedance inversion. The critical technology is the Convolutional Neural Network (CNN), whose basic idea is to hierarchically extract features using stacked convolution layers and nonlinear activations. Due to the strong feature representation ability of CNNs, DL-based methods can more accurately approximate the relationship between seismograms and impedance sequences and therefore generate more accurate inversion results. However, in most cases, CNNs are used as black boxes; little work has performed in-depth research on how to appropriately design an effective and efficient DL mechanism for the inversion problem.

In this paper, we focus on further research into DL-based inversion methods. The influence of various network hyperparameters and architectures on the inversion results is explored. Specifically, we carry out comparative experiments on three basic hyperparameters (i.e., kernel size, number of channels, and number of layers) and two multi-scale architectures, and make a comprehensive analysis of the experimental results. In addition, we design a series of methods inspired by perceptual losses (Johnson et al., 2016) and the Generative Adversarial Network (GAN) (Goodfellow et al., 2014) to promote high-frequency information. The contributions of this paper can be summarized as follows:

• We provide important bases for inversion network design by revealing the influence of network hyperparameters and structures on inversion performance.

• We show a clear path to addressing the inversion of high-frequency details by borrowing ideas from CV, and achieve the desired results.

2. Methods

As a data-driven technology, DL-based methods learn mapping functions from seismograms to impedance sequences. We use a conventional CNN as the baseline, on top of which other architectures and techniques are developed step by step to improve the inversion performance.

2.1. Conventional CNN

2.1.1. Architecture

A CNN consists of stacked convolutional layers as shown in Fig. 1. A convolutional layer can be defined as follows:

x_l = σ(w_l ∗ x_{l−1} + b_l) (3)

Fig. 1. A conventional CNN containing 6 convolutional layers; the input is a seismic trace and the output is an impedance sequence.

where w_l and b_l represent the kernel and bias at the l-th layer, and x_{l−1} and x_l are the input and output, respectively. In addition, we use the Parametric Rectified Linear Unit (PReLU) (He et al., 2015) as the nonlinear activation σ(x), which is formulated as

σ(x) = max(0, x) + a · min(0, x) (4)

where a is a learnable coefficient.

Fig. 2. (a) A seismic-impedance training pair. (b) A crossline seismic section. (c) An inline seismic section.

The most intuitive inversion method uses a 1-dimensional CNN as a mapping function from a seismic trace to an impedance sequence. Unlike model-driven methods, training the CNN requires a large amount of training data.
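As a sketch of such a network (a reconstruction from the description of Fig. 1, not the authors' released code), the PyTorch snippet below stacks 1-D convolutions with PReLU activations; the defaults mirror the baseline settings chosen later in Section 2.1.3 (kernel size 13, 16 channels, 6 layers).

```python
import torch.nn as nn

def build_baseline_cnn(kernel_size=13, channels=16, layers=6):
    """1-D CNN mapping a seismic trace (1 channel) to an impedance
    sequence (1 channel); padding keeps the output length unchanged."""
    pad = kernel_size // 2
    mods = [nn.Conv1d(1, channels, kernel_size, padding=pad), nn.PReLU()]
    for _ in range(layers - 2):
        mods += [nn.Conv1d(channels, channels, kernel_size, padding=pad),
                 nn.PReLU()]
    # the final layer is linear: it regresses impedance values directly
    mods.append(nn.Conv1d(channels, 1, kernel_size, padding=pad))
    return nn.Sequential(*mods)
```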

2.1.2. Dataset

The seismic data and well logs that we use in this paper are extracted from the freely available Teapot Dome dataset (Anderson, 2009). The seismic data have already been converted to the depth domain and matched with the well logs. Hundreds of wells are provided along with the seismic data; however, in our experiments of seismic impedance estimation, we choose only the wells that contain both velocity and density logs over significantly long depth ranges. Consequently, we choose 27 wells in total and extract the seismic traces near the wells to obtain 27 pairs of impedance sequences and seismograms, of which 22 pairs are randomly selected to train our DL networks for impedance estimation and the remaining 5 pairs are used as the validation set. Fig. 2a shows one of the training data pairs, where the smooth blue curve represents a seismogram while the red curve with more details denotes a target impedance sequence that we expect to estimate from the seismogram. Fig. 2b and c show a crossline and an inline seismic section extracted from the original 3D seismic volume. These two seismic sections are used in this paper to demonstrate the effectiveness of our trained neural networks for impedance estimation.

2.1.3. Experiments

Hyperparameters have a great impact on CNN performance. In order to figure out network design principles for the inversion problem, we study three key parameters related to network structure: kernel size, number of layers, and number of channels. In the experiments, we adopt the Adadelta (Zeiler, 2012) optimizer with an initial learning rate of 0.1, and the learning rate decays to 0.9 times its value every 50 epochs. The batch size is set to 8. The Mean Squared Error (MSE) is used as the loss function, whose formula is as follows:

L_MSE = (1 / (N K)) Σ_{n=1}^{N} ‖i_n − f(s_n)‖₂² (5)

where i_n and f(s_n) are the true and predicted impedances of the n-th training pair, ‖·‖₂ is the ℓ2-norm operator, N is the number of training pairs, and K is the signal length. Note that all input seismic traces and target impedance sequences are normalized by subtracting the mean and dividing by the standard deviation. All experiments adopt the above settings by default.
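Under the stated settings, a minimal training loop might look like the following sketch; the tensors `seis` and `imp` (already normalized, shaped batch × 1 channel × length) and the epoch count are assumptions.

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

def train(model, seis, imp, epochs=500):
    loader = DataLoader(TensorDataset(seis, imp), batch_size=8, shuffle=True)
    opt = torch.optim.Adadelta(model.parameters(), lr=0.1)
    # learning rate decays to 0.9x every 50 epochs, as described above
    sched = torch.optim.lr_scheduler.StepLR(opt, step_size=50, gamma=0.9)
    mse = torch.nn.MSELoss()
    for _ in range(epochs):
        for s, i in loader:
            opt.zero_grad()
            mse(model(s), i).backward()
            opt.step()
        sched.step()
    return model
```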

First, we fix the number of convolution layers to 5 and the number of channels per layer to 16, and observe the effect of kernel size on the inversion result. The kernel size increases from 5 to 23 in steps of 6. As shown in Fig. 4a and b, the larger the kernel size, the better the network converges. We also observe that a larger kernel size brings more high-frequency information, as shown in Fig. 3a.

We then adjust the number of output channels of each layer from 8 to 64, fixing the kernel size to 11 and the number of layers to 5. We observe a trend similar to the kernel size experiment, as shown in Fig. 4c and d. Networks with more channels converge to lower training losses, but they converge to the same level of validation loss as the epochs increase. Despite this, their visual effects on the predicted impedance sequences are quite different, as shown in Fig. 3b. We can see that the high-frequency content gets richer as the number of channels increases, especially within the depth window between 160 and 180.

Fig. 3. Inversion results of a seismic trace with different hyperparameters. The red solid curve and black dashed curve represent the true and predicted values, respectively. (a) Results with different kernel sizes. (b) Results with different numbers of channels. (c) Results with different numbers of layers.

Fig. 4. Training and validation loss curves. Left column: training loss. Right column: validation loss. Top row: loss curves with different kernel sizes. Middle row: loss curves with different numbers of channels. Bottom row: loss curves with different numbers of layers.

Furthermore, we study the effect of the number of layers on the inversion results. In this experiment, the kernel size and channels of each layer are fixed to 11 and 16, respectively, and the number of layers ranges from 2 to 16, doubling each time. Fig. 4e and f show that the shallow network with 2 layers underfits. When the number of layers increases to 8, the network achieves the best convergence. It is worth noting that the performance of the network with 16 layers degrades greatly. This is not caused by overfitting, since both the training and validation losses degrade; rather, the deeper architecture brings substantial challenges to gradient backpropagation (He et al., 2016). From the visual effects in Fig. 3c, the results with 2 and 16 layers are underfitting, and the result with 4 layers yields more details than that with 8 layers.

In general, increasing the complexity of the architecture can improve the network's representation ability, but such improvement is limited. Different hyperparameters may lead to different visual effects. Therefore, it is necessary to consider various factors when designing the network. In order to compare all the methods designed in later sections, we use a conventional CNN with a kernel size of 13, 16 channels, and 6 layers as the baseline model.

2.2. Multi-scale architecture

A conventional CNN uses a fixed kernel size to extract seismic features at a specific scale, which limits its feature representation. To improve the multi-scale representation capability of the network, we propose two methods in this section.

2.2.1. Multi-scale CNN

Inspired by the inception module (Szegedy et al., 2015), a Multi-Scale CNN (MSCNN) is designed as shown in Fig. 5. It is composed of stacked multi-scale blocks, one of which is marked by a red frame: the input feature is fed in parallel into three convolutional layers with different kernel sizes, and the three output features are then concatenated along the channel dimension to form the final output of the multi-scale block. The MSCNN extracts multi-scale features of seismic traces block by block, and uses a normal convolutional layer at the end of the network to compute the impedance.
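A possible PyTorch realization of one multi-scale block (our sketch, following the description above and the settings given later in Section 2.2.3) is:

```python
import torch
import torch.nn as nn

class MultiScaleBlock(nn.Module):
    """One multi-scale block after Fig. 5: three parallel 1-D convolutions
    with different kernel sizes, concatenated along the channel axis.
    With kernels 7/13/19 and 5 channels per branch (Section 2.2.3), each
    block outputs 15 channels; the first block takes in_ch=1 (the trace)."""

    def __init__(self, in_ch=15, branch_ch=5, kernels=(7, 13, 19)):
        super().__init__()
        self.branches = nn.ModuleList(
            [nn.Sequential(nn.Conv1d(in_ch, branch_ch, k, padding=k // 2),
                           nn.PReLU())
             for k in kernels])

    def forward(self, x):
        # the three feature maps share the same length, so concatenation
        # along the channel dimension is well defined
        return torch.cat([branch(x) for branch in self.branches], dim=1)
```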

2.2.2. UNet

Fig. 5. MSCNN with three stacked multi-scale blocks. Three shades of blue rectangles stand for features extracted by convolutional layers with three different kernel sizes, and the box containing the three rectangles represents the concatenated features.

Fig. 6. UNet architecture. k, c, and s stand for kernel size, number of output channels, and stride, respectively. pool and tconv represent max pooling and transposed convolutional layers, respectively.

UNet (Ronneberger et al., 2015) is another multi-scale architecture, originally proposed for image segmentation. As shown in Fig. 6, the UNet has two basic components: an encoder and a decoder. The encoder is similar to the backbone of a classification network, consisting of convolutional layers and max pooling layers. A max pooling layer downsamples features with a stride of 2 to obtain larger-scale seismic representations. The decoder acts as an upsampling process, using transposed convolutions as the upsampling operator. In the decoder, each upsampled feature is concatenated with the feature of the same scale from the encoder. This concatenation contributes to high-resolution information reconstruction.
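For readers porting this to 1-D traces, the sketch below captures the encoder/decoder pattern with skip connections; the depth and channel counts are illustrative and do not reproduce the exact hyperparameters of Fig. 6, and the input length is assumed even so pooled and upsampled lengths match.

```python
import torch
import torch.nn as nn

class UNet1D(nn.Module):
    """Two-level 1-D UNet sketch: conv + max-pool encoder, transposed-conv
    decoder, with the same-scale encoder feature concatenated in the decoder."""

    def __init__(self, ch=16, k=11):
        super().__init__()
        p = k // 2
        self.enc1 = nn.Sequential(nn.Conv1d(1, ch, k, padding=p), nn.PReLU())
        self.enc2 = nn.Sequential(nn.Conv1d(ch, 2 * ch, k, padding=p), nn.PReLU())
        self.pool = nn.MaxPool1d(2)                             # downsample by 2
        self.up = nn.ConvTranspose1d(2 * ch, ch, 2, stride=2)   # upsample by 2
        self.dec = nn.Sequential(nn.Conv1d(2 * ch, ch, k, padding=p), nn.PReLU())
        self.out = nn.Conv1d(ch, 1, k, padding=p)

    def forward(self, x):
        e1 = self.enc1(x)                        # full resolution
        e2 = self.enc2(self.pool(e1))            # half resolution
        d = torch.cat([self.up(e2), e1], dim=1)  # skip connection
        return self.out(self.dec(d))
```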

2.2.3. Experiments

To make a relatively fair comparison among the baseline CNN, MSCNN, and UNet, we keep their parameter counts at the same level. For the MSCNN, we use 5 multi-scale blocks whose three branches have kernel sizes of 7, 13, and 19, each with 5 output channels. The kernel size of the final convolutional layer is 11. For the UNet, the hyperparameters are shown in Fig. 6. Parameter counts of the three methods are given in Table 1.

Table 1. Parameter counts of the three methods.

The same inversion experiments are executed for the three methods. Fig. 7 shows that the three methods converge to almost the same level on the training set, but the MSCNN and UNet perform better on the validation set. This means the three networks have similar learning ability, since they contain similar numbers of parameters, but the multi-scale architectures show better generalization.

The first column of Fig. 8 shows the trace inversion results of the three methods. We can observe that the MSCNN and UNet obtain relatively better results than the conventional CNN, especially within the depth window between 140 and 180, where the CNN yields highly smooth predictions. The same observation can be made in the first columns of Figs. 13 and 14, where the layers, especially the thin ones, can hardly be resolved because the conventional CNN yields smooth predictions with limited detail in the vertical dimension.

2.3. Perceptual loss

Even though the multi-scale methods achieve better results than the baseline model, they all lose much high-frequency information. This is because they are all trained using the MSE loss function, which tends to produce smooth results. From the perspective of CV, MSE only penalizes the Euclidean distance between two images but ignores the image content. To overcome this problem, we introduce the perceptual loss (Johnson et al., 2016), which measures content similarity, into the networks.

Fig. 7. Training and validation loss curves of the baseline, MSCNN and UNet.

2.3.1. Definition

Seismic impedance inversion can be considered a signal reconstruction problem similar to image super-resolution, in that it recovers high-frequency impedance from a low-frequency seismic trace. The perceptual loss states that the reconstructed image should be similar to the ground truth not only in pixels but also in the feature domain. The common idea is to use a pre-trained network, e.g., VGG-16 (Simonyan and Zisserman, 2014), as a loss network to extract features at different layers, and to calculate the Euclidean distance between the true and predicted features to measure the content difference. The perceptual loss has experimentally proven effective for reconstructing high-frequency information.

Inspired by the above ideas, we design a simple autoencoder as the loss network, as shown in Fig. 9. The autoencoder has the same structure as the UNet, but without the links from encoder to decoder. The hyperparameters of each layer are displayed in the figure. The autoencoder learns a mapping function from the impedance to itself; in other words, the input and output of the network are the same impedance. The main purpose is to extract proper features at different scales as shown in Fig. 9; we can then use these features to calculate the perceptual loss, which is defined as follows:

L_p = ‖φ_l(i) − φ_l(f(s))‖₂² (6)

where φ_l(i) is the l-th layer's feature of the impedance i extracted by the loss network.
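In code, Equation (6) could be computed as below; `phi` stands for a hypothetical helper that returns the l-th endpoint feature of the pre-trained autoencoder, whose weights are kept frozen.

```python
import torch

def perceptual_loss(phi, i_true, i_pred, l=4):
    # L_p = ||phi_l(i) - phi_l(f(s))||_2^2, Equation (6)
    with torch.no_grad():
        target = phi(i_true, l)   # ground-truth features need no gradient
    pred = phi(i_pred, l)         # gradients flow back to the inversion net
    return torch.sum((pred - target) ** 2)
```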

2.3.2. Experiments

We train the autoencoder using the impedance sequences of the training set with the same implementation as the previous experiments. Fig. 10 shows the reconstruction performance of the autoencoder; we can see that all the curves are well fitted. It should be noted that, in all experiments, we add small Gaussian noise to the training samples at each step to mitigate overfitting, since the training set is small.
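This augmentation can be as simple as the snippet below; the noise level is an illustrative value, as the paper does not state the standard deviation it used.

```python
import torch

def augment(x, sigma=0.02):
    # add small Gaussian noise at each step to mitigate overfitting;
    # sigma = 0.02 is an assumed value, not taken from the paper
    return x + sigma * torch.randn_like(x)
```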

Fig. 8. Trace inversion results by different methods. Top row: the baseline CNN. Middle row: MSCNN. Bottom row: UNet. First column: the pure networks. Second column: networks with perceptual loss. Third column: networks with GAN.

Fig. 9. Training with perceptual loss. The inversion network in the dashed box can be any architecture. The black and red traces are the predicted impedance f(s) and the true impedance i, respectively. Three layers are used as endpoints to export features, represented by φ_l(·), where the subscript l is the layer index.

Fig. 10. The impedance sequences recovered by the autoencoder for the validation set. The red and black curves are the input ground truth and the output predictions, respectively.

By combining the MSE and perceptual losses, we can train the networks with the following loss function:

L = L_MSE + λ_p L_p (7)

where λ_p is the weight factor of the perceptual loss. We conduct a series of experiments to study how to select λ_p and l. The baseline model is used as the inversion network. First, λ_p is fixed to 1.0, and we use different endpoints (i.e., l = 2, 4, 6) as the feature extraction layers. The results in Fig. 11a show that when l reaches 6, the ability to reconstruct high-frequency information is limited. The results for l = 2 and 4 obtain relatively better details around a depth of 170. Then we set l = 2 and increase λ_p from 0.01 to 10.0 by factors of 10. We can see from Fig. 11b that as the weight of the perceptual loss increases, more details are reconstructed, but some peak values may exceed the ground truth, e.g., at depths of 100 and 120 with λ_p = 10.0.

The above observations validate the effectiveness of the perceptual loss, but we need to strike a balance between detail reconstruction ability and amplitude fitting stability. We use l = 4 and λ_p = 1.0 as the default setting for comparisons with other methods. The first two columns of Fig. 8 show the inversion results of the three architectures with and without the perceptual loss, illustrating that the perceptual loss greatly improves the reconstruction of high-frequency information. Besides, we make a consistent observation on the inversion sections in Figs. 13 and 14: sections inverted with the perceptual loss show more and clearer horizons than those using the pure MSE loss.

Fig. 11. Inversion results using different endpoint layers l and weights λ_p.

2.4. GAN

The previous methods focus on the design of backbones and loss functions to achieve the desired results, which demonstrates that developments and techniques from the CV field can be used to tackle the seismic inversion problem. Following this clue, we further explore how to estimate more realistic impedance sequences by using a GAN, which has achieved great success in image generation.

2.4.1. Architecture

A GAN has two basic modules, a generator and a discriminator, as shown in Fig. 12. The generator can be any inversion network, and it aims at fooling the discriminator. The discriminator is a classification network that should distinguish, as well as possible, between real impedance sequences and those produced by the generator. The two modules form an adversarial mechanism during the training process; as a result, the generator produces realistic impedance sequences and the discriminator cannot distinguish between true and generated sequences.

Fig. 12. GAN architecture. The generator is an inversion network that generates impedance sequences from seismic traces. The discriminator distinguishes between generated and real impedance sequences. The hyperparameters of the discriminator are given in the yellow frame, where h and fc stand for the number of hidden nodes and the fully connected layer, respectively.

There is a strong correlation between seismic inversion and image super-resolution, as both reconstruct high-frequency signals from low-frequency signals. We therefore take the Enhanced Super-Resolution GAN (ESRGAN) (Wang et al., 2018) as a reference for designing an inversion GAN. The discriminator architecture and hyperparameters are given in Fig. 12. The final fully connected layer has only one hidden node, as it is a binary classification network. Different from standard classification, we use the Relativistic average Discriminator (RaD) (Jolicoeur-Martineau, 2019) to predict how realistic the generated impedance is relative to the true impedance. The RaD is formulated as follows:

D_Ra(i_r, i_g) = δ(f_D(i_r) − E_{i_g}[f_D(i_g)]) (8)

where i_r and i_g are the real and generated impedances, f_D(i) represents the output of the final fully connected layer for impedance i, E_{i_g}[·] represents the average over the generated impedances in a mini-batch, and δ(·) is the sigmoid function. An ideal discriminator makes D_Ra(i_r, i_g) = 1 and D_Ra(i_g, i_r) = 0. The discriminator loss is then defined as:

L_D = −E_{i_r}[log D_Ra(i_r, i_g)] − E_{i_g}[log(1 − D_Ra(i_g, i_r))] (9)

The adversarial loss of the generator is defined as:

L_adv = −E_{i_r}[log(1 − D_Ra(i_r, i_g))] − E_{i_g}[log D_Ra(i_g, i_r)] (10)

The total loss for the generator is then defined as follows:

L_G = L_MSE + λ_p L_p + λ_g L_adv (11)

where λ_p and λ_g are weight factors for the perceptual and adversarial losses, respectively. In the training process, the generator and discriminator are alternately updated by minimizing L_G and L_D.
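The RaD and the losses of Equations (8)-(10) can be sketched as follows, operating on the raw discriminator outputs f_D for a mini-batch of real and generated impedances:

```python
import torch

def d_ra(f_real, f_fake):
    # D_Ra(i_r, i_g) = sigmoid(f_D(i_r) - E_{i_g}[f_D(i_g)]), Equation (8)
    return torch.sigmoid(f_real - f_fake.mean())

def discriminator_loss(f_real, f_fake):
    # Equation (9): push D_Ra(i_r, i_g) -> 1 and D_Ra(i_g, i_r) -> 0
    return -(torch.log(d_ra(f_real, f_fake)).mean()
             + torch.log(1.0 - d_ra(f_fake, f_real)).mean())

def adversarial_loss(f_real, f_fake):
    # Equation (10): the symmetric objective that updates the generator
    return -(torch.log(1.0 - d_ra(f_real, f_fake)).mean()
             + torch.log(d_ra(f_fake, f_real)).mean())
```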

2.4.2. Experiments

In order to speed up convergence, we first train an inversion network with the MSE loss using the default setting, and then use the pre-trained model as the initial generator. The parameters λ_p, λ_g, and l are empirically set to 1.0, 7e-3, and 4, respectively. The initial learning rates of the generator and discriminator are 0.7 and 0.9, and they decay by a factor of 0.95 with decay steps of 50 and 100, respectively. The GAN is trained for 1000 epochs. We adopt the three networks, i.e., CNN, MSCNN, and UNet, as generators of the GAN. The trace inversion results are shown in Fig. 8; we can see that the GANs recover more details than the pure networks and have a visual effect similar to that of the networks with perceptual loss. However, according to the section inversion results in Figs. 13 and 14, the GANs generate finer layers than the other two methods, especially within the depth window between 50 and 250. In addition, the GANs produce some dark layers (with low impedances) near a depth of 200 that cannot be observed in the results of the other methods.

3. Discussion

The hyperparameter experiments on the conventional CNN demonstrate that networks with more parameters show stronger fitting ability, from the two perspectives of the number of channels and the kernel size. But this improvement tends to disappear as the number of parameters increases, as shown in Fig. 4a-d. This is because each network's ability to fit the small dataset becomes saturated. From the layer-number perspective, as shown in Fig. 4e and f, an excessive increase in the number of layers leads to degraded convergence. A common view is that a deeper network makes gradient backpropagation difficult and may even produce the vanishing gradient problem (He et al., 2016). Fig. 3a-c indicate that the curve fitting performance varies greatly with the hyperparameters, which is mainly reflected in the reconstruction of high-frequency details.

Using a conventional CNN to solve the inversion problem is an intuitive approach. However, it is hard to choose proper hyperparameters, since the inversion result is hyperparameter-sensitive. A multi-scale architecture can extract features at different scales and is therefore able to recover more details than a conventional CNN with the same number of parameters. As a result, multi-scale architectures reduce the cost of hyperparameter selection. But we note that even though the three methods converge to the same level, as shown in Fig. 7, they yield quite different visual effects in the inversion sections, as shown in the first columns of Figs. 13 and 14. Overall, the multi-scale inversion sections show more thin layers, but the MSCNN and UNet produce different high-impedance areas. Therefore, it is important to adopt an appropriate architecture.

From the inversion experiments, the key point is the reconstruction of high-frequency information. In the CV field, the MSE loss is known to produce smoothness, which can be improved by the perceptual loss in Equation (6). In order to build an impedance feature space for calculating the perceptual loss, we design an autoencoder that learns a mapping function from impedance to itself, as shown in Fig. 9, and then extract features at the endpoints of the autoencoder. Figs. 8, 13 and 14 show that the perceptual loss contributes greatly to reconstructing high-frequency information. On the other hand, the endpoint layer l and weight factor λ_p in Equations (6) and (7) also affect the inversion results, as shown in Fig. 11a and b, so a trade-off must be made between detail reconstruction and fitting stability. The GAN experiments demonstrate that the adversarial training mechanism further promotes the reconstruction of details and generates finer layers, as shown in Figs. 13 and 14. Besides, some dark layers with low impedance values appear in the GAN inversion results, which may indicate its ability to recover high-frequency information.

Fig. 13. Inline inversion results by different methods. Top row: the baseline CNN. Middle row: MSCNN. Bottom row: UNet. First column: the pure networks. Second column: networks with perceptual loss. Third column: networks with GAN.

DL-based methods achieve promising results but also show some limitations. Different architectures produce different visual effects, which may cause confusion in practical applications, and there is no objective evaluation index to indicate which network should be used. The widely used MSE can provide a reference for fitting performance, but it rewards smoothness, so it is necessary to build an evaluation function related to the structure and content of the impedance. The other obvious problem is the lack of training data. In practice, the number of well logs is highly limited, which often results in network overfitting. Tricks such as adding Gaussian noise cannot completely avoid this risk. One way to address it is to build realistic structure models (Wu et al., 2020) to simulate more seismic and impedance pairs. Another meaningful way is to introduce the physical mechanism of Equation (1) into the network architecture to make full use of seismic traces that do not correspond to any true impedance sequence, which amounts to semi-supervised learning.
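As an illustration of that second direction (our sketch, not an experiment from this paper), a forward-model consistency loss could penalize the mismatch between an unlabeled trace and the trace re-synthesized from the predicted impedance via Equations (1) and (2); the wavelet is assumed known or estimated beforehand, which, as noted in the introduction, is itself nontrivial.

```python
import torch
import torch.nn.functional as F

def forward_consistency_loss(model, s, wavelet):
    """s: unlabeled traces of shape (batch, 1, K); wavelet: 1-D tensor."""
    i = model(s)                                                 # predicted impedance
    r = (i[..., 1:] - i[..., :-1]) / (i[..., 1:] + i[..., :-1])  # Equation (2)
    w = wavelet.flip(0).view(1, 1, -1)    # flip: conv1d is cross-correlation
    s_hat = F.conv1d(r, w, padding=w.shape[-1] // 2)             # Equation (1)
    return F.mse_loss(s_hat, s[..., :s_hat.shape[-1]])
```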

4. Conclusion

This paper comprehensively studies DL-based methods for the seismic impedance inversion problem. A series of networks are designed to improve the reconstruction of high-frequency information. Through experiments, we reveal the influence of network hyperparameters and architectures on inversion performance. The differences between the conventional CNN and multi-scale architectures in convergence, trace fitting, and visual effect are studied in detail. Inspired by developments in the CV field, we adopt the perceptual loss and the GAN mechanism, which prove effective for enhancing high-frequency details. In spite of the success of DL-based methods, they still show the aforementioned limitations regarding objective evaluation indexes and training data. We plan to address these two issues in the future.

    Acknowledgments

This research was supported by the National Natural Science Foundation of China under Grant No. 42050104.
