
Unpacking the black box of deep learning for identifying El Niño-Southern Oscillation

Communications in Theoretical Physics, 2023, Issue 9

Yu Sun, Yusupjan Habibulla, Gaoke Hu, Jun Meng, Zhenghui Lu, Maoxin Liu and Xiaosong Chen

1 School of Systems Science/Institute of Nonequilibrium Systems, Beijing Normal University, Beijing 100875, China

2 School of Physics and Technology, Xinjiang University, Wulumuqi 830017, China

3 School of Science, Beijing University of Posts and Telecommunications, Beijing 100876, China

4 National Institute of Natural Hazards, Ministry of Emergency Management of China, Beijing 100085, China

Abstract By training a convolutional neural network (CNN) model, we successfully recognize different phases of the El Niño-Southern Oscillation. Our model achieves high recognition performance, with accuracy rates of 89.4% for the training dataset and 86.4% for the validation dataset. Through statistical analysis of the weight parameter distribution and activation output in the CNN, we find that most of the convolution kernels and hidden layer neurons remain inactive, while only two convolution kernels and two hidden layer neurons play active roles. By examining the weight parameters of the connections between the active convolution kernels and the active hidden neurons, we can automatically differentiate various types of El Niño and La Niña, thereby identifying the specific function of each part of the CNN. We anticipate that this progress will be helpful for future studies on both climate prediction and a deeper understanding of artificial neural networks.

Keywords: deep learning, El Niño-Southern Oscillation, convolutional neural network, interpretability

1. Introduction

Deep learning [1–6] has emerged as a powerful and adaptive paradigm for handling complexity and acquiring abstract representations from data. It has led to groundbreaking advancements in various fields such as Earth system science [7–12], biology [13, 14], finance [15], transportation [16, 17], and more. However, the interpretability [8, 18–24] of deep learning models, often referred to as ‘black boxes’ [25–27], remains a pressing concern. Deriving human-comprehensible insights from these models [28–31] is crucial for a deeper understanding and for generating domain knowledge. Noteworthy interpretation techniques, including layerwise relevance propagation [24, 32–34], saliency maps [35–38], optimal input [39], and others, have been employed to unravel the inner mechanisms of deep learning models. However, these existing approaches predominantly focus on identifying the empirical features that contribute to the model’s output rather than delving into the causal clues within the black box itself [8, 40–43].

The El Niño-Southern Oscillation (ENSO) is a well-known interannual climate variability phenomenon [44, 45]. ENSO is characterized by abnormal warming or cooling in the equatorial central and eastern Pacific region, and its anomalous state has far-reaching meteorological impacts on a global scale [46–48]. Numerous studies have employed a variety of deep learning architectures to analyze and forecast the evolution of ENSO [9, 34, 49–51]. For example, Ham et al proposed a deep ensemble prediction framework for forecasting El Niño events using CNNs, with improved prediction accuracy compared to traditional statistical models [9]. While this study improves ENSO prediction accuracy, it has several potential shortcomings. The primary limitation lies in the limited interpretability of CNNs, which hinders a clear understanding of the model’s decision-making process. In addition, the reliance on data quality may also affect the confidence of the predictions.

Fig. 1. Architecture of the CNN model used in this study. Our CNN architecture comprises a convolutional layer with 16 convolution kernels of size 5×5. These kernels function as filters, processing the input image through element-wise multiplications and aggregating activation feature maps. Subsequently, the feature maps undergo a 2×2 average pooling layer, reducing their size by aggregating values within local neighborhoods. The pooled feature maps are then flattened and fully connected to a hidden layer consisting of 100 neurons. Finally, the neurons in the hidden layer are connected to a fully connected output layer with 3 neurons.

Deep learning models can be viewed as compositions of constituent units called neurons [1–3], with the representation encompassing both individual neuron parameter features and the collective behavior arising from their activation states. Investigating individual neurons and their activation states is of fundamental importance for achieving a comprehensive understanding of the representation. Moreover, the complexity of deep learning models is intrinsically linked to the complexity of the learning tasks they tackle, as exemplified by DeepMind’s research [52]. When a deep learning model exhibits perfect performance on a specific task, it implies that the model’s representation encodes the task’s intrinsic features [53]. Therefore, with the improved measurability and transparency offered by deep learning models, the complexity inherent in these systems should not serve as an excuse to avoid studying them. Instead, it should serve as motivation to investigate further.

Based on this perspective, we propose a methodology to analyze the inner representation of deep learning models. We use it to elucidate the inner workings of a convolutional neural network (CNN) model designed to classify ENSO. Our deep learning task focuses on classifying different phases of ENSO based on near-surface air temperature. The well-trained model demonstrates precise identification of distinct ENSO phases. Analysis of the internal model representation reveals a condensed and simplified parameter structure, improving our understanding of each component’s task-specific function. Remarkably, this parameter structure enables clear differentiation between eastern Pacific (EP) and central Pacific (CP) El Niño patterns, as well as weak and extreme La Niña patterns, shedding light on the crucial features of this natural phenomenon.

This study serves as a rudimentary model for delving into the black boxes of complex systems, providing preliminary insights and inspiration for harnessing the power of deep learning to comprehend phenomena within complex systems. Moreover, we believe that the fundamental perspective and methodology employed in this study can be extended to more intricate models, owing to the nature of complexity shared among diverse systems.

2. Data and methods

2.1. Data

The aim of our learning task is to identify different phases of ENSO based on images of near-surface air temperature. ENSO is a basin-scale phenomenon that involves coupled atmosphere-ocean processes. It consists of three phases: El Niño, La Niña, and the normal phase. El Niño and La Niña refer to irregular warming and cooling of the equatorial central and eastern Pacific region, respectively. To define the phases, we conventionally employ the Oceanic Niño Index (ONI, https://origin.cpc.ncep.noaa.gov/products/analysis_monitoring/ensostuff/ONI_v5.php, accessed on 31 May 2019) [54], which is a three-month running mean of the sea surface temperature anomaly (SSTA) in the Niño 3.4 region (5°N–5°S, 170°W–120°W). According to NOAA’s definition, for a full-fledged El Niño or La Niña event, the ONI must exceed +0.5 °C or fall below −0.5 °C for at least five consecutive months.
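As a concrete illustration, NOAA's threshold rule can be sketched in a few lines of Python. This is our own simplified sketch, not code from the paper: the function name `label_phases`, the 0/1/2 phase encoding, and the assumption of a monthly ONI series are illustrative choices.

```python
import numpy as np

def label_phases(oni, threshold=0.5, min_run=5):
    """Assign El Nino (0), La Nina (1), or normal (2) to each monthly
    ONI value: a phase requires the ONI to stay beyond +/-0.5 C for at
    least five consecutive overlapping three-month seasons."""
    oni = np.asarray(oni, dtype=float)
    labels = np.full(len(oni), 2)          # default: normal phase
    for sign, phase in ((1, 0), (-1, 1)):  # warm -> El Nino, cold -> La Nina
        exceed = sign * oni >= threshold
        run = 0
        for i, flag in enumerate(exceed):
            run = run + 1 if flag else 0
            if run >= min_run:             # relabel the whole qualifying run
                labels[i - run + 1 : i + 1] = phase
    return labels
```

For example, six consecutive months at ONI = +0.8 °C are labeled El Niño, while a four-month warm excursion stays in the normal phase.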

We utilize near-surface (2 m) air temperature data as input and assign the corresponding ENSO phase as the label. The daily near-surface (2 m) air temperature data are obtained from the National Centers for Environmental Prediction-National Center for Atmospheric Research (NCEP-NCAR) Reanalysis [55] (https://psl.noaa.gov/data/gridded/data.ncep.reanalysis.html, accessed on 31 May 2019), presented as grids with a latitude-longitude interval of 1.9° × 1.875°. Our region of interest spans from 60°S to 60°N, consisting of a total of N = 64 × 192 grids. We label the ENSO phases using a three-dimensional vector c(t), which equals (1, 0, 0) for El Niño, (0, 1, 0) for La Niña, and (0, 0, 1) for the normal phase.

For the purposes of training and validation, we split the dataset into two parts. The training dataset covers 1 January 1950 to 31 December 1999 and consists of a total of 18,262 days. The validation dataset spans 1 January 2000 to 31 December 2018 and consists of a total of 6,940 days. To filter out short-time-scale fluctuations in the data, we applied a 30-day sliding average with a sliding step of 1 day. To validate the robustness of our analysis, the daily data without the sliding average are used for training as a comparison in appendix B.
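The 30-day sliding average with stride 1 amounts to a simple moving-average filter. The sketch below is our own illustration, using a 'valid' window so the output is shorter than the input; the paper does not state its boundary handling.

```python
import numpy as np

def sliding_average(daily, window=30):
    """30-day moving average along the time axis with a sliding step of
    1 day; `daily` has shape (T, ...), the result (T - window + 1, ...)."""
    kernel = np.ones(window) / window
    # convolve each grid point's time series with the averaging kernel
    return np.apply_along_axis(
        lambda series: np.convolve(series, kernel, mode="valid"), 0, daily)
```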

We denote the surface temperature of grid i at time t as S_i(t), with the time average

$$\bar{S}_i = \frac{1}{T} \sum_{t=t_0}^{t_1} S_i(t),$$

where t_0 is 1 January 1950, t_1 is 31 December 1999, and T represents the total number of days in this period. The temperature fluctuation of grid i at time t is given by

$$\Delta S_i(t) = S_i(t) - \bar{S}_i.$$

The root mean square deviation for grid i is calculated as

$$\sigma_i = \sqrt{\frac{1}{T} \sum_{t=t_0}^{t_1} \left[\Delta S_i(t)\right]^2}.$$

Considering the significant differences in temperature fluctuations across different regions, we standardize the data for each grid i using

$$x_i(t) = \frac{\Delta S_i(t)}{\sigma_i}.$$

The input to the neural network is represented as

$$\mathbf{x}(t) = \left(x_1(t), x_2(t), \ldots, x_N(t)\right).$$
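The per-grid standardization described above amounts to three reductions over the time axis; a sketch, assuming the data are stored as a numpy array of shape (T, 64, 192):

```python
import numpy as np

def standardize(S):
    """Per-grid standardization: subtract each grid's time average and
    divide by its root mean square deviation over the same period."""
    mean = S.mean(axis=0)                    # time average per grid point
    dS = S - mean                            # temperature fluctuation
    rms = np.sqrt((dS ** 2).mean(axis=0))    # root mean square deviation
    return dS / rms                          # standardized input x_i(t)
```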

2.2. Convolutional neural network

In this study, we employ a CNN, a classical and well-established deep learning architecture, to address the task of ENSO phase recognition within the deep learning paradigm, as shown in figure 1. Our specific CNN architecture consists of a convolutional layer comprising 16 convolution kernels, each of size 5×5. These kernels act as filters, transforming the input image by performing element-wise multiplications and subsequently aggregating the resulting activation feature maps. These feature maps then undergo a 2×2 average pooling layer, which reduces their size by coarse-graining values within 2×2 local neighborhoods. The pooled feature maps are then connected to a fully-connected hidden layer comprising 100 neurons. Finally, the neurons within the hidden layer are connected to a fully-connected output layer housing only 3 output neurons. This output layer facilitates the decision-making process, providing the output vector o(t) = (o_1(t), o_2(t), o_3(t)), where o_1(t), o_2(t), and o_3(t) represent the predicted intensities for the El Niño, La Niña, and normal phases, respectively.
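To make the data flow concrete, a minimal numpy forward pass matching this architecture is sketched below. The 'valid' convolution, the absence of convolution biases, and the ReLU placement are our assumptions for illustration; the paper does not spell out these details.

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def relu(z):
    return np.maximum(z, 0.0)

def cnn_forward(x, K, W1, b1, W2, b2):
    """One forward pass: x is the (64, 192) input image, K the (16, 5, 5)
    kernels, (W1, b1) the hidden layer of 100 neurons, (W2, b2) the
    3-neuron output layer. Returns o(t) = (o1, o2, o3)."""
    patches = sliding_window_view(x, (5, 5))              # (60, 188, 5, 5)
    fmap = relu(np.einsum("ijkl,nkl->nij", patches, K))   # 16 feature maps
    pooled = fmap.reshape(16, 30, 2, 94, 2).mean(axis=(2, 4))  # 2x2 avg pool
    h = relu(pooled.reshape(-1) @ W1 + b1)                # 100 hidden neurons
    return h @ W2 + b2                                    # 3 output intensities
```

With this layout the flattened pooled maps have 16 × 30 × 94 = 45,120 entries, which fixes the shape of W1.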

The neuron parameters are randomly initialized before the training process, and a meaningful representation develops through training. Following the convention in deep learning classification tasks, a cross-entropy loss function is utilized to quantify the discrepancy between the softmax-normalized outputs and the labels. The training objective is to minimize the overall discrepancy by adjusting the neuron parameters. To mitigate overfitting, we incorporate L2 parameter regularization [3] by adding a penalty term to the loss function. If a significant discrepancy in accuracy arises between the training and validation datasets, training with increased L2 regularization strength is necessary to ensure satisfactory performance on the validation data.

2.3. Unpacking the black box

Unpacking the black box involves examining neuron parameters and activation states. The importance of a neuron's parameters is assessed from their magnitudes: understanding the spatial distribution of the parameters and sorting them accordingly provides valuable insights, highlighting the key components that drive the inner workings of the black box. Additionally, a deep understanding of the collective behavior of the activation states is essential for comprehending the black box.

3. Results

3.1. Identification of ENSO phases

Impressive accuracy has been achieved for both the training and validation datasets through sufficient training. The accuracy stands at 89.4% for the training dataset and reaches 86.4% for the validation dataset. From the validation dataset, we obtain the prediction o(t) = (o_1(t), o_2(t), o_3(t)), whose components refer to the El Niño, La Niña, and normal phases, respectively. The ENSO phase is identified by the largest output. In addition, we quantify the significance of the identified ENSO phase. The significance is related to the standard deviation

which is depicted in figure 2(a). For states with significance R(t) < 10 and persistence in an ENSO phase for less than 2 months, the phase is defined as that of the previous states.
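This relabeling rule can be sketched as a short post-processing pass. The sketch is our own: the two-month persistence threshold is taken here as 60 days, and the significance values R(t) are assumed given.

```python
def smooth_phases(phases, R, min_sig=10.0, min_len=60):
    """Relabel runs of a phase that last less than `min_len` days and
    whose significance R(t) stays below `min_sig`: such runs inherit
    the phase of the preceding state."""
    phases = list(phases)
    i = 1
    while i < len(phases):
        j = i
        while j < len(phases) and phases[j] == phases[i]:
            j += 1                      # [i, j) is a run of equal phases
        if (phases[i] != phases[i - 1] and (j - i) < min_len
                and max(R[i:j]) < min_sig):
            phases[i:j] = [phases[i - 1]] * (j - i)
        i = j
    return phases
```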

From figure 2(a), we can see that all El Niño events since 2000 have been correctly identified. The very strong El Niño event from 2015 to 2016 has been classified with a very large R(t). The strong La Niña events in 1999–2000, 2007–2008, and 2010–2011 have been identified with large R(t). Two misjudgments appear for the two weakest events, in 2005–2006 and 2016–2017, which are indicated by small R(t) as well.

3.2. Unpacking the neural network

Fig. 2. (a) Identification of ENSO phases and their significance for the validation dataset (2000–2018), with the El Niño, La Niña, and normal phases represented by red, blue, and white backgrounds, respectively. (b) The Oceanic Niño Index (ONI) from 2000 to 2018, where the El Niño, La Niña, and normal phases are displayed with red, blue, and white backgrounds, respectively.

Fig. 3. Visualization of the convolution kernels with their values shown as 5×5 nodes on the surface. Among the 16 convolution kernels, only kernel 5 (blue surface) and kernel 6 (red surface) have nonzero values.

3.2.1. Unpacking the convolutional layer. Our investigation begins with an examination of the convolutional layer within our CNN architecture, which consists of an ensemble of 16 convolution kernels. Notably, these kernels exhibit a remarkable degree of sparsity: only kernels 5 and 6 manifest notable values, as shown in figure 3. Kernel 6 presents a smooth, concave-up pattern characterized by positive values, while kernel 5 displays a smooth, convex-down pattern characterized by negative values. By comparing the output feature maps produced by each kernel with the input image, we can discern the distinct functions that these kernels serve, as demonstrated in appendix A. Kernel 6 effectively captures positive features, representing the magnitude of positive temperatures in its feature map. Conversely, kernel 5 adeptly captures negative features, representing the magnitude of negative temperatures in its feature map. The remaining convolution kernels yield null outputs, indicating that they do not contribute to the final prediction.
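This sparsity statement can be checked mechanically by ranking the kernels by their L2 norms; an illustrative sketch, where the tolerance `tol` and the 0-based indexing are our choices:

```python
import numpy as np

def active_kernels(K, tol=1e-3):
    """Return the (0-based) indices of convolution kernels whose L2 norm
    exceeds `tol`; in the trained model only two indices survive."""
    norms = np.linalg.norm(K.reshape(len(K), -1), axis=1)
    return [i for i, n in enumerate(norms) if n > tol]
```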

3.2.2. Unpacking the hidden layer. In the hidden layer consisting of 100 fully-connected neurons, it is vital to identify the key neurons. By prioritizing high averages and significant variations in activation output, we observe sparse activation in this layer as well. Based on this analysis, we identify two crucial active neurons: hidden neurons 50 and 100. Each active neuron demonstrates two distinct parameter patterns, corresponding to the connection parameters linked to the feature maps derived from convolution kernels 5 and 6, respectively.
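The same kind of screening applies to the hidden layer: given the activations of all 100 neurons over the dataset, neurons with negligible mean and variation can be filtered out. A sketch with an illustrative threshold; the mean-plus-spread score is our own simple proxy for the criterion described above.

```python
import numpy as np

def active_hidden_neurons(H, tol=1e-6):
    """H holds hidden-layer activations with shape (T, 100). A neuron is
    flagged active if its activation has a non-negligible mean or spread."""
    score = np.abs(H.mean(axis=0)) + H.std(axis=0)
    return np.flatnonzero(score > tol)
```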

The patterns observed in neuron 100 effectively capture the unique characteristics of El Niño, specifically the anomalous warming in the eastern and central equatorial Pacific. This neuron displays a selective response to positive signals, particularly in the central and eastern Pacific regions, when analyzing the positive features extracted by kernel 6 (figure 4(a)). Conversely, when these regions exhibit concentrated negative features extracted by kernel 5, neuron 100 exhibits a low activation output, as shown in figure 4(b).

It is interesting to note that these patterns inherently capture the distinct spatial patterns of the EP and CP El Niño flavors. The EP and CP flavors are of significant importance in comprehending the El Niño phenomenon, as they represent distinct anomalous temperature patterns accompanied by notable variations in regional climatic impacts [56]. As shown in figure 4(a), the connection parameters associated with the feature map derived from kernel 6 exhibit an EP flavor, aligning with the maximum warming observed in the eastern Pacific near the coast of South America (see the EP El Niño pattern shown in figure 2(a) of [56]). As depicted in figure 4(b), the connection parameters linked to the feature map derived from kernel 5 exhibit a distinct CP flavor. The CP flavor corresponds to the anomalous center observed in the central equatorial Pacific and is notable for its characteristic horseshoe pattern (see the CP El Niño pattern shown in figure 5(b) of [56]).

Fig. 4. (a)–(b) Visualization of nontrivial parameters for hidden neuron 100. The connection parameters associated with the feature maps derived from convolution kernels 6 and 5 are visualized in (a) and (b), respectively.

Fig. 5. (a)–(b) Visualization of nontrivial parameters for hidden neuron 50. The connection parameters associated with the feature maps derived from convolution kernels 6 and 5 are visualized in (a) and (b), respectively.

On the other hand, the patterns observed in neuron 50 reflect the distinctive characteristics of La Niña, specifically the anomalous cooling observed in the eastern and central equatorial Pacific. This neuron demonstrates a selective response to negative signals, particularly in the central and eastern Pacific regions, when analyzing the negative features extracted by kernel 5, as shown in figure 5(b). Conversely, when these regions exhibit concentrated positive features extracted by kernel 6, neuron 50 exhibits a low activation, as shown in figure 5(a). The connection parameters associated with neuron 50 also display two distinctive patterns. Notably, they reveal a clear distinction between different types of La Niña patterns. Although the CP and EP types of La Niña are not clearly distinguishable [57], there are significant differences in spatial distribution between weak and extreme La Niña [58]. Figure 5(a) shows that the connection parameters associated with the feature map derived from kernel 6 correspond to the weak La Niña pattern, while those associated with the feature map derived from kernel 5 correspond to the extreme La Niña pattern (refer to figures 1(a)–(b) of [58] for the weak and extreme La Niña patterns).

3.2.3. Unpacking the output layer. The output layer comprises 3 output neurons, each playing an important role in predicting the El Niño, La Niña, and normal phases, respectively. Therefore, we directly refer to the three neurons as the El Niño neuron (output neuron 1), the La Niña neuron (output neuron 2), and the normal neuron (output neuron 3). The contribution of the hidden neurons to each output neuron is reflected in the parameters, as shown in figure 6. Sparsity is observed here as well: only the active hidden neurons (hidden neurons 50 and 100) contribute significantly to the output neurons, consistent with the discussion in section 3.2.2. The distinct roles of these active hidden neurons can be identified from their contributions: hidden neuron 100 promotes the El Niño prediction while inhibiting the La Niña prediction, and hidden neuron 50 exhibits the opposite effect.
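The promoting or inhibiting role of a hidden neuron can be read directly from the signs of its output-layer weights; a sketch, assuming a hidden-to-output weight matrix of shape (100, 3) with 0-based neuron indices:

```python
import numpy as np

def neuron_roles(W2, neuron):
    """Report whether a hidden neuron promotes (positive weight) or
    inhibits (negative weight) each of the three output classes."""
    classes = ("El Nino", "La Nina", "normal")
    return {c: "promotes" if w > 0 else "inhibits"
            for c, w in zip(classes, W2[neuron])}
```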

4. Conclusions

In this work, we employ near-surface (2 m) air temperature data from the NCEP/NCAR reanalysis to train a CNN to recognize the various phases of ENSO. Our model exhibits high recognition accuracy, with the training and validation datasets achieving accuracy rates of 89.4% and 86.4%, respectively. To further understand the underlying reasons for the good performance of our neural network model, we examine the parameter distribution and activation output in the neural network. It turns out that only two convolution kernels (No. 5 and No. 6) actively contribute to the results, while the others are of zero value. Thus, only these two convolution kernels are needed to extract the negative-temperature and positive-temperature features, respectively, from the original input of the global temperature field. Similarly, among the 100 hidden layer neurons, only two (the 50th and the 100th) play a dominant role in the model: the 100th neuron responds to El Niño features, and the 50th neuron responds to La Niña features. By analyzing the connection weight parameters between the two active convolution kernels (No. 5 and No. 6) and the two dominant hidden layer neurons (the 50th and the 100th), we find that each weight parameter pattern represents the characteristics of a certain type of El Niño or La Niña. Therefore, we can distinguish different types of El Niño and La Niña in sufficiently clear geographical regions. For example, the four types of climate phenomena (the eastern-type El Niño, the central-type El Niño, the weak La Niña, and the extreme La Niña) are readily recognized according to the four connection weight parameter patterns between the two active convolution kernels and the two dominant hidden layer neurons (No. 6 and the 100th, No. 5 and the 100th, No. 6 and the 50th, and No. 5 and the 50th). This work shows that our model successfully learns and differentiates specific features of climate phenomena. We expect this progress to be helpful both for future predictions of climate change and for a deeper understanding of the general underlying mechanisms of artificial neural networks.

    Acknowledgments

This work is supported by the National Natural Science Foundation of China (Grant No. 12135003). We also acknowledge Jingfang Fan, Yongwen Zhang, Naiming Yuan, and Jiaqi Dong for discussions.

Appendix A. Demonstration of the effects of convolution kernels

As shown in figures A1–A5, convolution kernel 6 captures positive features, representing the magnitude of positive temperatures in its feature map. Convolution kernel 5 captures negative features, representing the magnitude of negative temperatures in its feature map. The remaining convolution kernels produce null outputs, signifying that their outputs do not contribute to the final prediction. These effects hold true for any selected dates.

Figure A2. Illustration of the convolution kernels with the input image sampled from 1 November 2009, during the CP El Niño period.

Figure A3. Illustration of the convolution kernels with the input image sampled from 1 February 2018, during the weak La Niña period.

Figure A4. Illustration of the convolution kernels with the input image sampled from 1 January 2008, during the extreme La Niña period.

Figure A5. Illustration of the convolution kernels with the input image sampled from 1 March 2007, during the normal period.

Appendix B. Reproducibility and robustness of the results

To demonstrate the robustness and reproducibility of our study, we conduct two groups of independent parallel trainings. We use daily data from the same dataset, without the sliding average, to validate the robustness of the analysis. In figures B1–B3, we compare the monthly predictions between the original training and the two parallel trainings. In figures B4–B6, we compare the outputs of the active hidden neurons between the original training and the two parallel trainings. Similarly, in figures B7–B9, we compare the outputs of the output layer between the original training and the two parallel trainings. The parameters for the two parallel trainings are shown in figures B10–B12 and figures B13–B15, respectively. During the parallel trainings, we observed that training with daily data resulted in the emergence of an additional active neuron in the hidden layer. Examining the output of this neuron, we discovered that it exhibits relatively low variability and follows an annual cycle, as shown in figures B5(c) and B6(c). This may be caused by the stronger fluctuations and volatility inherent in the daily data.

    Figure B1. The monthly predictions for the validation dataset (2000–2018).

    Figure B2. The monthly predictions for the validation dataset (2000–2018) in parallel training 1.

Figure B3. The monthly predictions for the validation dataset (2000–2018) in parallel training 2.

Figure B4. The nontrivial outputs of the hidden layer for the validation dataset (2000–2018).

Figure B5. The nontrivial outputs of the hidden layer for the validation dataset (2000–2018) in parallel training 1.

Figure B6. The nontrivial outputs of the hidden layer for the validation dataset (2000–2018) in parallel training 2.

Figure B7. The outputs of the output layer for the validation dataset (2000–2018).

Figure B8. The outputs of the output layer for the validation dataset (2000–2018) in parallel training 1.

Figure B9. The outputs of the output layer for the validation dataset (2000–2018) in parallel training 2.

Figure B10. Visualization of the convolution kernels in parallel training 1 with their values shown as 5×5 nodes on the surface.

Figure B12. Visualization of the parameters for the output neurons in parallel training 1.

Figure B13. Visualization of the convolution kernels in parallel training 2 with their values shown as 5×5 nodes on the surface.

Figure B14. Visualization of the nontrivial parameters in the hidden layer of parallel training 2.

Figure B15. Visualization of the parameters for the output neurons in parallel training 2.

Appendix C. Details about the CNN

C.1. Neurons and convolution kernels

A neuron extracts features from its input. We depict the workflow of a neuron in figure C1. The ReLU function, used as the activation function in a neuron, is shown in figure C2. Convolution kernels act as filters, transforming the input image by performing element-wise multiplications and subsequently aggregating the resulting activation feature maps. We present a schematic diagram of convolution kernels in figure C3, illustrating two kernels of size 3×3 as an example.
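The filter operation described here (element-wise multiplication of the kernel with each image patch, then summation) can be written compactly with numpy; a sketch using 'valid' boundaries:

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def convolve2d_valid(img, kernel):
    """Slide the kernel over the image, multiply element-wise and sum,
    producing one feature-map value per position ('valid' boundaries)."""
    patches = sliding_window_view(img, kernel.shape)
    return np.einsum("ijkl,kl->ij", patches, kernel)
```

For example, a 4×4 image filtered with a 3×3 all-ones kernel yields a 2×2 feature map whose entries are the sums of the corresponding 3×3 patches.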

C.2. Loss function and L2 parameter regularization

In the training process, the cross-entropy function is used to measure the loss at time t,

$$\mathrm{loss}(t) = -\sum_{k=1}^{3} c_k(t) \ln p_k(t).$$

Here, c(t) denotes the label, and p(t) denotes the softmax normalized output with

$$p_k(t) = \frac{e^{o_k(t)}}{\sum_{j=1}^{3} e^{o_j(t)}}.$$

The training is essentially a fitting process that minimizes the total loss by tuning the neural network parameters. To suppress overfitting during the training process, L2 parameter regularization is used. This method updates the loss function to be minimized from loss(t) to

$$\widetilde{\mathrm{loss}}(t) = \mathrm{loss}(t) + \alpha \sum_{m} \Omega_m,$$

where

$$\Omega_m = \sum_{j} w_{m,j}^{2}$$

is referred to as the L2 parameter for neuron m, and α is the factor used to control the strength of L2 parameter regularization during the training process.
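Numerically, the softmax, the cross-entropy loss, and the L2 penalty combine as below. This is a sketch: writing the penalty as α times the sum of all squared weights is one common form consistent with the description above, not necessarily the paper's exact expression.

```python
import numpy as np

def softmax(o):
    e = np.exp(o - o.max())          # subtract max for numerical stability
    return e / e.sum()

def regularized_loss(o, c, weights, alpha):
    """Cross-entropy between the softmax output and the label c(t),
    plus an L2 penalty of strength alpha on the weight arrays."""
    p = softmax(np.asarray(o, dtype=float))
    ce = -np.sum(np.asarray(c) * np.log(p))
    l2 = sum(np.sum(w ** 2) for w in weights)
    return ce + alpha * l2
```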

Figure C2. The ReLU function, ReLU(x) = max(x, 0).

Figure C3. A schematic diagram of convolution kernels, shown for two convolution kernels of size 3×3. These convolution kernels act as filters, transforming the input image by performing element-wise multiplications and subsequently aggregating the resulting activation feature maps.
