
    Less is more: a new machine-learning methodology for spatiotemporal systems

    Communications in Theoretical Physics, 2022, No. 5

    Sihan Feng, Kang Wang, Fuming Wang, Yong Zhang 1,2 and Hong Zhao 1,2

    1 Department of Physics, Xiamen University, Xiamen 361005, China

    2 Lanzhou Center for Theoretical Physics, Key Laboratory of Theoretical Physics of Gansu Province, Lanzhou University, Lanzhou 730000, China

    Abstract Machine learning provides a way to use only a portion of the variables of a spatiotemporal system to predict its subsequent evolution, and consequently avoids the curse of dimensionality. The learning machines employed for this purpose are, in essence, time-delayed recurrent neural networks with multiple input neurons and multiple output neurons. We show in this paper that such learning machines have poor generalization ability to variables that they have not been trained with. We then present a one-dimensional time-delayed recurrent neural network for the same aim of model-free prediction. It can be trained on different spatial variables in the training stage but initiated by the time series of only one spatial variable, and consequently possesses an excellent generalization ability to new variables that have not been trained on. This network presents a new methodology for achieving fine-grained predictions from a learning machine trained on coarse-grained data, and thus provides a new strategy for certain applications such as weather forecasting. Numerical verifications are performed on the Kuramoto coupled oscillators and the Barrio-Varea-Aragon-Maini model.

    Keywords: machine learning, spatiotemporal systems, prediction, dynamical systems, time series, time-delayed recurrent neural network

    1. Introduction

    Predicting the evolution of dynamic systems is important [1–3]. These systems usually have to be solved numerically using spatial and temporal discretization because analytical solutions are mathematically intractable. In this way, a partial differential equation is approximated by a set of ordinary differential equations. The main obstacle to this approach is that it becomes infeasible in higher dimensions due to the need for fine-grained spatial and temporal mesh points, which is known as the 'curse of dimensionality' [4]. For real applications, one has to seek a balance between accuracy and computing cost by applying a relatively coarse mesh [1, 5]. The machine-learning approaches developed in recent years provide possible ways to attack this notoriously difficult problem [6–9]. One way is to solve partial differential equations with variables from randomly sampled grid points when the equation of motion is known; researchers have developed an effective approach called physics-informed neural networks [10] for this purpose. Another way, which is the focus of this paper, addresses application scenarios in which data records are available for a large number of spatial points while the equations of motion are unknown. A learning machine trained on the time series of such spatial points can predict their subsequent evolution in a model-free way [11, 12]. Using such a strategy, researchers from Google developed a deep-learning machine called MetNet and greatly improved local weather forecasts [13].

    In these applications, a recurrent neural network with Q input and Q output neurons is employed as the learning machine, where Q is the number of spatial variables whose recorded time series are used in the training. When training such a learning machine, the data from the Q variables are the Q inputs, and the outputs represent the next-step evolution of these variables. Feeding the outputs back to the inputs, the learning machine can work as an iterative map. In this paper, we first check the generalization ability of such a learning machine when applied to new sets of Q spatial variables that have not been trained with. This ability characterizes the robustness of the machine: for a real system, the system parameters may shift slightly over time, so the learning machine is expected to have a certain degree of robustness to such shifted data sets. Our numerical verification indicates, however, that such learning machines have no such generalization ability.

    To solve this problem, we propose instead to apply a one-dimensional (1D) time-delayed recurrent neural network as the learning machine to predict the evolution of the underlying system. In the training stage, the learning machine is still trained with time series collected from different spatial variables. In the predicting stage, however, the learning machine can be initiated by the history record measured at any single spatial point. We show that this learning machine can predict not only the evolution of the variables that have been trained with, but also that of variables that have not been trained with before. The advantages of such generalization ability are that the learning machine is robust against small variations of the system parameters, and that it becomes possible to perform fine-grained predictions with a learning machine trained on coarse-grained data sets. This should be useful for applications such as weather forecasting, as one can perform the prediction for a local region without needing to learn from the fine-grained mesh of the entire spatiotemporal system. Our numerical studies are performed on the Kuramoto coupled oscillators (KCO) [14–16] and the Barrio-Varea-Aragon-Maini (BVAM) equation [17, 18]. The rest of the paper is structured as follows. The machine-learning model and method are introduced in section 2. The main results are reported in section 3, where we also explain why the traditional approach fails to generalize to new variables while ours succeeds. The summary and discussion are given in section 4.

    2. Model and method

    The learning machines adopted in this paper are schematically illustrated in figure 1. Figure 1(a) represents an MQ-N-Q neural network (with MQ, N, and Q being, respectively, the number of neurons in the input, hidden, and output layers), or equivalently a Q-N-Q time-delayed map with a delay length of M (as illustrated by the gray ellipse, M neurons are used as the input unit for each of the Q input records). Hereafter it is called the Q-dimensional time-delayed machine (QD-TDM), whose iterated map is given below:

    Figure 1. Schematic diagrams of the learning machines. (a) A 2-4-2 QD-TDM with a delay length of M = 3, and (b) a 1-4-1 1D-TDM with a delay length of M = 3. Each gray ellipse represents an input unit for the time series of one spatial variable.
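    Since the explicit form of equation (1) is not reproduced in this excerpt, the following is a minimal Python sketch of how a Q-dimensional time-delayed map of this kind can be evaluated: a three-layer network takes the last M samples of each of the Q variables and returns their next-step values. The tanh activation and the names W_in, b, and W_out are illustrative assumptions, not the authors' exact parameterization (which uses μ, ν, β, and b).

```python
import numpy as np

def qd_tdm_step(history, W_in, b, W_out):
    """One iteration of a Q-dimensional time-delayed map (illustrative form only).

    history : array of shape (Q, M), the last M samples of each of the Q variables
    W_in    : array of shape (N, Q*M), hidden-layer input weights
    b       : array of shape (N,), hidden biases
    W_out   : array of shape (Q, N), output weights

    Returns the predicted next value of each of the Q variables, shape (Q,).
    """
    x = history.reshape(-1)        # flatten the Q delayed records into M*Q input values
    h = np.tanh(W_in @ x + b)      # hidden layer of N neurons
    return W_out @ h               # Q outputs: next-step values of the Q variables
```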

    In the training stage, the cost function is minimized over the P − M training samples under a properly chosen set of control parameters: M, N, Q, c_μ, c_ν, c_β, and c_b, where the last four specify the allowed ranges of the network parameters, i.e., |μ_{l,i}| ≤ c_μ, |ν_{i,k}| ≤ c_ν, |β_i| ≤ c_β, |b_i| ≤ c_b. To perform the training, at fixed M, N, and Q, the μ_{l,i}, ν_{i,k}, β_i and b_i are randomly initialized within their respective ranges; then one parameter among μ_{l,i}, ν_{i,k}, β_i or b_i is randomly chosen and randomly mutated within its allowed range, and the mutation is accepted only if it does not increase the cost. Repeating the mutation step over and over, the cost function decreases monotonically. Since each adaptation renews only a small portion of the network, this algorithm is practical for typical applications; see [19, 20] for more details. Indeed, one can also apply the conventional gradient-based (back-propagation) algorithm to train our learning machine if the restrictions on the value ranges are removed. Researchers in the machine-learning community have begun to explore gradient-free algorithms in recent years, and the training algorithm presented here is one such effort. With this gradient-free algorithm one can achieve the training goal, at least for the usual three-layer neural networks, while gaining the ability to limit the range of the network parameters and therefore control the structural risk of the network. In addition, the parameters can take assigned discrete values, which may be required for certain practical applications. For these reasons, we adopt our gradient-free algorithm instead of conventional gradient-based algorithms.
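    As a concrete illustration of this mutation-based procedure, the sketch below perturbs one randomly chosen network parameter at a time within its allowed range and keeps the change only if the training cost does not increase. The dictionary layout, the bound handling, and the flat indexing are assumptions of the sketch; the actual implementation details follow [19, 20].

```python
import numpy as np

def train_by_mutation(params, bounds, cost_fn, n_steps=100000, rng=None):
    """Gradient-free training sketch: mutate one parameter at a time and accept the
    mutation only if the training cost does not increase (assumed simplified form).

    params  : dict of numpy arrays, e.g. {'mu': ..., 'nu': ..., 'beta': ..., 'b': ...}
    bounds  : dict with the same keys giving the bound c so that |value| <= c
    cost_fn : callable taking params and returning the scalar training cost
    """
    rng = np.random.default_rng() if rng is None else rng
    cost = cost_fn(params)
    keys = list(params)
    for _ in range(n_steps):
        k = keys[rng.integers(len(keys))]                        # pick a parameter group at random
        i = rng.integers(params[k].size)                         # pick one entry of that group
        old = params[k].flat[i]
        params[k].flat[i] = rng.uniform(-bounds[k], bounds[k])   # mutate within its allowed range
        new_cost = cost_fn(params)
        if new_cost <= cost:                                     # keep only non-increasing cost
            cost = new_cost
        else:
            params[k].flat[i] = old                              # otherwise revert the mutation
    return params, cost
```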

    In the predicting stage, one just needs to input the last samples, up to k = P, to start equation (1), and feed the outputs back to the inputs correspondingly so that equation (1) becomes a self-evolving iterative map. In the case of figure 1(a), for example, after the first round of iteration one feeds φ1(t+3) back to the upper input unit and φ2(t+3) to the bottom one, turning the inputs into φ1(t+1), φ1(t+2), φ1(t+3) and φ2(t+1), φ2(t+2), φ2(t+3), respectively, to initiate the second round of iteration.
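    The closed-loop iteration described above can be sketched as follows, reusing the one-step map from the previous sketch; the sliding of the delay window and the array shapes are conventions of this illustration rather than the authors' code.

```python
import numpy as np

def predict_closed_loop(history, step_fn, n_steps):
    """Self-evolving iteration of a trained QD-TDM: feed each output back into the
    delayed inputs, as described for figure 1(a).

    history : array (Q, M) holding the last M measured samples of the Q variables
    step_fn : trained one-step map, e.g. qd_tdm_step with its parameters bound
    Returns an array (n_steps, Q) of predicted values.
    """
    history = history.copy()
    preds = []
    for _ in range(n_steps):
        y = step_fn(history)                       # next-step prediction for all Q variables
        preds.append(y)
        # shift the delay window and append the outputs as the newest inputs
        history = np.concatenate([history[:, 1:], y[:, None]], axis=1)
    return np.array(preds)
```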

    Figure 1(b) represents an M-N-1 neural network, or equivalently a 1-N-1 time-delayed map with a delay length of M (also illustrated by the gray ellipse). We will call it the one-dimensional time-delayed machine (1D-TDM) hereafter, and its evolution dynamics are given by:

    This equation appears as a reduced form of equation (1), with the summation over φ_l replaced by a single φ. In the training stage, the time series of variables at multiple spatial positions are used sequentially to train the machine, so that the dynamic information of all those spatial points is learned by the machine. This is a key difference from previous strategies for training a 1D-TDM-like machine. In more detail, for the lth time series we set {x_l(k − i)}, i = 0, …, M − 1, to be the inputs and x_l(k + 1) the expected output, again obtaining P − M training samples from the single time series l. Applying the above operations to all of the Q time series, we obtain (P − M)Q training samples in total (each one-dimensional). The cost function has the same form as equation (2). In the predicting stage, only the last sample from one spatial variable is needed to initiate the learning machine.
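    The assembly of the (P − M)Q one-dimensional training samples from the Q recorded time series can be sketched as below; the array shapes and function name are illustrative.

```python
import numpy as np

def build_1d_training_set(series_list, M):
    """Assemble the (P - M) * Q one-dimensional training samples for the 1D-TDM.

    series_list : list of Q 1-D arrays, each the recorded series of one spatial variable
    M           : delay length
    Returns (inputs, targets): inputs of shape (n_samples, M), targets of shape (n_samples,).
    """
    inputs, targets = [], []
    for x in series_list:                  # use each spatial variable's series in turn
        P = len(x)
        for k in range(M, P):              # sliding window of M delayed values
            inputs.append(x[k - M:k])      # the M most recent past samples
            targets.append(x[k])           # the next value to be predicted
    return np.array(inputs), np.array(targets)
```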

    We emphasize that the popular learning machines used for model-free prediction of the evolution of underlying dynamic systems, such as the long short-term memory model [21] and the reservoir computer [22–25], are essentially equivalent to the time-delayed model described by equation (1). The long short-term memory model manages a set of neurons in the hidden layers to memorize the history (i.e., delayed information) of the inputted time series and is thus obviously equivalent to a time-delayed map. When using the reservoir computer, one needs to iterate the so-called reservoir for a sufficient number of steps in the initiation stage before outputting predictions; the reservoir network indeed plays the role of storing the delayed information, or memory, so it is also a time-delayed map. Therefore, the QD-TDM can generally represent these conventional learning machines.

    3. Results

    3.1. The KCO model

    We first test the QD-TDM on the KCO model, which is given by

    where ω_i and K, respectively, represent the natural frequency and the coupling coefficient, and L is the total number of oscillators. We see from this equation that the motion of the ith oscillator is determined not only by its natural frequency but also by the coupling from all the other oscillators, with the strength controlled by the parameter K. This model is widely applied to study the collective behavior of complex systems [15, 16]. With strong coupling, the oscillators may oscillate synchronously and are thus reduced to an effective low-dimensional system. To avoid this case, weak coupling is applied to ensure that all of the oscillators oscillate roughly around their natural frequencies ω_i and that the system lies in a weakly chaotic state. Without loss of generality, the measurable variable of an oscillator is defined as x_i(t) = sin(θ_i(t)) + cos(θ_i(t)).
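    For reference, the KCO model in its standard Kuramoto form, dθ_i/dt = ω_i + (K/L) Σ_j sin(θ_j − θ_i), which is consistent with the description above, can be integrated as sketched below; the Runge-Kutta scheme, step counts, and random seeding are illustrative choices rather than the authors' exact setup. Sampling the returned x_i(t) every tenth integration step reproduces the Δt = 0.1 sampling described in the next paragraph.

```python
import numpy as np

def kuramoto_rhs(theta, omega, K):
    """Standard Kuramoto coupling: dtheta_i/dt = omega_i + (K/L) * sum_j sin(theta_j - theta_i)."""
    L = len(theta)
    return omega + (K / L) * np.sum(np.sin(theta[None, :] - theta[:, None]), axis=1)

def simulate_kco(L=20, K=0.8, h=0.01, n_steps=200000, seed=0):
    """Integrate the KCO model with a 4th-order Runge-Kutta scheme and return the
    measurable variables x_i(t) = sin(theta_i(t)) + cos(theta_i(t)) at every step."""
    rng = np.random.default_rng(seed)
    omega = rng.uniform(0.0, 1.0, L)            # natural frequencies drawn from (0, 1)
    theta = rng.uniform(0.0, 2 * np.pi, L)      # random initial phases
    xs = np.empty((n_steps, L))
    for n in range(n_steps):
        k1 = kuramoto_rhs(theta, omega, K)
        k2 = kuramoto_rhs(theta + 0.5 * h * k1, omega, K)
        k3 = kuramoto_rhs(theta + 0.5 * h * k2, omega, K)
        k4 = kuramoto_rhs(theta + h * k3, omega, K)
        theta = theta + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
        xs[n] = np.sin(theta) + np.cos(theta)   # measurable variables x_i(t)
    return xs
```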

    At L = 20 and K = 0.8, we set each ω_i to a random number from the interval (0, 1). It can be checked that the system is chaotic, with the largest Lyapunov exponent being about 0.005. We evolve the system for a long time (with an integration time step of h = 0.01) to collect a sufficiently long time series for each oscillator, and then sample these time series with an interval of Δt = 0.1 to construct the training set. We train the QD-TDM using the first ten oscillators (i = 1, 2, …, 10). The width of the learning machine is fixed at N = 1000 throughout this paper, which is large enough to achieve the training goal. By searching the control-parameter space, we find M = 2000, c_μ = 0.03, c_ν = 0.45, c_β = 0.05, and c_b = 40 to be the optimum control parameters for training the learning machine on the KCO data set. After finishing the training, we first apply the learning machine to predict these first ten oscillators, and then apply it to evolve the next ten oscillators that have not been trained with (i.e., inputting the time series from oscillators i = 11, 12, …, 20, respectively, to initiate the learning machine).

    In figure 2(a), the black lines show segments of the time series x_i(t) for i = 6 to i = 15 as an example. The green and red lines show the predicted evolutions for oscillators that have been trained with and for those that have not, respectively. We see that for the oscillators that have been trained with (i = 6 to i = 10) the predictions are quite good, but the machine totally loses its predictive ability for the oscillators (i = 11 to i = 15) that have not been trained with previously.

    Then we use the same data from the first ten oscillators to train the 1D-TDM and apply it to predict all 20 oscillators one by one. We find that not only are the oscillators that have been trained with well predicted, but those that have not been trained with previously are also well predicted to a large extent, as can be seen in figure 2(b). Even when fewer oscillators are used to train this learning machine, the generalization ability remains considerable. Figures 2(c) and (d) show the results of the learning machine trained, respectively, with the oscillators i = 1, 2, 3, 4 and with only the oscillator i = 1. It can be seen that, although the prediction quality gets slightly worse as the number of training oscillators decreases, the difference is not remarkable. Therefore, the 1D-TDM has excellent generalization ability to new data even when trained on only a few variables.

    Figure 2. The generalization ability of the QD-TDM and the 1D-TDM to new variables for the KCO model. (a) Prediction results of the QD-TDM trained with oscillators from i = 1 to i = 10; (b)-(d) prediction results of the 1D-TDM trained with oscillators (b) from i = 1 to i = 10, (c) from i = 1 to i = 4, and (d) with only i = 1. The black lines represent the data x_i of the KCO model, and the green and red lines represent the predictions for oscillators that have been trained with (green) and for those that have not (red). (e) The power spectrum, on a semilogarithmic scale, of the first oscillator with coupling K = 0 (red line) and K = 0.1 (blue line); (f) the prediction of the 1D-TDM trained with the first oscillator (i = 1) at K = 0.

    It is not surprising that a learning machine can have generalization ability to new variables that have not been trained on. According to Takens' theorem, for a time series of sufficient length one can reconstruct an M-dimensional phase-space map, where M is the delay length. Provided M > 2D + 1, where D is the fractal dimension of the attractor of the underlying system, this map is topologically equivalent to the underlying dynamic system. This is the well-known phase-space reconstruction technique of delay-coordinate embedding [26, 27]. The theory holds because any variable of a dynamic system is coupled to the others and thus carries information about them.
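    A minimal sketch of the delay-coordinate embedding referred to here: a scalar time series is turned into M-dimensional delay vectors, which (for M > 2D + 1) span a space topologically equivalent to the attractor. The delay parameter tau is an illustrative addition.

```python
import numpy as np

def delay_embed(x, M, tau=1):
    """Delay-coordinate embedding of a scalar time series (Takens reconstruction).

    x   : 1-D array, the measured time series of a single variable
    M   : embedding dimension (delay length); Takens' theorem requires M > 2D + 1
    tau : delay expressed in sampling steps
    Returns delay vectors of shape (len(x) - (M - 1) * tau, M).
    """
    n = len(x) - (M - 1) * tau
    return np.stack([x[i * tau: i * tau + n] for i in range(M)], axis=1)
```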

    Figure 2(e) shows the power spectrum of the time series of the first oscillator, x_1(t), with K = 0 (red line) and K = 0.1 (blue line), respectively. It can be seen that, without coupling, the power spectrum shows only the frequency peak of this oscillator (red line). In this case, the learning machine involves no information about the other oscillators. It can easily be checked that the learning machine trained on this time series has no generalization ability to oscillators that have not been trained with previously; see figure 2(f). With non-vanishing coupling, although it is difficult to observe the coupling effect of the other oscillators by just looking at the evolution curves (see the black lines in figure 2(d)), the power spectrum actually includes frequency peaks of the other oscillators (see the blue line in figure 2(e)). These peaks have very small amplitudes (note that the vertical axis is on a logarithmic scale) and differ among different oscillators. The power spectrum confirms that the information of the other oscillators is coupled into the first oscillator, which is the basis on which a learning machine trained with the data of only one oscillator can generalize to the other coupled ones.
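    A power spectrum of the kind shown in figure 2(e) can be computed from the sampled series with a standard FFT, as sketched below; the mean removal and the one-sided convention are illustrative choices rather than the authors' exact procedure.

```python
import numpy as np

def power_spectrum(x, dt=0.1):
    """One-sided power spectrum of a sampled series, of the kind used in figure 2(e)
    to reveal the small-amplitude frequency peaks of the other coupled oscillators."""
    x = np.asarray(x, dtype=float) - np.mean(x)   # remove the mean to suppress the zero-frequency peak
    spectrum = np.abs(np.fft.rfft(x)) ** 2        # power at non-negative frequencies
    freqs = np.fft.rfftfreq(len(x), d=dt)         # frequency axis set by the sampling interval
    return freqs, spectrum
```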

    Figure 3. Recovering fine-grained spatiotemporal structures of the underlying system using a 1D-TDM trained with a small portion of the variables. (a) ω_i of the oscillators, (b) actual fine-grained spatiotemporal pattern of the KCO model, (c) predicted coarse-grained spatiotemporal pattern of the KCO model, (d) fine-grained spatiotemporal pattern recovered by the 1D-TDM, and (e) prediction errors (panel (b) minus panel (d)).

    The phase space can also be reconstructed using multiple time series with the technique of delay-coordinate embedding [27]. The reason why a 1D-TDM can, while a QD-TDM cannot, be applied to spatial points that have not been trained with previously is the following. A remarkable property distinguishing the 1D-TDM from the QD-TDM is that it does not involve the time unit explicitly in its equation of motion; what is input to the learning machine is just a discrete time series [see equation (3)]. With this feature, a 1D-TDM can evolve the dynamics of any oscillator once it is initiated by that oscillator's time series, provided the information of this oscillator has been correctly coupled into the learning machine. In contrast, the Q time series with different frequencies need to be evolved simultaneously in a QD-TDM (see equation (1)), which thus introduces a relative time unit into the machine. If, for example, it is trained on two time series with frequencies ω_1 and ω_2, then a relative time unit is defined by the ratio ω_2/ω_1. The learning machine could correctly recover the dynamics of two oscillators with frequencies cω_1 and cω_2, provided these two oscillators belong to the underlying system; in this case, the time series of ω_1 and ω_2 and those of cω_1 and cω_2 share the same relative time unit. However, if the learning machine is applied to a new set of oscillators with ω_3 and ω_4, the relative time unit changes and a mismatch arises. The mismatch may become more serious for larger Q.

    This generalization ability of the 1D-TDM provides a useful strategy for performing fine-grained predictions using a learning machine trained on coarse-grained data. To show this possibility, we construct a KCO model with 100 oscillators. The oscillators take frequencies following the curve shown in figure 3(a). With the frequencies set in such a way, the KCO model exhibits the fine-grained evolution patterns shown in figure 3(b) instead of random patterns. Using 10 evenly spaced oscillators to train a 1D-TDM, we find that it can recover the subsequent evolution of these 10 oscillators well, which gives the coarse-grained picture of figure 3(b) shown in figure 3(c). Using the trained learning machine to predict every oscillator, however, we recover figure 3(b) approximately, as shown in figure 3(d). Figure 3(e) shows the prediction errors, indicating that the recovery is almost perfect over a long early period; the errors increase at later times but remain relatively small compared with the original variable, demonstrating an excellent prediction ability of the machine.
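    A sketch of this fine-grained recovery step, assuming a 1D-TDM one-step map trained as above: the same machine is initiated with the measured history of each oscillator in turn and iterated in closed loop, producing one column of the pattern in figure 3(d) per oscillator. The function and argument names are illustrative.

```python
import numpy as np

def recover_fine_grained(histories, step_1d, n_steps):
    """Apply one trained 1D-TDM to every spatial variable in turn.

    histories : array (L, M) with the last M measured samples of each of the L oscillators
    step_1d   : trained one-step map taking a length-M window and returning the next value
    Returns an array (n_steps, L): the recovered fine-grained spatiotemporal pattern.
    """
    L, M = histories.shape
    pattern = np.empty((n_steps, L))
    for i in range(L):                         # initiate the same machine with one oscillator at a time
        window = histories[i].copy()
        for t in range(n_steps):
            y = step_1d(window)
            pattern[t, i] = y
            window = np.append(window[1:], y)  # feed the output back as the newest delayed input
    return pattern
```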

    Figure 4. The generalization ability of the QD-TDM and the 1D-TDM for the BVAM model. (a) Spatiotemporal pattern of the model for the spatial variable u. (b) Predictions of the QD-TDM trained with mesh nodes i = 9, 10, …, 16. (c) and (d) Predictions of the 1D-TDM trained with mesh nodes i = 9, 10, …, 16 and i = 1, 3, 5, 7, 9, respectively.

    3.2. The BVAM model

    In the KCO model, the amplitudes of the oscillators are identical and the oscillators are coupled globally. In this case, the time series of any oscillator may fully contain the information of all oscillators. Therefore, a 1D-TDM trained with only the time series of one oscillator may recover the dynamics of all the other oscillators. In other systems, the coupling among different variables may differ and may act only locally. In such a case, one has to use the time series of more representative spatial variables to train the learning machine in order to gain a better generalization ability to other variables. To show this, we study another spatiotemporal system, the BVAM model, whose equations of motion are given by

    where C, D, H, η, a, and b are system parameters. This model exhibits a rich variety of dynamic behaviors [18]. We fix the system size at L_b = 6.4 with 32 mesh nodes for both u and v, and solve equation (5) numerically (with an integration time step of h = 0.01) in one dimension with zero-flux boundary conditions and random initial conditions around the equilibrium point (0, 0), at D = 0.08, H = 3, η = 1, a = −1, b = −3/2, and C = −1.54. The time evolution is shown in figure 4(a), where typical chaotic spatiotemporal patterns appear, i.e., the spatial variables oscillate chaotically as time evolves. Notably, the oscillation range differs between regions, manifested by the bright-yellow and dark-blue colors.
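    Since equation (5) is not reproduced in this excerpt, the following is a generic sketch of how a two-component reaction-diffusion system of this type can be integrated in one dimension with zero-flux boundaries; the reaction terms f and g are placeholders standing in for the BVAM nonlinearities, and the explicit Euler scheme and the assignment of D to the u equation are assumptions of the sketch.

```python
import numpy as np

def integrate_rd_1d(f, g, D=0.08, Lb=6.4, n_nodes=32, h=0.01, n_steps=100000, seed=0):
    """Explicit Euler integration of a generic two-component reaction-diffusion system
    u_t = D * u_xx + f(u, v), v_t = v_xx + g(u, v) on [0, Lb] with zero-flux boundaries;
    f and g are placeholders for the BVAM reaction terms of equation (5)."""
    rng = np.random.default_rng(seed)
    dx = Lb / (n_nodes - 1)
    u = 0.01 * rng.standard_normal(n_nodes)     # random initial conditions around (0, 0)
    v = 0.01 * rng.standard_normal(n_nodes)
    us = np.empty((n_steps, n_nodes))
    for n in range(n_steps):
        up = np.pad(u, 1, mode='edge')          # ghost nodes enforcing zero-flux (Neumann) boundaries
        vp = np.pad(v, 1, mode='edge')
        lap_u = (up[2:] - 2.0 * u + up[:-2]) / dx**2
        lap_v = (vp[2:] - 2.0 * v + vp[:-2]) / dx**2
        u = u + h * (D * lap_u + f(u, v))
        v = v + h * (lap_v + g(u, v))
        us[n] = u                               # record the u field, as plotted in figure 4(a)
    return us
```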

    We sample the time series of the mesh nodes u_i(t) at an interval of Δt = 0.1 and obtain the measurable time series x_i(t). For this system, the optimum set of control parameters is M = 300, c_μ = 0.02, c_ν = 1, c_β = 0.5, and c_b = 30. We use the data x_i(t), i.e., u_i(t), with i = 9, …, 16 to train the QD-TDM and the 1D-TDM, respectively, and then apply them, after finishing the training, to produce the evolution of the whole system; the results are shown in figures 4(b) and (c), respectively. It is seen again that the QD-TDM can recover the dynamics of the spatial variables that have been trained with but not of those that have not (see figure 4(b)). The 1D-TDM, in contrast, possesses the ability to predict the time evolution of variables in spatial regions that have not been used for training (see figure 4(c)). Since this is a relatively strongly chaotic system, the predictable time window is shorter than in the case of the KCO model.

    Choosing the mesh nodes x_i(t), i.e., u_i(t), with i = 1, 3, 5, 7, 9 as the training time series, the 1D-TDM can still recover the evolution dynamics of the whole system, as shown in figure 4(d). Even using only three mesh nodes x_i(t), namely i = 1 (representing variables in the dark-blue region), i = 9 (representing those in the bright-yellow region), and i = 5 (representing those in the light-green region), the 1D-TDM still retains some ability to generalize. When using the data from only one spatial variable, the generalization ability remains to some extent for spatial variables that belong to the same class, but not for the other classes.

    4. Summary and discussion

    In summary, the usual recurrent learning machine, which requires multiple system variables to iterate, has almost no generalization ability to variables that have not been trained with previously. When applying such a learning machine, the number of sampled variables has to be sufficiently large to cover the major modes of the system, and predictions are available only for the variables that have been trained with. In contrast, using a 1D time-delayed map as the learning machine, one can predict the subsequent evolution not only of variables that have been trained with but also of variables that have not. Specifically, this learning machine requires data from only one variable to iterate; it can be trained with a small set of variables, and it can predict the evolution of that small set as well as of other variables it was not trained with. One possible application of this property is that, after the machine is well trained, we may apply it to densely sampled spatial variables in a local region and thus obtain the fine-grained evolution patterns of that region. This finding provides a new strategy for applications such as weather forecasting, where high-resolution predictions may be obtained for a local region using a learning machine trained on low-resolution data from the same region. At present, this strategy has only been illustrated for relatively simple spatiotemporal systems. We hope a more systematic approach can be developed to treat real-life spatiotemporal systems in the future.

    Acknowledgments

    We acknowledge support from the NSFC (Grants No. 11975189 and No. 11975190).
