
    Artificial Neural Network Methods for the Solution of Second Order Boundary Value Problems

Cosmin Anitescu, Elena Atroshchenko, Naif Alajlan and Timon Rabczuk

    Computers, Materials & Continua, 2019, No. 4 (published 2019-04-29)

    Abstract: We present a method for solving partial differential equations using artificial neural networks and an adaptive collocation strategy. In this procedure, a coarse grid of training points is used at the initial training stages, while more points are added at later stages based on the value of the residual at a larger set of evaluation points. This method increases the robustness of the neural network approximation and can result in significant computational savings, particularly when the solution is non-smooth. Numerical results are presented for benchmark problems for scalar-valued PDEs, namely Poisson and Helmholtz equations, as well as for an inverse acoustics problem.

    Keywords: Deep learning, adaptive collocation, inverse problems, artificial neural networks.

    1 Introduction

Artificial neural networks (ANNs) have been a topic of great interest in the machine learning community due to their ability to solve very difficult problems, particularly in the fields of image processing and object recognition, speech recognition, medical diagnosis, etc. More recently, applications have been found in engineering, especially where large data sets are involved. From a mathematical point of view, neural networks are also interesting due to their ability to efficiently approximate arbitrary functions [Cybenko (1989)].

A natural question is to determine whether ANNs can be used to approximate the solution of partial differential equations which commonly appear in physics, engineering and mathematical problems. Several articles and even a book [Yadav (2015)] have recently been devoted to this topic. In most of the approaches considered, a collocation-type method is employed which attempts to fit the governing equations and the boundary conditions at randomly selected points in the domain and on the boundary. Among these methods we mention the Deep Galerkin Method [Sirignano and Spiliopoulos (2018)], Physics Informed Neural Networks [Raissi, Perdikaris and Karniadakis (2019)], as well as the earlier works in Lagaris et al. [Lagaris, Likas and Fotiadis (1998); Lagaris, Likas and Papageorgiou (2000); van Milligen, Tribaldos and Jiménez (1995); Kumar and Yadav (2011); McFall and Mahan (2009)]. These methods appear to produce reasonably accurate results, particularly for high-dimensional domains [Han, Jentzen and Weinan (2018)] and domains with complex geometries [Berg and Nyström (2018)], where the meshfree character of these methods makes them competitive with established discretization methods. Another related approach is to use an energy minimization formulation of the governing equation as in Weinan et al. [Weinan and Yu (2018); Wang and Zhang (2019)]. This formulation has the advantage that only the first derivatives need to be computed for a second-order problem; however, it requires a more precise integration procedure, and not all governing equations can be cast in an energy-minimization framework.

In this work, we employ a collocation formulation for solving second-order boundary value problems such as the Poisson and Helmholtz equations. In contrast to existing methods, which typically use a randomly scattered set of collocation points, we present an adaptive approach for selecting the collocation points based on the value of the residual at previous training steps. This method can improve the robustness of the collocation method, particularly in cases when the solution has a non-smooth region, where increasing the number of training points is beneficial.

    The paper is structured as follows: in Section 2 we give an overview of artificial neural networks and briefly discuss their approximation properties. The application of ANNs to forward and inverse boundary-value problems is discussed in Section 3. Detailed numerical results are presented in Section 4, followed by concluding remarks.

    2 Structure of neural network

    In this section, we briefly describe the anatomy of a neural network and its properties for approximating arbitrary functions. We focus in particular on simple feed-forward neural networks which are used in the subsequent sections.

    2.1 Feed-forward networks

A feed-forward network can be seen as a computational graph consisting of an input layer, an output layer and an arbitrary number of intermediary hidden layers, where all the neurons (units) in adjacent layers are connected with each other. It can be used to represent a function u: ℝ^n → ℝ^m by using n neurons in the input layer and m neurons in the output layer, see Fig. 1. We index the layers starting with the input layer at 0 and ending with the output layer at L, and we denote the number of neurons in each layer by k_0 = n, k_1, …, k_L = m. To each connection between the i-th neuron in layer l−1 and the j-th neuron in layer l, with 0 < l ≤ L, we associate a weight w_{ij}^l, and to each neuron in the layers 0 < l ≤ L we associate a bias b_j^l, j = 1, …, k_l. Moreover, we define an activation function σ_l: ℝ → ℝ between the layers l−1 and l. Then the values at each neuron can be written in terms of the activation function applied to a linear combination of the neurons in the previous layer, given by the corresponding weights and biases, i.e.,

    u_j^l = σ_l( Σ_{i=1}^{k_{l−1}} w_{ij}^l u_i^{l−1} + b_j^l ),  j = 1, …, k_l.  (1)

    Figure 1: Schematic of a feed-forward neural network with L-1 hidden layers

This can be written more compactly in matrix form as:

    u^l = σ_l( W^l u^{l−1} + b^l ),  (2)

    where W^l is the k_l × k_{l−1} matrix of weights corresponding to the connections between layers l−1 and l, u^l = [u_1^l, …, u_{k_l}^l]^T and b^l = [b_1^l, …, b_{k_l}^l]^T are column vectors, and the activation function is applied element-wise.
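
    As a concrete illustration of (2), the following sketch (in Python/NumPy, with hypothetical layer sizes and randomly initialized parameters, not the configuration used in this paper) evaluates such a network at a batch of input points:

    ```python
    import numpy as np

    def init_params(layer_sizes, rng=np.random.default_rng(0)):
        """Random weights W^l and biases b^l for each pair of adjacent layers."""
        return [(rng.standard_normal((m, n)) * np.sqrt(2.0 / n), np.zeros((m, 1)))
                for n, m in zip(layer_sizes[:-1], layer_sizes[1:])]

    def forward(params, x):
        """Evaluate the network layer by layer; x has shape (n, N) for N points."""
        u = x
        for W, b in params[:-1]:
            u = np.tanh(W @ u + b)      # hidden layers: tanh activation
        W, b = params[-1]
        return W @ u + b                # linear output layer

    params = init_params([2, 10, 10, 1])        # n = 2 inputs, m = 1 output
    u = forward(params, np.random.rand(2, 5))   # network values at 5 points
    ```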

Existing computational frameworks such as TensorFlow or PyTorch can execute computational graphs like the ones defined by (2) very efficiently in parallel, including on GPUs when available. Moreover, the input values can be defined as multi-dimensional arrays (tensors), and the computation of the corresponding outputs is efficiently vectorized and distributed across the available computational resources.

A key observation is that, for a given neural network, the partial derivatives of the outputs with respect to the weights and biases can be efficiently computed by the backpropagation algorithm. The idea is to apply the chain rule starting with the last layer and store the intermediary values in a computational graph where the order of the layers is reversed. The backpropagation algorithm enables the application of gradient-based minimization algorithms, where a loss function based on the output of the neural network is to be minimized. Moreover, the partial derivatives of the outputs with respect to the inputs, or with respect to some other prescribed parameters, can be calculated in a similar way.
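
    For instance, with TensorFlow the derivatives of the network output with respect to its inputs can be obtained by nesting gradient tapes. The sketch below (a minimal illustration with assumed layer sizes, not the exact implementation of the paper) computes the Laplacian Δu = u_xx + u_yy at a batch of points; it is reused in the loss functions sketched later:

    ```python
    import tensorflow as tf

    # A small tanh network; the layer sizes are illustrative only.
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(10, activation="tanh"),
        tf.keras.layers.Dense(10, activation="tanh"),
        tf.keras.layers.Dense(1),
    ])

    def laplacian(model, xy):
        """Compute u_xx + u_yy at the points xy (shape [N, 2]) via nested tapes."""
        with tf.GradientTape() as outer:
            outer.watch(xy)
            with tf.GradientTape() as inner:
                inner.watch(xy)
                u = model(xy)
            grad_u = inner.gradient(u, xy)        # [N, 2] rows (u_x, u_y)
        hess = outer.batch_jacobian(grad_u, xy)   # [N, 2, 2] per-point Hessians
        return hess[:, 0, 0] + hess[:, 1, 1]

    xy = tf.random.uniform((5, 2))
    print(laplacian(model, xy))                   # Laplacian at 5 random points
    ```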

In typical applications of deep learning, a neural network is trained on a set of matching inputs and outputs by seeking to minimize the difference between the predicted values and some known correct outputs. This typically requires large data sets, which often must be manually processed and are themselves subject to different types of errors. We avoid this by defining a loss function which minimizes the residuals of the governing equations at a chosen set of training points. The training points can be simply generated as randomly scattered points in the domain, as in Raissi et al. [Raissi, Perdikaris and Karniadakis (2019)]. In this work, we adopt instead an adaptive procedure which iteratively adds more points to the training set where the residual values are higher than some prescribed threshold, as detailed in Section 3.

Aside from the choice of the size of the neural network (the number of hidden layers and the number of neurons in each layer) and that of the training points, other important parameters are related to the selection of the activation function and the choice of the minimization algorithm. Typical activation functions are ramp functions like ReLU, the sigmoid (logistic) function, and the hyperbolic tangent function (tanh). In this work, we use the tanh activation function, which is preferable due to its smoothness. For optimization, we use the Adam (adaptive moment estimation) optimizer, which is based on stochastic gradient descent, followed by a quasi-Newton method (L-BFGS), which builds an approximate Hessian at each step.
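
    The two-stage optimization can be sketched as follows, assuming a `model` and a scalar-valued closure `loss_fn` as above (both names are placeholders); the SciPy L-BFGS interface used here is one of several possible choices:

    ```python
    import numpy as np
    import tensorflow as tf
    from scipy.optimize import minimize

    def train_adam(model, loss_fn, steps=1000, lr=1e-3):
        """Stage 1: gradient-descent-style steps with the Adam optimizer."""
        opt = tf.keras.optimizers.Adam(learning_rate=lr)
        for _ in range(steps):
            with tf.GradientTape() as tape:
                loss = loss_fn()
            grads = tape.gradient(loss, model.trainable_variables)
            opt.apply_gradients(zip(grads, model.trainable_variables))

    def train_lbfgs(model, loss_fn):
        """Stage 2: refine with L-BFGS on the flattened parameter vector."""
        variables = model.trainable_variables
        shapes = [v.shape for v in variables]
        sizes = [int(np.prod(s)) for s in shapes]

        def assign(theta):
            parts = np.split(theta, np.cumsum(sizes)[:-1])
            for v, part, shape in zip(variables, parts, shapes):
                v.assign(part.reshape(shape).astype(np.float32))

        def value_and_grad(theta):
            assign(theta)
            with tf.GradientTape() as tape:
                loss = loss_fn()
            grads = tape.gradient(loss, variables)
            flat = np.concatenate([g.numpy().ravel() for g in grads])
            return float(loss), flat.astype(np.float64)

        theta0 = np.concatenate([v.numpy().ravel() for v in variables])
        assign(minimize(value_and_grad, theta0, jac=True, method="L-BFGS-B").x)
    ```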

    2.2 Theoretical approximation properties of neural networks

It is well known that artificial neural networks have very good approximation properties when the function to be approximated is continuous. This has been established since the 1980s, when it was shown [Hornik, Stinchcombe and White (1989)] that, under mild assumptions on the activation function, a given function f(x) can be approximated within any chosen tolerance by a network with a single hidden layer and a finite number of neurons.

It was later shown, e.g., in Lu et al. [Lu, Pu, Wang et al. (2017)], that by choosing a non-linear activation function and forming deeper networks, the number of neurons can be significantly reduced. Various estimates of the approximation properties of neural networks for approximating PDEs have been derived more recently [Sirignano and Spiliopoulos (2018)].

We note, however, that while neural networks are in theory capable of representing complex functions in a very compact way, finding the actual parameters (weights and biases) which solve a given differential equation within a given approximation tolerance can be quite difficult. In the following, we present a method for selecting the training data which gives some control over the performance of the solver and optimization algorithms.

    3 Collocation solver for PDEs

In the collocation method, the main idea is to define a loss function which is based on the strong form of the governing equations and the boundary conditions. The loss function is evaluated at chosen sets of points in the interior of the domain as well as on the boundary. More specifically, let us assume that a general boundary value problem can be written as:

    N[u](x) = f(x) for x ∈ Ω,
    G[u](x) = g(x) for x ∈ ∂Ω,

    where Ω ⊂ ℝ^d is the problem domain with boundary ∂Ω, N and G are interior and boundary differential operators, and f, g are prescribed functions (e.g., loading data).

    3.1 Poisson equation

We consider equations of the form:

    −Δu(x) + k u(x) = f(x) in Ω,
    u(x) = ū(x) on ∂Ω_D,
    ∇u(x) · n = g(x) on ∂Ω_N,

    where u(x) is the unknown solution, Ω ⊂ ℝ^d is the computational domain, k ≥ 0 is a given constant, f is the given source term, ū is the prescribed Dirichlet data on the Dirichlet boundary ∂Ω_D, n is the outer normal vector, and g is the Neumann data on the Neumann boundary ∂Ω_N. When k = 0, the problem is a standard Poisson equation; we also consider a more general setting with, e.g., k = 1.

We define the loss function as the sum of the mean squared residuals of the governing equation and of the boundary conditions:

    L = (1/N_int) Σ_{i=1}^{N_int} ( −Δu(x_i^int) + k u(x_i^int) − f(x_i^int) )^2 + (γ_dir/N_dir) Σ_{i=1}^{N_dir} ( u(x_i^dir) − ū(x_i^dir) )^2 + (γ_neu/N_neu) Σ_{i=1}^{N_neu} ( ∇u(x_i^neu) · n − g(x_i^neu) )^2,

    where N_int, N_dir and N_neu represent the number of points x_i^int, x_i^dir and x_i^neu in the interior domain, on the Dirichlet boundary and on the Neumann boundary, respectively. Moreover, γ_dir and γ_neu represent penalty terms for the Dirichlet and Neumann boundaries. While in some cases it is enough to set γ_dir = γ_neu = 1, the convergence of the loss function can be improved by increasing one or both of these values to ensure that the boundary conditions are satisfied. During the minimization process, the terms in the cost function can be monitored; usually more accurate results can be obtained when the loss for the interior points is of the same order as the losses corresponding to the Dirichlet and Neumann boundaries.
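
    A sketch of this loss in TensorFlow, reusing the `laplacian` helper from Section 2 and assuming callables `f`, `u_bar` and `g` for the prescribed data (all names are illustrative, not the paper's code):

    ```python
    import tensorflow as tf

    def poisson_loss(model, x_int, x_dir, x_neu, n_neu,
                     f, u_bar, g, k=0.0, gamma_dir=1.0, gamma_neu=1.0):
        """Mean squared residuals of the PDE and the boundary conditions."""
        # interior residual: -Delta u + k u - f
        res_int = -laplacian(model, x_int) + k * model(x_int)[:, 0] - f(x_int)
        loss_int = tf.reduce_mean(tf.square(res_int))

        # Dirichlet mismatch: u - u_bar
        loss_dir = tf.reduce_mean(tf.square(model(x_dir)[:, 0] - u_bar(x_dir)))

        # Neumann mismatch: grad(u) . n - g, with n_neu the outer normals [N, 2]
        with tf.GradientTape() as tape:
            tape.watch(x_neu)
            u = model(x_neu)
        du = tape.gradient(u, x_neu)
        res_neu = tf.reduce_sum(du * n_neu, axis=1) - g(x_neu)
        loss_neu = tf.reduce_mean(tf.square(res_neu))

        return loss_int + gamma_dir * loss_dir + gamma_neu * loss_neu
    ```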

    3.2 Helmholtz equation

The governing equation for the homogeneous Helmholtz problem is of the form:

    Δu(x, y) + k^2 u(x, y) = 0 in Ω,

    where k is the wave number. Here u(x, y) can be a complex-valued function. This equation is a time-independent form of the wave equation, and it has applications in the study of various physical phenomena, such as acoustics, seismology and electromagnetic radiation. For many problems, the domain Ω is not bounded and the solution u(x, y) can be highly oscillatory, which creates difficulties in standard finite element analysis.

Different types of boundary conditions, usually of Neumann type, can be imposed depending on the problem. As for Poisson's equation, we construct a loss function which seeks to minimize the residual of the governing equation at collocation points. While in theory neural networks are defined on ℝ^n, where n is the number of neurons in the input layer, and can therefore capture unbounded domains, we limit ourselves here to finite domains.

    3.3 Inverse problems

In this class of problems, one is given a particular solution u*(x), which satisfies governing equations of the form:

    N[u*; λ*](x) = f(x) for x ∈ Ω,

    where λ* is unknown. The problem is then to determine the unknown parameter or vector of parameters λ*. This can be reformulated as an optimization problem, where we start with an initial guess λ, and we approximate the solution u which satisfies the governing equations. In the framework of neural networks, we can use gradient descent to minimize ‖u − u*‖ under some suitable norm. Formally, we define a cost function of the form:

    L(λ) = (1/N_int) Σ_{i=1}^{N_int} ( N[u; λ](x_i) − f(x_i) )^2 + (1/N_int) Σ_{i=1}^{N_int} ( u(x_i) − u*(x_i) )^2,

    which simultaneously penalizes the residual of the governing equation and the mismatch with the given solution.
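
    In a neural-network framework this amounts to treating λ as an additional trainable variable that is optimized jointly with the network weights. A minimal sketch, using an illustrative Helmholtz-type residual and hypothetical data points `x_data` with values `u_star`:

    ```python
    import tensorflow as tf

    k_var = tf.Variable(1.0)   # initial guess for the unknown parameter lambda*

    def inverse_loss(model, x_int, x_data, u_star):
        """Residual of the governing equation plus mismatch with the data."""
        res = laplacian(model, x_int) + tf.square(k_var) * model(x_int)[:, 0]
        loss_pde = tf.reduce_mean(tf.square(res))
        loss_data = tf.reduce_mean(tf.square(model(x_data)[:, 0] - u_star))
        return loss_pde + loss_data

    # The parameter receives gradients once included in the optimized variables:
    # variables = model.trainable_variables + [k_var]
    ```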

    3.4 Adaptive collocation

    Several methods can be proposed to select the collocation points in the interior and the boundary of the domain. Here we propose a method where we start with a coarse grid and then select additional points based on the evaluation of the residual. We apply this method for the selection of the interior points, for which evaluating the governing equations requires the most computational effort, although in principle it can also be applied to the boundary points.

The main idea of the method is described in Fig. 2. The blue dots represent the training (collocation) points in the interior, the green dots represent the model evaluation points after the first training is complete, and the purple dots represent the additional training points. We note that evaluating the model at a larger number of points is quite inexpensive computationally, while the number of training points impacts the performance much more significantly, as the governing equations need to be evaluated at the training points at each gradient descent step. Therefore, this method provides a criterion for selecting the collocation points in an efficient manner. Once the training is completed for one step, the network weights and biases can be carried over to the subsequent step, resulting in faster convergence.
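
    The selection step itself is straightforward; a sketch, assuming a `residual_fn` that returns the absolute interior residual at each point (the mean-residual threshold is one plausible rule, not necessarily the one used here):

    ```python
    import numpy as np

    def refine_training_set(x_train, x_eval, residual_fn, threshold):
        """Append the evaluation points whose residual exceeds the threshold."""
        r = residual_fn(x_eval)                   # absolute residual per point
        return np.vstack([x_train, x_eval[r > threshold]])

    # e.g. add all points whose residual exceeds the mean over the fine grid:
    # x_train = refine_training_set(x_train, x_fine, residual_fn,
    #                               threshold=residual_fn(x_fine).mean())
    ```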

    Figure 2: The steps of the adaptive collocation method, assuming the residual values are higher in the center of the domain

    4 Numerical results

    4.1 Poisson equation on the unit square

We first consider a Poisson equation with Dirichlet and Neumann boundary conditions on the unit square Ω = (0, 1)^2:

    −Δu(x, y) = 8π^2 sin(2πx) cos(2πy) in Ω,

    with the Dirichlet and Neumann data prescribed on the boundary according to the exact solution.

The exact solution of this equation is u(x, y) = sin(2πx) cos(2πy). We consider a loss function of the same form as in Section 3.1 (with k = 0), consisting of the interior residual together with the penalized Dirichlet and Neumann boundary terms.

The relative L2 errors obtained by increasing the number of layers while keeping the number of neurons per layer fixed and using the same refinement strategy are shown in Tab. 1. It can be observed that, except for the single-layer network, the error decreases significantly as more training points are used. Moreover, the error for deeper networks is greatly reduced compared to the single-layer network, although the number of parameters and the computational cost increase as well.
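
    For reference, the relative L2 error reported here can be computed discretely over a fine evaluation grid; a small sketch (the grid resolution is an arbitrary choice):

    ```python
    import numpy as np

    def relative_l2_error(u_pred, u_exact):
        """Discrete relative L2 error over a set of evaluation points."""
        return np.linalg.norm(u_pred - u_exact) / np.linalg.norm(u_exact)

    # evaluation grid on the unit square and the exact solution of Section 4.1
    xs, ys = np.meshgrid(np.linspace(0, 1, 101), np.linspace(0, 1, 101))
    u_exact = np.sin(2 * np.pi * xs) * np.cos(2 * np.pi * ys)
    # err = relative_l2_error(u_pred, u_exact)  # u_pred from the trained network
    ```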

    Table 1: Relative L2 errors for different levels of refinement and different numbers of layers for the Poisson equation on the unit square

    4.2 Source problem on a quarter-annulus domain

We now consider a second-order problem with pure Dirichlet boundary conditions on a non-rectangular domain. The governing equation is given by:

    −Δu(x, y) + u(x, y) = f(x, y) in Ω, u(x, y) = 0 on ∂Ω,

where Ω is the quarter of an annulus located in the first quadrant and centered at the origin, with inner radius r_int = 1 and outer radius r_ext = 4. Here f(x, y) is chosen such that the exact solution is u(x, y) = (x^2 + y^2 − 1)(x^2 + y^2 − 16) sin(x) sin(y), i.e., f(x, y) = (3x^4 − 67x^2 − 67y^2 + 3y^4 + 6x^2y^2 + 116) sin(x) sin(y) + (68x − 8x^3 − 8xy^2) cos(x) sin(y) + (68y − 8y^3 − 8x^2y) sin(x) cos(y).
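
    The manufactured source term can be verified symbolically; the snippet below (assuming the governing operator −Δu + u stated above) reproduces the expression for f:

    ```python
    import sympy as sp

    x, y = sp.symbols("x y")
    u = (x**2 + y**2 - 1) * (x**2 + y**2 - 16) * sp.sin(x) * sp.sin(y)
    f = -(sp.diff(u, x, 2) + sp.diff(u, y, 2)) + u   # -Laplacian(u) + u
    print(sp.expand(f))   # matches the expression for f(x, y) given above
    ```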

As before, we define a cost function consisting of a term corresponding to the interior governing equation and another term corresponding to the boundary conditions. To ensure that the boundary conditions are satisfied during the minimization process, we apply a penalty factor γ = 100, so that the cost function becomes:

    L = (1/N_int) Σ_{i=1}^{N_int} ( −Δu(x_i, y_i) + u(x_i, y_i) − f(x_i, y_i) )^2 + (γ/N_bnd) Σ_{i=1}^{N_bnd} u(x_i, y_i)^2.

We first choose a neural network with 2 hidden layers of 10 neurons each and the tanh activation function. The initial set of collocation points consists of N_int = 192 points in the interior and N_bnd = 168 points on the boundary, spaced uniformly as shown in Fig. 4. Subsequent refinements are done according to the same procedure as in the first example. The relative L2 error is calculated as 0.00053738 for the initial training and decreases to 0.00036766 and 0.00043207 as the number of collocation points is increased.

    Figure 3: Computed solution with error and the training sets of collocation points using a network with one hidden layer and 10 neurons for the Poisson equation on a unit square

    4.3 Poisson equation with a corner singularity

To investigate the ability of the proposed refinement scheme to approximate solutions with sharp gradients, we next consider the Poisson equation on a domain with an internal boundary, which results in a corner-type singularity appearing in the solution. The domain considered is given by Ω = (−1, 1)^2 \ ([0, 1) × {0}) in Cartesian coordinates, and we seek an unknown solution which satisfies:

    Δu(x, y) = 0 in Ω, u(x, y) = u_ex(x, y) on ∂Ω,

    where the boundary ∂Ω includes the internal boundary along the positive x-axis.

Figure 4: Computed solution with error and the training sets of collocation points using a network with two hidden layers and 10 neurons each for the source equation on a quarter-annulus domain

The exact solution in polar coordinates is u_ex(r, θ) = r^{1/2} sin(θ/2), where the singular term r^{1/2} creates approximation difficulties near the origin. In finite element methods, a more refined mesh is typically required to obtain a good approximation. This problem was also investigated in Weinan et al. [Weinan and Yu (2018)] using an energy minimization method.
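
    When evaluating u_ex numerically, the polar angle must be taken in [0, 2π) so that the branch cut of sin(θ/2) coincides with the internal boundary along the positive x-axis; a small sketch:

    ```python
    import numpy as np

    def u_exact(x, y):
        """r^(1/2) * sin(theta/2) with theta in [0, 2*pi), so that the solution
        vanishes on both sides of the slit along the positive x-axis."""
        r = np.hypot(x, y)
        theta = np.mod(np.arctan2(y, x), 2.0 * np.pi)  # (-pi, pi] -> [0, 2*pi)
        return np.sqrt(r) * np.sin(theta / 2.0)
    ```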

The geometry is modelled by considering 3 rectangular subdomains (−1, 0) × (−1, 1), (0, 1) × (−1, 0), and (0, 1) × (0, 1). We define a loss function of the form:

    L = (1/N_int) Σ_{i=1}^{N_int} ( Δu(x_i, y_i) )^2 + (γ/N_bnd) Σ_{i=1}^{N_bnd} ( u(x_i, y_i) − u_ex(x_i, y_i) )^2.

    In the initial grid we choose equally spaced points with a distance of 0.05 in the x and y directions. For the points on the boundary, we choose more densely spaced points, with a distance of 0.025 in Cartesian coordinates and we set a penalty factor of γ=500 to ensure that the boundary conditions are respected. As before, we evaluate the model on grids with more points and append the points where the residual value is large to the training set in the next step.

The results obtained by the adaptive collocation scheme using a network with 3 hidden layers and 30 neurons each are shown in Fig. 5. In general, the residual values in a narrow region around the singularity are much larger than in the rest of the domain, and these points are selected for the subsequent training step. Larger residuals are also observed along the line y = 0, with x < 0, as the neural network with a coarser training grid has difficulty correctly capturing the end of the internal boundary. However, as can be seen from the plots, the error diminishes as the number of training points increases. The accuracy can be further improved by choosing larger networks, although the number of training points needs to be increased as well.

    Figure 5: Error between the exact solution and computed solution for the Poisson equation with a singularity at origin and the training sets at each refinement step for a network with 4 hidden layers and 30 neurons per layer

    4.4 Acoustic duct problem

We now investigate the applicability of neural networks to approximating oscillatory solutions, such as those obtained by solving the Helmholtz equation in acoustics. The benchmark problem under consideration has complex-valued governing equations of the form:

    Here we select k=12 as the wave number and m=2 as the mode number. This problem admits an analytical solution which can be written as:

    In the following, we compute only the real part of the solution u(x,y) as the imaginary part can be computed by a similar procedure.

As before, we define a loss function which minimizes the residual of the governing equation at interior and boundary points:

The results of the adaptive collocation method are shown in Fig. 6. We have used a neural network with 3 hidden layers of 30 neurons each and a grid of 99×49 uniformly spaced points in the interior of the domain in the initial step. For the boundary, we have used N_bnd = 400 uniformly spaced points and a penalty parameter of γ = 100. As before, the size of the training set is increased based on the residual value on a finer grid (with double the points in each direction) in subsequent steps. Due to the oscillatory nature of the solution, the additional training points are also generally evenly distributed in the domain, with higher concentration in the areas where the residual value was initially larger than average.

    4.5 Inverse acoustic problem

Here we consider the same governing equation and boundary conditions as in the previous example, but we seek instead to solve for the wave number k while the value of the solution (for the correct k) is given. In this case we build a loss function that simultaneously minimizes the residual of the governing equations and the difference between the computed and the given solution:

    Figure 6: Computed solution, the error and the sets of training points for the acoustic duct benchmark problem with k=12 and m=2

Here u_ex has the same form as in the previous section, but with k = 4 and m = 1. We start with k = 1 as an initial guess and seek to minimize the loss function with k as a free parameter. For this problem, we choose a grid of 149×29 equally spaced points in the interior of the domain, N_bnd = 800 boundary collocation points, and γ = 100.

The results for this example are presented in Fig. 7. We can observe that the solution has been represented with reasonable accuracy, both in terms of u(x, y) and of k. The relative L2 error for u(x, y) in this example is 0.084, while the computed k is 3.882, as compared to 4 in the reference solution. As in the other examples, we have used the Adam optimizer followed by a quasi-Newton method (L-BFGS). It can be noted that the latter converges significantly faster; however, in many cases performing stochastic gradient-descent steps with Adam first helps the solver to avoid being trapped in early local minima.

    Figure 7: Computed solution, the error and the convergence of the loss function and wave number k where the reference solution has k=4 for the inverse acoustic problem

    5 Conclusions

We have presented a collocation method for solving boundary value problems using artificial neural networks. The method is completely mesh-free, as only scattered sets of points are used in the training and evaluation sets. Although uniform grids of training points have been used in the initial training step, the method could be easily adapted to scattered data obtained e.g. by Latin hypercube sampling methods. The method was shown to produce results with good accuracy for the parameters chosen, although, as is common in deep learning methods, parameter selection may require some manual tuning. A more detailed study of the convergence and approximation properties of neural networks, as well as the selection of robust minimization procedures, remains open as a possible research topic. Moreover, the applicability of these methods to energy minimization formulations, for the differential equations which allow it, can be investigated in future work.

Acknowledgements: N. Alajlan and T. Rabczuk acknowledge the Distinguished Scientist Fellowship Program (DSFP) at King Saud University for supporting this work.
