
    Relationship between manifold smoothness and adversarial vulnerability in deep learning with local errors

    2021-05-06 08:56:20
    Chinese Physics B, 2021, Issue 4

    Zijian Jiang(蔣子健), Jianwen Zhou(周健文), and Haiping Huang(黃海平)

    PMI Laboratory,School of Physics,Sun Yat-sen University,Guangzhou 510275,China

    Keywords: neural networks, learning

    1. Introduction

    Artificial deep neural networks have achieved state-of-the-art performance in many domains, such as pattern recognition and even natural language processing.[1] However, deep neural networks suffer from adversarial attacks,[2,3] i.e., they can make an incorrect classification with high confidence when the input image is slightly modified while maintaining its class label. In contrast, for humans and other animals, the decision-making systems in the brain are quite robust to imperceptible pixel perturbations in the sensory inputs.[4] This immediately raises a fundamental question: what is the origin of the adversarial vulnerability of artificial neural networks? To address this question, we can first gain some insights from recent experimental observations of biological neural networks.

    A recent investigation of recorded population activity in the visual cortex of awake mice revealed a power-law behavior in the principal component spectrum of the population responses,[5] i.e., the n-th largest principal component (PC) variance scales as n^(-α), where α is the exponent of the power law. In this analysis, the exponent is always slightly greater than one for all input natural-image stimuli, reflecting an intrinsic property of smooth coding in biological neural networks. It can be proved that when the exponent is smaller than 1 + 2/d, where d is the manifold dimension of the stimulus set, the neural coding manifold must be fractal,[5] and thus slightly modified inputs may cause extensive changes in the outputs. In other words, an encoding with a slow decay of population variances would capture fine details of the sensory inputs, rather than an abstract concept summarizing them. In the fast-decay case, the population coding occurs on a smooth and differentiable manifold, and the dominant variance in the eigen-spectrum captures key features of the object identity. Thus, the coding is robust, even under adversarial attacks. Inspired by this recent study, we ask whether the power-law behavior exists in the eigen-spectrum of the correlated hidden neural activity in deep neural networks. Our goal is to clarify the possible fundamental relationship between classification accuracy, the decay rate of activity variances, manifold dimensionality, and adversarial attacks of different nature.

    Balancing biological reality against theoretical tractability, we consider a special type of deep neural network, trained with a local cost function at each layer.[6] Moreover, this kind of training offers us the opportunity to examine the aforementioned fundamental relationship at each layer. The input signal is transferred by trainable feedforward weights, while the error is propagated back to adjust the feedforward weights via quenched random weights connecting to the classifier at each layer. The learning is therefore guided by the target at each layer, and layered representations are created by this hierarchical learning. These layered representations offer us the neural activity space for studying the above fundamental relationship.

    We remark on the motivation and relevance of our model setting, i.e., deep supervised learning with local errors. As is well known, the standard back-propagation widely used in machine learning is not biologically plausible.[7] The algorithm makes three biologically unrealistic assumptions: (i) errors are generated at the top layer and are thus non-local; (ii) a typical network is deep, thereby requiring a memory buffer for all layers' activities; (iii) weight symmetry is assumed between the forward and backward passes. In our model setting, the errors are provided by local classifier modules and are thus local. Updating the forward weights needs only the neural state variables in the corresponding layer [see Eq. (2)], without requiring the whole memory buffer. Finally, the error is back-propagated through a fixed random projection, easily breaking the weight symmetry. The learning algorithm in our paper thus bypasses the above three biological implausibilities.[6] Moreover, this model setting still allows a deep network to transform low-level features at earlier layers into high-level abstract features at deeper layers.[6,8] Taken together, the model setting offers us the opportunity to examine the fundamental relationship between classification accuracy, the power-law decay rate of activity variances, manifold dimensionality, and adversarial vulnerability at each layer.
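The local-error scheme described above can be sketched in toy form. This is an illustrative sketch only, not the paper's exact implementation: the layer sizes, the tanh nonlinearity, and the update rule details are our assumptions; the key property it reproduces is that each layer's forward weights are trained from a local error provided by a fixed random classifier, with no gradient flowing across layers.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

# Hypothetical sizes: 784-dimensional input, two hidden layers, 10 classes.
sizes = [784, 200, 200]
n_class = 10
W = [rng.normal(0, 1 / np.sqrt(sizes[l]), (sizes[l], sizes[l + 1]))
     for l in range(len(sizes) - 1)]          # trainable forward weights
B = [rng.normal(0, 1 / np.sqrt(sizes[l + 1]), (sizes[l + 1], n_class))
     for l in range(len(sizes) - 1)]          # fixed random classifier weights

def train_step(x, y_onehot, eta=0.1):
    """One local-error update: each layer's W is adjusted using only its own
    activity and the error from its fixed local classifier (no backprop
    through the layer stack)."""
    h = x
    for l in range(len(W)):
        h_next = np.tanh(h @ W[l])            # layer activity
        P = softmax(h_next @ B[l])            # local classifier output
        err = (P - y_onehot) @ B[l].T         # error via fixed random projection
        dpre = err * (1 - h_next ** 2)        # through the tanh derivative
        W[l] -= eta * h.T @ dpre / x.shape[0]
        h = h_next                            # "detached": no cross-layer gradient
    return h
```

Repeatedly calling `train_step` on a batch drives each layer's local cross-entropy down, while keeping the three biologically plausible properties listed above.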

    2. Model

    where h_i = δ_{i,q} (the Kronecker delta function) and q is the digit label of the input image.

    The local cost function E_l is minimized when h_i = P_i for every i. The minimization is achieved by gradient descent. The gradient of the local error with respect to the weights of the feedforward layer can be calculated by applying the chain rule, given by

    After learning, the input ensemble can be transferred through the network in a layer-wise manner. Then, at each layer, the activity statistics can be analyzed via the eigen-spectrum of the correlation matrix (or covariance matrix). We use principal component analysis (PCA) to obtain the eigen-spectrum, which gives the variances along orthogonal directions in descending order. For each input image, the population output of the n_l neurons at layer l can be thought of as a point in the n_l-dimensional activation space. It then follows that, for k input images, the outputs can be seen as a cloud of k points. PCA first finds the direction with the maximal variance of the cloud, then chooses the second direction orthogonal to the first one, and so on. Finally, PCA identifies n_l orthogonal directions and the n_l corresponding variances. In our current setting, the n_l eigenvalues of the covariance matrix of the neural manifold explain the n_l variances. Arranging the n_l eigenvalues in descending order leads to the eigen-spectrum, whose behavior will be analyzed in the next section.
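The eigen-spectrum computation just described amounts to diagonalizing the covariance of the k × n_l activity matrix. A minimal sketch (the function name `eigen_spectrum` is ours):

```python
import numpy as np

def eigen_spectrum(activity):
    """activity: (k, n_l) array of layer-l responses to k stimuli.
    Returns the n_l variances along principal directions, descending."""
    centered = activity - activity.mean(axis=0)
    cov = centered.T @ centered / (activity.shape[0] - 1)  # sample covariance
    eigvals = np.linalg.eigvalsh(cov)                      # ascending order
    return eigvals[::-1]                                   # descending spectrum
```

The returned values are exactly the PC variances: the first entry is the variance along the maximal-variance direction, and so on.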

    3. Results and discussion

    In this section, we apply our model to clarify the possible fundamental relationship between classification accuracy, the decay rate of activity variances, manifold dimensionality, and adversarial attacks of different nature.

    3.1. Test error decreases with depth

    We first show that deep supervised learning in our current setting works. Figure 2 shows that the training error decreases as the test accuracy increases (before early stopping) during training. We remark that it is challenging to rigorously prove the convergence of the algorithm used in this study, as the deep learning cost landscape is highly non-convex, and the learning dynamics is non-linear in nature. As a heuristic, we judge convergence by a stable error rate (in the global sense), which is also common in other deep learning systems. As the layer goes deeper, the test accuracy grows until saturation, despite a slight deterioration. This behavior provides an ideal deep learning candidate for investigating the emergent properties of the layered intermediate representations after learning, without and with adversarial attacks. Next, we will study in detail how the test accuracy is related to the power-law exponent, how the test accuracy is related to the attack strength, and how the dimensionality of the layered representation changes with the exponent, under zero, weak, and strong adversarial attacks.

    Fig. 2. Typical trajectories of training and test error rates versus training epoch. Lines indicate the training error rate, and symbols the test error rate. The network width of each layer is fixed to N = 200 (except the input layer), with 60000 images for training and 10000 images for testing. The initial learning rate η = 0.5 is multiplied by 0.8 every ten epochs.

    3.2. Power-law decay of dominant eigenvalues of the activity correlation matrix

    A typical eigen-spectrum of our current deep learning model is given in Fig. 3. Notice that the eigen-spectrum is displayed on a log-log scale, so the slope of a linear fit to the spectrum gives the power-law exponent α. We use the first ten PC components to estimate α, rather than all of them, for the following two reasons: (i) a waterfall phenomenon appears around the 10th dimension, which is more evident at higher layers; (ii) the first ten dimensions explain more than 95% of the total variance, and thus they capture the key information about the geometry of the representation manifold. The waterfall phenomenon in the eigen-spectrum can occur multiple times, especially for deeper layers [Fig. 3(a)], which is distinct from what is observed in biological neural networks [see the inset of Fig. 3(a)]. This implies that artificial deep networks may capture fine details of stimuli in a hierarchical manner. A typical example of obtaining the power-law exponent is shown in Fig. 3(b) for the fifth layer. When the stimulus-set size k is chosen large enough (e.g., k ≥ 2000; k = 3000 throughout the paper), the fluctuation of the estimated exponent due to stimulus selection can be neglected.
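The exponent-extraction procedure above (a linear fit to the first ten PC variances in log-log coordinates) can be sketched as follows; `powerlaw_exponent` is our name for this illustrative helper:

```python
import numpy as np

def powerlaw_exponent(spectrum, n_dims=10):
    """Fit log(lambda_n) = -alpha * log(n) + c over the first n_dims PC
    variances (descending); return alpha, the negated slope of the
    linear fit on the log-log scale."""
    n = np.arange(1, n_dims + 1)
    lam = np.asarray(spectrum[:n_dims], dtype=float)
    slope, _intercept = np.polyfit(np.log(n), np.log(lam), 1)
    return -slope
```

For a spectrum that is exactly λ_n = n^(-α), the fit recovers α exactly; on real layer activity the fit quality over the first ten dimensions can be checked with the residuals before trusting the estimate.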

    Fig. 3. Eigen-spectrum of layer-dependent correlated activities and the power-law behavior of dominant PC dimensions. (a) Typical eigen-spectrum of deep networks trained with local errors (L = 8, N = 200). Log-log scales are used. The inset is the eigen-spectrum measured in the visual cortex of mice (taken from Ref. [5]). (b) An example of extracting the power-law behavior at the fifth layer in (a). A linear fit to the first ten PC components is shown on the log-log scale.

    3.3. Effects of layer width on test accuracy and power-law exponent

    We then explore the effects of the layer width on both test accuracy and power-law exponent. As shown in Fig. 4(a), the test accuracy becomes more stable with increasing layer width. This is indicated by the example of n_l = 50, which shows a large fluctuation of the test accuracy, especially at deeper layers. We conclude that a few hundred neurons at each layer are sufficient for accurate learning.

    The power-law exponent shows a similar behavior: the estimated exponent fluctuates less as the layer width increases. This result also shows that the exponent grows with layer depth; the deeper the layer, the larger the exponent becomes. A larger exponent suggests that the manifold is smoother, because the dominant variance decays fast, leaving little room for encoding irrelevant features of the stimulus ensemble. This may highlight that depth in hierarchical learning is important for capturing the key characteristics of sensory inputs.

    Fig. 4. Effects of network width on test accuracy and power-law exponent α. (a) Test accuracy versus layer. Error bars are estimated over 20 independently trained models. (b) α versus layer. Error bars are also estimated over 20 independently trained models.

    3.4. Relationship between test accuracy and power-law exponent

    3.5. Properties of the model under black-box attacks

    Fig. 5. The power-law exponent α versus test accuracy of the manifold. α grows with depth, while the test accuracy has a turnover at layer 2 and then decreases by a very small margin. Error bars are estimated over 50 independently trained models.

    Fig. 6. Relationship between test accuracy and power-law exponent α when the input test data are attacked by independent Gaussian white noise. Error bars are estimated over 20 independently trained models. (a) Accuracy versus ε, where ε is the attack amplitude. (b) α versus ε. (c) Accuracy versus α over different values of ε. Different symbol colors refer to different layers. The red arrow points in the direction along which ε increases from 0.1 to 4.0, with an increment of 0.1. The accuracy-versus-α relationship with increasing ε in the first three layers is linear, with slopes of 0.56, 0.86, and 1.04, respectively. The linear-fit coefficients R² are all larger than 0.99. Beyond the third layer, the linear relationship is not evident. For the sake of visibility, we enlarge the deeper-layer region in (d). A turning point α ≈ 1.0 appears. Above this point, the manifold seems to become smooth, and the exponent remains stable even against stronger black-box attacks [see also (b)].
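The black-box attack used here is gradient-free: each test image is simply corrupted with i.i.d. Gaussian white noise of amplitude ε. A minimal sketch (clipping or normalization of the attacked images, if any, is not shown):

```python
import numpy as np

def gaussian_attack(images, eps, rng=None):
    """Black-box attack: add i.i.d. Gaussian white noise of amplitude eps
    to every pixel; no gradient information about the model is used."""
    if rng is None:
        rng = np.random.default_rng()
    return images + eps * rng.standard_normal(images.shape)
```

Because the noise is model-independent, the same attacked inputs can be fed to every layer's classifier, which is how the layer-wise accuracies in panels (a)-(d) are obtained.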

    3.6. Properties of the model under white-box attacks

    Fig. 7. Relationship between test accuracy and exponent α under the FGSM attack. Error bars are estimated over 20 independently trained models. (a) Accuracy changes with ε. (b) α changes with ε. (c) Accuracy versus α over different attack magnitudes. ε increases from 0.1 to 4.0 with an increment of 0.1. The plot shows a non-monotonic behavior, different from that of the black-box attacks in Fig. 6(c).
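In contrast to the Gaussian attack, the fast gradient sign method (FGSM) is white-box: it perturbs the input along the sign of the loss gradient with respect to the input. A toy sketch on a hypothetical linear-softmax classifier (the paper applies FGSM to its trained deep network; the linear model here is only for illustrating the gradient step):

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def fgsm(x, label, W, eps):
    """FGSM on a toy linear-softmax classifier P = softmax(x @ W):
    move the input by eps along sign(dL/dx), where L is the
    cross-entropy loss at the true label."""
    P = softmax(x @ W)
    y = np.zeros_like(P)
    y[..., label] = 1.0
    grad_x = (P - y) @ W.T          # exact dL/dx for softmax + cross-entropy
    return x + eps * np.sign(grad_x)
```

Because the perturbation is aligned with the loss gradient, a small ε increases the loss roughly by ε times the L1 norm of the gradient, which is why FGSM is far more damaging than isotropic noise of the same amplitude.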

    3.7. Relationship between manifold linear dimensionality and power-law exponent

    The linear dimensionality of a manifold formed by data/representations can be thought of as a first approximation to the intrinsic geometry of the manifold,[12,13] defined as follows:

    where {λ_i} is the eigen-spectrum of the covariance matrix. Supposing the eigen-spectrum decays as a power law with increasing PC dimension, we simplify the dimensionality equation as follows:
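The linear dimensionality cited from Refs. [12,13] is commonly the participation ratio, D = (Σ_i λ_i)² / Σ_i λ_i²; assuming that standard definition (the original equation image is not reproduced here), the computation and its power-law simplification can be sketched as:

```python
import numpy as np

def dimensionality(spectrum):
    """Participation ratio: D = (sum_i lambda_i)^2 / sum_i lambda_i^2.
    D = N for a flat spectrum of N equal variances; D = 1 when a single
    direction carries all the variance."""
    lam = np.asarray(spectrum, dtype=float)
    return lam.sum() ** 2 / (lam ** 2).sum()

def dimensionality_powerlaw(alpha, N):
    """D(alpha) for a pure power-law spectrum lambda_n = n^(-alpha),
    truncated at the layer width N."""
    n = np.arange(1, N + 1, dtype=float)
    return dimensionality(n ** -alpha)
```

The two limiting cases (flat spectrum giving D = N, one dominant mode giving D = 1) make the participation ratio a natural smoothness proxy: the larger α is, the faster the spectrum decays and the smaller D becomes.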

    Fig. 8. Relationship between dimensionality D and power-law exponent. (a) D(α) estimated from the integral approximation and in the thermodynamic limit. N is the layer width. (b) D(α) under the Gaussian white-noise attack. The dimensionality and the exponent are estimated directly from the layered representations given the immediately perturbed input for each layer [Eq. (4)]. We show three typical cases of attack: no noise with ε = 0.0, small noise with ε = 0.5, and strong noise with ε = 3.0. For each case, we plot eight results corresponding to the eight layers. The green dashed line is the theoretical prediction [Eq. (5)], with N = 35. Error bars are estimated over 20 independently trained models. (c) D(α) under the FGSM attack. The theoretical curve (dashed line) is computed with N = 30. Error bars are estimated over 20 independently trained models.

    The results are shown in Fig. 8. The theoretical prediction agrees roughly with the simulations under zero, weak, and strong attacks of both black-box and white-box types. This shows that it is reasonable to use the power-law decay of the eigen-spectrum over the first few dominant dimensions to study the relationship between the manifold geometry and the adversarial vulnerability of artificial neural networks, as also confirmed by the many non-trivial properties of this fundamental relationship discussed above. Note that when the network width increases, a deviation may be observed due to the waterfall phenomenon in the eigen-spectrum (see Fig. 3).

    4. Conclusion

    All in all, although our study does not provide precise mechanisms underlying adversarial vulnerability, our empirical work is expected to offer some intuitive arguments about the fundamental relationship between generalization capability and the intrinsic properties of the representation manifolds inside deep neural networks with (some degree of) biological plausibility, encouraging future mechanistic studies towards the final goal of aligning machine perception with human perception.[4]
