
    DSNNs: learning transfer from deep neural networks to spiking neural networks ①

    High Technology Letters, 2020, Issue 2

    Zhang Lei (張磊), Du Zidong, Li Ling, Chen Yunji

    (*State Key Laboratory of Computer Architecture, Institute of Computing Technology, Chinese Academy of Sciences, Beijing 100190, P.R.China) (**Institute of Computing Technology, Chinese Academy of Sciences, Beijing 100190, P.R.China) (***Cambricon Tech. Ltd, Beijing 100010, P.R.China) (****Institute of Software, Chinese Academy of Sciences, Beijing 100190, P.R.China)

    Abstract

    Key words: deep learning, spiking neural network (SNN), conversion method, spatially folded network

    0 Introduction

    Deep neural networks (DNNs) achieve state-of-the-art results on many tasks, such as image recognition[1-4], speech recognition[5-7] and natural language processing[8,9]. Current state-of-the-art DNNs usually contain many layers of highly abstracted neuron models, which imposes a heavy computational burden. To process DNNs efficiently, many customized architectures have been proposed.

    Besides DNNs, another type of neural network, rooted in neuroscience, is also emerging. Spiking neural networks (SNNs) mimic the biological brain more closely and are consequently regarded as the next generation of neural networks[10,11]. Spikes, which SNNs use to pass information among neurons, are considered a more efficient hardware solution because a single bit is enough to represent one spike. Several special hardware architectures have been proposed for SNNs[12-14]. However, the bio-inspired, spike-based neuromorphic SNNs currently still fail to achieve results comparable with DNNs.

    To close the performance gap between DNNs and SNNs, researchers have tried many solutions. IBM[15] showed that the structural and operational differences between neuromorphic computing and deep learning are not fundamental. ConvNets[16] applied a weight conversion technique, and IBM adopted back propagation (BP) in training. However, these techniques have only been proven feasible on small networks and simple tasks, such as recognition of handwritten digits (MNIST[17]). As a result, the capability of SNNs remains unclear, especially on large and complex tasks.

    This work proposes a simple but effective way to construct deep spiking neural networks (DSNNs) by transferring the learned ability of DNNs to SNNs. During the process, the initially trained synaptic weights are converted and used in SNNs; features of SNNs are introduced into the original DNN for further training. Evaluated on large and complex datasets (including ImageNet[18]), DSNNs achieve accuracy comparable with DNNs. Furthermore, to better suit hardware design, this work proposes an enhanced SNN computing algorithm, called ‘DSNN-fold’, which also improves the accuracy of the directly converted SNN.

    The overall contributions are as follows:

    (1) An algorithm to convert DNN to SNN is proposed.

    (2) The algorithm is improved for a more hardware-friendly design.

    1 DNN vs. SNN

    In this section, the two models, DNNs and SNNs, are briefly introduced, as depicted in Fig.1. Despite sharing a layer-based architecture, SNNs differ from DNNs in neuron model, input stimuli, result readout, and training method.

    Fig.1 DNN vs. SNN

    1.1 Topology

    Both DNNs and SNNs mimic the biological brain but at different levels of abstraction. DNNs usually contain multiple layers, each with numerous neurons; inputs are passed and processed through layers with different inter-layer connections (i.e., synapses) (Fig.1(a)). Recent developments in deep learning have led to increasingly deeper and larger networks, i.e., more layers and more neurons per layer. Meanwhile, connections among layers vary across layer types, which leads to different types of layers and neural networks, e.g., multilayer perceptron (MLP), convolutional neural network (CNN), recurrent neural network (RNN), and long short-term memory (LSTM).

    Similarly, SNNs consist of multiple layers, but with fewer layer types than DNNs. Commonly, each neuron connects not only to all neurons in the previous layer but also to all other neurons in the current layer through an inhibition mechanism. Therefore, the state of each neuron depends on the inputs (i.e., spikes) from the previous layer and the inhibition signals from its own layer (Fig.1(b)). The inhibition mechanism, observed in the biological nervous system, causes the so-called ‘Winner-Take-All’ effect, i.e., only one neuron can fire within a short period (the inhibition period), which has been shown to achieve good results in previous work[19].
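    For illustration, the ‘Winner-Take-All’ inhibition can be sketched as follows; this is a simplified, per-time-step view, and the function and variable names are illustrative rather than taken from the paper:

    import numpy as np

    def winner_take_all(potentials, threshold):
        """Lateral inhibition: at most one neuron in the layer fires per step.
        The neuron with the highest supra-threshold potential wins and is reset;
        all other neurons are inhibited for this step."""
        winner = int(np.argmax(potentials))
        if potentials[winner] < threshold:
            return None, potentials          # no neuron reaches the threshold
        potentials = potentials.copy()
        potentials[winner] = 0.0             # reset the winner after it fires
        return winner, potentials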

    1.2 Neuron model

    A typical neuron in DNNs receives different inputs and generates an output that passes through synapses to the following neurons, as shown in Fig.1(c). Formally, a neuron generates the output N_out = f(Σ_{i∈C} g(I_i, W_ij)), where I_i are the inputs, W_ij are the synapse weights, C is the set of connected input neurons, and g(·) and f(·) are processing operators. g(·) can be an inner product, as in fully-connected and convolutional layers, or subsampling in pooling layers. f(·) is the activation function, typically sigmoid or ReLU.
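    For concreteness, a fully-connected DNN neuron under this formulation is just a weighted sum followed by the activation; the short sketch below (illustrative names, ReLU as f) shows the computation:

    import numpy as np

    def dnn_neuron(inputs, weights, f=lambda x: np.maximum(x, 0.0)):
        """N_out = f(sum_i g(I_i, W_ij)), with g as an inner product and f as ReLU."""
        return f(np.dot(inputs, weights))

    print(dnn_neuron(np.array([0.5, -1.0, 2.0]), np.array([0.3, 0.2, 0.1])))  # 0.15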

    A neuron in an SNN continuously accumulates input spikes into its potential and fires spikes to the following neurons once its potential reaches the firing threshold; its potential is reset afterwards. Formally, the potential of an output neuron P_out(t) in the time window [T1, T2] can be described by the integrate-and-fire accumulation sketched below.
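    A plausible formulation, assuming discrete time steps, a reset-to-zero policy, and S_i(t) ∈ {0, 1} indicating whether presynaptic neuron i spikes at time t (the exact form of the original equation may differ), is:

    P_{out}(T_2) = P_{out}(T_1) + \sum_{t=T_1}^{T_2} \sum_{i \in C} W_{ij} \, S_i(t)

    where the neuron fires a spike and resets P_out to 0 whenever P_out(t) reaches the firing threshold θ.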

    1.3 Input stimulus

    Typical DNN inputs, e.g., image pixels and audio features, are used directly, with or without preprocessing such as normalization and centralization. Text is converted into digital representations through a word embedding process such as word2vec[20].

    Unlike DNNs, SNNs take spikes as inputs, so an encoding process that converts numeric values into spikes is required. Encoding has long been a controversial topic in neuromorphic computing, with years of debate over schemes such as rate coding, rank order coding, and temporal coding. There is no clear experimental evidence for the superiority of temporal coding, which uses the precise firing time; it is believed to carry more information, but it is currently unclear how to leverage that. While all of these schemes are biologically plausible, prior research has shown that SNNs with temporal coding are less accurate than those with rate coding[13], and rate coding is also simpler in hardware. Therefore, rate coding is chosen as the coding scheme in the following sections.
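    As a simple illustration of rate coding, the sketch below converts normalized input values into spike trains whose expected firing rate is proportional to the value; the 100 Hz cap and 500 ms window follow the settings reported in Section 4.2, while the function name and the Bernoulli approximation are illustrative assumptions rather than the paper's exact scheme:

    import numpy as np

    def rate_encode(values, max_rate_hz=100, duration_ms=500, dt_ms=1, seed=0):
        """Encode values in [0, 1] as spike trains whose firing rate is
        proportional to the value (rate coding)."""
        rng = np.random.default_rng(seed)
        steps = duration_ms // dt_ms
        p = np.clip(values, 0.0, 1.0) * max_rate_hz * dt_ms / 1000.0
        # shape: (steps, number of inputs); entries are 0 (no spike) or 1 (spike)
        return (rng.random((steps, len(values))) < p).astype(np.uint8)

    spikes = rate_encode(np.array([0.9, 0.1]))
    print(spikes.sum(axis=0))   # the brighter input fires roughly 9x more often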

    1.4 Readout

    The output layer of DNNs is used to classify or recognize the input sample. For example, each output neuron in an MLP corresponds to a label; for CNNs, the softmax function is applied to the output layer to turn output values into a probability distribution. Usually, the winner neuron has the maximum output value and labels the input with its label.

    Readout in SNNs is tightly related to the network topology and training method. All of the different readouts aim to find the winner output neuron(s), as in DNNs. The winner neuron can be the one with the largest potential, the one firing first, or the one firing most often. Note that there can be many more output neurons in an SNN than labels[13]; the current input sample is then labeled with the label of the winner neuron or neurons. In this work, considering the construction from DNNs to SNNs, which is trained with supervised learning, three readout strategies that might fit the transferred networks are explored: FS (first spike), MS (maximum spike count), and MP (maximum accumulated potential). In this exploration, FS fails to achieve results as accurate as the other two strategies, while MS and MP perform well on simple tasks such as MNIST or simple networks such as LeNet-5. However, MS fails on larger or deeper topologies, where its accuracy drops drastically. Therefore, MP is chosen as the readout method, as it shows consistently good performance.
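    A minimal sketch of the MP and MS readouts, assuming the output layer's accumulated potentials and spike counts are available as arrays (the names are illustrative):

    import numpy as np

    def readout_max_potential(potentials, labels):
        """MP readout: label of the output neuron with the largest accumulated potential."""
        return labels[int(np.argmax(potentials))]

    def readout_max_spikes(spike_counts, labels):
        """MS readout: label of the output neuron that fired the most spikes."""
        return labels[int(np.argmax(spike_counts))]

    # MP avoids the ties that hurt MS on deep networks, where many output
    # neurons may emit the same (often zero) number of spikes in the window.
    print(readout_max_potential(np.array([0.2, 1.7, 0.9]), ['cat', 'dog', 'ship']))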

    1.5 Training

    Training is essential and crucial to DNNs and several training methods have been proposed. Among them, BP, a supervised learning algorithm, has been proven to be most effective. During neural network training, errors between actual outputs and desired outputs are back propagated to input layers to adjust the network parameters gradually.

    SNN training techniques are far different from those of DNNs. Most SNNs adopt neuromorphic learning models from biology and neuroscience to optimize their training processes. For example, the well-known STDP (spike-timing-dependent plasticity) mechanism, an unsupervised learning algorithm, achieves accuracy similar to a two-layer MLP on the MNIST dataset[21]. In STDP, the learning principle is to detect causality between input and output spikes (i.e., presynaptic and postsynaptic). If a neuron fires soon after receiving an input spike from a given synapse, that synapse likely played an important role in the firing and should be reinforced by long-term potentiation (LTP). Conversely, if a neuron fires long after receiving an input spike, or shortly before receiving it, the corresponding synapse is depressed by long-term depression (LTD). Additionally, each neuron adjusts its potential threshold through a homeostasis mechanism to keep firing at a reasonable rate, forcing all neurons to participate with similar levels of activity. Recently, researchers have begun to explore supervised learning with backward propagation, but none of these methods achieves results comparable to BP in DNNs, especially on tasks with larger problem sizes.
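    A compact sketch of a pair-based STDP update following the LTP/LTD rule described above; the exponential windows and constants are common illustrative choices, not values from the paper:

    import numpy as np

    def stdp_update(w, t_pre, t_post, a_plus=0.01, a_minus=0.012, tau=20.0,
                    w_min=0.0, w_max=1.0):
        """Pair-based STDP: potentiate when the presynaptic spike precedes the
        postsynaptic spike (causal), depress otherwise."""
        dt = t_post - t_pre
        if dt >= 0:                               # pre before post -> LTP
            w += a_plus * np.exp(-dt / tau)
        else:                                     # pre after post -> LTD
            w -= a_minus * np.exp(dt / tau)
        return float(np.clip(w, w_min, w_max))

    # A causal pairing (pre at 10 ms, post at 12 ms) strengthens the synapse.
    print(stdp_update(0.5, t_pre=10.0, t_post=12.0))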

    2 Constructing DSNN

    In this section, a construction procedure that transfers the learned ability of DNNs to SNNs is proposed. This work focuses on CNNs. As shown in Fig.2, the DSNN construction workflow is divided into two stages: from CNN to SNN and from SNN to CNN. In the former stage, the DSNN is constructed with weights and topology directly converted from the CNN; in the latter stage, SNN features are introduced into the original CNN, which is then modified for further training. The final DSNN is constructed with the retrained weights.

    Fig.2 Flow of DSNN construction

    2.1 Intrinsic consistency between DNNs and SNNs

    The intrinsic consistency between DNNs and SNNs suggests the possibility of transferring the learned ability of DNNs to SNNs. Despite the differences in neuron models and training algorithms, at inference time a DNN can be viewed as a simplified SNN with the timing information removed. Given an SNN and a DNN with the same topology, and considering the formulas in Section 1, the SNN effectively converts the original DNN input from floating-point or wide fixed-point numbers into low bit-width integer spike counts once the time window is collapsed. The remaining question is the accuracy loss caused by that conversion. Previous work shows that spike encoding currently performs worse. However, recent work on reduced bit-width data representations has been pushed as far as binary neural networks[22-24], which use a single bit per value. This suggests that SNNs with rate encoding need not suffer accuracy loss from a moderate discretization of DNN inputs.

    In addition, ReLU, the most popular activation function in deep learning[25,26], may help to bridge the gap between DNNs and SNNs. ReLU eliminates negative neuron outputs and preserves the linear property of positive outputs. Its behavior is intrinsically consistent with the firing mechanism (IF model) in SNNs, where a neuron fires only when its potential (always ≥ 0) exceeds the threshold. This indicates that an integrate-and-fire (IF) neuron[27] is, to some degree, equivalent to an ‘artificial neuron plus ReLU’.

    2.2 From CNN to SNN

    Topology: To transfer the learned ability, multiple layers are needed in the SNN to realize the functions of the different layers in the CNN. Intuitively and directly, this work constructs a new SNN topology with SNN-CONV, SNN-POOL, and SNN-FC layers for the convolutional (CONV), pooling (POOL), and fully-connected (FC) layers, as shown in Fig.3. In other words, the SNN retains the connections and weights of the trained CNN during the transfer. In particular, for layers without weights such as POOL, SNN-POOL is constructed with fixed weights 1/Size(Kernel), where Size(Kernel) is the number of presynaptic neurons in the kernel.
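    As an illustration, the fixed SNN-POOL weights for a k×k pooling window can be built as follows; this is a simplified sketch that ignores stride and padding, and the function name is illustrative:

    import numpy as np

    def snn_pool_weights(kernel_size):
        """Fixed SNN-POOL weights: each of the Size(Kernel) presynaptic neurons
        contributes 1 / Size(Kernel) to the postsynaptic neuron's potential."""
        k = kernel_size
        return np.full((k, k), 1.0 / (k * k))

    print(snn_pool_weights(2))   # each of the 4 inputs contributes 0.25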

    Fig.3 DSNN construction from LeNet-5

    Input: This work explores two commonly used methods, uniform coding[12] and Poisson coding[28], to encode CNN input values into spike trains for the SNN. With uniform coding, an input neuron fires periodically at a rate proportional to the input value. With Poisson coding, an input neuron fires spikes following a Poisson process whose time constant is inversely proportional to the input value. Additionally, note that the centralization and normalization techniques used in DNNs accelerate training convergence but inevitably introduce negative input values. To overcome the difficulty that input spikes cannot decrease neuron potentials, the ‘negative spike’ is introduced in the converted SNN model.

    When an input neuron fires a ‘negative spike’, the receiving neurons integrate it in the same way as a positive spike but decrease their potentials.
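    A minimal sketch of how a receiving IF neuron could integrate signed spikes in one time step (variable names are illustrative; a reset-to-zero policy is assumed):

    def integrate_signed_spikes(potential, spikes, weights, threshold):
        """One time step of an IF neuron with signed spikes.
        spikes: +1 for a positive spike, -1 for a negative spike, 0 for none."""
        for s, w in zip(spikes, weights):
            potential += s * w              # a negative spike lowers the potential
        if potential >= threshold:
            return 0.0, True                # fire and reset
        return potential, False

    # 0.4 - 0.3 + 0.5 = 0.6 >= 0.5, so the neuron fires and resets.
    print(integrate_signed_spikes(0.0, [+1, -1, +1], [0.4, 0.3, 0.5], threshold=0.5))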

    Parameters: The converted SNN requires two types of parameters: the synapse weights and the firing threshold of each neuron. The former are obtained directly from the fully trained CNN in the ‘from SNN to CNN’ stage.

    For the latter, previous methods such as model-based normalization and data-based normalization[16] work only on simple, small datasets and networks, such as MNIST/LeNet-5, but fail on larger datasets and more complex networks, such as ImageNet/AlexNet. The model-based method requires a large spike time window and leads to long computation latency in the SNN. The data-based method is worse, since it must propagate the entire training set through the whole network and store all activations, which are then used to compute scaling factors. Instead, this work proposes a greedy-search-based method to decide the firing thresholds, as shown in Algorithm 1, which makes a better trade-off between accuracy and efficiency. Briefly, first find the maximum possible output M_i for each layer based on the current weight model (in Algorithm 1, M_i = input_sum and input_wt is a synapse weight). The threshold for each layer is given by σ × M_i, where σ is a constant to be decided. Search widely over σ in the set {1, 0.1, 0.01, …} until a satisfactory result is obtained. To obtain near-optimal thresholds, a greedy search over nearby threshold values is then performed.

    Algorithm 1: Threshold setting algorithm
    for layer in layers do
        max_pos_input = 0
        for neuron in layer.neurons do
            input_sum = 0
            for input_wt in neuron.input_wts do
                input_sum += max(0, input_wt)
            end for
            max_pos_input = max(max_pos_input, input_sum)
        end for
        layer.threshold = σ × max_pos_input
    end for
    Search over σ in the set {1, 0.1, 0.01, …} until a satisfactory result is obtained.
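    A direct Python rendering of Algorithm 1, assuming each layer's weights are available as a NumPy array with one row per neuron (the data layout is an assumption for illustration):

    import numpy as np

    def set_thresholds(layer_weights, sigma):
        """Per-layer threshold = sigma * (maximum possible positive input), where the
        maximum possible positive input is the largest per-neuron sum of positive weights."""
        thresholds = []
        for W in layer_weights:                          # W: (num_neurons, num_inputs)
            pos_sums = np.maximum(W, 0.0).sum(axis=1)    # input_sum per neuron
            thresholds.append(sigma * float(pos_sums.max()))
        return thresholds

    # Search sigma in {1, 0.1, 0.01, ...} and keep the value giving satisfactory accuracy.
    weights = [np.random.randn(8, 16), np.random.randn(4, 8)]
    for sigma in (1.0, 0.1, 0.01):
        print(sigma, set_thresholds(weights, sigma))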

    2.3 From SNN to DNN

    After the first stage of the transfer, features from the converted SNN are introduced into the original CNN model for further adjustment. The adjusted CNN is then retrained to obtain parameters that better preserve accuracy in the SNN.

    ReLU activations: In the CNN, all CONV and FC layers are made to use ReLU as the activation function in order to eliminate negative neuron outputs (which could only be transferred as ‘negative spikes’ in the SNN). There is no need to add ReLU after POOL layers, since neither MAX-POOL nor average pooling changes the polarity of the input spikes. Fortunately, most mainstream CNNs already use ReLU as the activation function, since it yields better accuracy.

    Average pooling: Pooling layers in the CNN are changed to average pooling (AVG-POOL), which is easier to simulate in the form of spikes. Previous work has demonstrated that the choice of MAX-POOL or AVG-POOL does not have a significant impact on network accuracy[29].

    Bias: No suitable method has been found to accurately simulate bias in the SNN.

    The adjusted CNN in this stage will be fully trained to obtain new weights. Together with the SNN architecture in the first stage, a powerful DSNN is constructed.

    The performance of the DSNNs is reported in Section 4.

    3 Spatially folded DSNN

    Considering the tension between limited hardware resources and the unbounded size of networks, architects have to design architectures flexible enough to be reused over time, i.e., time-multiplexed. In other words, the algorithm should compute different pieces of the network at different times; that is, the network should be folded spatially. Time-independent CNNs can easily be divided to fit a small hardware footprint[30]. However, this spatially-folded property does not hold for any previous SNN, including the DSNN, because the computation in each neuron is strongly tied to the firing times of its presynaptic neurons. To solve this problem, previously proposed architectures usually use an expanded (unfolded) computation that preserves temporal causality.

    This work proposes an algorithm to further construct ‘DSNN-fold’ for hardware benefit while maintaining the accuracy results. The key feature of ‘DSNN-fold’ is its split two-phase computation, described in Fig.4 and Fig.5. In the first phase, postsynaptic neurons update their potentials for all negative spikes emitted by presynaptic neurons. Since negative spikes only reduce the potentials, no postsynaptic neuron fires in this phase. In the second phase, positive spikes are delivered to the postsynaptic neurons. Other parts such as input encoding, readout, and the threshold policy are unchanged. The two phases are independent and do not affect the number of output spikes. Thus, in DSNN-fold, the computation can be divided into pieces.
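    A minimal sketch of the two-phase layer update, assuming all input spike counts are known up front, which is exactly what removing the temporal dependence allows (the names and the reset-to-zero policy are illustrative):

    import numpy as np

    def dsnn_fold_layer(neg_spikes, pos_spikes, weights, threshold):
        """Two-phase DSNN-fold update for one layer.
        neg_spikes / pos_spikes: lists of presynaptic indices, one per spike.
        weights: array of shape (num_out, num_in)."""
        potential = np.zeros(weights.shape[0])
        out_counts = np.zeros(weights.shape[0], dtype=int)

        # Phase 1: negative spikes only lower potentials, so nothing can fire.
        for i in neg_spikes:
            potential -= weights[:, i]

        # Phase 2: positive spikes; fire and reset whenever the threshold is reached.
        for i in pos_spikes:
            potential += weights[:, i]
            fired = potential >= threshold
            out_counts += fired
            potential[fired] = 0.0

        return out_counts, potential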

    Fig.4 The first phase of DSNN-fold

    Fig.5 The second phase of DSNN-fold

    By using the DSNN-fold method, the spatio-temporal coupling of the entire SNN is removed, so network segments of any size can be mapped onto hardware. As shown in Fig.6, the computation of an SNN is split into operations of independent layers. The operation of each layer is divided into two phases, and within each phase the polarity of the operations' influence on the output neurons is fixed and independent. Therefore, the computations in each phase can be split into several fragments, and each fragment can easily be mapped to any hardware design.

    Fig.6 Folded SNN

    Interestingly, the accuracy of DSNN-fold is actually slightly higher than that of DSNN. This is mainly because DSNN-fold eliminates the disturbance of accidental spiking caused by the random ordering of input spikes from the previous layer.

    Also, determining the firing thresholds becomes much easier. In DSNN, the threshold is sensitive because a neuron fires more often than expected if positive spikes arrive first. In DSNN-fold, the final number of spikes depends only on the inputs, regardless of their arrival order.

    Additionally, max pooling also becomes feasible in DSNN-fold, as it can be achieved by selecting the neuron with the maximum number of spikes and inhibiting the other neurons from propagating to the next layer.

    4 Evaluation

    4.1 Methodology

    In this work, 4 representative CNNs are selected as benchmarks and implemented with Caffe[31]: LeNet-5[17], caffe-cifar10-quick[2], AlexNet[1], and VGG-16[3], as shown in Table 1. The 4 CNNs target 3 different datasets: LeNet-5 for MNIST, caffe-cifar10-quick for CIFAR-10, and AlexNet and VGG-16 for ImageNet. MNIST consists of 60 000 grayscale images (28×28) of handwritten digits (0-9) for training and 10 000 digits for testing. CIFAR-10 consists of 60 k color images (32×32) in 10 classes. ImageNet ILSVRC-2012 includes high-resolution (224×224) images in 1 000 classes and is split into 3 sets: training (1.3 M images), validation (50 k images), and testing (100 k images).

    The classification performance is evaluated using 2 measures: the top-1 error and top-5 error. The former reflects the error rate of the classification and the latter is often used as the criterion for final evaluation.

    Table 1 Network depth comparison

    4.2 Accuracy

    Table 2 compares the accuracies achieved by the CNN, the adjusted CNN (with the adjustments of the SNN-to-CNN stage), the DSNN, and the DSNN-fold. The adjusted CNN incurs only trivial accuracy loss (0.01% to 2.42%) compared with the CNN; even for the deepest network, VGG-16, the loss is only 2.42%. This illustrates that CNN training can compensate for adjustments such as removing bias and max-pooling when accuracy is the only concern and other factors, such as convergence speed, are ignored.

    For the small networks on MNIST and CIFAR-10, DSNN-fold achieves results comparable with the adjusted CNN, with accuracy decreases of only 0.1% and 0.56%, respectively. For the large-scale networks AlexNet and VGG-16, the top-1 and top-5 accuracy losses are also kept within a reasonable range (1.03% and 1.838% for AlexNet; 3.42% and 2.09% for VGG-16). Compared with previous work on converting CNNs to SNNs, these results greatly improve the accuracy achievable by SNNs.

    Table 2 Accuracy results

    As the number of network layers increases, the accuracy loss of the SNN slowly increases due to the parameter settings. Two key parameters are the image presentation time and the maximum spike frequency. Consistent with the original SNN, the maximum firing frequency is limited to 100 Hz and the image presentation time to 500 ms in this work. Under these parameters the SNN can be efficient in practical tasks, but the limitations can lead to a poor simulation of the output behavior of CNN neurons; this problem is alleviated by the DSNN-fold algorithm. Another crucial parameter is the firing threshold. To reduce the complexity of the SNN, the same threshold is set for all neurons in a layer, although thresholds could be set independently. This simplified strategy reduces the workload of threshold setting, but it sacrifices the higher accuracy that could be obtained with a per-neuron threshold.

    Compared with DSNN, DSNN-fold achieves better accuracy, e.g., 88.39% for VGG-16, slightly higher than the accuracy achieved by DSNN. In the original DSNN, positive and negative spikes interleave in time, causing spurious firing of postsynaptic neurons. In DSNN-fold, such behavior is avoided because negative spikes are processed before positive spikes.

    The pre- and post-conversion accuracies, together with the accuracy of previous SNNs, are presented for typical networks in Fig.7. From left to right, the complexity of the network and the difficulty of the recognition task gradually increase. Although DNN, SNN, and the proposed method perform similarly on the simple task, the stability of the proposed method is clearly better than that of previous SNNs, and on complex tasks its improvement over previous SNN algorithms is significant. Considering that the best previous SNN result on ImageNet was 51.8% (top-1) and 81.63% (top-5)[32,33], this work improves SNN accuracy on ImageNet by up to 6.76%. It is clear that the proposed SNN achieves practical results on complex recognition tasks.

    Fig.7 Compare accuracy results among typical networks

    4.3 Maximum spikes vs. maximum potential

    This work selects the maximum potential (MP) strategy over the maximum spikes (MS) strategy as the readout because of its ability to support large-scale networks. The two strategies are evaluated on the benchmarks in Fig.8. They achieve similar performance on small datasets and networks. However, on large datasets and networks the MS strategy performs poorly, as many neurons in the last layer produce the same maximum number of spikes, which makes the MS labeling decision ambiguous.

    Fig.8 Comparison between 2 readout strategies

    4.4 Robustness

    The performance of the two encoding methods described earlier is compared in Fig.9. Both methods achieve satisfactory performance under the conversion method. Poisson coding adds randomness to the input stimulus, showing that the converted SNN remains effective under unstable input conditions. However, since Poisson encoding is statistically random and increases computational complexity, it is not recommended for algorithm or hardware designs.

    Fig.9 Comparison between 2 coding schemes

    5 Discussion

    Compared with traditional CNNs, the major advantage of DSNN is that it significantly reduces hardware storage and computation overhead. On the one hand, DSNN converts wide floating-point numbers into spikes with a smaller data width, thereby reducing storage overhead. On the other hand, DSNN decomposes the dot-product operations of the CNN into additions, which significantly reduces power consumption and area in hardware.

    Compared with the similar BNN (binary neural network)[23], SNN's overhead reduction is clearly inferior, because BNN operates on 1-bit neurons and weights and needs only one addition to account for the effect of one input neuron on an output neuron. Although DSNN is weaker than BNN in this respect, it achieves better accuracy; notably, BNN fails completely on ImageNet. In summary, DSNN is well suited to scenarios that demand high precision at low cost.

    The practice of using ReLU activations to avoid negative neuron outputs has appeared in many articles, but a reasonable treatment of negative weights is still lacking. It is generally accepted that simulating negative weights requires many more SNN neurons with inhibitory mechanisms, so techniques for converting a CNN into a biologically plausible SNN remain worth exploring. If SNNs are one day adopted in hardware design, the quantization of weights and neuron potentials will also be critical: the SNN must remain accurate with low-precision values, such as half-precision floating-point weights or potentials. The latest CNN techniques, such as sparsification and binarization, also pose challenges to SNN accuracy, and it remains unknown whether SNNs can incorporate them successfully.

    Beyond the classic network algorithms discussed above, DNNs combined with the latest generative models continue to make breakthroughs in multiple application scenarios[33-36]. How SNNs can support new network technologies such as GANs is still worth studying.

    6 Conclusion

    This work proposes an effective way to construct deep spiking neural networks through ‘learning transfer’ from DNNs, which makes it possible to build a high-precision SNN without complicated training procedures. The resulting SNN matches the accuracy of CNNs in complex task scenarios, a large improvement over previous SNNs. This work also improves the computing algorithm of the transferred SNN to obtain a spatially-folded version (DSNN-fold). DSNN-fold proves effective in both accuracy and computation and can serve as a good reference for future hardware designs.
