
    Lightweight and highly robust memristor-based hybrid neural networks for electroencephalogram signal processing

Chinese Physics B, 2023, Issue 7

    Peiwen Tong(童霈文), Hui Xu(徐暉), Yi Sun(孫毅), Yongzhou Wang(汪泳州), Jie Peng(彭杰),Cen Liao(廖岑), Wei Wang(王偉), and Qingjiang Li(李清江)

College of Electronic Science and Technology, National University of Defense Technology, Changsha 410073, China

Keywords: memristor, lightweight, robust, hybrid neural networks, depthwise separable convolution, bidirectional gate recurrent unit (BiGRU), one-transistor one-resistor (1T1R) arrays

    1.Introduction

With the booming development of brain–computer synergy technology in recent years, brain electrode arrays have been expanded and the accuracy and speed of electroencephalogram (EEG) signal acquisition have continuously improved. This has led to a dramatic increase in the amount of EEG data and has placed higher demands on its processing.[1–5] The traditional EEG signal processing system suffers from the high latency and power consumption caused by A/D conversion and by the separation of memory and computing units, which makes it increasingly difficult to meet the growing demand for high-speed, high-throughput EEG signal processing.[6,7]

The emergence of memristor-based EEG signal processing systems has been a boon for EEG signal processing.[8–10] With their good switching characteristics, excellent retention, static linear I–V characteristics and biological plausibility, memristors can achieve fast computing with low power consumption and low latency in the analogue domain. However, even the most advanced one-transistor one-resistor (1T1R) arrays are affected by the fabrication process and suffer from non-idealities such as limited array yield rate and device weight fluctuation.[11] These impose tighter limits on network scale and demand robustness from the network. Typical end-to-end EEG processing algorithms rely on complex and massive multiply–accumulate operations in pursuit of extreme performance,[12,13] which poses challenges for implementing such networks in memristor arrays.

Here we propose the depthwise separable convolution and bidirectional gate recurrent unit (DSC-BiGRU) network, a lightweight and highly robust hybrid neural network based on 1T1R arrays, which extracts and learns the features of EEG signals in the temporal, frequency and spatial domains by combining a convolutional neural network (CNN) with a recurrent neural network (RNN). The novelty of the proposed network lies in improving the efficiency of network learning and reducing resource utilization. Within the network, the DSC block reduces network complexity and improves robustness,[14] while the BiGRU block improves learning efficiency so that a smaller network suffices. Simulated results show a 95% reduction in network parameters compared with the traditional convolutional networks DeepConvNet (DCN) and ShallowConvNet (SCN).[15] With a 95% array yield rate and 5% tolerance error, the network classification accuracy is improved by 21%, to 85%. These results show that the DSC-BiGRU network can effectively achieve lightweight and highly robust EEG signal processing, providing a new solution for applying memristor neuromorphic computing to brain–computer interfaces.

    2.Application of EEG signal analysis system based on memristor arrays

    2.1.EEG signal analysis system based on memristor arrays

Figure 1 illustrates an EEG signal analysis system based on memristor crossbar arrays. A complete EEG signal processing application (EEG communication, motor aids, the metaverse, etc.) can be designed by integrating the system with neural probes for the target scenario. Multiply–accumulate operations are usually the most critical and computationally intensive part of EEG processing algorithms, and memristor arrays can perform these operations in parallel in the analog domain. This overcomes the limitations imposed by the von Neumann architecture and A/D conversion, providing an efficient hardware platform for various processing algorithms and helping to improve the latency, power consumption and scalability of applications. However, owing to the limitations of present manufacturing and fabrication processes, memristor arrays suffer from size and non-ideality problems.[11] Therefore, a high-performance neural network that can adapt to practical memristor arrays while still meeting the high requirements of EEG signal processing is of significant importance.

    2.2.Description of the application task

An EEG signal classification task is used to verify the performance of the network. The application task is described in terms of the dataset, network model training and network validation.

The performance of the network is validated by classifying event-related potential (ERP) EEG data in a four-class classification task using the sample dataset provided in the MNE package.[16,17] The data are acquired using the Neuromag Vectorview system at the MGH/HMS/MIT Athinoula A. Martinos Centre for Biomedical Imaging. Two hundred and eighty-eight samples of EEG data are taken from a 60-channel electrode cap. During data acquisition, checkerboard patterns are presented in the subject's left and right visual fields, interspersed with tones in the left and right ears. The interval between stimuli is 750 ms. A smiley face occasionally appears in the center of the visual field, and subjects are asked to press a key with their right index finger as soon as it appears. The four classes used from this dataset are: LA, left-ear auditory stimulation; RA, right-ear auditory stimulation; LV, left visual field stimulation; and RV, right visual field stimulation.

In order to achieve the four-class classification task, the network model is fitted with the Adam optimizer to minimize the categorical cross-entropy loss function. Training ends after 300 epochs and the model weights that produce the lowest validation-set loss are saved. Dropout with a rate of 0.5 is used to prevent overfitting during small-sample training. The Keras API in TensorFlow on an NVIDIA GeForce RTX 2060 is used to train all models. It is worth noting that we omit bias units in all convolutional layers for ease of hardware implementation.
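A minimal Keras sketch of this training setup is shown below, assuming a hypothetical model-builder function and data arrays; the batch size and checkpoint file name are not specified in the paper and are chosen here only for illustration.

```python
# Sketch of the training configuration described above: Adam optimizer,
# categorical cross-entropy, 300 epochs, best weights kept by validation loss.
# build_model(), the data arrays and batch_size are illustrative assumptions.
import tensorflow as tf

def train_model(build_model, x_train, y_train, x_val, y_val):
    model = build_model()  # e.g. a DSC-BiGRU built without conv biases
    model.compile(optimizer=tf.keras.optimizers.Adam(),
                  loss="categorical_crossentropy",
                  metrics=["accuracy"])
    checkpoint = tf.keras.callbacks.ModelCheckpoint(
        "best_weights.h5", monitor="val_loss",
        save_best_only=True, save_weights_only=True)
    model.fit(x_train, y_train,
              validation_data=(x_val, y_val),
              epochs=300, batch_size=16,      # batch size assumed
              callbacks=[checkpoint], verbose=0)
    model.load_weights("best_weights.h5")     # restore lowest-validation-loss weights
    return model
```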

Considering the limited size of the dataset, the simulation uses five-fold cross-validation to obtain the network classification accuracy. The dataset is divided into five parts, with one part used for validation while the remaining four parts are used for training. In this way five different results are obtained, and the final result is their average. This extracts as much valid information as possible from the limited dataset and yields more accurate results. In addition, the simulation is repeated 10 times to obtain more comprehensive results.
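The evaluation protocol can be sketched as follows, using scikit-learn's KFold only for index splitting; `train_and_score` is a hypothetical helper that trains one model on a split and returns its validation accuracy.

```python
# 5-fold cross-validation repeated 10 times, averaging validation accuracy.
import numpy as np
from sklearn.model_selection import KFold

def repeated_cv_accuracy(x, y, train_and_score, n_repeats=10, n_splits=5):
    scores = []
    for repeat in range(n_repeats):
        kf = KFold(n_splits=n_splits, shuffle=True, random_state=repeat)
        fold_acc = [train_and_score(x[tr], y[tr], x[va], y[va])
                    for tr, va in kf.split(x)]
        scores.append(np.mean(fold_acc))   # average over the 5 folds
    return float(np.mean(scores))          # average over the 10 repetitions
```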

    2.3.Device characterization and analysis

For reliable verification of the network performance and successful hardware implementation of the network, the characteristics of practical memristors are measured. The writing error of the weight modulation is statistically analyzed, and data on the weight fluctuation are provided to support the subsequent performance simulation.

The memristor measurements are based on our fabricated TiN/HfOx/TaOx/TiN 1T1R arrays. The electrical characteristics of the memristor are characterized using a Keithley 4200 SCS parameter analyzer and a board-level verification system equipped with a 1T1R array.

A DC voltage is applied to the gate of the target cell when we perform the modulation operation. During the SET process, the excitation is applied to the top electrode while the source electrode is grounded, as shown in the inset of Fig. 2(a). Conversely, during the RESET process the excitation is applied to the source electrode while the top electrode is grounded. The measured results in Fig. 2(a) show that our 1T1R device possesses excellent bidirectional analogue switching behavior, which allows the device conductance to be modulated continuously during SET and RESET. This aids precise mapping of the network parameters. Importantly, the multiple conductance levels exhibit good retention characteristics, as shown in Fig. 2(b). In order to reduce the total measurement time, 40 randomly selected devices are modulated to 40 different conductance states and their conductance values are measured every hour. These devices maintain stable conductance states over a 24-h period. The 1T1R cell measurements show that the device has good bipolar switching and retention characteristics, supporting validation of the network on the array.

Fig. 2. (a) DC scan characteristics of the 1T1R cell. (b) Retention characteristics of the 1T1R cell. (c) Composition of the 1T1R chip board-level verification system. (d) Statistical analysis of the weight modulation error under different tolerable errors. The tolerable error is the range of device conductance fluctuation tolerated at the end of modulation.

We use a board-level verification system embedded with a 1T1R array to measure memristor characteristics and verify network parameter mapping, as shown in Fig. 2(c). The board has three circuits for writing weights, reading weights and identifying output currents, respectively. In this paper we use a variable gate voltage modulation method in which rough and precise modulation are performed in cooperation. The method is inspired by the variable gate voltage modulation method of Ref. [19]: a threshold of ±20% around the target resistance is set, with precise modulation applied within the threshold and rough modulation outside it. Both rough and precise modulation increase the gate voltage gradually according to the present conductance value; the difference is the step size of the increase. Rough modulation uses a larger step to facilitate fast convergence, while precise modulation uses a smaller step to improve accuracy. In addition, different tolerable errors can be set to meet the requirements of different applications. For example, with a 5% tolerance error, the modulation is considered successful when the device resistance is within ±5% of the target resistance (5% is close to the limit of the system accuracy).
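The write-verify idea behind this rough/precise scheme can be sketched as follows. Only the ±20% rough/precise threshold and the user-set tolerable error come from the text; the step sizes, voltage limits and the read/pulse helper functions are illustrative assumptions, not the board's actual interface.

```python
# Illustrative write-verify loop for variable gate-voltage modulation.
def program_cell(target_g, read_conductance, apply_pulse,
                 tolerance=0.05, rough_step=0.10, fine_step=0.02,
                 v_gate_init=0.8, v_gate_max=2.0, max_pulses=500):
    v_gate = v_gate_init
    for _ in range(max_pulses):
        g = read_conductance()
        err = (g - target_g) / target_g
        if abs(err) <= tolerance:          # within the tolerable error: done
            return True
        # rough modulation outside +/-20% of target, precise modulation inside
        step = rough_step if abs(err) > 0.20 else fine_step
        v_gate = min(v_gate + step, v_gate_max)   # gradually raise the gate voltage
        apply_pulse(v_gate)                # one SET (or RESET) pulse at this gate voltage
    return False                           # did not converge within the pulse budget
```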

The system and modulation method are verified experimentally. The 10 μS–100 μS range is divided equally into 32 conductance states, with 10 devices in each state, and measurements are performed at different tolerable errors. During error analysis, the practical error of the devices meets the requirement of less than 5%, as shown by the cyan histogram in Fig. 2(d). A statistical analysis finds that the error obeys a normal distribution (μ = −0.0028, σ = 0.012619). Following the same approach, the statistical analysis is carried out for tolerable errors of 10%, 20%, 40% and 80%, as shown in Fig. 2(d). The results show that both tails of the fitted curves lie essentially within the set threshold (the mean errors are −0.0072, −0.0153, −0.1195 and −0.2091, respectively), which proves that the modulation method can effectively achieve the tolerable error requirement. We find that starting each measurement from a low conductance state causes the center of the distribution to shift toward the negative axis. This statistical analysis of the weight modulation accuracy digitizes the random weight fluctuations and provides the practical device behavior used in the 1T1R-array validation of the network, making the validation more reliable.
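As a small illustration of this analysis, the sketch below fits a normal distribution to the relative write errors of a batch of programmed devices; the array names and the use of SciPy are assumptions, and the quoted μ, σ values are only those reported above for the 5% tolerance case.

```python
# Fit a normal distribution to the relative write error, as in Fig. 2(d).
import numpy as np
from scipy.stats import norm

def fit_write_error(measured_g, target_g):
    measured_g, target_g = np.asarray(measured_g), np.asarray(target_g)
    rel_err = (measured_g - target_g) / target_g
    mu, sigma = norm.fit(rel_err)   # e.g. mu ~ -0.0028, sigma ~ 0.0126 at 5% tolerance
    return mu, sigma

# 32 target states spread evenly over 10 uS - 100 uS, 10 devices per state:
# targets = np.repeat(np.linspace(10e-6, 100e-6, 32), 10)
```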

    3.SCN-BiGRU network structure and performance simulation

    3.1.SCN-BiGRU network structure

In order to implement the network on memristors, it is of primary importance to reduce the network's size. The proposed SCN-BiGRU network is an innovative small-scale network based on the SCN network. It is obtained by shrinking the classes of convolutional kernels of the SCN network and adding a recurrent neural network. A visualization and full description of the network model are shown in Fig. 3(a), where the input is an EEG trial with C channels (here 60) and S time samples (here 151). The network consists of two blocks: the first is the SCN block, which uses a CNN to extract and learn spatial and frequency features of the EEG signal; the second is the BiGRU block, which learns temporal feature information about the EEG signal through an RNN and finally produces the classification result.

The SCN block extracts and learns spatial features through two sequentially performed convolutions. First, N1 (here four) convolution filters of size (1, L1) are set, whose length is determined by the sampling frequency of the data [here (1, 16)]. A convolution kernel of height 1 acts as a filter, so the output contains EEG features with different band passes; this achieves feature extraction in the frequency domain. In addition, a convolution kernel with a height of 1 only handles the spatial features of the signal within the same moment. Then the spatial features of the signal are extracted using N2 spatial filters of size (C, 1, N1), where N2 = 8 controls the number of spatial filters to be learned. In CNN image processing applications, depthwise convolution reduces the number of parameters that need to be fitted, and each convolutional kernel does not need to learn all the input data. Inspired by the filter-bank common spatial pattern (FBCSP)[18] algorithm for feature extraction, a separate spatial filter is given to the output of each temporal filter using depthwise convolution. Before applying exponential linear units (ELUs), batch normalization is applied along the feature dimension. Then an average pooling layer with a pool size of (1, 35) and a stride of 7 is used to further condense the features.

In the BiGRU block, the obtained feature matrix is cut into blocks in temporal order [here, 15 feature vectors of size (1, 8)] and sent to the GRU sequentially. This achieves feature learning in the temporal domain. The GRU has two gates, a reset gate and an update gate. The reset gate determines how the new input information is combined with previous memories, and the update gate defines how much of the previous memory is carried to the present time. A single layer of bidirectional GRUs is used, which provides the complete past and future information for each cell in the input sequence. The parameter U determines the size of the output vector (here, 16). The output features are then normalized and activated (ELUs). Finally, the information, which condenses the temporal, frequency and spatial characteristics of the signal, is sent into the dense layer and classified by the SoftMax function.
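A hedged Keras sketch of the SCN-BiGRU structure described above is given below, using the stated sizes (C = 60, S = 151, N1 = 4 temporal filters of length 16, N2 = 8 spatial filters, (1, 35) pooling with stride 7, a single bidirectional GRU with U = 16, softmax over four classes). Padding choices, the depth multiplier and other unstated Keras details are assumptions, so it should be read as an approximation of the architecture rather than the authors' exact model.

```python
# Approximate SCN-BiGRU model sketch (assumed Keras details noted above).
from tensorflow.keras import layers, models

def build_scn_bigru(C=60, S=151, n_classes=4, N1=4, N2=8, U=16):
    inp = layers.Input(shape=(C, S, 1))
    # temporal convolution: N1 filters of size (1, 16) act as band-pass filters
    x = layers.Conv2D(N1, (1, 16), use_bias=False)(inp)
    # spatial (depthwise) convolution over the electrode dimension
    x = layers.DepthwiseConv2D((C, 1), depth_multiplier=N2 // N1,
                               use_bias=False)(x)
    x = layers.BatchNormalization()(x)
    x = layers.Activation("elu")(x)
    x = layers.AveragePooling2D((1, 35), strides=(1, 7))(x)
    # reshape into a temporal sequence of 8-dimensional feature vectors
    x = layers.Reshape((-1, N2))(x)              # -> (15, 8) for these sizes
    x = layers.Bidirectional(layers.GRU(U))(x)
    x = layers.BatchNormalization()(x)
    x = layers.Activation("elu")(x)
    out = layers.Dense(n_classes, activation="softmax")(x)
    return models.Model(inp, out)
```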

    3.2.SCN-BiGRU network performance simulation

Hybrid neural networks combining CNN and RNN are used to learn signal features in three dimensions (time, frequency and space). This improves the learning efficiency and reduces the size of the network needed to accomplish the same work. In the comparison, besides our proposed SCN-BiGRU hybrid neural network, three other common combinations are considered. The classification accuracies and scales of the four hybrid networks (SG = SCN-GRU, SBG = SCN-BiGRU, SL = SCN-LSTM, SBL = SCN-BiLSTM) are compared with those of the traditional DCN and SCN, as shown in Fig. 3(b).

Fig. 3. (a) Overall visualization of the SCN-BiGRU network architecture. It mainly consists of a CNN block (SCN block) and an RNN block (BiGRU block). (b) Comparison of the classification accuracy and scale of the hybrid neural networks and the conventional convolutional networks. The SCN-BiGRU network achieves the most balanced performance. (c) Robustness analysis of SCN-BiGRU.

The simulated results show that all four hybrid networks effectively reduce the network size, and the SCN-BiGRU network provides the most balanced classification performance. The bidirectional recurrent network reveals the past and future information of the EEG signal at each time point, so higher accuracy is obtained. Compared with SCN-BiLSTM, the classification accuracy of SCN-BiGRU is reduced by only 0.3% while the network parameters are reduced by 15%. The GRU uses two gates to achieve the information storage and transfer that the LSTM achieves with three gates, which saves network resources and provides a more balanced performance on limited datasets. Compared with SCN, the better-performing of the two convolutional networks, the network parameters are reduced by 96% while the classification accuracy is reduced by only 3.5%.

However, to be implemented on a memristor array, the network must not only fit the array size but also adapt to the non-ideal characteristics of the memristors. These mainly include two aspects: on the one hand, the arrays have a limited yield rate, with damaged devices stuck in a high-resistance state; on the other hand, there are writing errors in the weights, which fluctuate around their target values. Therefore, array yield rate and array errors are introduced to verify the robustness of the network. Some parameters are set to 0 according to the array yield rate to observe the effect of different yield rates on network classification accuracy. The fluctuation of the weights is based on the statistics of the practically measured weight writing errors under different tolerable errors, and the corresponding normal distribution of errors is applied to the parameters of the network. The results are shown in Fig. 3(c): the robustness of SCN-BiGRU is far too poor to meet the needs of the memristor array.
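The robustness test just described can be sketched as follows: weights are zeroed with probability (1 − yield rate) to mimic damaged high-resistance cells, and the surviving weights receive multiplicative Gaussian noise with the measured (μ, σ). The function name and the exact way noise is applied (relative, layer by layer, to all trainable tensors) are assumptions made for illustration.

```python
# Inject array non-idealities into a trained Keras model: yield-rate
# weight drop-out plus normally distributed write error.
import numpy as np

def apply_array_nonidealities(model, yield_rate=0.95,
                              mu=-0.0028, sigma=0.0126, seed=0):
    rng = np.random.default_rng(seed)
    for layer in model.layers:
        weights = layer.get_weights()
        if not weights:
            continue
        perturbed = []
        for w in weights:
            alive = rng.random(w.shape) < yield_rate      # damaged cells -> 0
            noise = rng.normal(mu, sigma, w.shape)        # relative write error
            perturbed.append(np.where(alive, w * (1.0 + noise), 0.0))
        layer.set_weights(perturbed)
    return model
```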

    Therefore, a lightweight and highly robust hybrid neural network is important in EEG signal processing based on 1T1R arrays.

    4.DSC-BiGRU network structure and performance simulation

The proposed DSC-BiGRU network is a lightweight and highly robust hybrid neural network based on 1T1R arrays. Its novelty is that it can be implemented on a practical memristor array of limited size. A visualization and full description of the network model are shown in Fig. 4 and Table 1. The difference from the SCN-BiGRU network is the DSC block used for the convolution part.

Table 1. The DSC-BiGRU architecture.

Fig. 4. Overall visualization of the DSC-BiGRU network architecture. It mainly consists of a CNN block (DSC block) and an RNN block (BiGRU block).

Fig. 5. (a) The effect of array yield rate on the accuracy of the SCN-BiGRU and DSC-BiGRU networks. (b) The effect of array tolerance error on the accuracy of the SCN-BiGRU and DSC-BiGRU networks. (c) Comparison of the classification accuracy and size of DSC-BiGRU and the traditional convolutional networks (DCN, SCN). (d) The effect of array tolerance error on the accuracy of DSC-BiGRU and the traditional convolutional networks (DCN, SCN). (e) The effect of array yield rate on the accuracy of DSC-BiGRU and the traditional convolutional networks (DCN, SCN). (f) Robustness analysis of DSC-BiGRU.

The DSC block can be divided into two parts: spatial feature extraction and depthwise separable convolution. In spatial feature extraction, the two convolution steps are similar to those of the SCN-BiGRU network. First, eight convolution filters of size (1, 16) are set. Then two (60, 1, 8) spatial filters are used to extract the spatial features of the signal. Compared with the SCN-BiGRU network, the number of classes of convolutional kernels is compressed. Before applying ELUs, batch normalization is applied along the feature dimension, and an average pooling layer of size (1, 4) is then used to further condense the features. Following feature extraction, features are learned using a depthwise separable convolution: a depthwise convolution [here, of size (1, 8, 16)] followed by a pointwise convolution [here, of size (1, 1, 16, 16)], which reduces the convolution kernel parameters and improves network robustness compared with a traditional CNN. The principle suits multi-band feature signals such as EEG, where the features in each feature map are first learned individually and then weighted and combined according to their importance. This allows effective learning of features within different frequency bands as well as integrated learning of features across different frequency bands. Finally, an average pooling layer of size (1, 8) is used for feature convergence.
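The separable stage of the DSC block can be sketched in Keras as a depthwise convolution followed by a 1×1 pointwise convolution and (1, 8) average pooling, matching the kernel sizes quoted above; the padding and the placement of the normalization/activation are assumptions rather than details given in the paper.

```python
# Depthwise separable stage of the DSC block (assumed Keras details noted above).
from tensorflow.keras import layers

def dsc_separable_stage(x, n_filters=16):
    # per-feature-map temporal filtering: each band is learned individually
    x = layers.DepthwiseConv2D((1, 8), use_bias=False, padding="same")(x)
    # pointwise 1x1 convolution: weight and combine the bands by importance
    x = layers.Conv2D(n_filters, (1, 1), use_bias=False)(x)
    x = layers.BatchNormalization()(x)
    x = layers.Activation("elu")(x)
    return layers.AveragePooling2D((1, 8))(x)   # final feature convergence
```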

In the BiGRU block, the obtained feature matrix is cut into blocks in temporal order [here, 4 feature vectors of size (1, 16)] in the same way as in the SCN-BiGRU network and sent to the GRU sequentially. The past and future information of the EEG signal features is learned in the BiGRU, and the classification results are computed in the fully connected layer.

    4.1.DSC-BiGRU network performance simulation

To verify that DSC-BiGRU achieves high EEG classification accuracy with limited scale and high robustness, we compare it with SCN-BiGRU and the traditional convolutional networks (DCN, SCN). During the validation, the statistics of the practical weight modulation errors under different tolerable errors are fed into the network to test its robustness.

In Fig. 5(a), some parameters are set to 0 according to the array yield rate to observe the effect of different yield rates on network classification accuracy. In Fig. 5(b), we investigate the effect of different weight fluctuations on network accuracy by adding normally distributed noise with the statistics measured at different allowable errors to the network parameters. The results clearly show that the network size of DSC-BiGRU is comparable to that of SCN-BiGRU (4416 parameters for SCN-BiGRU, 4672 for DSC-BiGRU) but its robustness is better, which demonstrates that it is more suitable for memristor arrays than the SCN-BiGRU network.

DSC-BiGRU maintains a classification accuracy similar to that of the traditional convolutional networks, with 95% and 97% reductions in network parameters, respectively, as shown in Fig. 5(c). To compare the performance of the DCN, SCN and DSC-BiGRU networks on a non-ideal memristor array, we examine the effect of different tolerance errors at a constant yield rate of 95% (Fig. 5(d)) and the effect of different yield rates at a constant tolerance error of 5% (Fig. 5(e)). With the 1T1R array non-idealities added, the performance advantage of the DSC-BiGRU network over the traditional convolutional networks becomes even more significant. Specifically, with a 95% array yield rate and 5%, 10% and 20% tolerance errors, the EEG classification accuracy of the DSC-BiGRU network is improved by 21%, 26% and 22%, respectively. We also find that tolerance errors and yield rate affect the network differently. As shown in Fig. 5(f), the classification accuracy of the network is stable over the 0%–20% tolerance-error range but decreases as the array yield rate decreases. These results directly demonstrate that small deviations in all weights have less impact than missing some of the weights, from which it can be inferred that the reliability of neural network performance on a memristor array mainly depends on the complete mapping of the weights. This provides guidance for future neural network hardware implementations on memristor arrays.

    5.Conclusion and perspectives

In summary, we propose a lightweight and highly robust hybrid neural network (DSC-BiGRU) which uses DSC and BiGRU hybrid blocks to classify EEG signals by extracting and learning EEG feature information in three dimensions. The simulated experiments use the error distribution extracted from practical devices. The simulated results demonstrate that the parameters of the DSC-BiGRU network are substantially reduced and its classification accuracy is significantly improved compared with the DCN and SCN under the same non-ideal conditions. This provides a new algorithm for implementing neural networks on larger memristor arrays in the future. Lightweight and highly robust memristor-based hybrid neural networks are of great importance for applications such as EEG signal processing.

    Acknowledgments

Project supported by the National Key Research and Development Program of China (Grant No. 2019YFB2205102) and the National Natural Science Foundation of China (Grant Nos. 61974164, 62074166, 61804181, 62004219, 62004220, and 62104256).

    猜你喜歡
    孫毅王偉清江
    CHARACTERIZATION OF RESIDUATED LATTICES VIA MULTIPLIERS*
    財政局局長“公款生財”
    檢察風云(2022年15期)2022-08-03 08:41:34
    小 蝌 蚪 的 尾 巴
    Convection: a neglected pathway for downward transfer of wind energy in the oceanic mixed layer*
    清江引
    影劇新作(2018年1期)2018-05-26 09:00:52
    山里的“小道士”
    故事會(2018年8期)2018-04-26 09:48:28
    藝術百家 王偉
    電影文學(2017年10期)2017-12-27 00:47:46
    魚躍清江 廣場舞
    文化交流(2017年8期)2017-09-14 22:03:27
    同飲清江水 共護母親河——首個“清江保護日”在長陽舉行
    學習月刊(2015年23期)2015-07-09 05:41:54
    清江吟
    江河文學(2011年1期)2011-12-26 01:51:42
    男人舔女人的私密视频| 国产一区亚洲一区在线观看| freevideosex欧美| 国产亚洲午夜精品一区二区久久| 国产老妇伦熟女老妇高清| 亚洲国产最新在线播放| 在线观看国产h片| 中文字幕精品免费在线观看视频 | 国产一区二区三区av在线| 国产精品免费大片| 亚洲国产欧美日韩在线播放| 欧美激情极品国产一区二区三区 | 好男人视频免费观看在线| 国产在线一区二区三区精| 亚洲,欧美,日韩| 搡女人真爽免费视频火全软件| 观看美女的网站| 999精品在线视频| 如日韩欧美国产精品一区二区三区| 99久久综合免费| 男女下面插进去视频免费观看 | 日日啪夜夜爽| 80岁老熟妇乱子伦牲交| 精品一品国产午夜福利视频| 国产精品 国内视频| 卡戴珊不雅视频在线播放| 亚洲图色成人| 中国三级夫妇交换| 国产又色又爽无遮挡免| 丰满迷人的少妇在线观看| 午夜影院在线不卡| 免费不卡的大黄色大毛片视频在线观看| 天堂俺去俺来也www色官网| 中文字幕另类日韩欧美亚洲嫩草| 久久97久久精品| 国产亚洲精品第一综合不卡 | 亚洲国产日韩一区二区| 又黄又粗又硬又大视频| 国产精品偷伦视频观看了| 新久久久久国产一级毛片| 亚洲欧洲日产国产| 欧美亚洲日本最大视频资源| 精品人妻在线不人妻| 国产黄色免费在线视频| 99九九在线精品视频| 国产精品不卡视频一区二区| 成人国语在线视频| 熟女人妻精品中文字幕| 一级片免费观看大全| 黄网站色视频无遮挡免费观看| 人人妻人人澡人人看| 国产有黄有色有爽视频| 亚洲丝袜综合中文字幕| 九色亚洲精品在线播放| 亚洲情色 制服丝袜| 国产永久视频网站| 国产av一区二区精品久久| 97精品久久久久久久久久精品| 伦理电影免费视频| 欧美日韩视频精品一区| 捣出白浆h1v1| 国产亚洲精品第一综合不卡 | 777米奇影视久久| 久久99精品国语久久久| 国产精品人妻久久久久久| 久久这里有精品视频免费| 欧美97在线视频| 国产又爽黄色视频| 中国美白少妇内射xxxbb| 精品人妻熟女毛片av久久网站| 满18在线观看网站| www.av在线官网国产| 黄色配什么色好看| 伦精品一区二区三区| 亚洲国产看品久久| 又黄又爽又刺激的免费视频.| 亚洲美女搞黄在线观看| 日韩一区二区三区影片| 日韩免费高清中文字幕av| 18禁在线无遮挡免费观看视频| 欧美人与性动交α欧美软件 | 久久久国产欧美日韩av| 三上悠亚av全集在线观看| 黄网站色视频无遮挡免费观看| 最近2019中文字幕mv第一页| 国产精品久久久久成人av| 亚洲高清免费不卡视频| 国产精品一区二区在线不卡| 桃花免费在线播放| 91aial.com中文字幕在线观看| 免费在线观看黄色视频的| 18在线观看网站| 精品国产露脸久久av麻豆| 在线精品无人区一区二区三| 国产日韩欧美在线精品| 十八禁高潮呻吟视频| 精品人妻一区二区三区麻豆| 国产亚洲精品第一综合不卡 | 街头女战士在线观看网站| 纵有疾风起免费观看全集完整版| 伊人亚洲综合成人网| 1024视频免费在线观看| 精品久久久久久电影网| 免费不卡的大黄色大毛片视频在线观看| 国产亚洲午夜精品一区二区久久| 成人18禁高潮啪啪吃奶动态图| 国产免费视频播放在线视频| 久久国产亚洲av麻豆专区| 国产精品免费大片| 亚洲国产精品专区欧美| 国产一区二区三区av在线| 中文天堂在线官网| 黑丝袜美女国产一区| 久久久久精品人妻al黑| 精品人妻在线不人妻| 久久精品国产亚洲av涩爱| 精品少妇黑人巨大在线播放| 成人毛片a级毛片在线播放| a级毛片在线看网站| 亚洲激情五月婷婷啪啪| 久久99蜜桃精品久久| 欧美人与性动交α欧美精品济南到 | 爱豆传媒免费全集在线观看| 色吧在线观看| 26uuu在线亚洲综合色| 亚洲精品乱久久久久久| 欧美精品国产亚洲| 久久久精品94久久精品| 久久青草综合色| 精品久久国产蜜桃| 亚洲美女黄色视频免费看| 婷婷色综合大香蕉| 精品国产乱码久久久久久小说| 精品一区二区三区四区五区乱码 | 久久精品熟女亚洲av麻豆精品| 欧美成人精品欧美一级黄| 下体分泌物呈黄色| 免费日韩欧美在线观看| 亚洲综合精品二区| 高清欧美精品videossex| 亚洲四区av| 亚洲精品日本国产第一区| a 毛片基地| 女性被躁到高潮视频| 嫩草影院入口| 青春草国产在线视频| 内地一区二区视频在线| 国产一区有黄有色的免费视频| 日韩欧美一区视频在线观看| 亚洲性久久影院| 欧美人与善性xxx| 欧美激情国产日韩精品一区| 夫妻性生交免费视频一级片| 午夜福利网站1000一区二区三区| 成人免费观看视频高清| 国产在线视频一区二区| 成人亚洲精品一区在线观看| 九九在线视频观看精品| 90打野战视频偷拍视频| 久久精品人人爽人人爽视色| 精品少妇黑人巨大在线播放| 熟女av电影| 午夜福利乱码中文字幕| 免费少妇av软件| 三上悠亚av全集在线观看| 国产亚洲av片在线观看秒播厂| h视频一区二区三区| 久久精品熟女亚洲av麻豆精品| 亚洲,欧美,日韩| 91精品三级在线观看| 国产成人精品在线电影| 人成视频在线观看免费观看| 成人国产麻豆网| 亚洲欧美成人综合另类久久久| 日日撸夜夜添| 男女边吃奶边做爰视频| 成人18禁高潮啪啪吃奶动态图| 一二三四中文在线观看免费高清| 亚洲综合精品二区| 欧美xxxx性猛交bbbb| 青春草国产在线视频| 女人被躁到高潮嗷嗷叫费观| 十八禁高潮呻吟视频| 男人添女人高潮全过程视频| 国产不卡av网站在线观看| 亚洲国产日韩一区二区| 最近中文字幕2019免费版| 天天躁夜夜躁狠狠久久av| 亚洲精品aⅴ在线观看| av在线观看视频网站免费| 夫妻午夜视频| 丝袜脚勾引网站| 中文字幕免费在线视频6| 少妇精品久久久久久久| 少妇的逼水好多| 亚洲av电影在线进入| 一级,二级,三级黄色视频| 国产一区二区三区av在线| 欧美bdsm另类| 日韩大片免费观看网站| 2018国产大陆天天弄谢| 国产成人a∨麻豆精品| 亚洲精品国产av成人精品| 最近的中文字幕免费完整| 久久久久久人人人人人| 久久久国产欧美日韩av| 国产淫语在线视频| 亚洲av电影在线进入| a级毛片在线看网站| 18在线观看网站| 一本—道久久a久久精品蜜桃钙片| 久久精品久久精品一区二区三区| 亚洲综合精品二区| 精品亚洲成a人片在线观看| 亚洲欧美成人精品一区二区| 日本黄色日本黄色录像| 一级片免费观看大全| 777米奇影视久久| 久久青草综合色| 丰满少妇做爰视频| 少妇熟女欧美另类| 高清毛片免费看| 欧美97在线视频| 亚洲中文av在线| 交换朋友夫妻互换小说| 哪个播放器可以免费观看大片| 久久久久久伊人网av| 狂野欧美激情性xxxx在线观看| 国产成人a∨麻豆精品| 国产老妇伦熟女老妇高清| 久久精品久久久久久久性| 丝袜在线中文字幕| 在线观看人妻少妇| 色5月婷婷丁香| 看免费成人av毛片| 大香蕉久久网| 亚洲国产精品一区二区三区在线| 亚洲av中文av极速乱| 日本wwww免费看| 国产视频首页在线观看| 中国国产av一级| 王馨瑶露胸无遮挡在线观看| 国产精品一区www在线观看| 丝袜在线中文字幕| 亚洲丝袜综合中文字幕| 亚洲精品美女久久av网站| av有码第一页| 亚洲国产日韩一区二区| 日韩av不卡免费在线播放| 黑人高潮一二区| 大片免费播放器 马上看| 欧美丝袜亚洲另类| 日韩精品免费视频一区二区三区 | 老女人水多毛片| 精品一品国产午夜福利视频| 考比视频在线观看| 中文字幕另类日韩欧美亚洲嫩草| 国产精品久久久久久久电影| 国产成人免费观看mmmm| 99久久综合免费| 欧美成人精品欧美一级黄| 亚洲,欧美精品.| 亚洲在久久综合| 成人手机av| 国产片内射在线| 日日爽夜夜爽网站| 国产片特级美女逼逼视频| a级毛色黄片| 十八禁高潮呻吟视频| 亚洲性久久影院| 国产男女超爽视频在线观看| 热99久久久久精品小说推荐| 深夜精品福利| 青春草视频在线免费观看| 看免费成人av毛片| 国产成人精品久久久久久| 国产熟女欧美一区二区| 视频在线观看一区二区三区| 精品国产一区二区久久| 欧美少妇被猛烈插入视频| 热re99久久国产66热| 欧美xxxx性猛交bbbb| 久久精品国产a三级三级三级| 少妇被粗大的猛进出69影院 | 
日本-黄色视频高清免费观看| av免费在线看不卡| 夜夜骑夜夜射夜夜干| 久久精品久久久久久久性| 国产在线视频一区二区| 亚洲精品日韩在线中文字幕| 在线观看三级黄色| 大香蕉97超碰在线| 成人午夜精彩视频在线观看| 日韩av在线免费看完整版不卡| 国产国拍精品亚洲av在线观看| 人妻人人澡人人爽人人| tube8黄色片| 日本-黄色视频高清免费观看| 国产国语露脸激情在线看| xxx大片免费视频| 久久韩国三级中文字幕| 三级国产精品片| 久久精品aⅴ一区二区三区四区 | 青春草亚洲视频在线观看| 中文字幕av电影在线播放| 热99国产精品久久久久久7| 热re99久久精品国产66热6| 99久久人妻综合| 赤兔流量卡办理| 99视频精品全部免费 在线| 欧美bdsm另类| 日本vs欧美在线观看视频| 在线观看美女被高潮喷水网站| 少妇人妻精品综合一区二区| 免费少妇av软件| 欧美日韩一区二区视频在线观看视频在线| 日本与韩国留学比较| 国产欧美另类精品又又久久亚洲欧美| 在线观看国产h片| 丰满少妇做爰视频| 欧美成人精品欧美一级黄| 亚洲国产看品久久| 亚洲精华国产精华液的使用体验| 在线天堂最新版资源| 一区二区三区乱码不卡18| 菩萨蛮人人尽说江南好唐韦庄| 国产男女内射视频| 久久久久久久精品精品| 欧美国产精品va在线观看不卡| 亚洲五月色婷婷综合| 国产免费又黄又爽又色| 久久久久国产网址| 欧美亚洲 丝袜 人妻 在线| 一边亲一边摸免费视频| 亚洲精品国产av成人精品| 亚洲国产成人一精品久久久| 高清视频免费观看一区二区| 99香蕉大伊视频| 国产日韩一区二区三区精品不卡| 免费日韩欧美在线观看| 纯流量卡能插随身wifi吗| 久久精品国产亚洲av涩爱| 欧美日韩成人在线一区二区| 一边亲一边摸免费视频| videossex国产| 欧美3d第一页| 精品视频人人做人人爽| www.av在线官网国产| 久久女婷五月综合色啪小说| 永久网站在线| 这个男人来自地球电影免费观看 | 两个人看的免费小视频| 久久青草综合色| 国产成人欧美| 91精品三级在线观看| 国产永久视频网站| 成人国语在线视频| 两个人看的免费小视频| 18禁在线无遮挡免费观看视频| 国产精品国产三级国产专区5o| 亚洲国产精品国产精品| 色5月婷婷丁香| 亚洲一区二区三区欧美精品| 亚洲精品色激情综合| 国产免费视频播放在线视频| 久久精品国产a三级三级三级| 夜夜爽夜夜爽视频| 国产精品人妻久久久久久| 狠狠婷婷综合久久久久久88av| 黄片无遮挡物在线观看| 亚洲av男天堂| 精品久久久精品久久久| 丁香六月天网| 久久久久久久久久人人人人人人| 99香蕉大伊视频| 亚洲国产精品一区三区| 人妻少妇偷人精品九色| 亚洲国产精品国产精品| 在线精品无人区一区二区三| 亚洲av欧美aⅴ国产| 中文字幕人妻熟女乱码| 国产精品人妻久久久影院| 亚洲婷婷狠狠爱综合网| 欧美日韩亚洲高清精品| 欧美变态另类bdsm刘玥| 满18在线观看网站| 日日撸夜夜添| 美女内射精品一级片tv| 免费黄色在线免费观看| 国产又爽黄色视频| 精品国产露脸久久av麻豆| 草草在线视频免费看| 亚洲综合精品二区| 亚洲第一av免费看| 99视频精品全部免费 在线| 七月丁香在线播放| 精品卡一卡二卡四卡免费| 国产深夜福利视频在线观看| 亚洲国产精品999| 午夜日本视频在线| av免费在线看不卡| 毛片一级片免费看久久久久| 国产成人欧美| 精品一品国产午夜福利视频| 免费高清在线观看日韩| 国产国语露脸激情在线看| 国产激情久久老熟女| 91精品伊人久久大香线蕉| 在线亚洲精品国产二区图片欧美| 99re6热这里在线精品视频| 18在线观看网站| av国产精品久久久久影院| 中文字幕免费在线视频6| 亚洲成av片中文字幕在线观看 | 大片免费播放器 马上看| 国产日韩欧美视频二区| 久热这里只有精品99| 成人毛片a级毛片在线播放| 美女国产高潮福利片在线看| 欧美精品av麻豆av| 欧美xxxx性猛交bbbb| 午夜免费男女啪啪视频观看| 中文天堂在线官网| 制服人妻中文乱码| 国产乱来视频区| 欧美激情极品国产一区二区三区 | 99久国产av精品国产电影| 人妻 亚洲 视频| 久久人妻熟女aⅴ| www.色视频.com| 国产免费一区二区三区四区乱码| 老熟女久久久| av有码第一页| 日本色播在线视频| 黑人欧美特级aaaaaa片| 999精品在线视频| 成人亚洲欧美一区二区av| 免费不卡的大黄色大毛片视频在线观看| 色婷婷久久久亚洲欧美| 亚洲性久久影院| 亚洲精品视频女| 欧美精品国产亚洲| 久久精品久久久久久久性| 赤兔流量卡办理| 街头女战士在线观看网站| 18禁观看日本| 亚洲国产精品成人久久小说| 国产色婷婷99| 国产男女超爽视频在线观看| 国产一区二区在线观看日韩| 亚洲成人手机| 日韩一区二区视频免费看| 国产精品成人在线| 国产免费一区二区三区四区乱码| 国产色爽女视频免费观看| 国产综合精华液| 亚洲五月色婷婷综合| 成年女人在线观看亚洲视频| 美女大奶头黄色视频| av在线观看视频网站免费| 色哟哟·www| 卡戴珊不雅视频在线播放| a级毛色黄片| 亚洲精品一二三| 久久久久精品人妻al黑| 寂寞人妻少妇视频99o| 亚洲欧洲国产日韩| 大片免费播放器 马上看| 亚洲av在线观看美女高潮| 日韩熟女老妇一区二区性免费视频| 七月丁香在线播放| 秋霞在线观看毛片| 成人亚洲精品一区在线观看| 美女国产视频在线观看| 看免费av毛片| 精品亚洲成a人片在线观看| 欧美老熟妇乱子伦牲交| 91aial.com中文字幕在线观看| 一本—道久久a久久精品蜜桃钙片| 国产成人a∨麻豆精品| 中文字幕亚洲精品专区| www.av在线官网国产| 欧美bdsm另类| 亚洲高清免费不卡视频| 国产日韩欧美亚洲二区| 老司机影院毛片| 69精品国产乱码久久久| 日本爱情动作片www.在线观看| 啦啦啦啦在线视频资源| 国产熟女欧美一区二区| 七月丁香在线播放| 亚洲欧美中文字幕日韩二区| 亚洲欧美成人精品一区二区| 亚洲欧美成人综合另类久久久| 欧美 亚洲 国产 日韩一| 久久av网站| 桃花免费在线播放| 国产老妇伦熟女老妇高清| 日本黄色日本黄色录像| 99久久中文字幕三级久久日本| 国产成人aa在线观看| 91精品三级在线观看| 十分钟在线观看高清视频www| 国产精品熟女久久久久浪| 免费看不卡的av| 日韩不卡一区二区三区视频在线| 一本大道久久a久久精品| 国产av一区二区精品久久| 只有这里有精品99| 精品国产国语对白av| 亚洲丝袜综合中文字幕| 欧美精品一区二区免费开放| 美国免费a级毛片| 巨乳人妻的诱惑在线观看| 在线观看免费日韩欧美大片| 久久97久久精品| tube8黄色片| 日本午夜av视频| 曰老女人黄片| 人人妻人人澡人人爽人人夜夜| 免费久久久久久久精品成人欧美视频 | 五月伊人婷婷丁香| 曰老女人黄片| 女性生殖器流出的白浆| 日韩av免费高清视频| 精品99又大又爽又粗少妇毛片| 男女免费视频国产| 热re99久久国产66热| 美女中出高潮动态图| 久久久久精品久久久久真实原创| 在线 av 中文字幕| 亚洲经典国产精华液单| 多毛熟女@视频| 国产乱人偷精品视频| 人成视频在线观看免费观看| 黄色怎么调成土黄色| 乱码一卡2卡4卡精品| 婷婷色综合大香蕉| 春色校园在线视频观看| 国产在线视频一区二区| 色哟哟·www| 久久久国产欧美日韩av| 超色免费av| 97人妻天天添夜夜摸| 夫妻性生交免费视频一级片| 女人精品久久久久毛片| 在线 av 中文字幕| 18+在线观看网站| 免费久久久久久久精品成人欧美视频 | 啦啦啦视频在线资源免费观看| 人妻 亚洲 视频| 爱豆传媒免费全集在线观看| 亚洲图色成人| 只有这里有精品99| 久久久久国产精品人妻一区二区| 99视频精品全部免费 在线| 免费观看无遮挡的男女| 久久国产精品男人的天堂亚洲 | 国产亚洲精品久久久com| 看免费成人av毛片| 大片电影免费在线观看免费| 久久人人97超碰香蕉20202| 街头女战士在线观看网站| 丰满少妇做爰视频| 夜夜爽夜夜爽视频| 日产精品乱码卡一卡2卡三| 日本欧美国产在线视频| 免费看光身美女| 亚洲内射少妇av| 国产日韩欧美在线精品| 
免费播放大片免费观看视频在线观看| 97人妻天天添夜夜摸| av.在线天堂| 99九九在线精品视频| 亚洲中文av在线| 精品少妇久久久久久888优播| 欧美变态另类bdsm刘玥| 国产亚洲午夜精品一区二区久久| 国产精品不卡视频一区二区| 午夜免费鲁丝| 国产一区二区三区综合在线观看 | av在线app专区| 精品一区二区三区四区五区乱码 | 中文字幕最新亚洲高清| 在线观看免费视频网站a站| 久久久久久伊人网av| 亚洲美女搞黄在线观看| 美女主播在线视频| 亚洲精品456在线播放app| 香蕉国产在线看| 天堂俺去俺来也www色官网| 2022亚洲国产成人精品| 97精品久久久久久久久久精品| 91国产中文字幕| 欧美 日韩 精品 国产| 黄片无遮挡物在线观看| 婷婷成人精品国产| 国产黄频视频在线观看| 超碰97精品在线观看| 欧美人与性动交α欧美精品济南到 | 亚洲精品乱码久久久久久按摩| 五月伊人婷婷丁香| 三级国产精品片| 国产熟女午夜一区二区三区| 欧美成人午夜免费资源| 亚洲av.av天堂| 久久精品国产鲁丝片午夜精品| 少妇人妻 视频| 亚洲精品中文字幕在线视频| 亚洲熟女精品中文字幕| 美女视频免费永久观看网站| 亚洲色图 男人天堂 中文字幕 | 国产日韩欧美在线精品| 五月玫瑰六月丁香| 亚洲第一av免费看|