
    Retraining Deep Neural Network with Unlabeled Data Collected in Embedded Devices


    Hong-Xu Cheng | Le-Tian Huang | Jun-Shi Wang | Masoumeh Ebrahimi

Abstract—Because of computational complexity, the deep neural network (DNN) in embedded devices is usually trained on high-performance computers or graphics processing units (GPUs), and only the inference phase is implemented in embedded devices. Data processed by embedded devices, such as smartphones and wearables, are usually personalized, so a DNN model trained on public data sets may have poor accuracy when inferring the personalized data. As a result, retraining the DNN with personalized data collected locally in embedded devices is necessary. Nevertheless, retraining needs labeled data sets, while the data collected locally are unlabeled, so how to retrain the DNN with unlabeled data is a problem to be solved. This paper proves the necessity of retraining the DNN model with personalized data collected in embedded devices after it has been trained with public data sets. It also proposes a label generation method by which a fake label is generated for each unlabeled training case according to the user's feedback, so that retraining can be performed with unlabeled data collected in embedded devices. The experimental results show that our fake label generation method has both good training effects and wide applicability. Advanced neural networks can be trained with unlabeled data from embedded devices, and the individualized accuracy of the DNN model can be gradually improved with personal use.

Index Terms—Deep neural network (DNN), embedded devices, fake label, retraining.

1. Introduction

The deep neural network (DNN) has achieved many breakthroughs in various fields, such as image classification, speech recognition, and natural language processing [1]. However, because of the high computational complexity and memory overhead of DNN algorithms, it is seldom fully implemented in embedded devices. Some applications use DNN in the cloud through the Internet [2], but then privacy security and real-time response cannot be guaranteed [3]. For example, when face recognition is used to unlock a smartphone, the unlocking ought to work correctly and rapidly even without the Internet. Moreover, the users' face pictures captured by the phone are personal privacy, so there is a high risk of privacy disclosure when using DNN through the Internet. Therefore, for applications with high demands on privacy security or real-time response, using DNN locally is necessary.

In general, deep learning includes two major phases, training and inference. In training, each layer of the model is assigned weights initialized with random numbers, and then the model is fed with cases of the objects to be detected or recognized, predicting the class label of each case. This is the forward pass of the training phase. After that, the predicted label is compared against the real label to compute an error via a loss function. Then the error is propagated backward through the network to update the weights with a weight update algorithm, such as stochastic gradient descent. This is the backward pass of the training phase. Inference, in contrast, only comprises a forward pass, in which a trained model is used to infer/predict the label of some samples. Obviously, the training phase has much more overhead than inference when implemented on embedded systems. Thus in some DNN implementations [4], [5], the training phase is performed externally on graphics processing unit (GPU) based high-performance computers, and only the weights are transmitted to the embedded devices for inference after training. This is called off-chip learning. Although off-chip learning is power and computation friendly to embedded systems, it is not appropriate for all application scenarios.
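To make the two phases concrete, the following sketch (ours, not the authors' code) shows one training step with a forward and a backward pass versus a forward-only inference call, using TensorFlow; the model and variable names are hypothetical.

```python
# Illustrative sketch of one training step vs. inference with TensorFlow;
# `model` stands for any Keras classifier that outputs class probabilities.
import tensorflow as tf

loss_fn = tf.keras.losses.SparseCategoricalCrossentropy()
optimizer = tf.keras.optimizers.SGD(learning_rate=0.01)    # stochastic gradient descent

def train_step(model, images, labels):
    # Forward pass: predict the class distribution for the batch.
    with tf.GradientTape() as tape:
        predictions = model(images, training=True)
        loss = loss_fn(labels, predictions)    # compare predicted and real labels
    # Backward pass: propagate the error and update the weights.
    grads = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))
    return loss

def infer(model, images):
    # Inference: forward pass only, no labels and no weight updates.
    return tf.argmax(model(images, training=False), axis=-1)
```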

Since the data sets processed by embedded devices, such as smartphones and wearables, are usually personalized, a neural network model trained with public data sets may not be good at inferring the personalized data. For example, in the handwriting digit recognition application in our experiment, the neural network is fully trained on the public data set (MNIST) and reaches an accuracy of 99.14% on its test data set; the model is then used for inferring handwriting digits written by 3 different people, and the accuracy is shown in Fig. 1. As everyone has his/her own writing habits, the model gets different accuracies of 87.36%, 82.8%, and 88.90%, respectively, all far lower than 99.14%. Consequently, to solve this problem, retraining the neural network model with personalized data sets collected locally in embedded devices is necessary.

Fig. 1. Neural network performance on digits written by different persons; the network is trained on the training data set of MNIST.

Some on-chip learning implementations of DNN have also been proposed, in which both the training and inference phases of the neural network algorithm are performed in embedded devices [6], [7]. Nevertheless, those studies mainly focus on the hardware implementation and optimization of training algorithms within strict energy and computing power limits, while the personalization of data sets and how to train the neural network with unlabeled data collected in embedded devices are not addressed.

In this paper, we first analyze the personalization of the data sets collected in embedded devices and prove by experiments that retraining neural networks in embedded systems is necessary. Second, we propose a fake label generation algorithm to solve the problem that the data sets collected in embedded devices are unlabeled. The experimental results show that fake labels generated by the algorithm can effectively train the neural network.

Section 2 presents related work on applying deep neural networks in embedded systems. Section 3 proves the necessity of retraining neural networks in embedded systems by experiments. Section 4 proposes the solution for retraining DNN with unlabeled data collected in embedded devices. Section 5 evaluates the proposed method, and Section 6 gives the conclusion.

2. Related Works

2.1. DNN in Embedded Devices

Many efforts have been made to apply DNN in embedded systems. One approach is model miniaturization, which reduces the size of the neural network model so that embedded devices can run it within their power and performance limits. Two techniques can be applied to model miniaturization. One is adjusting the model structure and then training a small model directly, which includes many different methods, such as the binarized neural network [8], depth-wise convolution [9], and kernel reduction [10]. The other technique is model compression, in which only a small part of the model is modified and retraining is not needed; the corresponding methods include quantization, code optimization, pruning, and their integration [11].

Another approach to running DNN in embedded devices is the accelerator, with both software and hardware implementations. In a software accelerator, such as DeepX [12], a pair of resource control algorithms is used to optimize resource usage, allowing even large-scale deep learning models to execute efficiently on modern mobile processors. The purpose of a hardware accelerator is to enhance the computing power of embedded devices while optimizing energy consumption. Because of the high overhead of the training phase, many hardware accelerators [13]-[16] only implement inference. Even though some works also implement online training in the accelerator [17], the problem that the data sets collected locally for training are unlabeled is not mentioned.

In addition, [18] analyzed the challenges and characteristics that can be exploited in embedded DNN processing and introduced some promising algorithmic and processor techniques that bring deep learning to Internet of things (IoT) and edge devices. Some researchers propose to apply DNN for IoT with edge computing to offload cloud tasks, and an elastic model of DNN for IoT with edge computing is formulated [19]. Furthermore, distributed deep neural networks (DDNNs), which can accommodate DNN in the cloud and at the edge and end devices, have been proposed to improve the recognition accuracy and reduce the communications cost [20].

To summarize, all the efforts above focus more on running the neural network model faster in embedded devices while consuming fewer resources, such as energy and area. Thus, the personalization and the lack of labels of the data sets collected in embedded devices are usually ignored. In this paper, the necessity of retraining is proven and a solution for training the neural network with unlabeled data is proposed.

2.2. Retraining

The word "retraining" is mentioned in many papers related to hardware neural networks [21]-[23], but the purposes of retraining differ across works. Reference [21] proposed to leverage the error resiliency of the neural network to mitigate timing errors in neural network accelerators. Retraining is needed to update the weights of the neural network, but since critical timing errors significantly affect the output results, handling those errors remains necessary. Reference [22] proposed that the power consumption of the multilayer perceptron accelerator during classification can be reduced by approximation, such as reducing bit-precision and using inexact multiplication; furthermore, retraining the network after approximation can improve the accuracy while retaining the power savings. LightNN [23], [24] was introduced and compared with various quantized DNNs; retraining here was used to compensate for the accuracy loss caused by quantization. All the retraining mentioned above uses the same training data sets as the pre-training for the purpose of compensating for the accuracy loss. Nevertheless, in this paper, the data sets used for retraining are collected in embedded devices, which are different from the public data sets used in pre-training. Further, the personalization of data collected locally is analyzed and the necessity of retraining is proved. Moreover, to retrain with unlabeled collected data, a fake label generation method is proposed.

3. Necessity of Retraining Locally

To prove the necessity of retraining the neural network with data sets collected in embedded devices, a convolutional neural network (CNN) shown in Fig. 2 (described in detail in subsection 5.1) is fully trained on the training data set of MNIST and finally reaches an accuracy of 99.14% on the test data set of MNIST.

Fig. 2. Topology of the CNN for the MNIST data sets used in the experiments.

Fig. 1 presents the handwriting digits collected from three different people and those from the MNIST test data set, from which we can see that different people have different handwriting habits, so the digits they write differ noticeably. These differences may affect the accuracy when a pre-trained neural network model is used to recognize digits written by different people. To prove this point, the fully trained model is used to infer ten people's handwriting digits and gets low accuracy (less than 90%) in every case, which indicates that a neural network fully trained on public data sets may not perform well when inferring personalized data sets. Further, each person's handwriting digits are divided into two parts, a training set and a test set. The fully pre-trained model is retrained on one person's training set, and the retrained model is then tested on the test set of the same person.

The accuracy of the model before and after retraining on each person's data set is shown in Table 1. Table 1 illustrates that after retraining, the accuracy on each person's test data increases by more than 10%; therefore, after the neural network model trained with public data sets is transferred to an embedded device, retraining it with locally collected data is necessary.

Table 1: Comparison of accuracy before and after retraining

4. Training with Unlabeled Data Collected in Embedded Devices

Embedded devices, such as smartphones and wearables, are usually equipped with many sensors, so it is easy for them to collect data in situ [26]. Fig. 3 shows an example of data collection in a handwriting digit recognition application on a smartphone. When someone writes a digit on the touch screen of the smartphone, the digit is sampled as an object in the format of an integer array. On one hand, the handwriting digit object is sent to the neural network model, and the model infers the digit and gives a prediction. On the other hand, the object is saved to some dedicated storage, such as the secure digital card in the smartphone, and the collected digit objects will be used to train the neural network. However, the collected digits have no labels, so labeling these digits before using them to train the neural network is necessary.

In this scenario, only the user who writes the digit knows the corresponding label. If the application asked the user to label the digit, the user's experience would be seriously degraded. Although the user cannot label the handwriting digit directly, he/she may give some feedback on the prediction result. For example, the user writes a "9" on the touch screen; if the neural network model recognizes it as "9", the user may press the "ensure" or "next" button and implicitly give the feedback of "correct prediction" at the same time, otherwise he/she may press the "delete" button and implicitly give the feedback of "wrong prediction". Such feedback exists widely in embedded applications. As another example, in a speech control application, the user may say "Let there be light" to turn on the lights; if the lights are not turned on, the user will say the words again and thereby give the implicit feedback of "wrong prediction", otherwise the user will not repeat the words and gives the implicit feedback of "correct prediction". This kind of feedback is named correctness feedback (CF) in this paper. Moreover, as shown in Fig. 4, a labeling method using CF to generate a fake label is proposed, and then the unlabeled case coupled with its corresponding fake label can be used to train the neural network.

Fig. 4. Generating the fake label with CF and retraining the neural network.

To figure out how fake labels are generated from CF, we should first understand how the loss function is calculated with real labels. Most neural networks used for classification adopt the softmax layer as the output layer, and use the cross entropy between the prediction distribution and the real distribution as the loss function [27]-[29]. As shown in Fig. 5, assume that the output of the last hidden layer is $y_j$, $j \in \{1,2,\dots,n\}$; then the softmax layer maps the output to a distribution over $[0,1]$ by

$$a_j = \frac{e^{y_j}}{\sum_{k=1}^{n} e^{y_k}}, \quad j \in \{1,2,\dots,n\}.$$

Fig. 5. Neural network adopting the softmax layer as the output layer.

The cross entropy loss function is defined as

$$h(p, q) = -\sum_{j=1}^{n} p(j) \log q(j)$$

where $p$ is the real distribution corresponding to the real label, i.e., for an $n$-classification model, if the real label indicates class $i$, then

$$p(j) = \begin{cases} 1, & j = i \\ 0, & j \ne i \end{cases}$$

and $q$ is the prediction distribution corresponding to the prediction result of the neural network, i.e., $q(j) = a_j$, $j \in \{1,2,\dots,n\}$.

For an example with 10 classes, if the real label indicates class 1, then the real distribution is p = {1, 0, 0, 0, 0, 0, 0, 0, 0, 0}. Assume that the prediction distribution, i.e., the output of the softmax layer, is q = {0.010, 0.020, 0.010, 0.910, 0.003, 0.010, 0.009, 0.011, 0.005, 0.012}; then the cross entropy loss can be calculated as h = 4.6052. The neural network then applies the back propagation algorithm with this loss to update each weight, so as to learn from the training case and its real label.
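As a quick sanity check of the numbers in this example, a minimal NumPy snippet (ours, purely illustrative) reproduces the loss value:

```python
# Cross entropy of the worked example above: h = -sum_j p(j) * log q(j).
import numpy as np

p = np.array([1, 0, 0, 0, 0, 0, 0, 0, 0, 0], dtype=float)    # real label: class 1
q = np.array([0.010, 0.020, 0.010, 0.910, 0.003,
              0.010, 0.009, 0.011, 0.005, 0.012])             # softmax output

h = -np.sum(p * np.log(q))
print(round(h, 4))   # 4.6052
```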

In the CF scenario proposed in this paper, we do not know the real labels corresponding to the training cases, but we can get feedback on whether the neural network prediction is correct. In this scenario, the feedback on a prediction falls into two cases: correct prediction and wrong prediction. In both cases, we can generate fake labels based on CF, and the loss calculated with fake labels is similar to that calculated with real labels, so the fake labels can be effectively used for neural network training. Algorithm 1 describes the fake label generation algorithm based on CF, and the two cases are discussed below.

Algorithm 1. Fake label generation algorithm according to CF

Input: The softmax output of the neural network, q, and the correctness feedback (CF). Output: The fake label, f.

Correct prediction: If the neural network gives a correct prediction for a training case, we can deduce its real label directly, i.e., its fake label is just the real label. For example, the real class of a training case is the 4th, and the output of the softmax layer is q = {0.010, 0.020, 0.010, 0.910, 0.003, 0.010, 0.009, 0.011, 0.005, 0.012}, which means that the predicted class is also the 4th, so we get the "correct prediction" feedback from the user. Then in the real label, the probability of the 4th class must be 1 and the others are 0, so the fake label can be deduced as f = {0, 0, 0, 1, 0, 0, 0, 0, 0, 0}.

Wrong prediction: If the neural network gives a wrong prediction for a training case, we can construct a fake label in which the probability of the predicted class is 0, because the prediction is wrong. For the probabilities of the other classes, we do not know which should be 1, because the real class of the training case is unknown. Nevertheless, to make the sum of all probabilities equal to 1, we let the other classes share the probability of the predicted class equally. For example, assume that the real class of a training case is the 1st, but the prediction result is q = {0.010, 0.020, 0.010, 0.910, 0.003, 0.010, 0.009, 0.011, 0.005, 0.012}, i.e., the predicted class is the 4th; then we get the "wrong prediction" feedback from the user. Since the prediction of the neural network is wrong, the probability of the 4th class must be 0. To ensure that the probabilities of all classes sum to 1, the probability of the 4th class in the prediction result is divided equally and added to those of the other 9 classes. Finally, the constructed fake label is f = {0.1111, 0.1211, 0.1111, 0, 0.1041, 0.1111, 0.1101, 0.1121, 0.1061, 0.1131}, and the corresponding cross entropy loss is h = 4.7004, similar to that calculated with the real label.
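Putting the two cases together, a minimal sketch of the fake label construction (our own code with variable names of our choosing, not the authors' implementation) looks like this:

```python
# Fake label generation from correctness feedback (CF), as described above.
import numpy as np

def generate_fake_label(q, correct):
    """q: softmax output of the network; correct: True if CF is 'correct prediction'."""
    n = len(q)
    pred = int(np.argmax(q))                  # class predicted by the network
    if correct:
        # Correct prediction: the fake label equals the (one-hot) real label.
        f = np.zeros(n)
        f[pred] = 1.0
    else:
        # Wrong prediction: zero the predicted class and share its probability
        # equally among the other classes so the label still sums to 1.
        f = q.astype(float).copy()
        f[pred] = 0.0
        f[np.arange(n) != pred] += q[pred] / (n - 1)
    return f

q = np.array([0.010, 0.020, 0.010, 0.910, 0.003,
              0.010, 0.009, 0.011, 0.005, 0.012])
print(generate_fake_label(q, correct=False))   # reproduces the fake label f above
```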

5. Evaluation

5.1. Experimental Setup

In order to prove the necessity of retraining the neural network with personal data sets in embedded devices and to evaluate the fake label generation algorithm, the CNN shown in Fig. 2 is constructed with TensorFlow [25]. The size of each convolution kernel in the CNN is 5×5, and the strides in both width and height are 1 with the padding method of "SAME". The kernel size of the max-pooling layer is 2×2, and the strides in both width and height are 2 with the padding method of "SAME". The fully-connected layer FC5 flattens the results of the last max-pooling layer, and dropout is adopted to reduce overfitting. Finally, the fully-connected layer FC6 produces a result vector of length 10, and the vector is mapped to a probability distribution over [0,1] by the softmax algorithm.
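For reference, a Keras sketch of a network with these layer parameters is given below; the number of conv/pool stages, the filter counts (32 and 64), and the width before FC6 are assumptions, since Fig. 2 is not reproduced here, while the kernel sizes, strides, padding, dropout, and the 10-way softmax output follow the text.

```python
# A possible realization of the described MNIST CNN (TensorFlow/Keras sketch).
import tensorflow as tf
from tensorflow.keras import layers

model = tf.keras.Sequential([
    layers.Input(shape=(28, 28, 1)),
    layers.Conv2D(32, kernel_size=5, strides=1, padding="same", activation="relu"),
    layers.MaxPooling2D(pool_size=2, strides=2, padding="same"),
    layers.Conv2D(64, kernel_size=5, strides=1, padding="same", activation="relu"),
    layers.MaxPooling2D(pool_size=2, strides=2, padding="same"),
    layers.Flatten(),                        # FC5 flattens the last pooling output
    layers.Dense(1024, activation="relu"),   # hidden width assumed
    layers.Dropout(0.5),                     # dropout to reduce overfitting
    layers.Dense(10),                        # FC6: result vector of length 10
    layers.Softmax(),                        # probability distribution over classes
])
```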

To evaluate the training effect of the fake labels generated with CF, a mechanism to simulate the CF scenario is shown in Fig. 6, in which a feedback simulator plays the role of a user giving CF by comparing the prediction result of the neural network with the real label: if the prediction is the same as the real label, the feedback simulator gives the "correct prediction" feedback, otherwise it gives the "wrong prediction" feedback. The feedback is then used for generating the fake label to train the neural network. The CNN shown in Fig. 2 is trained on the MNIST training data set from scratch with real labels and with fake labels generated with CF, respectively. The training is performed with an initial learning rate of 10⁻⁴, a dropout rate of 0.5, and a batch size of 50. Moreover, the adaptive moment estimation (ADAM) [30] optimizer is adopted. For each training step, the accuracy and loss on the MNIST test data set are measured, and the model is trained for 10000 steps in total.
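The sketch below (ours, reusing the hypothetical generate_fake_label function from the earlier sketch) illustrates how such a feedback simulator can drive one training step: the real label is used only to produce CF, and the loss itself is computed with the fake label.

```python
# One CF-driven training step with a simulated user (illustrative sketch).
import numpy as np
import tensorflow as tf

optimizer = tf.keras.optimizers.Adam(learning_rate=1e-4)   # initial learning rate 10^-4
loss_fn = tf.keras.losses.CategoricalCrossentropy()

def train_step_with_cf(model, images, real_labels):
    probs = model(images, training=False).numpy()            # softmax outputs
    preds = probs.argmax(axis=1)
    cf = (preds == real_labels)                               # simulated correctness feedback
    fake_labels = np.stack([generate_fake_label(q, c) for q, c in zip(probs, cf)])
    with tf.GradientTape() as tape:
        out = model(images, training=True)
        loss = loss_fn(fake_labels, out)                      # loss uses fake labels only
    grads = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))
    return loss
```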

In order to show that the fake label generation algorithm also works well on other DNNs and data sets, the CNN for the CIFAR-10 [31] data set shown in Fig. 7 is built with TensorFlow. The input layer of this CNN has 3 channels corresponding to the three color channels (i.e., red, green, and blue) of the input images. The convolution kernel is 5×5 with strides of 1, and the pooling kernel has a size of 3×3 with strides of 2. Both the convolution layers and the pooling layers adopt the padding method of "SAME". There are two local response normalization (LRN) [27] layers in the CNN: one after the first pooling layer S2, and the other after the second convolution layer C4. The CNN shown in Fig. 7 is trained on the CIFAR-10 training data set from scratch with real labels and with fake labels, respectively. The batch size of each training step is 128 and the model is trained for 250000 steps.
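A sketch of this layer ordering is shown below; the filter counts, fully-connected widths, and the tail of the network are assumptions (Fig. 7 is not reproduced here), while the kernel sizes, strides, padding, and LRN placement follow the text.

```python
# A possible realization of the described CIFAR-10 CNN (TensorFlow/Keras sketch).
import tensorflow as tf
from tensorflow.keras import layers

inputs = tf.keras.Input(shape=(32, 32, 3))                                       # RGB input
x = layers.Conv2D(64, 5, strides=1, padding="same", activation="relu")(inputs)   # C1
x = layers.MaxPooling2D(pool_size=3, strides=2, padding="same")(x)               # S2
x = layers.Lambda(tf.nn.local_response_normalization)(x)    # LRN after first pooling layer
x = layers.Conv2D(64, 5, strides=1, padding="same", activation="relu")(x)        # C4
x = layers.Lambda(tf.nn.local_response_normalization)(x)    # LRN after second conv layer
x = layers.MaxPooling2D(pool_size=3, strides=2, padding="same")(x)
x = layers.Flatten()(x)
x = layers.Dense(384, activation="relu")(x)                 # assumed FC width
x = layers.Dense(10)(x)
outputs = layers.Softmax()(x)
model = tf.keras.Model(inputs, outputs)
```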

As mentioned earlier, DNN in embedded devices is mostly pre-trained on public data sets, and therefore in the CF scenario of embedded systems, the model to be retrained with fake labels generated by CF is first fully pre-trained on the MNIST training data set and reaches an accuracy of 99.14% on the MNIST test data set beforehand. To evaluate the retraining effect of fake labels in this scenario, we repeat the experiment of Section 3 on ten different people's data sets, with the only difference being that the experiment is conducted twice on each person's data set, using the real labels and the fake labels, respectively, to retrain the neural network model corresponding to each person.

5.2. Results

Fig. 8 shows the comparison between the accuracy/loss trained with real labels and with fake labels on the MNIST data sets. It can be seen that the accuracy rises and the loss falls more slowly in the fake label training than in the real label training within the initial few training steps. However, as the training goes on, the accuracy and loss in the two cases gradually coincide, which illustrates that the fake labels generated with CF can effectively train the neural network from scratch.

Fig. 8. Comparison of the accuracy/loss trained with real labels and fake labels on the MNIST data sets.

The training curves of accuracy/loss with real labels and fake labels on CIFAR-10 are shown in Fig. 9. The curves in Fig. 9 have the same trends as those in Fig. 8, which illustrates that fake labels generated with CF also work well for the CNN shown in Fig. 7 on the CIFAR-10 data sets. The fake label generation algorithm thus has both a good training effect and wide applicability.

Fig. 9. Comparison of the accuracy/loss trained with real labels and fake labels on the CIFAR-10 data sets.

The retraining effect of the fake labels can be seen from Table 2. The handwriting digits of each person listed in Table 2 are divided into two parts, one for training and the other for testing. Meanwhile, the CNN shown in Fig. 2 is pre-trained on the MNIST training data set and reaches an accuracy of 99.14% when tested on the MNIST test data set. The pre-trained CNN model is then evaluated on each person's test data set, and the corresponding accuracy is shown in the "Before retraining" column. Subsequently, the pre-trained CNN model is retrained on the training data set of each person twice, using real labels and fake labels, respectively, and the retraining accuracy is shown in the "After retraining with real labels" and "After retraining with fake labels" columns. From the accuracy before retraining, we can see that each person gets a different accuracy when the CNN model pre-trained on public data sets is tested on the personal data set, because each person has his/her own handwriting habits. Therefore, the pre-trained neural network model cannot be applied directly to personal data sets in this scenario, i.e., retraining the pre-trained model with personal data sets is necessary. The accuracy after retraining with real labels indicates that, after retraining with personal data sets, the accuracy on every person's test data set is improved considerably, which means that using personal data sets to retrain the neural network model pre-trained on public data sets is necessary and effective. As can be seen from the comparison between the accuracy obtained after retraining with real labels and with fake labels, the retrained models have almost the same accuracy, and therefore the fake labels generated with CF can effectively retrain the neural network in embedded devices.

Table 2: Comparison of final accuracy after retraining with real labels and fake labels

Fig. 10 shows the retraining curves of accuracy/loss with real labels and fake labels for ten people's personal data sets, respectively.

Fig. 10. Comparison of the accuracy/loss curves retrained with real labels and fake labels for ten different persons: (a) person Y's retraining curve, (b) person H's retraining curve, (c) person K's retraining curve, (d) person G's retraining curve, (e) person A's retraining curve, (f) person J's retraining curve, (g) person Z's retraining curve, (h) person B's retraining curve, (i) person P's retraining curve, and (j) person O's retraining curve.

    These training curves illustrate the following conclusions:

1) Even though the trends of the curves are very similar, different curves have different initial accuracy, final accuracy, rising slope of accuracy, and range of loss. These differences indicate that each person's data set has its own personalized characteristics.

2) The accuracy may rise and the loss may fall more slowly in the fake label retraining than in the real label retraining within the initial few training steps, but as the retraining goes on, the accuracy and loss in the two cases gradually coincide, which illustrates that the fake labels generated with CF can effectively retrain the neural network.

In the field of deep learning, new ideas pop up every week, bringing state-of-the-art technologies and higher accuracy. Most of these advanced neural networks need to be trained with labeled data, while the user's data sets collected in embedded devices are unlabeled, so how to train the neural network model without labeled data is a problem to be solved. The fake label generation algorithm in this paper is a solution to this problem; therefore, the purpose of our method is not to improve the current state-of-the-art accuracy, but to provide a method with which these advanced neural networks can be trained even without labeled data and achieve almost the same training effect as training with real labeled data. A series of experiments is designed to prove the effectiveness of our method, comparing the fake label training effect with the real label training effect of the same neural network. All the results and conclusions above prove that the fake label generation algorithm is effective and widely applicable.

6. Conclusion

Because of the extra overhead of the training phase, many implementations of DNN in embedded devices only focus on the inference stage of the neural network. Even though some accelerators implement the training phase, they mainly optimize performance and power consumption, and the data sets used for training usually do not get much attention. However, as the data processed by embedded devices, such as smartphones and wearables, are personalized, the DNN model trained on public data sets may have poor accuracy when inferring the personalized data sets collected in embedded devices, as proven by the experiments in this paper.

Therefore, this paper proposes that retraining with data collected in embedded devices is necessary. Meanwhile, this paper also proves by experiments that retraining the pre-trained neural network model is effective. Furthermore, to solve the problem that the data collected locally are unlabeled, a fake label generation method is proposed, and the fake labels can both train the neural network from scratch and retrain the pre-trained model effectively. This work will be useful in many application scenarios of neural networks. For example, the handwriting input method in a smartphone can use the fake label generation method to retrain the neural network model, so that the recognition accuracy for a particular person can be improved gradually. Because each person has his/her own voice, a pre-trained speech recognition model may not work well for everyone; therefore, the voice-controlled devices in a smart home system can also use this method to improve the speech recognition accuracy. With this work, the accuracy of DNN models can be gradually improved with personal use, i.e., the more a user uses the device, the higher the accuracy of the neural network model becomes.

    Disclosures

    The authors declare no conflicts of interest.
