
    Image-Based Lifelogging: User Emotion Perspective

    2021-12-16 07:50:30  Junghyun Bum, Hyunseung Choo and Joyce Jiyoung Whang
    Computers, Materials & Continua, 2021, Issue 5

    Junghyun Bum, Hyunseung Choo and Joyce Jiyoung Whang

    1 College of Computing, Sungkyunkwan University, Suwon, 16417, Korea

    2 School of Computing, KAIST, Daejeon, 34141, Korea

    Abstract: Lifelog is a digital record of an individual’s daily life. It collects, records, and archives a large amount of unstructured data; therefore, techniques are required to organize and summarize those data for easy retrieval. Lifelogging has been utilized for diverse applications including healthcare, self-tracking, and entertainment, among others. With regard to image-based lifelogging, even though most users prefer to present photos with facial expressions that allow us to infer their emotions, there have been few studies on lifelogging techniques that focus upon users’ emotions. In this paper, we develop a system that extracts users’ own photos from their smartphones and configures their lifelogs with a focus on their emotions. We design an emotion classifier based on convolutional neural networks (CNN) to predict the users’ emotions. To train the model, we create a new dataset by collecting facial images from the CelebFaces Attributes (CelebA) dataset and labeling their facial emotion expressions, and by integrating parts of the Radboud Faces Database (RaFD). Our dataset consists of 4,715 high-resolution images. We propose the Representative Emotional Data Extraction Scheme (REDES) to select representative photos based on inferring users’ emotions from their facial expressions. In addition, we develop a system that allows users to easily configure diaries for a special day and summarize their lifelogs. Our experimental results show that our method is able to effectively incorporate emotions into lifelogs, allowing an enriched experience.

    Keywords: Lifelog; facial expression; emotion; emotion classifier; transfer learning

    1 Introduction

    Nowadays, people record their daily lives on smartphones thanks to the availability of devices with cameras and high-capacity memory cards. The proliferation of social media encourages the creation and sharing of more personal photos and videos. Users can now store and access their entire life through their smartphones [1]. As the number of photo collections preserved by individuals rapidly grows, it becomes more difficult to browse and retrieve among the unorganized collections. People often waste their time exploring vast numbers of photos trying to remember certain events or memorable moments; this accelerates the need to automatically organize large collections of personal media [2-4].

    Different methods have been proposed for automatically collecting, archiving, and summarizing huge collections of lifelog data [5]. Structuring images into temporal segments (that is, separating them into events) facilitates image retrieval [6]. Metadata such as time and location are typically used to identify events in a photo collection, and then key photos are selected within events. Lifelogging helps to improve the memory of elderly people and dementia patients by recording the events of their daily lives and allowing them to recall what they have seen and done [5,7]. Photo-sharing social-networking services have received increased attention in recent years; smartphones and wearable cameras combined with social media extend the scope of lifelogging.

    Most users prefer photos with facial expressions in their photo collections. According to a recent study [8], human presence and facial expressions are key indicators of how appealing photos are. Google Photos and Apple’s iPhoto applications now automatically generate summaries based on people, places, and things. With the recent development of research on visual-sentiment analysis, sentiment-based recommender systems and explanation interfaces have been studied [9]. However, few studies use emotional characteristics to select key photos (especially emotions in facial expressions such as surprise, joy, and sadness). Moreover, accurate analysis and interpretation of the emotion conveyed by human facial expressions remain a great challenge.

    With the advent of deep neural networks, facial expression recognition systems have changed profoundly [10]. In particular, convolutional neural networks (CNN) have led to breakthroughs in image recognition, classification, and object tracking based on big datasets such as ImageNet [11], and are widely applied in various fields. Training a deep-learning model from scratch by collecting sufficient data is an expensive and time-consuming task. One recent alternative to this is transfer learning [12,13], whereby a new model is trained starting from pre-trained weights. Face recognition systems have moved from hand-crafted features such as histograms of oriented gradients (HOG) and Gabor filters to deep-learning features. In particular, CNN architectures have shown excellent results in the field of computer vision. Here, we develop a new facial-emotion classifier based on CNN architectures.

    In the present paper, we develop a lifelogging system that extracts photos from smartphones, analyzes the users’ emotions based on their faces, selects a set of representative images that show different emotions, and then outputs a summary in diary form. We also propose a selection technique that can improve emotional diversity and representativeness in a photo collection. Our main contributions are the creation of a new high-resolution emotion dataset, the development of an emotion classifier, and the use of emotion features to group images in a visual lifelog. The rest of this paper is organized as follows. Section 2 briefly reviews related studies. Section 3 explains the structure and processes of our image-based lifelogging system. Implementation and user experiments are presented in Section 4. Finally, future research directions are discussed in Section 5.

    2 Related Work

    Lifelog research that collects and visualizes data using wearable cameras and sensors has received much attention in recent years [14-17]. Since high-end smartphones have become widely used by the public, enabling lifelogs to be collected easily without requiring users to carry additional devices, various studies have been conducted on the use of smartphones as lifelogging platforms. UbiqLog is a lightweight and extendable lifelog framework that uses mobile phones as lifelogging tools through continuous sensing [18]. Gou et al. proposed a client-server system that uses a smartphone to manage and analyze personal biometric data [2]. To automatically organize large photo collections, a hierarchical photo organization method using topic-related categories has been proposed [4]. The method estimates latent topics in the photos by applying probabilistic latent semantic analysis and automatically assigns a name to each topic by relying on a lexical database.

    Lifelogs must be summarized in a concise and meaningful manner because similar types of data are repeatedly accumulated. Among the different kinds of lifelog data, we focus on images. The most relevant study was presented in [19], where emotion factors in lifelogs were used when selecting keyframes. A wearable biosensor was used to measure emotions quantitatively through the physiological response of skin conductance. Assuming that the most important and memorable moments in life involve emotional connections among human beings, they collectively evaluated image quality and emotional criteria for keyframe selection. The important moments that individuals want to remember are highly subjective, making it difficult to achieve satisfactory results using uniform objective criteria. Even though it is interesting to include user emotions in keyframe selection, there are limitations arising from the need to wear a biosensor to measure emotions. A recent study has explored how lifelogging technology can help users to collect data for self-monitoring and reflection [20]; the authors use a biosensor and a camera to provide a timeline of experience, including self-reported key experiences, lifelog experiences, heart rates, decisions, and valence. They conclude that their results support recall and data richness. However, when such techniques are combined with automated tools such as key photo selection, better user experiences can be achieved [21].

    EmoSnaps, the product of another emotion-related study, is a mobile application that lets a smartphone take a picture while the user is unaware, to help recall the emotion at that moment later [22]. Users evaluate their photos every day and enter emotional values. In addition, they can observe changes in their happiness daily and weekly. The study shows that facial expressions play an important role in the memory of one’s emotions. It has also been shown that emotions are closely related to facial expressions, making it difficult to hide basic emotions such as anger, contempt, disgust, fear, happiness, neutrality, sadness, and surprise [23]. In this study, emotion features are extracted from facial expressions, and key images are selected to represent the diversity of lifelogs.

    Facial emotion recognition is indeed a challenging task in the computer vision field. Large-scale databases play a significant role in solving this problem. The facial emotion recognition 2013 (FER2013) dataset was created in [24]. The dataset consists of 35,887 facial images that were resized to 48×48 pixels and then converted to grayscale. Human recognition on this dataset has an accuracy of 65%±5%. It has been shown that CNNs are indeed capable of outperforming handcrafted features for recognizing facial emotion. The extended Cohn-Kanade (CK+) facial expression database [25] includes 592 sequences from 210 adults of various ethnicities (primarily European- and African-Americans). The database contains 24-bit color images, of which the majority are grayscale. The Japanese female facial expression (JAFFE) database [26] contains 213 images of 10 Japanese female models exhibiting six facial expressions. All images are grayscale. The Radboud faces database (RaFD) [27] consists of 4,824 images collected from 67 Caucasian and Moroccan Dutch models displaying eight emotions. RaFD is a high-quality face database, with all images captured in an experimental environment with a white background.

    With the advancement of deep neural networks, CNN architectures have yielded excellent results in image-related problems by learning spatial characteristics [28,29]. Research on facial emotion recognition has also noticeably improved. Kumar et al. [30] utilized a CNN with nine convolutional layers to train and classify seven types of emotion on the FER-2013 and CK+ databases. They were able to achieve an accuracy of about 90% or more. Li et al. [31] proposed new strategies for face cropping and rotation as well as a simplified CNN architecture with two convolution layers, two sub-sampling layers, and one output layer. The image was cropped to remove the useless region, and histogram equalization, Z-score normalization, and downsampling were conducted. They experimented on the CK+ and JAFFE databases and obtained high average recognition accuracies of 97.38% and 97.18%, respectively. Handouzi et al. [32] proposed a deep convnet architecture for facial expression recognition based on the RaFD dataset. This architecture consists of a class of networks with six layers: two convolutional layers, two pooling layers, and two fully connected layers. Recent research has extended from basic emotion recognition to compound emotion recognition. Gou et al. [33] released a dataset consisting of 50 classes of compound emotions composed of dominant and complementary emotions (e.g., happily-disgusted and sadly-fearful). The compound-emotion pairs are more difficult to recognize; thus, the top-5 accuracy is less than 60%. Slimani et al. [34] proposed a highway CNN architecture for the recognition of compound emotions. The highway layers of this architecture facilitate the transfer of the error to the top of the CNN. They achieved an accuracy of 52.14% for 22 compound-emotion categories.

    VGGNet [35] is a CNN model composed of convolutional layers, pooling layers, and fully connected layers. VGG16 uses 3 × 3 convolution filters, and the simplicity of the VGG16 architecture makes it quite appealing. VGG16 performs almost as well as the larger VGG19 network. Transfer learning, where the idea is to transfer knowledge from one domain to another, has received much attention in recent years [36]. A pre-trained model is a saved network that was previously trained on a large dataset such as ImageNet; we can use the pre-trained model as it is or use transfer learning to customize it for our own purposes. Transfer learning applies when insufficient data are available; it has the advantage of requiring relatively little computation because it only needs to adjust the weights of the designated layers. In general, the lower layers of a CNN capture general features regardless of the problem domain, while the higher layers are optimized to the specific dataset. Therefore, we can reduce the number of parameters to be trained while reusing the lower-level layers. In this paper, we experiment with various methods, including modified transfer learning and finetuning, to select the optimal model.

    3 The Proposed Scheme

    We propose the Representative Emotional Data Extraction Scheme (REDES), a technique for identifying the most representative photos in the user’s lifelog data from the user’s emotional perspective. We develop a novel emotion classifier in this process to correctly extract emotions, and we use the Systemsensplus server to archive photos from the smartphones. Details are provided in the following subsections.

    3.1 System Architecture

    The proposed system is divided into two parts: a mobile application and a server based on Systemsensplus. The overall architecture is shown in Fig. 1. The mobile application consists of a face-registration module and a diary-generation module that includes REDES. The face-registration module registers the user’s face for recognition, while the REDES submodule identifies the days of special events and chooses representative photos. The diary-generation module displays the representative photos selected by REDES on the user’s smartphone screen and creates a diary for a specific date.

    Figure 1: Basic architecture and modules of the proposed system including REDES

    Systemsens [37] is a monitoring tool that tracks data on smartphone usage (e.g., CPU, memory, network connectivity). We use the extended Systemsensplus to effectively manage user photos and related tagging information. The emotion-detection module identifies the user’s face, predicts emotion from the facial expression, and tags the information using International Press Telecommunications Council (IPTC) photographic metadata standards. The IPTC format is suitable because it provides metadata such as a description of the photograph, copyright, date created, and custom fields. The Systemsensplus client module operates as a background process on the user’s smartphone; it collects and stores data at scheduled times and events in a local database. When the smartphone is inactive, the Systemsensplus uploader sends data to the server overnight.

    3.2 Emotion Classification Model

    Well-refined, large-scale datasets are required for high accuracy in facial expression classification. On the FER2013 dataset [24], it is difficult to achieve high performance due to the very low resolution of the given images. RaFD [27] consists of 4,824 images collected from 67 participants and includes eight facial expressions. The CelebFaces Attributes (CelebA) [38] dataset contains 202,599 celebrity facial images but includes no labels for facial expressions. We collect seven facial emotion expressions (excluding contempt) from the RaFD dataset and manually label facial emotion expressions in the CelebA dataset. The integrated dataset consists of seven emotions, and Tab. 1 shows the number of images for each facial emotion.

    Table 1: The number of images per emotion in our dataset

    For experiments, we divided the dataset into three parts: training (70%), validation (10%), and testing (20%). The AlexNet [39], VGG11, VGG16, and VGG19 network structures are used for comparing transfer learning and finetuning on the facial expression recognition problem. The pre-trained VGG16 and VGG19 networks perform facial expression classification using transfer learning methods. To make a fair comparison, the same training options are used for all scenarios. When recognizing emotions from facial expressions, pre-processing is performed to detect facial regions—i.e., detecting faces in images and cropping only the face region, as sketched below. For this pre-processing step, we utilize the face_recognition Python library [40]. This library has an accuracy of 99.38% on the Labeled Faces in the Wild benchmark [41]. A sample of our dataset is shown in Fig. 2.

    Figure 2: Exemplary images from our refined dataset
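    A minimal sketch of this pre-processing step, assuming the face_recognition API for detection and Pillow for cropping (the 224×224 output size is an illustrative assumption, not a value from the paper):

    ```python
    # Detect the first face in a photo and crop/resize the face region.
    import face_recognition
    from PIL import Image

    def crop_face(image_path, output_size=(224, 224)):
        image = face_recognition.load_image_file(image_path)
        locations = face_recognition.face_locations(image)  # (top, right, bottom, left)
        if not locations:
            return None  # no face detected; skip this photo
        top, right, bottom, left = locations[0]
        return Image.fromarray(image[top:bottom, left:right]).resize(output_size)

    face = crop_face("photo.jpg")
    if face is not None:
        face.save("face.jpg")
    ```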

    We aim to select the best model by comparing the performance of facial emotion classifiers using transfer learning or finetuning techniques based on pre-trained CNN models. We evaluated the four models—AlexNet, VGG11, VGG16, and VGG19—using different learning scenarios. The last fully connected layer of each model is replaced with new fully connected layers for the purpose of emotion classification, and the softmax function is applied at the end. For the VGG16 and VGG19 models, four methods are implemented: A) transfer learning in which all of the convolution layers are frozen and only the fully connected layers are trained; B) and C) two transfer learning techniques in which the front convolution layers are fixed and the weights of the higher convolution layers, i.e., convolution block 5 for B) and convolution blocks 4 and 5 for C), are trained; and D) a finetuning technique in which the weights of all layers are adjusted.
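    The four scenarios can be expressed as a hedged Keras sketch on VGG16, assuming the standard keras.applications layer naming ("block1_conv1" through "block5_pool") and an assumed fully connected head of size 256:

    ```python
    # Scenarios on VGG16: A) freeze all conv layers; B) train block 5;
    # C) train blocks 4-5; D) finetune everything (pass all five blocks).
    from tensorflow.keras.applications import VGG16
    from tensorflow.keras import layers, models

    def build_classifier(trainable_blocks=("block5",), num_classes=7):
        base = VGG16(weights="imagenet", include_top=False,
                     input_shape=(224, 224, 3))
        for layer in base.layers:
            layer.trainable = any(layer.name.startswith(b)
                                  for b in trainable_blocks)
        x = layers.Flatten()(base.output)
        x = layers.Dense(256, activation="relu")(x)  # new FC head (size assumed)
        outputs = layers.Dense(num_classes, activation="softmax")(x)
        return models.Model(base.input, outputs)

    model = build_classifier(trainable_blocks=("block5",))  # scenario B
    ```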

    Our experiments were conducted using the Keras [42] deep-learning library. An Nvidia GeForce RTX 2080 Ti GPU was used to execute the experiments, and the Adam optimizer was applied to mini-batches of 32 images with categorical cross-entropy loss. The number of epochs and the learning rate are set to 100 and 10^-4, respectively, as in the sketch below. Tab. 2 shows the validation and test accuracies of facial emotion classification in transfer learning for the four CNN models. The results with the best test accuracy are highlighted in bold.
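    Continuing the sketch above, these training options map directly onto Keras calls; x_train, y_train, x_val, and y_val are placeholders for our dataset splits.

    ```python
    # Training options used for every scenario: Adam optimizer, categorical
    # cross-entropy loss, mini-batches of 32 images, learning rate 1e-4,
    # and 100 epochs. The data arrays are placeholders.
    from tensorflow.keras.optimizers import Adam

    model.compile(optimizer=Adam(learning_rate=1e-4),
                  loss="categorical_crossentropy",
                  metrics=["accuracy"])
    history = model.fit(x_train, y_train,
                        batch_size=32,
                        epochs=100,
                        validation_data=(x_val, y_val))
    ```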

    On VGG16, the worst performance occurs when all of its convolution layers are fixed and only its fully connected layers are trainable. Considering the results, we see that the features extracted from the model pre-trained on ImageNet are not perfectly suitable for our classification problem. Although the results vary slightly, the best test accuracy is obtained when the lower convolution layers are fixed and the layers of the last convolution block are trained. We also observe that the finetuning method, in which the weights of all layers were adjusted, failed to achieve the best score.

    Table 2: The validation and test accuracies of the emotion classification models

    Inspired by VGG16’s best transfer learning results, we experimented to see whether performance could be further improved when the weights of its layers 1 to 7 are fixed and the remaining layers are made shallower. The CNN architecture of the emotion recognition classifier is shown in Fig. 3. Based on the VGG16 architecture, the number of layers and filters in convolution blocks 4 and 5 are adjusted to improve the training speed and to prevent overfitting. The assumption is that emotion recognition does not require many high-level feature maps.

    Figure 3: Proposed CNN model for facial emotion classification
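    Fig. 3 specifies the exact layer and filter counts; the sketch below conveys only the idea, freezing VGG16’s first seven convolution layers (blocks 1-3) and attaching slimmer replacements for blocks 4 and 5. The 256/128 filter counts and the head size are assumptions, not the values from Fig. 3.

    ```python
    # Proposed-style model: frozen VGG16 trunk up to block3_pool, then
    # reduced convolution blocks and a new classification head.
    from tensorflow.keras.applications import VGG16
    from tensorflow.keras import layers, models

    base = VGG16(weights="imagenet", include_top=False,
                 input_shape=(224, 224, 3))
    trunk = models.Model(base.input, base.get_layer("block3_pool").output)
    trunk.trainable = False  # conv layers 1-7 fixed

    x = layers.Conv2D(256, 3, activation="relu", padding="same")(trunk.output)
    x = layers.MaxPooling2D()(x)  # reduced block 4
    x = layers.Conv2D(128, 3, activation="relu", padding="same")(x)
    x = layers.MaxPooling2D()(x)  # reduced block 5
    x = layers.Flatten()(x)
    x = layers.Dense(256, activation="relu")(x)
    outputs = layers.Dense(7, activation="softmax")(x)
    proposed = models.Model(trunk.input, outputs)
    ```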

    We trained the proposed model using the same parameters. The test and validation accuracies of the proposed model are 95.22% and 97.01%, respectively, which are higher than those of the best-performing model, VGG16 transfer learning (fixed layers 1~7). Fig. 4 depicts the confusion matrix for the test samples using the proposed model; Fig. 5 depicts the model accuracy and loss, respectively, during the training and validation phases. We can observe from the plot that the model converges. The classification results of the model based on precision, recall, and F1-score are provided in Tab. 3. We utilize this emotion classifier to extract emotion features from the user’s photos.

    Figure 4: Confusion matrix for the seven emotion categories

    Figure 5: Training accuracy vs. validation accuracy (left); training loss vs. validation loss (right)

    Table 3: The classification results based on precision, recall, and F1-score
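    For reference, metrics of this kind can be computed with scikit-learn; this is a hedged sketch, since the paper does not name its evaluation tooling, and x_test/y_test are placeholders for our one-hot-encoded test split.

    ```python
    # Evaluating the trained classifier: confusion matrix plus per-class
    # precision/recall/F1. "proposed" is the model from the sketch above.
    import numpy as np
    from sklearn.metrics import classification_report, confusion_matrix

    y_pred = np.argmax(proposed.predict(x_test), axis=1)
    y_true = np.argmax(y_test, axis=1)
    emotions = ["anger", "disgust", "fear", "happiness",
                "neutrality", "sadness", "surprise"]
    print(confusion_matrix(y_true, y_pred))
    print(classification_report(y_true, y_pred, target_names=emotions))
    ```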

    3.3 Data Collection Phase

    This subsection describes how photos and related information are collected from users. The Systemsensplus client module operates as a background process and stores the required metadata in a local database, along with smartphone usage data. In this study, we focus on the image-based lifelogging system, which mainly stores photo-related data.

    When a user launches the application for the first time, the face registration screen is activated. After the user’s facial image has been successfully registered, it can be recognized and its emotion scores can be extracted. Users are also able to register new facial images through the face-registration user interface. Systemsensplus supports two types of virtual sensors: event-based sensors generate log records whenever the corresponding state changes and are activated when a user registers new face photos, while polling sensors record information at regular intervals. The upload mechanism is designed to upload only when the smartphone is being charged and has a network connection.

    The emotion-detection module on the Systemsensplus server estimates the emotion scores of new photos at a predefined time once uploading is completed. The extraction of emotion scores occurs in two steps: (1) determining whether the user’s face is in the photo and, if it is, (2) calculating the confidence scores for the seven emotion types. If faces are found, their similarity to the registered user is checked, and if the face in the photo is the user’s face, the emotion scores are acquired. Otherwise, the process stops. The acquired emotion types and scores are tagged in the photo in accordance with the IPTC standards. The type of emotion with the largest score is tagged as text in the ‘category’ column, and the seven emotion scores are given as an array in the custom field of the IPTC content. Systemsensplus transfers only the metadata of the photos that have been tagged with emotion data to the client during the next scheduled cycle.
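    The two-step extraction can be sketched with the face_recognition library used earlier; the function name, the 224×224 crop size, and the Keras-style classifier call are illustrative assumptions rather than the server’s actual implementation, and the IPTC tagging step is omitted.

    ```python
    # Step 1: verify that a detected face matches the registered user;
    # step 2: compute confidence scores for the seven emotion types.
    import face_recognition
    import numpy as np
    from PIL import Image

    def extract_emotion_scores(photo_path, registered_encoding, classifier):
        image = face_recognition.load_image_file(photo_path)
        locations = face_recognition.face_locations(image)
        encodings = face_recognition.face_encodings(image, locations)
        for (top, right, bottom, left), enc in zip(locations, encodings):
            # Step 1: similarity check against the registered user's face.
            if face_recognition.compare_faces([registered_encoding], enc)[0]:
                face = Image.fromarray(image[top:bottom, left:right]).resize((224, 224))
                x = np.asarray(face, dtype="float32")[np.newaxis] / 255.0
                return classifier.predict(x)[0]  # step 2: seven emotion scores
        return None  # no face, or not the user's face: the process stops
    ```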

    3.4 Representative Photo Selection Phase

    Our goal is to maximize the diversity and representativeness of the lifelog images from the emotional perspective. To obtain diversity, we use emotion scores. Photos with similar emotion scores are grouped together, and we choose key photos within each group. REDES uses the k-means clustering algorithm for this task. This algorithm partitions the given data into k clusters such that the within-cluster sum of squared distances is minimized [43].

    We use the k-means clustering algorithm with eight-dimensional vectors consisting of time and the seven emotion scores:

    —Time: the date/timestamp from the exchangeable image file format header of the photos, normalized to a real number between 0 and 1.

    —Emotion: seven emotion scores for anger, disgust, fear, happiness, neutrality, sadness, and surprise (where $\sum_{i=1}^{7} E[i] = 1$).

    Given a set of photos $\{x_1, x_2, \ldots, x_N\}$, with each photo represented in an 8-dimensional vector space, the algorithm partitions the $N$ photos into $k$ ($\leq N$) sets $C = \{C_1, C_2, \ldots, C_k\}$ to minimize the pairwise squared deviations of points in the same cluster, as shown in Eq. (1):

    $$\mathop{\arg\min}_{C} \sum_{i=1}^{k} \sum_{x \in C_i} \| x - \mu_i \|^2 \tag{1}$$

    where $\mu_i$ is the mean vector of cluster $C_i$.

    To apply the k-means clustering algorithm, the number $k$ should be specified. The number of clusters can be calculated as the square root of half the input size [44], as shown in Eq. (2):

    $$k = \sqrt{N/2} \tag{2}$$

    where $N$ is the number of photos.

    This yields the final number k of cluster groups. REDES then determines the photos represented by the data points closest to each cluster center. Each cluster is a group of similar photos, so the data point closest to the center can be used to represent its cluster group, as in the sketch below.
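    A minimal sketch of this selection step, assuming NumPy/scikit-learn and timestamps already extracted from the EXIF headers (both assumptions; the paper does not name its clustering implementation):

    ```python
    # REDES representative-photo selection: cluster the 8-dimensional
    # vectors (normalized time + seven emotion scores) with k-means,
    # k = sqrt(N/2) per Eq. (2), and keep the photo nearest each center.
    import numpy as np
    from sklearn.cluster import KMeans

    def select_representatives(timestamps, emotion_scores):
        t = np.asarray(timestamps, dtype=float)
        t = (t - t.min()) / ((t.max() - t.min()) or 1.0)  # time in [0, 1]
        X = np.column_stack([t, emotion_scores])          # N x 8 features
        k = max(1, int(round(np.sqrt(len(X) / 2))))       # Eq. (2)
        km = KMeans(n_clusters=k, n_init=10).fit(X)
        reps = []
        for i in range(k):  # photo closest to each cluster center
            members = np.where(km.labels_ == i)[0]
            dists = np.linalg.norm(X[members] - km.cluster_centers_[i], axis=1)
            reps.append(members[np.argmin(dists)])
        return sorted(reps)
    ```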

    4 Experiments

    The proposed system extracts emotion scores and stores the photos on the server. Tab. 4 shows the distribution of emotions collected from the facial expressions of one graduate student over the last five years; as we might expect, the distribution is quite skewed. However, it helps REDES to select diverse facial expressions. The results of the proposed method are shown in Fig. 6.

    Table 4: Ratio of emotions extracted from one user’s photos

    To evaluate the performance of REDES, we conducted a user experiment employing three datasets with 32 participants. One dataset contains 300 photos taken over the past three months, whereas the others are special-day datasets of 97 and 48 photos, respectively. As the baseline method, we employed a clustering method using only the time feature, called Time in our comparison. In addition, we designed two comparable methods: one using only emotion features, called REDES(Emotion), and the other selecting the photos with the largest emotion scores, denoted by MaxEmotion. To conduct this experiment, we prepared an online questionnaire; first, we showed the entire dataset to the participants and instructed them to “Please rate each photo collection according to your preference (i.e., how representative and/or appealing the group of photos is).” We prepared four different groups of photo collections according to the photo selection strategies—i.e., REDES, REDES(Emotion), MaxEmotion, and Time—and randomly presented them to the participants. The scoring system was a scale of 1 to 5; the higher the score, the higher the user’s preference.

    Figure 6: An example of the REDES result for a special day. The entire photostream of a day (top); representative photos selected by REDES (bottom)

    The overall grade-distribution ranges and score averages are presented in Fig. 7, indicating that our proposed method outperforms the other methods. Comparing the mean values of each method, the preferred summary methods are REDES, REDES(Emotion), Time, and MaxEmotion, in that order. To further check whether the differences among the four methods were statistically significant, a one-way analysis of variance (ANOVA) was performed using the MATLAB statistics and machine learning toolbox [45]. As a result, the ratio of the between-group variability to the within-group variability (F) is 34.604 and the P-value is 8.57E-20. Thus, we conclude that there was a significant difference in the preference of photo selection among those methods at a significance level of 0.01. In other words, there was a significant difference between at least two of the group means.

    Figure 7: The rating results from the experiment. The dots indicate the means; the boxes indicate the interquartile range, with vertical lines depicting the range and the red lines indicating the median
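    The same one-way ANOVA can be run outside MATLAB; the following is a scipy sketch (an assumption, as the paper used MATLAB), with the four per-method rating arrays as placeholders.

    ```python
    # One-way ANOVA over the four methods' ratings (scipy counterpart of
    # MATLAB's anova1). The paper reports F = 34.604 and P = 8.57E-20.
    from scipy.stats import f_oneway

    F, p = f_oneway(redes_ratings, emotion_ratings,
                    time_ratings, maxemotion_ratings)
    print(f"F = {F:.3f}, P-value = {p:.2e}")
    ```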

    To determine which means significantly differ from that of our proposed method, we conducted post hoc comparisons between pairs of methods. Using the overall rating results (N = 96), a paired t-test was conducted. As shown by the P-values in Tab. 5, there is a significant difference between REDES and the other methods.

    Table 5: Result of a paired t-test between our proposed method and other methods
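    A scipy sketch of the post hoc test behind Tab. 5 (the rating arrays are placeholders; the paper itself reports only the resulting P-values):

    ```python
    # Paired t-tests comparing REDES against each of the other methods.
    from scipy.stats import ttest_rel

    for name, ratings in [("REDES(Emotion)", emotion_ratings),
                          ("Time", time_ratings),
                          ("MaxEmotion", maxemotion_ratings)]:
        t, p = ttest_rel(redes_ratings, ratings)
        print(f"REDES vs. {name}: t = {t:.3f}, P = {p:.3g}")
    ```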

    Our detailed findings are as follows. Firstly, the MaxEmotion method selects the photos with the largest emotion scores; thus, similar photos with high scores in the happiness column tend to be chosen. Secondly, the time clustering method—as the baseline method—selects photos at significant time intervals. We found that this method yielded good or bad results unpredictably, depending on the dataset. Finally, since the REDES(Emotion) method mainly focuses on emotional diversity, the selected photos sometimes do not cover the entire period of the dataset. Thus, we conclude that REDES outperforms the other methods, and through the paired t-test we observed that the result is statistically significant. Our proposed scheme visualizes the lifelog by distinguishing the emotions displayed in the user’s photos. Representative photos are selected using emotion and time features, and the user can conveniently configure the diary using the metadata provided with the photos. Representative photos and emotions are displayed together, such that the user can easily recall the emotions of the day. Therefore, our proposed system, REDES, is able to effectively generate a lifelog based on the user’s emotional perspective.

    5 Conclusion and Future Work

    In this work, we have proposed a new scheme for visualizing lifelogs using emotions extracted from the facial expressions in a user’s photos. The experimental results show that users preferred REDES over the baseline methods. For future work, we plan to extend our clustering scheme to more effectively capture the underlying clustering structure [46] and to develop a more sophisticated lifelogging system that can automatically generate a diary by capturing the objects, locations, and people as well as the user’s emotions.

    Acknowledgement: The authors acknowledge Yechan Park for providing the personal dataset and supporting the experiments.

    Funding Statement: This research was supported by the MSIT (Ministry of Science and ICT), Korea, under the Grand Information Technology Research Center Support Program (IITP-2020-2015-0-00742), the Artificial Intelligence Graduate School Program (Sungkyunkwan University, 2019-0-00421), and the ICT Creative Consilience Program (IITP-2020-2051-001) supervised by the IITP. This work was also supported by the NRF of Korea (2019R1C1C1008956, 2018R1A5A1059921) to J. J. Whang.

    Conflicts of Interest: The authors declare that they have no conflicts of interest to report regarding the present study.
