
    Understanding Nonverbal Communication Cues of Human Personality Traits in Human-Robot Interaction

    2020-11-05 09:41:58  Zhihao Shen, Armagan Elibol, and Nak Young Chong
    IEEE/CAA Journal of Automatica Sinica, 2020, Issue 6

    Zhihao Shen, Armagan Elibol, and Nak Young Chong,

    Abstract—With the increasing presence of robots in our daily life, there is a strong need for strategies to achieve high-quality interaction between robots and users by enabling robots to understand users’ mood, intention, and other aspects. During human-human interaction, personality traits have an important influence on human behavior, decision making, mood, and many others. Therefore, we propose an efficient computational framework to endow the robot with the capability of understanding the user’s personality traits based on the user’s nonverbal communication cues, represented by three visual features including the head motion, gaze, and body motion energy, and three vocal features including the voice pitch, voice energy, and mel-frequency cepstral coefficients (MFCC). We used the Pepper robot in this study as a communication robot to interact with each participant by asking questions, while the robot extracts the nonverbal features from each participant’s habitual behavior using its on-board sensors. Meanwhile, each participant’s personality traits are evaluated with a questionnaire. We then train ridge regression and linear support vector machine (SVM) classifiers using the nonverbal features and the personality trait labels from the questionnaire and evaluate the performance of the classifiers. We have verified the validity of the proposed models, which showed promising binary classification performance in recognizing each of the Big Five personality traits of the participants based on individual differences in nonverbal communication cues.

    I. Introduction

    WITH population aging and sub-replacement fertility problems becoming increasingly prominent, many countries have started promoting robotic technology to assist people toward a better life. Various types of robotic solutions have been demonstrated to be useful in performing dangerous and repetitive tasks which humans are not able to do, or prefer not to do. In relation to elderly care provision, assistive robots could replace and/or help human caregivers support the elderly socially in their home or residential care environments.

    Researchers gradually realized that the interactions between a human user and a robot are far more than sending commands to the robot or reprogramming it, as a new class of social robots is emerging in our daily life. It is now widely understood that not only the robot’s appearance but also its behaviors are important for human-robot interaction [1], [2]. Therefore, synchronized verbal and nonverbal behaviors [3] were designed and applied to a wide variety of humanoid robots, like Pepper, NAO, ASIMO, and many others, to improve the user’s engagement in human-robot interaction. For instance, the Honda ASIMO robot can perform various movements of arms and hands including metaphoric, iconic, and beat gestures [4]. Likewise, some researchers have designed such gestures using the SAIBA framework [5] for virtual agents. The virtual agents were interfaced with the NAO robot to model and perform the combined synchronized verbal and nonverbal behavior. In [6], the authors tested the combined verbal and nonverbal gestures on a 3D virtual agent MAX to make the agent act like humans. Meanwhile, cultural factors are also considered to be crucial components in human-robot interaction [7]. In [8], the authors designed emotional bodily expressions for the Pepper robot and enabled the robot to learn the emotional behaviors from the interacting person. Further investigations on the influence of the robot’s nonverbal behaviors on humans were conducted in [9]. These efforts were made to enable robots to act like humans. However, the synchronized behaviors are unilateral movements with which robots track the person’s attention. Therefore, the authors in [10] claimed that social robots need to act or look like humans, but more importantly they will need to be capable of responding to the person with synchronized verbal and nonverbal behavior based on his/her personality traits.
Inspired by their insight in [10], we aim to develop a computational framework that allows robots to understand the user’s personality traits through their habitual behavior. Eventually, it would be possible to design a robot that is able to adapt its combined verbal and nonverbal behavior toward enhancing the user’s engagement with the robot.

    A. Why Are the Personality Traits Important During the Interaction?

    In [11], the authors investigated how personality traits affect humans throughout their whole life. The personality traits encompass relatively enduring patterns of human feelings, thoughts, and behaviors, which make individuals different from one another. When human-human conversational interaction is considered, the speaker’s behavior is affected by the speaker’s personality traits, and the listener’s personality traits also affect their attitude toward the speaker. If their behaviors make each other feel comfortable and satisfied, they would enjoy talking to each other. In social science research, there have been different views toward the importance of interpersonal similarity and attraction. Some people tend to be attracted to other people with similar social skills, cultural background, personality, attitude, and several others [12], [13]. Interestingly, in [14], the authors addressed the complementary attraction that some people prefer to talk with other people whose personality traits are complementary to their own. Therefore, we believe that if the robot is able to understand the user’s coherent social cues, it would improve the quality of human-robot interaction, depending on the user’s social behavior and personality.

    In previous studies, the relationships between the user’s personality traits and the robot’s behavior were investigated. It was shown in [15] that humans are able to recognize the personality of a voice synthesized by digital systems and computers. Also, a compelling question, whether people are willing to trust a robot or not in an emergency scenario, was explored in [16] to better understand their personality. Along these lines, a strong correlation between the personality traits of users and the social behavior of a virtual agent was presented in [17]. In [18], the authors designed robots that have personalities to interact with a human, where a significant correlation between human and robot personality traits was revealed. Their results showed how the participants’ technological background affected the way they perceive the robot’s personality traits. Also, the relationship between profession and personality was investigated in [19]. The result conforms with our common sense, such as that doctors and teachers tend to be more introverted, while managers and salespersons tend to be more extroverted. Furthermore, the authors investigated how humans think about the NAO robot with different personality traits (introversion or extroversion) when the robot plays different roles in human-robot interaction [20]. However, their results were not in accordance with our common sense. The robot seemed smarter to the human when it acted as an introverted manager or an extroverted teacher. On the contrary, the extroverted manager and introverted teacher robots were not perceived as intelligent by the participants. These two results conflict with each other. This could be due to the fact that people treat and perceive robots differently from humans in the aforementioned settings.
Another reason could be that the introverted manager robot looked more deliberate, because it took more time to respond, while the extroverted teacher robot looked more erudite, because it took less time to respond during the interaction. Even though these two studies found conflicting results, they imply the importance of robot personality traits in designing professional roles for human-robot interaction.

    In light of the previous studies on personality match in human-robot interaction, some of the findings are inconsistent with each other. The results shown in [21] indicated that the participants enjoyed interacting more with the AIBO robot when the robot had a personality complementary to their own, while the conclusions from [22] showed that the participants were more comfortable when they interacted with a robot with a personality similar to theirs. Similarly, engagement and its relation to the personality traits were analyzed during human-robot interaction in [23], where the participants’ personality traits played an important role in evaluating individual engagement. The best result was achieved when the participant and robot were both extroverted. Note that when both the participant and the robot were introverted, the performance was the worst. Although the complementary and similarity attraction theories may need further exploration in the future, these studies clearly showed how important the personality traits are in human-robot interaction.

    On the other hand, the personality traits have been shown to have a strong connection with human emotion. In [24], it was discussed how the personality and mind model influence human social behavior. A helpful analogy for explaining the relationship between personality and emotion is “personality is to emotion as the climate is to weather” [25]. Therefore, theoretically, once the robot is able to understand the user’s personality traits, it would be very helpful for the robot to predict the user’s emotion fluctuations.

    Fig. 1 illustrates our final goal by integrating the proposed model of inferring human personality traits with the robot’s speech and behavioral generation module. The robot will be able to adjust its voice volume, speed, and body movements to improve the quality of human-robot interaction.

    Fig. 1. Integrating proposed model of inferring human personality traits into robot behavior generation.

    B. Architecture for Inferring Personality Traits in Human-Robot Interaction

    In the sub-section above, the importance of the personality traits in human-human and human-robot social interactions is clearly stated. Here we propose our computational framework for enabling the robot to recognize the user’s personality traits based on their visual and vocal nonverbal behavior cues. This paper is built upon our preliminary work in [26].

    In this study, the Pepper robot [27] equipped with two 2D cameras and four microphones interacts with each participant. In the previous research on the emergent LEAder corpus (ELEA) [28], when recording the video of a group meeting, the camera was set in the middle of the desk to capture each participant’s facial expression and upper body movement. Reference [23] also used an external camera to record the individual and interpersonal activities for analyzing the engagement of human-robot interaction. However, we do not use any external devices for two reasons. First, we attempt to make sure that all audio-visual features are captured from the first-person perspective, ensuring that the view from the robot is closely similar to that from the human. Secondly, if the position and pose of the external camera change for some reason, it would yield a significant difference between the visual features. Thus, we use the Pepper’s forehead camera only.

    Fig. 2 briefly illustrates our experimental protocol, which consists of the nonverbal feature extraction and the machine learning model training. Part a): all participants recruited from the Japan Advanced Institute of Science and Technology were asked to communicate with the Pepper robot. The robot keeps asking questions related to the participant, and each participant answers the questions. The participants are supposed to reply to the robot’s questions with their habitual behavior. Before or after interacting with the robot, each participant was asked to fill out a questionnaire to evaluate their personality traits. The personality trait scores were binarized to perform the classification task. Part b): we extracted the participants’ audio-video features, which include the head motion, gaze, body motion energy, voice pitch, voice energy, and MFCC during the interaction. Part c): the nonverbal features and personality trait labels will be used to train and test our machine learning models.

    Fig. 2. Experimental protocol for inferring human personality traits.

    To the best of our knowledge, this is the first work that shows how to extract the user’s visual features from the robot’s first-person perspective, as well as the prosodic features, in order to infer the user’s personality traits during human-robot interaction. In [29], the nonverbal cues were extracted from the participant’s first-person perspective and used to analyze the relationship between the participant and robot personalities. With our framework, the robot is endowed with the capability of understanding human personalities during face-to-face interaction. Without using any external devices, the proposed system can be conveniently applied to any type of environment.

    The rest of this paper is organized as follows. Section II explains the personality traits model used, corresponding to Part a). Section III explains why we used the nonverbal features, and what nonverbal features were used for recognizing the participant’s personality traits, corresponding to Part b). Section IV presents the technical details of our experiments. Section V is devoted to experimental results and analysis, corresponding to Part c). Section VI draws conclusions.

    II. Personality Traits

    Based on the definition in [30], [31], personality traits have a strong long-term effect in generating the human’s habitual behavior: “the pattern of collective character, behavioral, temperamental, emotional, and mental traits of an individual that is consistent over time and situations”.

    In most of the existing studies on personality traits, the researchers proposed many different personality models, including the Myers-Briggs model (extroversion-introversion, judging-perceiving, thinking-feeling, and sensation-intuition) [32]; the Eysenck model of personality (PEN) (psychoticism, extroversion, and neuroticism) [33]; and the Big-Five personality model (extroversion, openness, emotional stability, conscientiousness, agreeableness) [34], [35]. The Big-Five personality traits are a very common descriptor of human personality in psychology. In [36], [37], the authors investigated the relationship between the Big-Five personality traits model and nonverbal behaviors. We also use the Big-Five personality traits model in this study. Table I gives the intuitive expressions for the Big-Five personality traits.

    TABLE I Big-Five Personality Traits

    As the personality traits became more popular in the last few decades [38], various questionnaires were proposed in the literature for the assessment of the human Big-Five personality traits. The most popular format of questionnaire is the Likert scale: the ten item personality inventory (TIPI), which has 10 items, each on a 7-point scale [39]; the revised NEO personality inventory (NEO PI-R), which contains 240 items [40]; the NEO five-factor inventory (NEO-FFI), a shortened version of the NEO PI-R, which comprises 60 items [41]; and the international personality item pool (IPIP) Big-Five Factor Markers, which has been simplified to 50 questions [42]. We used the IPIP questionnaire in this paper, and all participants were asked to fill out the questionnaire to evaluate their Big-Five personality traits. The IPIP questionnaire is relatively easy to answer and does not take much time to complete.

    Specifically, the participants are asked to rate the extent to which they agree/disagree with the personality questionnaire items on a five-point scale. A total of 50 questions are divided into ten questions for each of the Big-Five traits, and the questions include both reverse-scored and positive-scored items. For the reverse-scored items, strongly disagree equals 5 points, neutral equals 3 points, and strongly agree equals 1 point; for the positive-scored items, strongly disagree equals 1 point, neutral equals 3 points, and strongly agree equals 5 points. After the participants rate themselves on each question, each personality trait is represented by the mean score of its 10 questions. We did not use the scale of 1–5 to represent the participant’s personality traits. Instead, the personality traits are binarized using the mean score of all participants as a cutoff point to indicate whether the participant has a high or low level of each of the Big-Five traits. For instance, if a participant’s trait of extroversion was rated 2, which is less than the average value 2.8, then this participant is regarded as introverted and his/her trait score will be re-assigned 0. Then, we used the binary labels to train our machine learning models and evaluate the classification performance accordingly.
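    The scoring and binarization described above can be sketched as follows; the function names are hypothetical, and the cohort-mean cutoff follows the description in the text.

```python
import numpy as np

def trait_score(ratings, reverse_mask):
    """Mean score of the ten items of one trait (ratings on a 1-5 scale).

    Items flagged in reverse_mask are reverse-scored, i.e., a rating r
    contributes 6 - r (strongly disagree = 5, strongly agree = 1)."""
    r = np.asarray(ratings, dtype=float)
    m = np.asarray(reverse_mask)
    r[m] = 6 - r[m]
    return float(r.mean())

def binarize(scores):
    """Label each participant 1 (high) or 0 (low) for one trait, using the
    mean score over all participants as the cutoff point."""
    scores = np.asarray(scores, dtype=float)
    return (scores >= scores.mean()).astype(int)
```

    For example, with cohort scores averaging 2.8, a participant rated 2 on extroversion receives the low (introverted) label 0, as in the text.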

    III. Feature Representation

    It is known that the personality traits encompass the human’s feelings, thoughts, and behaviors. The question to be investigated then arises: “how can human personality traits be inferred based on their verbal and nonverbal behaviors?”

    A. Related Work on Verbal and Nonverbal Behaviors

    The influences of personality traits on linguistic speech production have been addressed in previous works [43], [44]. The user’s daily habits were investigated to ascertain whether they are related to the user’s personality traits. The changes of facial expression were also used to infer the personality traits, as proposed in [45]. In [46], the participants were asked to use the electronically activated recorder (EAR) to record their daily activities, which included locations, moods, language, and many others, to verify the manifestations of personality. Moreover, the authors investigated how written language reflects the human personality style based on daily writing diaries, assignments, and journal abstracts [47]. More specific details were presented in [48]. In that study, two corpora that contain 2479 essays and 15 269 utterances, more than a million words in total, were categorized and used to analyze the relation to each participant’s Big-Five personality traits. Although the participant’s verbal information can be used to analyze their personality traits based on Pennebaker and King’s work [47], it should be noted that categorizing so many words would be an arduous task. In [49], the authors addressed that the language differences could influence the annotator’s impressions toward the participants. Therefore, they asked three annotators to watch the video that was recorded in the meeting without audio and to annotate the personality traits of each participant. Notably, the issue of conversational error was addressed in [50], where the error caused the loss of trust in the robot during human-robot interaction. In light of the aforementioned studies, the participants in our study were free to use any language to talk with the robot. It can generally be said that the nonverbal behavior would be a better choice in this study.

    On the other hand, it is desirable that the robot can change its distance to the user depending on a variety of social factors, leveraging a reinforcement learning technique as in [51]. In [52], the author also used the changes in the distance between the robot and the participant as one of their features for predicting the participant’s extroversion trait. Similarly, the authors proposed a model of automatic assessment of human personality traits by using body postures, head pose, body movements, proximity information, and facial expressions [53]. The results in [54] also revealed that extroverts could accept people coming closer than introverts. However, the proxemics feature was not considered in our study, as the human-robot distance remains unchanged in our communicative interaction settings.

    In the related research on inferring human personality traits, a variety of fascinating multimodal features were proposed. In [36], [55], the authors used vocal features to infer personality traits. In [37], they used vocal and simple visual features to recognize the personality traits based on the MS-2 corpus (Mission Survival 2). References [49] and [56] detailed how to infer personality traits in the group meeting. They used the ELEA corpus, and the participant’s personality traits were annotated by external observers. Meanwhile, the participant’s vocal and visual features such as voice pitch, voice energy, head movement, body movement, and attention were extracted from audio and videos. Similar features were used in [57] to infer the personality traits with YouTube video blogs. Convolutional neural networks were also applied to predict human personality traits based on an enormous database that contains video, audio, and text information from YouTube vlogs [58], [59]. In [60], [61], the authors explained a nonverbal feature extraction approach to identifying the emergent leaders. The nonverbal features that were used to infer the emergent leaders included prosodic speech features (pitch and energy), visual features (head activity and body activity), and motion template-based features. In [62], [63], the frequently-used audio and visual nonverbal features in existing research were summarized for predicting the emergent leader or personality traits. Similarly, a method was proposed in [64] for identifying the human’s confidence during human-robot interaction with the sound pressure, voice pitch, and head movement.

    In the previous studies [37], [49], [60], [62], the authors used the statistical features and activity length features. Since the personality traits are long-term characteristics that affect people’s behaviors, they believed that the statistical features can well represent the participants’ behaviors. Similar nonverbal features were used in our study. However, we believe that the state transitions of the nonverbal behaviors or features are also important for understanding the human’s personality traits. The study in [56] proposed co-occurrent features to indicate movements of other participants that happened at the same time. Hence, in our study, the raw form and time-series based features of the visual and vocal nonverbal behavior were used to train the machine learning models.

    B. Nonverbal Feature Representation

    Taking into account the findings of the aforementioned studies, we intend to extract similar features from the participant’s nonverbal behaviors. Nonverbal behaviors include vocal and visual behaviors. Table II shows the three visual features including the participant’s head motion, gaze score, and upper body motion energy, as well as the three vocal features including the voice pitch, voice energy, and mel-frequency cepstral coefficient (MFCC).

    TABLE II Nonverbal Feature Representation

    In our basic human-robot interaction scenario, it is assumed that the participant talks to the robot using gestures the way a person talks to a person. Therefore, the participant’s visual features can be extracted using the robot’s on-board camera while the participant or the robot talks. Note that, in Table II, some of the visual features HM2, GS2, and ME2 are extracted when the participant listens to the robot asking four simple questions. The total time duration was too short to capture sufficient data to train our machine learning models. Therefore, we did not use these three features in our study.

    1) Head Motion: An approach to analyze the head activity was proposed in [60]. They applied optical flow on the detected face area to decide whether the head was moving or not. Based on the head activity states, they were able to understand when and for how long the head moved. We followed the method that was proposed in [65]. First, every frame captured by the Pepper’s forehead camera was used in a scanning procedure to extract the sub-windows. The authors in [65] trained 60 detectors based on the left-right rotation-out-of-plane and rotation-in-plane angles, and each detector contains many layers that are able to estimate the head pose and detect a human face. Each sub-window was used as an input to each detector, which was trained by a set of faces with a specific angle. The output provides the 3D head pose (pitch, yaw, and roll) as shown in the left image of Fig. 3. In this study, the pitch angle covers [–90°, 90°], the roll angle covers [–45°, 45°], and the yaw angle covers [–20°, 20°]. Then the Manhattan distance between every two adjacent head poses was used to represent the participant’s head motion. Let $\alpha$, $\beta$, and $\gamma$ denote the pitch, yaw, and roll angles, respectively. Then the head motion (HM1) can be calculated by the following equation:

$$HM1_i = |\alpha_i - \alpha_{i-1}| + |\beta_i - \beta_{i-1}| + |\gamma_i - \gamma_{i-1}| \tag{1}$$

    where $i$ and $i-1$ are two consecutive frames at a 1 s time interval.
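    A minimal sketch of computing HM1 from per-second head-angle estimates, assuming the 3D angles (pitch, yaw, roll) are already available as an (N, 3) array from the face detector (the detector of [65] itself is not reproduced here):

```python
import numpy as np

def head_motion(angles):
    """Manhattan distance between the head angles of consecutive frames.

    angles: (N, 3) array of (pitch, yaw, roll) in degrees, one row per
    second. Returns an (N-1,) array of per-interval head-motion values."""
    angles = np.asarray(angles, dtype=float)
    return np.abs(np.diff(angles, axis=0)).sum(axis=1)
```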

    Fig. 3. Visual features (The left image illustrates the 3D head angles, and the right image shows the different pixels by overlapping two consecutive frames).

    2) Gaze Score: In [66], the influence of gaze in small group human interaction was investigated. The previous studies used the visual focus of attention (VFOA) to represent the participant’s gaze direction [61] in the group discussion. However, a high-resolution image is required for the analysis of the gaze direction, which will tremendously increase the computational cost. In our experiment, the participant sits at a table in front of the robot positioned 1.5 m to 1.7 m away. In practice, the calculation of gaze direction might not be feasible, if we consider the image resolution and the distance, since the eye occupies only a few pixels in the image. As the head pose and gaze direction are highly related with each other [67], an efficient way of calculating the gaze direction was proposed based on the head orientation in [68]. Therefore, we used the head direction to estimate the gaze direction, which is highly related to the head yaw and pitch angles. In the real experimental environment, we found that the face was hardly detected when the facial plane exceeds 20°. When the participant faces the robot’s forehead camera, the tilt/pan angle is 0°. Therefore, we measure the Euclidean distance from 0° to the head yaw and pitch angles. Then, the full range (distance) of tilt/pan angles [0°, 20°] is normalized to 0 to 1. Finally, the normalized score between 0 and 1 is used as the gaze score, which indicates the confidence that the participant is looking at the robot. If we denote by $\alpha$ and $\beta$ the head pitch and yaw angles, respectively, the gaze score of frame $i$ can be calculated by the following equation:

$$GS1_i = 1 - \frac{\sqrt{\alpha_i^2 + \beta_i^2}}{\sqrt{\alpha_{max}^2 + \beta_{max}^2}} \tag{2}$$

    where $\alpha_{max}$ and $\beta_{max}$ represent the maximum degree of the head pitch and yaw angles, respectively.
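    The gaze-score computation can be sketched as follows, assuming per-frame pitch/yaw estimates and the 20° detection limit discussed above; the clipping to [0, 1] is an added safeguard, not stated in the text:

```python
import numpy as np

def gaze_score(pitch, yaw, max_pitch=20.0, max_yaw=20.0):
    """Confidence (0..1) that the participant is looking at the robot.

    Euclidean distance of (pitch, yaw) from the frontal pose (0 deg, 0 deg),
    normalized by the maximum detectable tilt/pan distance and inverted so
    that 1 means fully facing the camera."""
    d = np.hypot(pitch, yaw)
    d_max = np.hypot(max_pitch, max_yaw)
    return float(np.clip(1.0 - d / d_max, 0.0, 1.0))
```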

    3) Motion Energy: The motion energy images [69], [70] were used in the previous studies to describe body motion. Their basic idea is to compute the number of different pixels between every two consecutive frames. We applied the same idea to calculate the ratio of the different pixels between every two frames. The right image of Fig. 3 shows an example of different pixels between two frames. This method is simple and effective. However, it requires the image to have a stationary background and a fixed distance between the robot and each participant. Otherwise, the change of the background will be perceived as the participant’s body movement, and the number of different pixels will increase if the participant sits closer to the robot. Now, all three visual features were calculated and normalized over the whole database, denoted by HM1, GS1, and ME1. The binary features mentioned in Table II are the binarized HM1, GS1, and ME1, which were simply calculated by comparing whether the value is larger than 0 or not.
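    A sketch of the motion-energy computation; the noise `threshold` parameter is a hypothetical addition, since the text only specifies counting differing pixels between two frames:

```python
import numpy as np

def motion_energy(frame_prev, frame_curr, threshold=10):
    """Ratio of pixels that changed between two consecutive grayscale frames.

    frame_prev, frame_curr: 2D uint8 arrays of equal shape. A pixel counts
    as 'different' when its absolute intensity change exceeds `threshold`
    (a hypothetical guard against sensor noise)."""
    diff = np.abs(frame_curr.astype(int) - frame_prev.astype(int))
    return float((diff > threshold).mean())
```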

    4) Voice Pitch and Energy: The vocal behavior is another important feature when humans express themselves. Pitch and energy are two well-known vocal features, very commonly used in emotion recognition. Pitch, which is generated by the vibration of the vocal cords, is perceived as the fundamental voice frequency. There are many different methods to track the voice pitch. For instance, the average magnitude difference function (AMDF) [71], simple inverse filter tracking (SIFT) [72], and the auto-correlation function (ACF) [73] are time-domain approaches, while the harmonic product spectrum (HPS) [74] is a frequency-domain approach. We used the auto-correlation function $R(\tau)$ given in (3) to calculate the pitch

$$R(\tau) = \sum_{t=1}^{T-\tau} s(t)\, s(t+\tau) \tag{3}$$

    where $s(t)$ is the audio signal in one frame and $\tau$ is the time lag; the pitch corresponds to the lag that maximizes $R(\tau)$.

    Now the average of the short-term energy can be calculated by the following equation:

$$E = \frac{1}{T} \sum_{t=1}^{T} s(t)^2 \tag{5}$$

    In (5), $T$ is the time duration of the audio signal in one frame. Since the frame size used in this study is 800 samples, the time duration $T$ is 50 milliseconds.
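    The pitch and energy computations might be sketched as follows, assuming a 16 kHz sampling rate (consistent with 800 samples per 50 ms frame) and a hypothetical plausible pitch search range of 50–500 Hz:

```python
import numpy as np

def pitch_acf(frame, fs=16000, fmin=50, fmax=500):
    """Pitch estimate (Hz) via the auto-correlation function.

    The ACF is evaluated for all lags, and the lag with the maximum
    correlation inside the plausible pitch range [fmin, fmax] is taken
    as the pitch period."""
    frame = np.asarray(frame, dtype=float)
    acf = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    lo, hi = int(fs / fmax), int(fs / fmin)
    lag = lo + int(np.argmax(acf[lo:hi]))
    return fs / lag

def short_term_energy(frame):
    """Average short-term energy of one frame: mean of squared samples."""
    frame = np.asarray(frame, dtype=float)
    return float(np.mean(frame ** 2))
```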

    5) Mel-Frequency Cepstral Coefficient: MFCC [75] is a vocal feature well known for its good performance in speech recognition [76]. The procedure to calculate MFCC is highly related to the vocalism principle and is also able to discard redundant information that the voice carries, e.g., the background noise, emotion, and many others. We intend to test this pure and essential feature, which reflects how the sound was generated. We calculated the MFCC based on the following steps.

    First, we calculate the power spectrum by computing the fast Fourier transform (FFT) of each frame. The motivating idea comes from the concept of how our brain understands sound. The cochlea in the ear converts sound waves, which cause vibrations in different spots, into electrical impulses to inform the brain that certain frequencies are present. Usually, only 256 points are kept from the 512 points of the FFT.

    Then, 20–40 (usually 26) triangular filters of the mel-spaced filterbank were applied to the power spectrum. This step simulates how the cochlea perceives the sound frequencies. The human ear is less sensitive to closely spaced frequencies, and discrimination becomes even harder as the frequency increases. This is why the triangular filter becomes wider as the frequency increases.

    Third, the logarithm was applied to the 26 filtered energies. This is also motivated by human hearing. We need to put 8 times more energy into a sound to double its loudness. Therefore, we used the logarithm to compress the features much closer to what humans actually hear.

    Finally, we compute the discrete cosine transform (DCT) of the logarithmic energies. In the previous step, the filterbanks were partially overlapped, which yields highly correlated filtered energies. The DCT was used to decorrelate the energies. Only 13 coefficients were kept as the final MFCC.
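    The four steps above can be sketched for a single frame as below; the 16 kHz sampling rate is assumed from the frame-size discussion, and pre-emphasis/windowing, common in practice, are omitted because the steps above do not mention them:

```python
import numpy as np

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mfcc(frame, fs=16000, nfft=512, nfilt=26, ncoef=13):
    frame = np.asarray(frame, dtype=float)
    # Step 1: FFT power spectrum (rfft keeps nfft/2 + 1 = 257 points).
    spec = np.abs(np.fft.rfft(frame, nfft)) ** 2 / nfft
    # Step 2: 26 triangular filters spaced evenly on the mel scale.
    mel_pts = np.linspace(hz_to_mel(0), hz_to_mel(fs / 2), nfilt + 2)
    bins = np.floor((nfft + 1) * mel_to_hz(mel_pts) / fs).astype(int)
    fbank = np.zeros((nfilt, nfft // 2 + 1))
    for i in range(nfilt):
        l, c, r = bins[i], bins[i + 1], bins[i + 2]
        fbank[i, l:c] = (np.arange(l, c) - l) / max(c - l, 1)  # rising edge
        fbank[i, c:r] = (r - np.arange(c, r)) / max(r - c, 1)  # falling edge
    energies = fbank @ spec
    # Step 3: logarithm of the filtered energies (small floor avoids log 0).
    log_e = np.log(energies + 1e-10)
    # Step 4: DCT-II to decorrelate; keep the first 13 coefficients.
    n = np.arange(nfilt)
    dct = np.cos(np.pi * np.outer(np.arange(ncoef), 2 * n + 1) / (2 * nfilt))
    return dct @ log_e
```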

    IV. Experimental Design

    The experiment was designed in the scenario that the robot asks questions as it meets with the participant. In the following, we introduce the experimental environment and the machine learning methods used.

    A. Experimental Setup

    The relationship between people’s professions and personality traits was investigated in [19]. In our study, all the participants were recruited from the Japan Advanced Institute of Science and Technology. Therefore, the relationship between professions and personality traits was not considered. On the other hand, the interactions between participants and the robot were assumed to be casual everyday conversations. Specifically, each participant sits at a table with his/her forearms resting on the tabletop and talks with the robot. The participants did not have any strenuous exercise before they were invited to the experiment.

    The experimental setup is shown in Fig. 4. Each participant was asked to sit at a table in front of the robot standing 1.5 m to 1.7 m away in a separate room. Only the upper part of the participant’s body was observable from the robot’s on-board camera that extracts the visual features. The robot keeps asking questions one after another. The participant was asked to respond to each question using his/her habitual gestures. As mentioned in Section III, the participants were free to use any language (such as English, Italian, Chinese, and Vietnamese) to communicate with the robot.

Fig. 4. Details of the experimental setup.

We recruited 15 participants for the study; however, 3 of them were too nervous during the experiment and looked at the experimenter frequently. They were therefore excluded, and our database contains the data of 12 participants with a total duration of 2000 s. One convenient way to infer personality traits is to use a fixed time length: once the robot has collected enough data, it can infer the personality traits. We therefore divided the data into 30-s clips; a clip may contain data from several different sentences. Each clip overlaps with the previous one, which allowed us to generate more data and improve generalization.
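The overlapping segmentation can be sketched as a sliding window over the per-second feature sequence. The 10-s hop below is an assumption for illustration; the paper does not state the exact stride of the overlap.

```python
import numpy as np

def overlapping_clips(per_second_features, clip_len=30, hop=10):
    """Split a per-second feature sequence into 30-s clips, each
    overlapping its predecessor (hop length is an assumed parameter)."""
    starts = range(0, len(per_second_features) - clip_len + 1, hop)
    return np.stack([per_second_features[s:s + clip_len] for s in starts])

seq = np.arange(120)            # 120 s of (scalar) per-second features
clips = overlapping_clips(seq)  # 10 clips, each sharing 20 s with the last
```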

The flow chart in Fig. 5 shows the architecture for extracting features. The robot first detects whether there is a person to talk to. Then the robot sequentially selects a question from a file “questions.txt” and uses its speech synthesizer to start the conversation. Meanwhile, the robot also extracts the visual features every second. Even after the robot finishes asking a question, the vocal and visual feature extraction continues while the participant responds. The participants were instructed to stop talking for 5 s to let the robot know that it may ask the next question.
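The pipeline above can be summarized as a simple loop. Here `speak`, `extract_features`, and `silence_seconds` stand in for the robot's speech synthesizer and sensor APIs; they are hypothetical names for this sketch, not part of the Pepper SDK.

```python
def interaction_loop(questions, speak, extract_features, silence_seconds):
    """One question at a time: speak it, then keep extracting features
    (about once per second) until 5 s of silence ends the answer."""
    features = []
    for question in questions:          # e.g. read from "questions.txt"
        speak(question)
        while silence_seconds() < 5:    # feature extraction continues
            features.append(extract_features())
    return features

# dry run with stub sensors: silence is reached every third poll
spoken = []
polls = {"n": 0}
def silence():
    polls["n"] += 1
    return 5 if polls["n"] % 3 == 0 else 0

feats = interaction_loop(["Q1", "Q2"], spoken.append, lambda: 1.0, silence)
```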

Fig. 5. The pipeline for feature extraction.

TABLE III Averaged Accuracies for Big Five Personality Traits (Ridge Regression Classifier)

TABLE IV Averaged Accuracies for Big Five Personality Traits (Linear SVM Classifier)

B. Classification Model

In [63] and [77], the authors summarized different methods used for the prediction of leadership style, such as logistic regression [49], [56], [78], rule-based methods [79], Gaussian mixture models [80], and support vector machines [49], [56], [81]. Ridge regression and linear SVM were both used in [49], [56]. We opted to apply the same methods in our study to allow a simple comparison. Cross-validation was used to find the optimal regression parameters. The following formulas were used to calculate the regression parameters:

$w = (X^{\top}X + \lambda I)^{-1} X^{\top} y$ (8)

where $X$ is the feature matrix, $I$ is an identity matrix, $y$ is the binarized label of the personality traits, and $\lambda$ is the ridge parameter calculated using the following equation:

In (9), $i$ is an integer, indicating that each regression model is run 30 times to optimize the regression parameter. As we used the regression model to perform a classification task, we report accuracy rather than the mean squared error, which gives more meaningful results.
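The closed-form ridge solution and the accuracy-based parameter sweep can be sketched as follows. The grid of 30 powers of two for $\lambda$ is our assumption, since the exact form of Eq. (9) defining $\lambda(i)$ is not reproduced here.

```python
import numpy as np

def ridge_fit(X, y, lam):
    """Closed-form ridge weights: w = (X^T X + lam*I)^(-1) X^T y."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

def ridge_accuracy(X, y, lam):
    """Accuracy of the sign of the ridge prediction on binarized labels."""
    return float(np.mean(np.sign(X @ ridge_fit(X, y, lam)) == y))

rng = np.random.default_rng(0)
X = rng.normal(size=(40, 6))
y = np.sign(X @ rng.normal(size=6))     # synthetic binarized labels
# sweep 30 candidate ridge parameters, mirroring the 30 runs per model
best = max(ridge_accuracy(X, y, 2.0 ** (i - 15)) for i in range(30))
```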

SVM can perform linear or nonlinear classification tasks using different types of kernel functions, although training an SVM classifier takes longer than ridge regression. From Table III, it can be noticed that the binary features showed no advantage with ridge regression; therefore, the binary features were discarded for the SVM. We then trained an SVM classifier with the linear kernel, with the penalty parameter $C$ of the error term chosen from [0.1, 0.4, 0.7, 1]. Each SVM classifier was thus trained 4 times based on the formulation in [82], shown in the following equation:

$\min_{w,b} \frac{1}{2}\|w\|^{2} + C\sum_{i=1}^{n}\max\left(0,\, 1 - y_i(w^{\top}x_i + b)\right)$

The leave-one-out method was used to evaluate the performance of ridge regression and linear SVM. The results of the linear SVM are presented in Table IV.
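The leave-one-out protocol can be sketched generically and then instantiated with either classifier. For brevity this sketch holds out single feature rows; holding out all clips of one participant at a time would be the stricter variant.

```python
import numpy as np

def loo_accuracy(X, y, train, predict):
    """Leave-one-out: predict each sample with a model trained on
    all remaining samples, then average the hits."""
    n = len(y)
    hits = 0
    for i in range(n):
        mask = np.arange(n) != i
        model = train(X[mask], y[mask])
        hits += predict(model, X[i]) == y[i]
    return hits / n

# plug in the ridge classifier as the model under evaluation
train = lambda X, y: np.linalg.solve(X.T @ X + np.eye(X.shape[1]), X.T @ y)
predict = lambda w, x: np.sign(x @ w)
rng = np.random.default_rng(1)
X = rng.normal(size=(30, 4))
y = np.sign(X @ rng.normal(size=4))
acc = loo_accuracy(X, y, train, predict)
```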

V. Experimental Results

A. Classification Results

    B. Regression

For the ridge regression model, we used the average personality trait scores calculated from the questionnaire, ranging from 1 to 5. To evaluate the regression model, we calculated mean squared error (MSE) values and $R^2$, known as the coefficient of determination, which is used to evaluate the goodness of fit of a regression model [49]. The maximum $R^2$ values for conscientiousness and openness were smaller than 0.1; therefore, we only present the $R^2$ values of extroversion, agreeableness, and emotional stability in Table V. We calculated $R^2$ based on

$R^2 = 1 - \frac{\sum_{i}(y_i - \hat{y}_i)^2}{\sum_{i}(y_i - \bar{y})^2}$
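The coefficient of determination used above reduces to a one-liner; as a sanity check, a perfect fit scores 1 and the mean predictor scores 0.

```python
import numpy as np

def r_squared(y_true, y_pred):
    """R^2 = 1 - SS_res / SS_tot: goodness of fit of the regression."""
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - np.mean(y_true)) ** 2)
    return 1.0 - ss_res / ss_tot

scores = np.array([2.0, 3.5, 4.0, 1.5, 5.0])  # questionnaire scores in [1, 5]
```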

In Table V, the best classification results for the three personality traits were obtained by the features with the highest $R^2$ values, marked in bold.

The MSE values are given in Figs. 6–10. In order to show the changes in the MSE values more clearly, we only plot $i$ (the parameter for calculating the ridge parameter $\lambda$ from (9)) from 0 to 16. The variables shown in Figs. 6–10 are denoted by the two-letter abbreviation of the personality trait followed by the feature name (refer to Table II; MFCC6 is the 6th MFCC vector). Figs. 6, 7, and 9, for extroversion, agreeableness, and emotional stability, also show that the feature with the smallest MSE value achieved the best classification result. For the other two traits, conscientiousness and openness, the differences were not as pronounced as for the aforementioned three traits.

VI. Conclusion and Future Work

In this paper, we have proposed a new computational framework to enable a social robot to assess the personality traits of the user it is interacting with. First, the user’s nonverbal features were defined to be as easily obtainable as possible and were extracted from video and audio collected with the robot’s on-board camera and microphone. By doing so, we decreased the computational cost of the feature extraction stage, yet the features provided promising results in the estimation of the Big Five personality traits. Moreover, the proposed framework is generic and applicable to a wide range of off-the-shelf social robot platforms. To the best of our knowledge, this is the first study to show how the visual features can be extracted from the first-person perspective, which could be the reason that our system outperformed the previous studies. Notably, the MFCC feature was beneficial for assessing each of the Big Five personality traits. We also found that extroversion appeared to be the hardest trait to assess. One reason could be the current experimental settings, where the participants sat at a table with their forearms resting on the tabletop, which limited their body movements. Another reason could be the ambiguous relationship between the participants and the robot, which made the participants hesitate to express themselves as naturally as they do in everyday situations.

TABLE V The Maximum Values of R2 of the Regression Results for Extroversion, Agreeableness, and Emotional Stability

Fig. 6. MSE values of the ridge regression for inferring extroversion.

Fig. 10. MSE values of the ridge regression for inferring openness.

Fig. 7. MSE values of the ridge regression for inferring agreeableness.

Fig. 8. MSE values of the ridge regression for inferring conscientiousness.

Fig. 9. MSE values of the ridge regression for inferring emotional stability.

Each feature showed its advantage in a different aspect. However, there is no standard way of drawing a conclusion that declares the user’s personality traits; therefore, one direction for future work is to find an efficient way to fuse the multi-modal features. On the other hand, personality traits can be better understood through frequent and long-term interaction, which means the system should be able to update its understanding of the user’s personality traits whenever the robot interacts with its user. It is also necessary to evaluate the engagement between the human and the robot, as well as the human’s attitude toward the robot, since the user’s behavior can become erratic when the user loses interest in interacting with the robot. Finally, in order to achieve the best possible classification performance, more sophisticated machine learning models need to be incorporated.
