
    Understanding Nonverbal Communication Cues of Human Personality Traits in Human-Robot Interaction

IEEE/CAA Journal of Automatica Sinica, 2020, Issue 6

    Zhihao Shen, Armagan Elibol, and Nak Young Chong,

Abstract—With the increasing presence of robots in our daily life, there is a strong demand for strategies that achieve high-quality interaction between robots and users by enabling robots to understand the user's mood, intention, and other states. During human-human interaction, personality traits have an important influence on human behavior, decisions, mood, and many other factors. Therefore, we propose an efficient computational framework to endow a robot with the capability of understanding the user's personality traits based on the user's nonverbal communication cues, represented by three visual features, namely the head motion, gaze, and body motion energy, and three vocal features, namely the voice pitch, voice energy, and mel-frequency cepstral coefficients (MFCC). We used the Pepper robot in this study as a communication robot that interacts with each participant by asking questions while extracting the nonverbal features from each participant's habitual behavior using its on-board sensors. In parallel, each participant's personality traits are evaluated with a questionnaire. We then train ridge regression and linear support vector machine (SVM) classifiers using the nonverbal features and the personality trait labels obtained from the questionnaire, and evaluate the performance of the classifiers. We have verified the validity of the proposed models, which showed promising binary classification performance in recognizing each of the Big Five personality traits of the participants based on individual differences in nonverbal communication cues.

    I. Introduction

    WITH the population aging and sub-replacement fertility problems increasingly prominent, many countries have started promoting robotic technology for assisting people toward a better life. Various types of robotic solutions have been demonstrated to be useful in performing dangerous and repetitive tasks which humans are not able to do, or do not prefer to do. In relation to elderly care provision, assistive robots could replace and/or help human caregivers support the elderly socially in their home or residential care environments.

Researchers have gradually realized that the interaction between a human user and a robot goes far beyond sending commands to the robot or reprogramming it, as a new class of social robots is emerging in our daily life. It is now widely understood that not only the robot's appearance but also its behaviors are important for human-robot interaction [1], [2]. Therefore, synchronized verbal and nonverbal behaviors [3] were designed and applied to a wide variety of humanoid robots, such as Pepper, NAO, ASIMO, and many others, to improve the user's engagement in human-robot interaction. For instance, the Honda ASIMO robot can perform various movements of the arms and hands including metaphoric, iconic, and beat gestures [4]. Likewise, some researchers have designed such gestures using the SAIBA framework [5] for virtual agents. The virtual agents were interfaced with the NAO robot to model and perform combined, synchronized verbal and nonverbal behavior. In [6], the authors tested combined verbal and nonverbal gestures on a 3D virtual agent, MAX, to make the agent act like humans. Meanwhile, cultural factors are also considered to be crucial components in human-robot interaction [7]. In [8], the authors designed emotional bodily expressions for the Pepper robot and enabled the robot to learn emotional behaviors from the interacting person. Further investigations on the influence of the robot's nonverbal behaviors on humans were conducted in [9]. These efforts were made to enable robots to act like humans. However, such synchronized behaviors are unilateral movements with which robots merely track the person's attention. Therefore, the authors in [10] argued that social robots not only need to act or look like humans, but, more importantly, need to be capable of responding to the person with synchronized verbal and nonverbal behavior based on his/her personality traits. Inspired by their insight in [10], we aim to develop a computational framework that allows robots to understand the user's personality traits through their habitual behavior. Eventually, it would be possible to design a robot that adapts its combined verbal and nonverbal behavior toward enhancing the user's engagement with the robot.

    A. Why Are the Personality Traits Important During the Interaction?

In [11], the authors investigated how personality traits affect humans throughout their whole life. Personality traits encompass relatively enduring patterns of human feelings, thoughts, and behaviors, which make individuals different from one another. In human-human conversational interaction, the speaker's behavior is affected by the speaker's personality traits, and the listener's personality traits also affect their attitude toward the speaker. If their behaviors make each other feel comfortable and satisfied, they will enjoy talking to each other. In social science research, there have been different views on the importance of interpersonal similarity and attraction. Some people tend to be attracted to other people with similar social skills, cultural background, personality, attitude, and so on [12], [13]. Interestingly, in [14], the authors addressed complementary attraction, namely that some people prefer to talk with other people whose personality traits are complementary to their own. Therefore, we believe that if the robot is able to understand the user's coherent social cues, it would improve the quality of human-robot interaction, depending on the user's social behavior and personality.

In previous studies, the relationships between the user's personality traits and the robot's behavior were investigated. It was shown in [15] that humans are able to recognize the personality of a voice synthesized by digital systems and computers. Also, in [16], a compelling question was explored regarding whether people are willing to trust a robot in an emergency scenario depending on their personality. Along these lines, a strong correlation between the personality traits of users and the social behavior of a virtual agent was presented in [17]. In [18], the authors designed robots with distinct personalities to interact with humans, and a significant correlation between human and robot personality traits was revealed. Their results showed how the participants' technological background affected the way they perceived the robot's personality traits. Also, the relationship between profession and personality was investigated in [19]. The result conforms with our common sense, for example, that doctors and teachers tend to be more introverted, while managers and salespersons tend to be more extroverted. Furthermore, the authors investigated how humans perceive the NAO robot with different personality traits (introversion or extroversion) when the robot plays different roles in human-robot interaction [20]. However, their results were not in accordance with our common sense: the robot seemed smarter to the human when it acted as an introverted manager or an extroverted teacher. On the contrary, the extroverted manager and introverted teacher robots were not perceived as intelligent by the participants. These two results conflict with each other. This could be due to the fact that people treat and perceive robots differently from humans in the aforementioned settings. Another reason could be that the introverted manager robot looked more deliberate, because it took more time to respond, while the extroverted teacher robot looked more erudite, because it took less time to respond during the interaction. Even though these two studies found conflicting results, they imply the importance of robot personality traits in designing professional roles for human-robot interaction.

In light of the previous studies on personality match in human-robot interaction, some of the findings are inconsistent with each other. The results in [21] indicated that participants enjoyed interacting with the AIBO robot more when the robot had a personality complementary to their own, while the conclusions in [22] showed that participants were more comfortable when they interacted with a robot with a personality similar to theirs. Similarly, engagement and its relation to personality traits during human-robot interaction were analyzed in [23], where the participants' personality traits played an important role in evaluating individual engagement. The best result was achieved when both the participant and the robot were extroverted; notably, when both the participant and the robot were introverted, the performance was the worst. Although the complementary and similarity attraction theories may need further exploration in the future, these studies clearly showed how important personality traits are in human-robot interaction.

On the other hand, personality traits have been shown to have a strong connection with human emotion. In [24], the authors discussed how personality and the mind model influence human social behavior. A helpful analogy for explaining the relationship between personality and emotion is "personality is to emotion as climate is to weather" [25]. Therefore, theoretically, once the robot is able to understand the user's personality traits, it would be very helpful for the robot to predict the user's emotional fluctuations.

    Fig. 1 illustrates our final goal by integrating the proposed model of inferring human personality traits with the robot’s speech and behavioral generation module. The robot will be able to adjust its voice volume, speed, and body movements to improve the quality of human-robot interaction.

    Fig. 1. Integrating proposed model of inferring human personality traits into robot behavior generation.

    B. Architecture for Inferring Personality Traits in Human-Robot Interaction

In the sub-section above, the importance of personality traits in human-human and human-robot social interactions is clearly stated. Here we propose our computational framework for enabling the robot to recognize the user's personality traits based on their visual and vocal nonverbal behavior cues. This paper is built upon our preliminary work in [26].

In this study, the Pepper robot [27], equipped with two 2D cameras and four microphones, interacts with each participant. In the previous research on the Emergent LEAder corpus (ELEA) [28], when recording the video of a group meeting, the camera was set in the middle of the desk to capture each participant's facial expression and upper body movement. Reference [23] also used an external camera to record individual and interpersonal activities for analyzing engagement in human-robot interaction. However, we do not use any external devices, for two reasons: first, we attempt to make sure that all audio-visual features are captured from the first-person perspective, ensuring that the view from the robot is closely similar to that of a human; second, if the position and pose of an external camera change for some reason, it would yield a significant difference in the visual features. Thus, we use Pepper's forehead camera only.

Fig. 2 briefly illustrates our experimental protocol, which consists of the nonverbal feature extraction and the machine learning model training. Part a): all participants, recruited from the Japan Advanced Institute of Science and Technology, were asked to communicate with the Pepper robot. The robot keeps asking questions related to the participant, and each participant answers the questions. The participants are supposed to reply to the robot's questions with their habitual behavior. Before or after interacting with the robot, each participant was asked to fill out a questionnaire to evaluate their personality traits. The personality trait scores were binarized to perform the classification task. Part b): we extracted the participants' audio-video features, which include the head motion, gaze, body motion energy, voice pitch, voice energy, and MFCC, during the interaction. Part c): the nonverbal features and personality trait labels are used to train and test our machine learning models.

Fig. 2. Experimental protocol for inferring human personality traits.

To the best of our knowledge, this is the first work that shows how to extract the user's visual features from the robot's first-person perspective, as well as the prosodic features, in order to infer the user's personality traits during human-robot interaction. In [29], the nonverbal cues were extracted from the participant's first-person perspective and used to analyze the relationship between the participant and robot personalities. With our framework, the robot is endowed with the capability of understanding human personalities during face-to-face interaction. Without using any external devices, the proposed system can be conveniently applied to any type of environment.

The rest of this paper is organized as follows. Section II explains the personality trait model used, corresponding to Part a). Section III explains why we used nonverbal features, and which nonverbal features were used for recognizing the participant's personality traits, corresponding to Part b). Section IV presents the technical details of our experiments. Section V is devoted to experimental results and analysis, corresponding to Part c). Section VI draws conclusions.

II. Personality Traits

Based on the definition in [30], [31], personality traits have a strong long-term effect in generating a human's habitual behavior: "the pattern of collective character, behavioral, temperamental, emotional, and mental traits of an individual that is consistent over time and situations".

In most existing studies on personality traits, researchers have proposed many different personality models, including the Myers-Briggs model (extroversion-introversion, judging-perceiving, thinking-feeling, and sensation-intuition) [32]; the Eysenck model of personality (PEN) (psychoticism, extroversion, and neuroticism) [33]; and the Big-Five personality model (extroversion, openness, emotional stability, conscientiousness, agreeableness) [34], [35]. The Big-Five personality traits are the most common descriptor of human personality in psychology. In [36], [37], the authors investigated the relationship between the Big-Five personality trait model and nonverbal behaviors. We also use the Big-Five personality trait model in this study. Table I gives intuitive descriptions of the Big-Five personality traits.

TABLE I Big-Five Personality Traits

As personality traits have become more popular in the last few decades [38], various questionnaires were proposed in the literature for the assessment of the human Big-Five personality traits. The most popular format of questionnaire is the Likert scale: the ten item personality inventory (TIPI), which has 10 items, each rated on a 7-point scale [39]; the revised NEO personality inventory (NEO PI-R), which contains 240 items [40]; the NEO five-factor inventory (NEO-FFI), a shortened version of the NEO PI-R, which comprises 60 items [41]; and the international personality item pool (IPIP) Big-Five Factor Markers, which has been simplified to 50 questions [42]. We used the IPIP questionnaire in this paper, and all participants were asked to fill out the questionnaire to evaluate their Big-Five personality traits. The IPIP questionnaire is relatively easy to answer and does not take much time to complete.

Specifically, the participants are asked to rate the extent to which they agree/disagree with the personality questionnaire items on a five-point scale. A total of 50 questions are divided into ten questions for each of the Big-Five traits, and the questions include both reverse-scored and positive-scored items. For the reverse-scored items, strongly disagree equals 5 points, neutral equals 3 points, and strongly agree equals 1 point; for the positive-scored items, strongly disagree equals 1 point, neutral equals 3 points, and strongly agree equals 5 points. After the participants rate themselves for each question, each personality trait is represented by the mean score of its 10 questions. We did not use the 1–5 scale directly to represent the participant's personality traits. Instead, the personality traits are binarized using the mean score of all participants as a cutoff point to indicate whether the participant has a high or low level of each of the Big-Five traits. For instance, if a participant's extroversion trait was rated 2, which is less than the average value of 2.8, then this participant is regarded as introverted and his/her trait score will be re-assigned to 0. Then, we used the binary labels to train our machine learning models and evaluate the classification performance accordingly.
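
As an illustration, the scoring and binarization described above can be sketched in Python as follows; the item-index bookkeeping (which questions belong to which trait and which are reverse-scored) is left as placeholder arguments, since the concrete IPIP item assignments are not reproduced here.

```python
import numpy as np

def score_big_five(responses, trait_items, reverse_items):
    """Convert 1-5 Likert responses into per-trait mean scores.

    responses     : (n_participants, 50) array of raw answers in 1..5
    trait_items   : dict mapping trait name -> list of 10 item indices
    reverse_items : iterable of item indices that are reverse-scored
    """
    scored = responses.astype(float).copy()
    for j in reverse_items:                   # reverse-scored: 1 <-> 5, 2 <-> 4
        scored[:, j] = 6 - scored[:, j]
    return {t: scored[:, idx].mean(axis=1) for t, idx in trait_items.items()}

def binarize(trait_scores):
    """Label each participant high (1) / low (0) using the group mean as cutoff."""
    return {t: (s >= s.mean()).astype(int) for t, s in trait_scores.items()}
```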

III. Feature Representation

It is known that personality traits encompass a human's feelings, thoughts, and behaviors. The question to be investigated then arises: how can human personality traits be inferred from verbal and nonverbal behaviors?

    A. Related Work on Verbal and Nonverbal Behaviors

The influence of personality traits on linguistic speech production has been addressed in previous works [43], [44]. The user's daily habits were investigated to ascertain whether they are related to the user's personality traits. Changes in facial expression were also used to infer personality traits, as proposed in [45]. In [46], the participants were asked to use the electronically activated recorder (EAR) to record their daily activities, which included locations, moods, language, and many others, to verify the manifestations of personality. Moreover, the authors of [47] investigated how written language reflects human personality style based on daily diaries, assignments, and journal abstracts. More specific details were presented in [48]. In that study, two corpora containing 2479 essays and 15 269 utterances, totaling more than a million words, were categorized and used to analyze the relation to each participant's Big-Five personality traits. Although a participant's verbal information can be used to analyze their personality traits based on Pennebaker and King's work [47], it should be noted that categorizing so many words would be an arduous task. In [49], the authors noted that language differences could influence the annotators' impressions of the participants. Therefore, they asked three annotators to watch the video that was recorded in the meeting without audio and to annotate the personality traits of each participant. Notably, the issue of conversational error was addressed in [50], where the error caused a loss of trust in the robot during human-robot interaction. In light of the aforementioned studies, the participants in our study were free to use any language to talk with the robot. It can generally be said that nonverbal behavior is the better choice in this study.

On the other hand, it is desirable that the robot can change its distance to the user depending on a variety of social factors, leveraging a reinforcement learning technique as in [51]. In [52], the authors also used the changes in the distance between the robot and the participant as one of their features for predicting the participant's extroversion trait. Similarly, the authors of [53] proposed a model for automatic assessment of human personality traits using body postures, head pose, body movements, proximity information, and facial expressions. The results in [54] also revealed that extroverts accept people coming closer than introverts do. However, the proxemics feature was not considered in our study, as the human-robot distance remains unchanged in our communicative interaction settings.

In the related research on inferring human personality traits, a variety of fascinating multimodal features were proposed. In [36], [55], the authors used vocal features to infer personality traits. In [37], they used vocal and simple visual features to recognize the personality traits based on the MS-2 (Mission Survival 2) corpus. References [49] and [56] detailed how to infer personality traits in a group meeting. They used the ELEA corpus, and the participants' personality traits were annotated by external observers. Meanwhile, the participants' vocal and visual features such as voice pitch, voice energy, head movement, body movement, and attention were extracted from audio and videos. Similar features were used in [57] to infer personality traits from YouTube video blogs. Convolutional neural networks were also applied to predict human personality traits based on an enormous database that contains video, audio, and text information from YouTube vlogs [58], [59]. In [60], [61], the authors explained a nonverbal feature extraction approach to identifying emergent leaders. The nonverbal features used to infer the emergent leaders included prosodic speech features (pitch and energy), visual features (head activity and body activity), and motion template-based features. In [62], [63], the frequently-used audio and visual nonverbal features in existing research were summarized for predicting the emergent leader or personality traits. Similarly, a method was proposed in [64] for identifying a human's confidence during human-robot interaction using the sound pressure, voice pitch, and head movement.

In the previous studies [37], [49], [60], [62], the authors used statistical features and activity-length features. Since personality traits are long-term characteristics that affect people's behaviors, they believed that statistical features can well represent the participants' behaviors. Similar nonverbal features were used in our study. However, we believe that the state transitions of the nonverbal behaviors or features are also important for understanding a human's personality traits. The study in [56] proposed co-occurrence features to indicate movements of other participants that happened at the same time. Hence, in our study, the raw form and time-series based features of the visual and vocal nonverbal behavior were used to train the machine learning models.

    B. Nonverbal Feature Representation

Taking into account the findings of the aforementioned studies, we intend to extract similar features from the participants' nonverbal behaviors. Nonverbal behaviors include vocal and visual behaviors. Table II shows the three visual features, including the participant's head motion, gaze score, and upper body motion energy, as well as the three vocal features, including the voice pitch, voice energy, and mel-frequency cepstral coefficient (MFCC).

TABLE II Nonverbal Feature Representation

In our basic human-robot interaction scenario, it is assumed that the participant talks to the robot using gestures in the same way a person talks to a person. Therefore, the participant's visual features can be extracted using the robot's on-board camera while the participant or the robot talks. Note that, in Table II, some of the visual features (HM2, GS2, and ME2) are extracted when the participant listens to the robot asking four simple questions. The total time duration was too short to capture enough data to train our machine learning models. Therefore, we did not use these three features in our study.

1) Head Motion: An approach to analyze the head activity was proposed in [60]. The authors applied optical flow on the detected face area to decide whether the head was moving or not. Based on the head activity states, they were able to understand when and for how long the head moved. We followed the method proposed in [65]. First, every frame captured by Pepper's forehead camera was scanned to extract sub-windows. The authors in [65] trained 60 detectors based on left-right out-of-plane rotation and in-plane rotation angles, and each detector contains many layers that are able to estimate the head pose and detect a human face. Each sub-window was used as an input to each detector, which was trained on a set of faces with a specific angle. The output provides the 3D head pose (pitch, yaw, and roll) as shown in the left image of Fig. 3. In this study, the pitch angle covers [–90°, 90°], the roll angle covers [–45°, 45°], and the yaw angle covers [–20°, 20°]. Then the Manhattan distance between every two adjacent head poses was used to represent the participant's head motion. Let $\alpha$, $\beta$, and $\gamma$ denote the pitch, yaw, and roll angles, respectively. Then the head motion (HM1) can be calculated by the following equation:

$$HM1_i = |\alpha_{i+1} - \alpha_i| + |\beta_{i+1} - \beta_i| + |\gamma_{i+1} - \gamma_i|$$

where $i$ and $i+1$ are two consecutive frames at a 1 s time interval.
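
A minimal NumPy sketch of this head-motion feature follows, assuming the 3D head-pose estimates (pitch, yaw, roll) are already available once per second; the head-pose detector of [65] itself is not reproduced.

```python
import numpy as np

def head_motion(angles):
    """HM1: Manhattan distance between consecutive head poses.

    angles : (n_frames, 3) array of (pitch, yaw, roll) in degrees,
             sampled at 1 s intervals.
    Returns an (n_frames - 1,) array of per-second head-motion values.
    """
    return np.abs(np.diff(angles, axis=0)).sum(axis=1)
```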

Fig. 3. Visual features (the left image illustrates the 3D head angles, and the right image shows the differing pixels obtained by overlapping two consecutive frames).

2) Gaze Score: In [66], the influence of gaze in small-group human interaction was investigated. Previous studies used the visual focus of attention (VFOA) to represent the participant's gaze direction in group discussions [61]. However, a high-resolution image is required for the analysis of the gaze direction, which would tremendously increase the computational cost. In our experiment, the participant sits at a table in front of the robot, positioned 1.5 m to 1.7 m away. In practice, the calculation of gaze direction might not be feasible, if we consider the image resolution and the distance, since the eyes occupy only a few pixels in the image. As the head pose and gaze direction are highly related to each other [67], an efficient way of calculating the gaze direction based on the head orientation was proposed in [68]. Therefore, we used the head direction to estimate the gaze direction, which is highly related to the head yaw and pitch angles. In the real experimental environment, we found that the face was hardly detected when the facial plane deviation exceeds 20°. When the participant faces the robot's forehead camera, the tilt/pan angle is 0°. Therefore, we measure the Euclidean distance from the frontal pose to the head yaw and pitch angles. Then, the full range (distance) of tilt/pan angles [0°, 20°] is normalized to 0 to 1. Finally, the normalized score between 0 and 1 is used as the gaze score, which indicates the confidence that the participant is looking at the robot. If we denote by $\alpha$ and $\beta$ the head pitch and yaw angles, respectively, the gaze score of frame $i$ can be calculated by the following equation:

$$GS_i = 1 - \frac{\sqrt{\alpha_i^2 + \beta_i^2}}{\sqrt{\alpha_{\max}^2 + \beta_{\max}^2}}$$

where $\alpha_{\max}$ and $\beta_{\max}$ represent the maximum degrees of the head pitch and yaw angles, respectively.
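
A small Python sketch of this gaze score follows, assuming the 20° limits discussed above for both the pitch and yaw deviations.

```python
import numpy as np

def gaze_score(pitch, yaw, pitch_max=20.0, yaw_max=20.0):
    """Confidence (0..1) that the participant is looking at the robot.

    The Euclidean distance of (pitch, yaw) from the frontal pose (0, 0)
    is normalized by the maximum detectable deviation and inverted, so
    1.0 means the participant faces the camera directly.
    """
    dist = np.hypot(pitch, yaw)
    dist_max = np.hypot(pitch_max, yaw_max)
    return float(np.clip(1.0 - dist / dist_max, 0.0, 1.0))
```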

3) Motion Energy: Motion energy images [69], [70] were used in previous studies to describe body motion. The basic idea is to count the number of differing pixels between every two consecutive frames. We applied the same idea to calculate the ratio of differing pixels between every two frames. The right image of Fig. 3 shows an example of the differing pixels between two frames. This method is simple and effective. However, it requires the image to have a stationary background and a fixed distance between the robot and each participant. Otherwise, changes in the background will be perceived as the participant's body movement, and the number of differing pixels will increase if the participant sits closer to the robot. Now, all three visual features were calculated and normalized over the whole database, denoted by HM1, GS1, and ME1. The binary features mentioned in Table II are the binarized versions of HM1, GS1, and ME1, which were simply calculated by checking whether the value is larger than 0 or not.
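
A simple Python sketch of this frame-differencing motion energy follows; the intensity-change threshold deciding whether a pixel counts as "different" is an assumed value, not one stated in the paper.

```python
import numpy as np

def motion_energy(prev_frame, curr_frame, threshold=15):
    """ME1: ratio of pixels that changed between two consecutive frames.

    prev_frame, curr_frame : grayscale images as uint8 arrays of equal shape.
    threshold              : minimum absolute intensity change (assumed value)
                             for a pixel to count as "different".
    """
    diff = np.abs(curr_frame.astype(int) - prev_frame.astype(int))
    return float((diff > threshold).mean())
```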

4) Voice Pitch and Energy: Vocal behavior is another important feature when humans express themselves. Pitch and energy are two well-known vocal features and are commonly used in emotion recognition. Pitch, which is generated by the vibration of the vocal cords, is perceived as the fundamental voice frequency. There are many different methods to track the voice pitch. For instance, the average magnitude difference function (AMDF) [71], simple inverse filter tracking (SIFT) [72], and the auto-correlation function (ACF) [73] are time domain approaches, while the harmonic product spectrum (HPS) [74] is a frequency domain approach. We used the auto-correlation function $R(\tau)$ given in (3) to calculate pitch:

$$R(\tau) = \sum_{t} x(t)\,x(t+\tau) \tag{3}$$

where $x(t)$ is the audio signal within one frame; the estimated pitch corresponds to the lag $\tau$ that maximizes $R(\tau)$ within the plausible pitch range.

In (5), $T$ is the time duration of the audio signal in one frame. Since the frame size used in this study is 800 samples, the time duration $T$ is 50 milliseconds.

Now the average short-term energy can be calculated by the following equation:

$$E = \frac{1}{N}\sum_{n=1}^{N} x(n)^{2}$$

where $x(n)$ is the $n$th sample and $N$ is the number of samples in one frame (800 in this study).
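
As a hedged illustration of these two vocal features, the following Python sketch estimates the pitch of one 50 ms frame with the auto-correlation function and computes its average short-term energy; the 16 kHz sampling rate (implied by 800 samples per 50 ms) and the pitch search range are assumptions.

```python
import numpy as np

def pitch_acf(frame, sample_rate=16000, fmin=60.0, fmax=400.0):
    """Estimate voice pitch of one frame with the auto-correlation function.

    frame : 1-D array of audio samples (e.g., 800 samples at 16 kHz = 50 ms).
    Searches for the lag with maximum auto-correlation inside an assumed
    plausible pitch range [fmin, fmax] Hz.
    """
    frame = frame - frame.mean()
    acf = np.correlate(frame, frame, mode="full")[len(frame) - 1:]   # lags 0..N-1
    lag_min = int(sample_rate / fmax)
    lag_max = int(sample_rate / fmin)
    lag = lag_min + int(np.argmax(acf[lag_min:lag_max]))
    return sample_rate / lag                                          # pitch in Hz

def short_term_energy(frame):
    """Average short-term energy of one frame."""
    return float(np.mean(np.square(frame.astype(float))))
```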

5) Mel-Frequency Cepstral Coefficient: The MFCC [75] is a vocal feature well known for its good performance in speech recognition [76]. The procedure to calculate MFCCs is closely related to the principles of vocal production and is also able to discard redundant information that the voice carries, e.g., background noise, emotion, and many others. We intend to test this pure and essential feature, which reflects how the sound was generated. We calculated the MFCC based on the following steps.

First, we calculate the power spectrum by computing the fast Fourier transform (FFT) of each frame. The motivating idea comes from how our brain understands sound. The cochlea in the ear converts sound waves, which cause vibrations at different spots, into electrical impulses that inform the brain that certain frequencies are present. Usually, only 256 points are kept from the 512-point FFT.

Then, 20–40 (usually 26) triangular filters of the mel-spaced filterbank are applied to the power spectrum. This step simulates how the cochlea perceives sound frequencies. The human ear is less sensitive to closely spaced frequencies, and discrimination becomes even harder as the frequency increases. This is why the triangular filters become wider as the frequency increases.

Third, the logarithm is applied to the 26 filtered energies. This is also motivated by human hearing: we need to put in about 8 times more energy to double the perceived loudness of a sound. Therefore, we used the logarithm to compress the features much closer to what humans actually hear.

Finally, we compute the discrete cosine transform (DCT) of the logarithmic energies. In the previous step, the filterbanks were partially overlapped, which produces highly correlated filtered energies. The DCT was used to decorrelate the energies. Only 13 coefficients were kept as the final MFCC.
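
The four steps above can be sketched in Python with NumPy and SciPy as follows; the mel-scale conversion formulas and the filterbank edge frequencies are standard textbook choices rather than values taken from the paper.

```python
import numpy as np
from scipy.fftpack import dct

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mfcc(frame, sample_rate=16000, n_fft=512, n_filters=26, n_coeffs=13):
    """Compute MFCCs of one audio frame following the steps in the text."""
    # 1) power spectrum: 512-point FFT, keep the non-negative frequency bins
    spectrum = np.abs(np.fft.rfft(frame, n_fft)) ** 2 / n_fft

    # 2) 26 triangular mel-spaced filters applied to the power spectrum
    mel_points = np.linspace(hz_to_mel(0), hz_to_mel(sample_rate / 2), n_filters + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mel_points) / sample_rate).astype(int)
    fbank = np.zeros((n_filters, n_fft // 2 + 1))
    for i in range(1, n_filters + 1):
        left, center, right = bins[i - 1], bins[i], bins[i + 1]
        fbank[i - 1, left:center] = (np.arange(left, center) - left) / max(center - left, 1)
        fbank[i - 1, center:right] = (right - np.arange(center, right)) / max(right - center, 1)
    energies = np.maximum(fbank @ spectrum, 1e-10)

    # 3) log compression, then 4) DCT de-correlation, keeping 13 coefficients
    return dct(np.log(energies), type=2, norm="ortho")[:n_coeffs]
```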

IV. Experimental Design

The experiment was designed around a scenario in which the robot asks questions as it meets the participant. In the following, we introduce the experimental environment and the machine learning methods used.

    A. Experimental Setup

The relationship between people's professions and personality traits was investigated in [19]. In our study, all participants were recruited from the Japan Advanced Institute of Science and Technology. Therefore, the relationship between professions and personality traits was not considered. On the other hand, the interactions between participants and the robot were assumed to be casual everyday conversations. Specifically, each participant sits at a table with his/her forearms resting on the tabletop and talks with the robot. The participants did not perform any strenuous exercise before they were invited to the experiment.

The experimental setup is shown in Fig. 4. Each participant was asked to sit at a table in front of the robot, which stood 1.5 m to 1.7 m away, in a separate room. Only the upper part of the participant's body was observable from the robot's on-board camera that extracts the visual features. The robot keeps asking questions one after another. The participant was asked to respond to each question using his/her habitual gestures. As mentioned in Section III, the participants were free to use any language (such as English, Italian, Chinese, or Vietnamese) to communicate with the robot.

Fig. 4. Details of the experimental setup.

We recruited 15 participants for the study; however, 3 of the participants were too nervous during the experiment and looked at the experimenter frequently. Therefore, they were excluded, and our database contains the data of 12 participants with a total duration of 2000 s. One convenient way to infer personality traits is to use a fixed time length: once the robot has enough data, it is able to infer the personality traits. Therefore, we divided the data into 30-second-long clips. A 30-second clip may contain data from different sentences. When dividing the data, each clip overlaps with the previous one, which allowed us to generate more data for better generalization.
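
A minimal sketch of this segmentation follows, assuming the per-second feature vectors are stacked row-wise; the shift between consecutive clips is an assumed value, since the paper only states that adjacent clips overlap.

```python
import numpy as np

def make_clips(features, clip_len=30, step=10):
    """Cut per-second feature rows into overlapping fixed-length clips.

    features : (n_seconds, n_features) array of per-second nonverbal features.
    clip_len : clip length in seconds (30 s in this study).
    step     : shift between consecutive clips; step < clip_len gives the
               overlap used to generate more training samples (assumed value).
    """
    clips = [features[s:s + clip_len]
             for s in range(0, len(features) - clip_len + 1, step)]
    return np.stack(clips) if clips else np.empty((0, clip_len, features.shape[1]))
```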

The flow chart in Fig. 5 shows the architecture for extracting features. The robot first detects whether there is a person to talk to. Then the robot sequentially selects a question from a file "questions.txt" and uses the speech synthesizer to start the conversation. Meanwhile, the robot also extracts the visual features every second. Even after the robot has finished asking its question, the vocal and visual feature extraction continues while the participant responds. The participants were instructed to stop talking for 5 s to let the robot know that it may ask the next question.
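
The following Python pseudostructure mirrors the pipeline of Fig. 5 at a high level; every robot.* call is a hypothetical placeholder (not a Pepper/NAOqi API), intended only to make the control flow explicit.

```python
import time

def interaction_loop(robot, questions, silence_timeout=5.0):
    """Pseudostructure of the Fig. 5 pipeline (all robot.* calls are placeholders).

    The robot waits for a person, asks the questions one by one, and keeps
    extracting visual features every second and recording audio until the
    participant has been silent for `silence_timeout` seconds.
    """
    while not robot.detect_person():            # wait until someone is present
        time.sleep(0.5)

    features = []
    for question in questions:                  # questions read from questions.txt
        robot.say(question)                     # speech synthesis
        silent_since = None
        while True:
            features.append(robot.extract_visual_features())   # once per second
            robot.record_audio_chunk()
            if robot.participant_is_speaking():
                silent_since = None
            elif silent_since is None:
                silent_since = time.time()
            elif time.time() - silent_since >= silence_timeout:
                break                           # 5 s of silence -> next question
            time.sleep(1.0)
    return features
```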

Fig. 5. The pipeline for feature extraction.

TABLE III Averaged Accuracies for Big Five Personality Traits (Ridge Regression Classifier)

TABLE IV Averaged Accuracies for Big Five Personality Traits (Linear SVM Classifier)

B. Classification Model

In [63] and [77], the authors summarized different methods used for the prediction of leadership style, such as logistic regression [49], [56], [78], rule-based methods [79], Gaussian mixture models [80], and support vector machines [49], [56], [81]. Ridge regression and linear SVM were both used in [49], [56]. We opted to apply the same methods in our study to allow a simple comparison. Cross-validation was used to find the optimal regression parameters. The closed-form ridge solution was used to calculate the regression parameters:

$$\mathbf{w} = \left(X^{T}X + \lambda I\right)^{-1} X^{T}\mathbf{y}$$

where $X$ is the feature matrix, $I$ is an identity matrix, $\mathbf{y}$ is the binarized label vector of the personality traits, and $\lambda$ is the ridge parameter, computed by equation (9) as a function of an integer $i$.

In (9), $i$ is an integer indicating that each regression model is executed 30 times to optimize the regression parameter. As we used the regression model to perform a classification task, we report the accuracy rather than the mean squared error, which gives more meaningful results.
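
A minimal NumPy sketch of this closed-form ridge solution and its use as a binary classifier follows; the grid of ridge parameters indexed by i is an assumption, since equation (9) is not reproduced here, and the 0.5 decision threshold is likewise assumed.

```python
import numpy as np

def ridge_fit(X, y, lam):
    """Closed-form ridge regression: w = (X^T X + lam * I)^(-1) X^T y."""
    n_features = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(n_features), X.T @ y)

def ridge_predict_label(X, w, threshold=0.5):
    """Turn the regression output into a binary personality label."""
    return (X @ w >= threshold).astype(int)

# Hypothetical sweep over 30 ridge parameters indexed by an integer i,
# mirroring the 30 runs described in the text; the mapping i -> lambda
# is an assumption, not the paper's equation (9).
lambdas = [2.0 ** (i - 15) for i in range(30)]
```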

An SVM performs linear or nonlinear classification depending on the kernel function used. It requires a longer time to train an SVM classifier than a ridge regression model. From Table III, it can be noticed that the binary features did not show an advantage in ridge regression. Therefore, the binary features were discarded for the SVM. We then trained an SVM classifier with the linear kernel, with the penalty parameter of the error term chosen from [0.1, 0.4, 0.7, 1]. Therefore, each SVM classifier was trained 4 times based on the soft-margin formulation in [82], shown in the following equation:

$$\min_{\mathbf{w},\, b,\, \boldsymbol{\xi}} \ \frac{1}{2}\|\mathbf{w}\|^{2} + C\sum_{i=1}^{n}\xi_{i} \quad \text{s.t.} \ \ y_{i}\left(\mathbf{w}^{T}\mathbf{x}_{i} + b\right) \ge 1 - \xi_{i}, \ \ \xi_{i} \ge 0$$

where $C$ is the penalty parameter of the error term.

The leave-one-out method was used to evaluate the performance of ridge regression and linear SVM. The results of the linear SVM are presented in Table IV.
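
A sketch of the linear-SVM training and leave-one-out evaluation using scikit-learn is shown below; the library choice is an assumption, as the paper does not name its implementation.

```python
import numpy as np
from sklearn.model_selection import LeaveOneOut
from sklearn.svm import SVC

def loo_accuracy(X, y, C):
    """Leave-one-out accuracy of a linear SVM with penalty parameter C."""
    correct = 0
    for train_idx, test_idx in LeaveOneOut().split(X):
        clf = SVC(kernel="linear", C=C)
        clf.fit(X[train_idx], y[train_idx])
        correct += int(clf.predict(X[test_idx])[0] == y[test_idx][0])
    return correct / len(y)

def best_linear_svm(X, y, C_grid=(0.1, 0.4, 0.7, 1.0)):
    """Try each penalty value and keep the one with the best LOO accuracy."""
    scores = {C: loo_accuracy(X, y, C) for C in C_grid}
    return max(scores, key=scores.get), scores
```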

V. Experimental Results

A. Classification Results

    B. Regression

For the ridge regression model, we used the average personality trait scores calculated from the questionnaire, ranging from 1 to 5. For evaluating the regression model, we calculated the mean squared error (MSE) values and $R^2$, the coefficient of determination, which is used to evaluate the goodness of fit of the regression model [49]. The maximum $R^2$ values for conscientiousness and openness are smaller than 0.1. Therefore, we only present the $R^2$ values of extroversion, agreeableness, and emotional stability in Table V. We calculated $R^2$ based on

$$R^{2} = 1 - \frac{\sum_{i}\left(y_{i} - \hat{y}_{i}\right)^{2}}{\sum_{i}\left(y_{i} - \bar{y}\right)^{2}}$$

where $y_{i}$, $\hat{y}_{i}$, and $\bar{y}$ denote the observed score, the predicted score, and the mean of the observed scores, respectively.
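
For completeness, a small NumPy sketch of these two evaluation measures:

```python
import numpy as np

def mse(y_true, y_pred):
    """Mean squared error of the regression predictions."""
    return float(np.mean((y_true - y_pred) ** 2))

def r_squared(y_true, y_pred):
    """Coefficient of determination R^2 = 1 - SS_res / SS_tot."""
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - np.mean(y_true)) ** 2)
    return float(1.0 - ss_res / ss_tot)
```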

In Table V, the best classification results for the three personality traits were obtained by the features with the highest $R^2$ values, marked in bold.

The MSE values are given in Figs. 6–10. In order to show the changes in the MSE values more clearly, we only show $i$ (the parameter for calculating the ridge parameter in (9)) from 0 to 16. The variables shown in Figs. 6 to 10 are labeled by the two capital letters abbreviating the personality trait and the feature name (refer to Table II; MFCC6 is the 6th MFCC vector). Figs. 6, 7, and 9, for extroversion, agreeableness, and emotional stability, also showed that the feature with the smallest MSE value achieved the best classification result. The differences for the other two traits, conscientiousness and openness, were not very obvious compared to the aforementioned three traits.

VI. Conclusion and Future Works

In this paper, we have proposed a new computational framework to enable a social robot to assess the personality traits of the user it is interacting with. First, the user's nonverbal features were defined to be as easily obtainable as possible and were extracted from video and audio collected with the robot's on-board camera and microphone. By doing so, we decreased the computational cost of the feature extraction stage, yet the features provided promising results in the estimation of the Big Five personality traits. Moreover, the proposed framework is generic and applicable to a wide range of off-the-shelf social robot platforms. To the best of our knowledge, this is the first study to show how the visual features can be extracted from the first-person perspective, which could be the reason that our system outperformed previous studies. Notably, the MFCC feature was beneficial for assessing each of the Big Five personality traits. We also found that extroversion appeared to be the hardest trait to recognize. One reason could be the current experimental settings, where the participants sat at a table with their forearms resting on the tabletop, which limited their body movements. Another reason could be the confusing relationship between the participants and the robot, which made the participants hesitate to express themselves naturally in the way they do in everyday situations.

TABLE V The Maximum Values of R2 of the Regression Results for Extroversion, Agreeableness, and Emotional Stability

Fig. 6. MSE values of the ridge regression for inferring extroversion.

Fig. 10. MSE values of the ridge regression for inferring openness.

Fig. 7. MSE values of the ridge regression for inferring agreeableness.

Fig. 8. MSE values of the ridge regression for inferring conscientiousness.

Fig. 9. MSE values of the ridge regression for inferring emotional stability.

Each feature showed its advantage in a different aspect. However, there is no standard way of drawing a conclusion that declares the user's personality traits. Therefore, one of the future works is to find an efficient way to fuse the multi-modal features. On the other hand, personality traits can be better understood through frequent and long-term interaction. This means that the system should be able to update its understanding of the user's personality traits whenever the robot interacts with its user. It is also necessary to evaluate the engagement between a human and a robot, and the attitude of the human toward the robot, since the user's behaviors can be erratic when the user loses interest in interacting with the robot. Finally, in order to achieve the best possible classification performance, more sophisticated machine learning models need to be incorporated.
