
    Empathic Responses of Behavioral-Synchronization in Human-Agent Interaction

Computers, Materials & Continua, 2022, Issue 5

Sung Park, Seongeon Park and Mincheol Whang

1 Savannah College of Art and Design, Savannah, GA, 31401, USA

2 Sangmyung University, Seoul, 03016, Korea

Abstract: Artificial entities, such as virtual agents, have become more pervasive. Their long-term presence among humans requires the virtual agent's ability to express appropriate emotions to elicit the necessary empathy from the users. Affective empathy involves behavioral mimicry, a synchronized co-movement between dyadic pairs. However, the characteristics of such synchrony between humans and virtual agents remain unclear in empathic interactions. Our study evaluates the participant's behavioral synchronization when a virtual agent exhibits an emotional expression congruent with the emotional context through facial expressions, behavioral gestures, and voice. Participants viewed an emotion-eliciting video stimulus (negative or positive) with a virtual agent. The participants then conversed with the virtual agent about the video, such as how the participant felt about the content. The virtual agent expressed emotions congruent with the video or a neutral emotion during the dialog. The participants' facial expressions, such as the facial expressive intensity and facial muscle movement, were measured during the dialog using a camera. The results showed significant behavioral synchronization by the participants (i.e., cosine similarity ≥ 0.5) in both the negative and positive emotion conditions, evident in the participants' facial mimicry of the virtual agent. Additionally, the participants' facial expressions, both movement and intensity, were significantly stronger with the emotional virtual agent than with the neutral virtual agent. In particular, we found that the facial muscle intensity of AU45 (Blink) is an effective index for assessing the participant's synchronization, which differs by the individual's empathic capability (low, mid, high). Based on the results, we suggest an appraisal criterion that provides empirical conditions for validating empathic interaction based on the facial expression measures.

Keywords: Facial emotion recognition; facial expression; virtual agent; virtual human; embodied conversational agent; empathy; human-computer interaction

    1 Introduction

The prevalence of AI technology, including deep fakes and advanced 3D modeling, has introduced unprecedented virtual humans closely resembling human appearance and behavior. For example, the virtual human Roji, created by Sidus Studio X, has gained immense popularity in Korea as an advertising model through its life-like motion and behavior. People did not realize that Roji was a virtual human until the company revealed it four months after its public debut. The human-like character was achieved using an AI model, learning actual human facial expressions and behavior, and applying the learned facial and bodily patterns to the 3D model.

However, virtual humans are not new. They have been used in many domains, including advertisements as well as medical practice [1,2], healthcare [3,4], education [5,6], entertainment [7,8], and the military [9,10], interacting with the user and acting on the environment to provide a positive influence, such as behavioral change of the human counterpart [11]. The emphasis on interactivity with the user brought the term virtual agent, which utilizes verbal (e.g., conversation) and nonverbal communication (e.g., facial expressions and behavioral gestures) channels to learn, adapt, and assist the human. Similar to human-to-human communication, understanding the human dyad's emotion [12], expressed by speech, body motion, and facial expression, is paramount for an effective conversation. For the virtual agent to help humans with their mental or health-related problems or assist their daily activities, the virtual agent should be capable of empathizing with the human user; that is, recognizing humans' emotional states, thoughts, and situations and behaving accordingly.

    1.1 Literature Review on Empathy Research

We feel similar emotions to other people, which is sometimes a result of understanding others' thoughts and feelings. Empathy involves "an affective response more appropriate to someone else's situation than to one's own" [13]. The crux is considering the other's affective state and situation, which enables cooperation, prosocial behavior, and positive relationships [13–16].

Empathy research has been conducted in social development, clinical psychology, and neuroscience. Since discovering mirror neurons in monkeys [17], neuroscientists have identified underlying neurological evidence for empathy [18]. Overlapping brain patterns were observed when an observer perceived the same emotions from a target, suggesting shared affective neural networks [19–21].

However, there is no consensus on the definition of empathy. The number of definitions is proportional to the number of researchers [22]. Researchers agree that empathy consists of multiple subcomponents [13,23,24], and some critical elements of empathy (recognition, process, outcome, response) are commonly identified (for an extensive review of empathy as a concept, see [25]). A typical empathic episode initiates when the observer perceives empathic cues (expression or situation) from the target through verbal (e.g., "I don't feel well") or non-verbal channels. The observer then engages in an internal affective or cognitive process, which may result in a congruent emotional state (i.e., feeling through the target), and, if willing, an empathic response (e.g., "I understand that you are in a bad mood"). Based on the most prominent empathy theories [13,23,26–28], the affective or cognitive processes are the underlying mechanisms that produce empathic outcomes. Because the latter, cognitive empathy, is an extensive research field, including perspective taking, we limited our research to affective empathy.

The crux of affective empathy is motor mimicry, an observer's automatic and unconscious imitation of the target. Mimicry was first described by Lipps and organized by Hoffman [29] into a two-step process: 1) the observer imitates the target's empathic expressions (e.g., facial expression, voice, and posture); 2) this imitation results in afferent feedback that produces a parallel affect congruent with the target's feelings. For example, a virtual agent may imitate the facial expression of a human who looks cheerful and change its emotional state accordingly. This mechanism is also referred to as primitive emotional contagion [30] or the chameleon effect [31]. Mimicry is essential in building rapport [32] and makes the observer more persuasive [33]; however, in certain situations, it may have a diminishing effect.

    1.2 Limitations of Empathy Research with Virtual Agents

Because empathy is a directional construct involving an observer empathizing with a target, empathy research related to a virtual agent is twofold: research on 1) the virtual agent (observer) empathizing with the human user (target), or 2) the human user (observer) empathizing with the virtual agent (target).

Empathic virtual agents have been studied in the context of playing games [34–36], healthcare interventions [37,38], job interviews [39], email assistance [40], social dialog [41], or even a story narrative [42]. A typical empirical study evaluated the participants' perceptions when interacting with or observing an empathic virtual agent compared to a non-empathic one. Overall, empathic agents were perceived positively in liking [35,37,41,42] and trust [35,37], and felt more human-like [34], caring [34,35], attractive [34], respectful [37], and enjoyable [38]. While most studies were based on one-time interaction, a few studies identified the participants' intention to use empathic virtual agents longer [37,38]. The research community certainly has established a grounding that an empathic virtual agent, when implemented to provide an appropriate response congruent with the situation, elicits a positive perception from the users with a perspective for long-term interaction.

However, some studies have investigated the participants' empathic and emotional responses to virtual agents. The participants' affective states were estimated through a dialog (e.g., multiple choices) [39–41,43], or through more direct measures such as physiological signals (e.g., skin conductance, EMG) [34,36,39] and facial expressions [38,44]. Lisetti et al. [38] developed a virtual counselor architecture with a facial expression recognizer model to recognize the participant's facial expression as part of its multimodal empathy model. The participants' facial photos were obtained from the JPEG-Cam library and sent to the processing module for analysis. The analysis, however, returned only limited output, such as smiling status and five emotion categories (happy, sad, angry, surprised, and neutral), and not more advanced analysis such as facial mimicry. That is, the study did not directly investigate facial synchronization. Prendinger et al. [39] evaluated psychophysiological measures (e.g., skin conductance and heart rate) of participants interacting with an empathic companion. The study suggested that the companion's comments, displayed through text, positively affected the participant's stress level. The companion was designed to express empathy based on a decision network. However, the research did not provide direct evidence of behavioral mimicry, such as the convergence of heart rate [45], as evidence of empathy. Ochs et al. [40] suggested an emotion model for an empathic dialog agent that infers a user's emotions considering the user's beliefs, intentions, and uncertainties. The model captured the dynamic change of the agent's mental state due to an emotion-eliciting event. However, the inference remains a logical approximation, without direct empirical evidence of empathic synchronization.

    1.3 Contribution of Current Study

While some studies have suggested a promising empathy recognition model, few studies have empirically validated its effectiveness with human participants. Empirical data are limited to indirect methods, such as multiple choices during dialog [39–41]. Empathic synchronization has been approximated from the participant's subjective responses to an empathic agent [40]. That is, no study directly compares the emotional expression between a participant and a virtual agent on an equal scale to assess empathy.

Our research is interested in understanding the participants' behavioral mimicry when they empathize with the virtual agent by analyzing the participants' facial expression changes, including the intensity and movement of the facial appearance and muscles. To the best of our knowledge, this is the first study to directly compare a battery of facial expression measures between a participant and an empathic virtual agent. Furthermore, this is the first research to analyze how such measures differ as a function of the participants' empathic capability (low, mid, high). Based on the analysis, we provide an empirically validated criterion for detecting a participant's empathic state when interacting with a virtual agent. Such a criterion is paramount for informing the virtual agent, or any AI system, so it can adapt its emotion and behavior to the user in real time.

    2 Methods

By definition, the observer's (the participant's) empathy occurs when the observer recognizes and relates to the target's (the virtual agent's) emotional expression (facial expression, behavioral gesture, voice) congruent with the situation. We designed our experiment to achieve an emotion-embedded shared experience (viewing either positive or negative video clips) between the participant and the virtual agent. Through dialog, the virtual agent expresses an emotion congruent with the valence of the video so that the participant can empathize with the virtual agent. The expressed emotion is varied (emotional or neutral), and the valence of the video stimuli is either positive or negative.

    2.1 Research Hypothesis

    Our research is directed towards verifying the following three hypotheses:

· H1: There is a difference in the facial synchronization between the participant and the virtual agent when interacting with an emotional virtual agent and a neutral virtual agent.

    · H2: The participants' facial expressions differ when interacting with an emotional virtual agent and a neutral virtual agent.

    · H3: The participants' facial expressions differ depending on the level of the empathic capability of the participant.

    2.2 Manipulation Check

The current research used a video stimulus to evoke emotions and a dialog to evoke empathy. We conducted a manipulation check to ensure that the video stimuli and dialog interaction were effective before the main experiment, in which the participants' facial data were acquired. Thirty university students were recruited as participants. The participants' ages ranged from 21 to 40 years (mean = 28, SD = 4), with 16 males and 14 females. We defined four interaction cases between the emotion valence factor (negative or positive) and the virtual agent expression factor (emotional or neutral), where the participant viewed the video stimulus, conversed with the virtual agent, and responded to a questionnaire. We adopted Russell's valence dimension in his circumplex model [46], where emotional states can be defined at any level of valence, including neutral. The materials (video clip and virtual agent) were identical to those used in the main experiment. We used video stimuli known to elicit emotions, which were organized and empirically validated by Stanford University (n = 411, [47]).

The experimenter explained the experiment procedure and clarified the terms in the questionnaire. After viewing each stimulus and conversing with the virtual agent, the participants responded to the survey (see Fig. 1). The interaction with the virtual agent was identical to the main experiment, where the virtual agent led the conversation by asking a series of questions related to the shared viewing experience of the content (details of the virtual agent and dialog are explained in Section 2.6 Materials and Data Acquisition). The order of the stimuli and the virtual agents was randomized.

    Figure 1:Questionnaire for manipulation check

We analyzed the participants' perceptions (i.e., questionnaire responses) to determine whether the video stimuli could evoke emotion and whether the virtual agent could elicit empathy from the participant. We concluded that participants reported emotions congruent with the target emotion of the stimulus (▲ in Fig. 2). That is, the participants reported negative valence in the negative emotion condition and positive valence in the positive emotion condition.

To test whether the virtual agent would elicit empathy, we conducted a paired sample t-test on the questionnaire data after validating normality through the Shapiro-Wilk test (see Fig. 3). Whether the participant empathized with the virtual agent was analyzed based on the vector distance between the participant's emotional state and the perceived emotional state of the virtual agent. In the negative emotion condition, the distance between the participant's emotional state and the perceived emotional state of the virtual agent was significantly smaller in the emotional virtual agent condition than in the neutral condition (t = -5.14, p < .001) (Figs. 2a and 3a). Similarly, in the positive emotion condition, the vector distance between the participant's emotional state and the perceived emotional state of the virtual agent was significantly smaller in the emotional virtual agent condition than in the neutral condition (t = -6.41, p < .001) (Figs. 2b and 3b).

    Figure 2:Distance between emotional states.(a)Negative emotion(b)Positive emotion

    Figure 3:T-tests results on mean distance between emotional states.(a)Negative emotion(b)Positive emotion
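As an illustration of this analysis, the sketch below computes the distance between each participant's self-reported emotional state and the perceived emotional state of the virtual agent (both treated as valence–arousal points, a simplifying assumption), then compares the emotional and neutral agent conditions with a paired t-test after a Shapiro-Wilk check. The data values and variable names are hypothetical, not taken from the study's questionnaire.

```python
import numpy as np
from scipy import stats

# Hypothetical questionnaire data: one row per participant,
# columns are (valence, arousal) points on the circumplex, scaled to [-1, 1].
participant_state = np.array([[-0.6, 0.2], [-0.4, 0.5], [-0.7, 0.1], [-0.5, 0.3]])
agent_emotional   = np.array([[-0.5, 0.3], [-0.5, 0.4], [-0.6, 0.2], [-0.4, 0.2]])
agent_neutral     = np.array([[ 0.0, 0.0], [ 0.1, 0.0], [ 0.0, 0.1], [-0.1, 0.0]])

# Vector distance between the participant's state and the perceived agent state.
dist_emotional = np.linalg.norm(participant_state - agent_emotional, axis=1)
dist_neutral   = np.linalg.norm(participant_state - agent_neutral, axis=1)

# Normality check on the paired differences, then a paired t-test.
w, p_normal = stats.shapiro(dist_emotional - dist_neutral)
t, p = stats.ttest_rel(dist_emotional, dist_neutral)
print(f"Shapiro-Wilk p={p_normal:.3f}, paired t={t:.2f}, p={p:.4f}")
```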

    2.3 Experiment Design

The main experiment was a mixed factorial design of 2 (virtual agent expression: emotional and neutral) × 2 (emotion valence: negative and positive). The virtual agent expression was a between-subject factor, whereas the emotion valence was a within-subject factor. That is, a participant interacted with only one type of virtual agent expression but experienced both emotion valences. The participants were randomly distributed in equal numbers between a group that interacted with an emotional virtual agent and a group that interacted with a neutral virtual agent.

    2.4 Participants

We conducted an a priori power analysis with the program G*Power with power set at 0.8, α = 0.05, and d = 0.6 (independent t-test) and 0.4 (one-way repeated ANOVA), two-tailed. The results suggested that an n of approximately 45 would be needed to achieve appropriate statistical power. Therefore, forty-five university students were recruited as participants. The participants' ages ranged from 20 to 30 years (mean = 28, SD = 2.9), with 20 (44%) males and 25 (56%) females. We selected participants with a corrected vision of 0.8 or above without any vision deficiency, to ensure the participants' reliable recognition of visual stimuli. We recommended that participants have sufficient sleep and abstain from alcohol, caffeine, and smoking the day before the experiment. Because the experiment requires valid recognition of the participant's facial expression, we limited the use of glasses and cosmetic makeup. All participants were briefed on the purpose and procedure of the experiment and signed a consent form. They were then compensated with participation fees.
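A rough sketch of the a priori power calculation for the independent t-test part can be reproduced with statsmodels (the repeated-measures ANOVA case was computed in G*Power and is not reproduced here); the effect size, alpha, and power values are those reported above, and conventions may differ slightly from G*Power.

```python
from statsmodels.stats.power import TTestIndPower

# Solve for the per-group sample size of a two-tailed independent t-test
# with d = 0.6, alpha = 0.05, and power = 0.8 (values from the text).
analysis = TTestIndPower()
n_per_group = analysis.solve_power(effect_size=0.6, alpha=0.05,
                                   power=0.8, alternative='two-sided')
print(f"required n per group ≈ {n_per_group:.1f}")
```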

    2.5 Experiment Procedure

The experiment consisted of two sessions, with a week interval between the participant's two visits to the lab (see Fig. 4). The main experiment used the same experimental materials as the manipulation check. In the first session, the experimenter explained the experiment procedure and clarified the terms in the questionnaire to be administered. The participants then responded to a Korean adaptation of the Empathy Quotient (EQ) survey [48] to assess their empathic capability. The survey is explained in detail in Section 2.6 Materials and Data Acquisition.

    Figure 4:Experiment procedure

    The participants were then placed in an experiment room with two monitors.The video stimulus was displayed on the left monitor,whereas the virtual agent was displayed on the right.The camera was placed on top of the right monitor to capture the participant’s facial data during the dialog.

The participant then conversed with the virtual agent to build rapport. After the conversation, the participant viewed the affective video designed to evoke emotions: negative emotions in the first session and positive emotions in the second. The virtual agent was programmed to face toward the left monitor as if it were viewing the video content played there. That is, the participants had a sense of watching the stimulus together with the agent. Each video clip lasted for 30 s.

The participant then engaged in a series of interactive dialogs with the virtual agent. The dialog is explained in detail in Section 2.6 Materials and Data Acquisition. The dialog lasted approximately four to seven minutes. After the dialog, the participant relaxed for 180 s. The participants then viewed a second video clip with the same emotion, followed by an identical interaction with the virtual agent. The camera captured the participant's facial expressions during the entire dialog. The order of the video clips was randomized. The participants then left and revisited the lab after a week. Because an evoked emotion tends to permeate throughout the day, we kept this interval between the two sessions so that the results could be attributed to a single evoked emotion. In the second session, the participants followed the same steps but with a positive emotion stimulus.

    2.6 Materials and Data Acquisition

    2.6.1 Empathy Quotient(EQ)

To validate the relationship between the participants' empathic capability and their facial expressions when interacting with the virtual agent (H3), we administered an empathy quotient survey battery. While there is a long academic history involving the development of surveys to measure empathy in adults, scales have evolved toward capturing empathy fully without being confounded with other constructs. For example, one of the early scales, Hogan's Empathy (EM) scale [49], in a strict sense had only one factor, sensitivity, related to empathy as a construct. The Questionnaire Measure of Emotional Empathy (QMEE) [50] is also considered to measure empathy, but the authors themselves suggest a confounding variable, such as being emotionally aroused by the environment, that is not relevant to empathy in an interpersonal interaction [51]. Davis' Interpersonal Reactivity Index (IRI) [52] started to include higher-level cognitive empathy, such as perspective taking, but is considered broader than the empathy construct [48].

The most recent scale, Baron-Cohen's EQ (Empathy Quotient) [48], was designed to capture both the cognitive and affective components of empathy and was validated by a panel of six psychologists to determine whether the battery agrees with the scholarly definition of empathy. The original Baron-Cohen scale is a 4-point scale that includes 40 items and 20 control items, which convey either a positive or negative emotional valence.

Because any scale designed to measure a psychological construct is affected by cultural differences, we used a version adapted and translated for Koreans, the K-EQ [53]. The Baron-Cohen scale has a four-point Likert scale with bidirectional ends (positive and negative). To put the responses on a unidirectional scale, the K-EQ converts the answer items (1–4) to a three-point Likert scale (0–2). That is, participants scored 0, 1, or 2 per item, so the total score ranged from 0 to 80. The validity and reliability of the K-EQ battery have been verified [53,54].

Using the questionnaire data, we divided the participants into three groups (high, mid, and low) according to the cumulative percentage of the K-EQ score so that the three groups had a 3:4:3 distribution (see Fig. 5). We report and discuss the participants' differential facial expressions as a function of empathic capability in the Results section.

    Figure 5:Distribution of empathic capability
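A minimal sketch of the 3:4:3 split by cumulative percentage of the K-EQ score might look as follows; the score values below are hypothetical and only illustrate the cut-point logic.

```python
import numpy as np

# Hypothetical K-EQ totals (0-80), one per participant.
keq_scores = np.array([22, 35, 41, 28, 50, 47, 33, 39, 55, 44,
                       30, 26, 48, 52, 37, 43, 29, 46, 40, 36])

# Cut points at the 30th and 70th cumulative percentiles give a 3:4:3 split.
low_cut, high_cut = np.percentile(keq_scores, [30, 70])
groups = np.where(keq_scores <= low_cut, "low",
         np.where(keq_scores <= high_cut, "mid", "high"))
print(list(zip(keq_scores.tolist(), groups.tolist())))
```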

    2.6.2 Virtual Agent and Dialog

Research indicates that emotional expression elicits differential social responses from members of different cultures [55–58]. Ethnic stereotyping also applies to virtual agents [59,60]. Specifically, participants perceived a virtual human of the same ethnicity as more intelligent, trustworthy, attractive, and persuasive [61]. In our research, because the participant would empathize with the virtual agent, we designed the agent to match the participant's ethnicity and race.

The virtual agent was a three-dimensional female character refined for the experiment (see Fig. 6). We used the animation software Maya 2018 (Autodesk) to modify an open-source, FBX (Filmbox) formatted virtual agent model. Specifically, we adjusted the number and location of the cheekbone and chin to express the facial expressions according to the experiment design (i.e., negative and positive emotion expressions). We used the Unity 3D engine to animate the modified model and developed the experiment program using C# 4.5, an object-oriented programming language.

    Figure 6:(a)Negative and(b)positive expressions of virtual agent

The virtual agent expressed emotion through three means: facial expression, behavioral gestures, and voice. The agent expressed two emotions, positive and negative, on the valence plane based on Russell's dimensional model of affect [46]. The corresponding facial expressions were designed based on the Facial Action Coding System (FACS) [62]. The behavioral gestures were designed based on previous studies on the perceived intentions and emotions of gestures [63–65]. For example, in the negative emotion condition, the palms faced inward and the arms were bent, concealing the chest (see (a) in Fig. 6). In the positive emotion condition, the virtual agent had the palms facing upward with the arms and chest open (see (b) in Fig. 6). We used a voice recording of a female in her 20s, congruent with the appearance of the virtual agent. To make the expression as natural and believable as possible, we guided the voice actor to speak in a manner consistent with the visual appearance. The tone and manner were congruent with the dialog script. We designed the virtual agent's lips to synchronize with the vocal recordings.

To elicit empathy from the participant, the virtual agent was designed to converse with the participant about the shared experience that had just occurred. We referred to dialog scripts from previous virtual agent systems [37,42,66,67]. The following is a representative dialog script, which includes rapport building. In this case, the video viewed by the participant and the virtual agent was about a playful episode between a baby and her father. Her father teased her by saying, "make an evil look," although she was obviously too young to understand the meaning. She frowned as if she were responding to her father. The video was empirically validated to elicit a positive response [47] and was one of the stimuli used in the experiment. The bolded script affects the virtual agent's facial expression, behavioral gesture, and voice depending on the expression condition (i.e., emotional vs. neutral).

    Agent:Hi,my name is Mary.How did you come to the lab today?

    Participant:Hi,I took a bus.

    Agent:I see.It is nice to see you.Can I ask you about your hobbies?

    Participant:I like to listen to music and watch a movie.

    Agent:I am really into watching a YouTube video.I want to see something with you.Could you please watch this with me?

    Participant:Sure.

(The virtual agent turns toward the left monitor.)

    (After watching the stimulus video).

    Agent:(gestures)Have you seen it well?

    Participant:Yes.

Agent: (gestures) (in the positive condition) The video was really funny (in the negative condition: unpleasant). How about you?

    Participant:I thought it was funny too.

    Agent:(gestures)Could you tell me what part of the video made you feel that way?

    Participant:The baby seemed to understand what her father was saying,and her response was so funny.

    Agent:(gestures)How would you feel if the same thing happened to you?

    Participant:I would keep smiling because the baby is so cute!

    Agent:(gestures)How would you behave in that situation?

    Participant:I would tease the baby just like the video.

    Agent:(gestures)Have you experienced something similar in real life?

    Participant:I used to tease my young cousin,similar to the video.

    Agent:(gestures)Could you share an experience that has a similar emotion to the video?

    Participant:I felt similar emotions when viewing a video of a puppy.

    Agent:(gestures)I see.What do you think my expressions were like when viewing the video?

    Participant:I would imagine that you were smiling and had your corner of the lip pulled,just like me.

    Agent:(gestures)I see.What do you think my feelings are now?

    Participant:You probably feel better.You know babies;they make us happy.

    The dialog script was presented using the Wizard of Oz method [68] as if the virtual agent was conversing naturally with the participant.We recorded and analyzed the participants’facial expressions during the conversation,which will be explained in detail in the following section.

    2.6.3 Facial Data

The facial data were captured by a camera (Logitech C920) fixed on the monitor displaying the virtual agent to record the participant's frontal facial expressions. The video was captured at a 1920 × 1080 pixel resolution at a frame rate of 30 fps in MP4 format. The facial movement was measured using OpenFace [69], an open-source tool built on a machine learning model. The library can analyze and track a participant's face in real time. We extracted the facial landmarks, blend shapes, action units (AU) and their strength, and head pose data at a rate of 30 fps. The dependent variables elicited from the facial data are as follows.

Facial appearance movement. OpenFace provides feature characteristics such as facial landmarks that describe the participant's facial appearance. The two-dimensional landmark coordinates (x, y) are the output of the AI decoder, trained with a machine learning model, and consist of 68 features. Because the data depend on the size (height, breadth) and location of the face, we normalized them through min-max normalization, using Eqs. (1) and (2), resulting in normalized values in [0, 1], referred to as the facial appearance movement.
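Eqs. (1) and (2) are not reproduced here; the sketch below shows standard per-axis min-max scaling of the 68 landmark coordinates to [0, 1], which is what the description implies. The column naming (x_0 … x_67, y_0 … y_67) follows typical OpenFace CSV output and should be treated as an assumption.

```python
import numpy as np
import pandas as pd

def normalize_landmarks(df: pd.DataFrame) -> pd.DataFrame:
    """Min-max normalize the 68 2D facial landmarks per frame,
    removing the effect of face size and position."""
    out = df.copy()
    xs = [f"x_{i}" for i in range(68)]   # OpenFace landmark x columns (assumed naming)
    ys = [f"y_{i}" for i in range(68)]   # OpenFace landmark y columns (assumed naming)
    for cols in (xs, ys):
        block = out[cols].to_numpy(dtype=float)
        mins = block.min(axis=1, keepdims=True)
        maxs = block.max(axis=1, keepdims=True)
        out[cols] = (block - mins) / (maxs - mins + 1e-9)  # scale each frame to [0, 1]
    return out

# Usage: landmarks = normalize_landmarks(pd.read_csv("openface_output.csv"))
```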

Facial muscle movement. AUs are modular components (i.e., facial muscles) into which facial expressions can be decomposed [62]. The AU is considered the basic analytical element of the Facial Action Coding System (FACS). We used 27 AUs, each represented by the centroid value of the related facial landmarks (see Tab. 1). The centroid value is the average of the two-dimensional coordinates (x, y) of the facial landmarks related to the respective AU.
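As an illustration of the centroid computation, the sketch below averages the normalized (x, y) coordinates of the landmarks associated with an AU. The landmark-to-AU mapping shown is a hypothetical example and not the mapping from Tab. 1.

```python
import numpy as np

# Hypothetical mapping from an AU to the indices of its related landmarks.
AU_LANDMARKS = {
    "AU12_lip_corner_puller": [48, 54],         # mouth corner landmarks (illustrative)
    "AU1_inner_brow_raiser": [20, 21, 22, 23],  # inner brow landmarks (illustrative)
}

def au_centroid(landmarks_xy: np.ndarray, indices: list[int]) -> np.ndarray:
    """Return the (x, y) centroid of the landmarks belonging to one AU.
    landmarks_xy has shape (68, 2), one row per normalized landmark."""
    return landmarks_xy[indices].mean(axis=0)

frame = np.random.rand(68, 2)  # one normalized frame of 68 landmarks
for au, idx in AU_LANDMARKS.items():
    print(au, au_centroid(frame, idx))
```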

Facial muscle intensity. Through a dynamic model based on machine learning, OpenFace provides the degree of muscle contraction (0–1) through a variable that captures the strength of the facial muscles from AU1 to AU45.

    Table 1:Action Units(AU)definitions

Facial expressive intensity. Using the animation software Maya (Autodesk), we produced AU-based blend shapes for the virtual agent used in the experiment. For a more natural look, the blend shapes morphed the base shape to the target shape of the face using linear interpolation. The facial expressive intensity variable represents the strength of the blend shapes, between 0 and 100, based on the facial regions (e.g., brows, eyes, nose, cheek, mouth) involved in facial expressions.
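The linear morph from the base face to the target blend-shape face can be written as a per-vertex interpolation; the sketch below is a generic illustration of that idea, with the 0–100 intensity mapped to a weight in [0, 1]. The vertex data are hypothetical.

```python
import numpy as np

def morph(base_vertices: np.ndarray, target_vertices: np.ndarray,
          intensity: float) -> np.ndarray:
    """Linearly interpolate between the base shape and the target blend shape.
    intensity is the blend-shape strength on a 0-100 scale."""
    w = np.clip(intensity / 100.0, 0.0, 1.0)
    return (1.0 - w) * base_vertices + w * target_vertices

# Example: a tiny 3-vertex "face" morphed to 40% of the target expression.
base = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.5, 1.0, 0.0]])
target = np.array([[0.0, 0.1, 0.0], [1.0, 0.1, 0.0], [0.5, 1.2, 0.0]])
print(morph(base, target, 40))
```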

Head pose movement. The head pose movement involves the orientation of the participant's head. Research has analyzed the head pose to understand a person's point of attention and interest [70–72]. The participant's head pose variable consists of the yaw (X), pitch (Y), and roll (Z) on three-dimensional planes, represented by Euler angles (see Fig. 7).

    Figure 7:Rotation of the virtual agent’s head pose

    2.7 Analysis Plan

We eliminated any data outside of the mean ± 3 SD as outliers [73]. To validate H1, we conducted an independent t-test on the degree of facial synchronization of the participant to the virtual agent between the two conditions: emotional and neutral virtual agents. To measure the degree of synchronization, we used cosine similarity, which captures the similarity between the facial expressive intensity of the participant and that of the virtual agent. Cosine similarity has been used as a distance metric for facial verification [74]. Prior to the t-test, we conducted the Shapiro-Wilk test and Levene's test to confirm data normality and homoscedasticity.
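A minimal sketch of the synchronization measure follows: the facial expressive intensities of the participant and the virtual agent over the same expressive elements are treated as vectors, and their cosine similarity is taken as the degree of facial synchronization. The vector layout and values are illustrative.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two facial-expressive-intensity vectors."""
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(np.dot(a, b) / denom) if denom > 0 else 0.0

# Hypothetical blend-shape intensities (0-100) over the same expressive elements,
# e.g. [brow_frown_l, brow_frown_r, mouth_narrow_l, mouth_narrow_r, ...].
participant = np.array([62.0, 58.0, 40.0, 35.0, 12.0])
agent       = np.array([70.0, 66.0, 45.0, 42.0,  8.0])

sync = cosine_similarity(participant, agent)
print(f"synchronization (cosine similarity) = {sync:.2f}")  # >= 0.5 counts as strong
```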

To validate H2, we conducted an independent t-test on the dependent measures of facial expression (facial expressive intensity, facial appearance movement, facial muscle movement, and head pose) between the two conditions: emotional and neutral virtual agents. We used the same methods to test data normality and homoscedasticity.

To validate H3, we conducted a one-way ANOVA on the facial muscle intensity, followed by a post-hoc Scheffe test. If the data did not meet normality, we conducted the Kruskal-Wallis test instead. If the data did not meet the equal variance criterion, we conducted Welch's ANOVA, followed by a post-hoc Games-Howell test.
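The H3 decision logic can be sketched with SciPy as follows; Welch's ANOVA and the Games-Howell post-hoc mentioned above are available in the pingouin package rather than SciPy, and the group data here are hypothetical.

```python
import numpy as np
from scipy import stats

# Hypothetical AU45 muscle intensities per empathy group.
low  = np.array([0.20, 0.17, 0.22, 0.18, 0.19])
mid  = np.array([0.16, 0.15, 0.18, 0.14, 0.17])
high = np.array([0.12, 0.14, 0.11, 0.13, 0.15])

# Normality and homoscedasticity checks.
normal = all(stats.shapiro(g).pvalue > .05 for g in (low, mid, high))
equal_var = stats.levene(low, mid, high).pvalue > .05

if not normal:
    stat, p = stats.kruskal(low, mid, high)    # non-parametric alternative
elif not equal_var:
    # Unequal variances: use Welch's ANOVA and Games-Howell in practice,
    # e.g. pingouin.welch_anova / pingouin.pairwise_gameshowell.
    stat, p = stats.f_oneway(low, mid, high)   # placeholder shown for completeness
else:
    stat, p = stats.f_oneway(low, mid, high)   # standard one-way ANOVA
print(stat, p)
```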

    3 Results

    3.1 Synchronization of Facial Expression

Fig. 8 depicts the differences in the average cosine similarity in the negative condition between the emotional and neutral virtual agents. The brows frown on both sides (p < .001), brows down on both sides (p < .001), mouth sad on both sides (p < .001), and mouth narrowing on both sides (p < .001) had a significantly higher cosine similarity in the emotional condition than in the neutral condition. Remarkably, brow frowning and mouth narrowing had a cosine similarity of more than 0.5. A cosine similarity above 0.5 is considered strong synchronization [75,76].

    Figure 8:Averaged cosine similarity in the negative emotion condition

Fig. 9 depicts the differences in the average cosine similarity in the positive condition between the emotional and neutral virtual agents. The brows raised on both sides (p < .001), cheek raise on both sides (p < .001), nose scrunching on both sides (p < .001), mouth smiling on both sides (p < .001), extending the mouth on both sides (p < .001), and mouth opening (p < .001) had significantly higher cosine similarities in the emotional condition than in the neutral condition. Remarkably, the brows raising, mouth smiling, and mouth opening had a cosine similarity of more than 0.5.

    Figure 9:Average cosine similarities in the positive emotion condition

    3.2 Difference in Facial Appearance

Fig. 10 depicts the average facial expressive intensity (i.e., blend shape strength) in the negative emotion condition. The facial expressive intensity of the brows frown on both sides (p < .001), brows down on both sides (p < .05), and mouth narrow on the left side (p < .001) was significantly higher in the emotional condition than in the neutral condition. Conversely, the facial expressive intensity of the brows raised on both sides (p < .05) and mouth extension on the left side (p < .001) was significantly higher in the neutral condition than in the emotional condition.

    Figure 10:Average facial expressive intensity in the negative emotion condition

Fig. 11 depicts the average facial expressive intensity (i.e., blend shape strength) in the positive emotion condition. The facial expressive intensity of the mouth extension on the right side (p < .01) was higher in the emotional condition than in the neutral condition. The facial expressive intensity of the eyes open on both sides (p < .05) and the mouth narrow on the right (p < .05) was higher in the neutral condition than in the emotional condition.

    Figure 11:Average facial expressive intensity in the positive emotion condition

Fig. 12a depicts the facial appearance movement of the participants in the negative emotion condition. The appearance movement of the eyebrows, eyes, nose, and lips was significantly higher in the emotional condition than in the neutral condition. Fig. 12b shows the positive emotion condition. The movement of the lower jaw and left cheekbone was significantly higher in the emotional condition than in the neutral condition.

    Figure 12:Difference of facial appearance movement.(a)Negative condition(b)Positive condition

Fig. 13a depicts the facial muscle movement of the participants in the negative emotion condition. The 68 landmarks and 27 AUs are plotted on an X-Y axis. Based on the t-test results, AUs with a significant difference (p < .05) on the X-axis are colored yellow, those on the Y-axis orange, and those on both axes green. The movement of the eyes, nose, mouth, chin, cheekbone, and philtrum was significantly higher in the emotional condition than in the neutral condition. Fig. 13b shows the positive emotion condition. The muscle movements of the eyes, nose, mouth, and chin were significantly higher in the emotional condition than in the neutral condition.

    Figure 13:Difference of facial muscle movement.(a)Negative condition(b)Positive condition

Finally, we conducted an independent t-test on the head pose movement between the emotional and neutral conditions. The movement indicates the rate of change of the Euler angles. We found no significant difference in either condition (see Figs. 14 and 15), involving all three dimensions (yaw (X), pitch (Y), and roll (Z)).

    Figure 14:Rate of change in head pose movement in the negative condition

    Figure 15:Rate of change in head pose movement in the positive condition

    3.3 Facial Muscle Intensity as a Function of the Empathic Capability

We conducted a one-way ANOVA on the facial muscle intensity but found no significant difference between the empathic capability groups in the negative emotion condition. However, in the positive emotion condition, we found a significant difference in the facial muscle intensity of AU45 (Blink) (p < .05, F = 3.737) (see Fig. 16). We conducted a post hoc Games-Howell test and found a significant difference between the low and high empathy groups (p < .01), the mid vs. high empathy groups (p < .05), and the low vs. mid empathy groups (p < .05).

For the AUs that did not meet normality, we conducted the Kruskal-Wallis H test instead. The results showed a significant difference in AU1 (Inner brow raiser, p < .01, χ² = 9.252), AU6 (Cheek raiser, p < .05, χ² = 7.686), AU12 (Lip corner puller, p < .001, χ² = 24.025), and AU25 (Lips part, p < .05, χ² = 6.079). The post hoc pairwise comparisons showed a significant difference in AU1 between the low and high empathy groups (p < .01), in AU6 between the low and high empathy groups (p < .05), in AU12 between the low and mid empathy groups (p < .05) and between the low and high empathy groups (p < .001), and in AU25 between the low and high empathy groups (p < .05).

    Figure 16:Difference in facial muscle intensity of AU45 between three empathy groups

    4 Conclusion and Discussion

This work evaluated whether a participant (observer) can empathize with a virtual agent (target) and exhibit behavioral mimicry accordingly. We designed an interaction context in which the participants viewed the stimuli in the presence of a virtual agent to give a sense of shared experience. We developed a dialog script to build rapport and had the virtual agent express an emotion congruent with the context (or a neutral one) via facial expressions, behavioral gestures, and voice. We analyzed the participants' facial expressions during the conversation to validate facial synchronization, which is evidence of facial mimicry.

In summary, when the two dyad members (the participant and the virtual agent) shared the same emotion (negative or positive) in a shared task (i.e., viewing a video together), we found that the participant mimicked the virtual agent's facial expression if the virtual agent projected an emotion congruent with the emotion of the stimulus. To the best of our knowledge, this is the first study to provide extensive evidence, albeit limited to facial expressions, that humans can exhibit behavioral mimicry toward virtual agents in a shared empathic context.

In the negative emotion condition, the level of synchronization of the brows frown, brows down, mouth sad, and mouth narrow on both the left and right was higher in the emotional condition than in the neutral condition. These expressive elements (i.e., blend shapes) were, with the exception of the nose scrunch, all of the elements that the virtual agent utilized to express negative emotion. This implies that the participants synchronized with nearly all facial regions corresponding to the virtual agent's negative facial expressions. The exception arose because the nose scrunch was not prominent in the design, such that the participants were not able to perceive it clearly. The prerequisite of empathizing is a clear recognition by the observer of the emotional cue from the target [77]. Furthermore, the cosine similarity of the brows frown and mouth narrow was more than 0.5, which is regarded as strong synchronization [74]. Such expressive elements can be used as appraisal criteria for detecting empathy. Future studies may directly test this.

In the positive emotion condition, the level of synchronization of the brows raised, cheek raise, nose scrunch, mouth smile, mouth extension, and mouth opening on both the left and right was higher in the emotional condition than in the neutral condition. Again, these expressive elements were the elements that the virtual agent utilized to express positive emotion. Furthermore, the cosine similarity of the brows raised, mouth smile, and mouth opening was greater than 0.5.

Additionally, we found that in both emotion conditions (positive and negative), the facial expressions were stronger in the emotional condition than in the neutral condition. That is, all three variables that constitute the facial expression (facial expressive intensity, facial appearance movement, and facial muscle movement) were higher in the emotional condition than in the neutral condition.

The results on the facial expressive intensity showed that the expression of the virtual agent had a greater effect on the participants in the negative emotion condition than in the positive emotion condition. Specifically, in the negative condition, there was a difference in the expressive intensity between the emotional and neutral virtual agents in all brow-related measures, mouth extension, and mouth narrowing. However, in the positive condition, there was no difference in the expressive intensity in any brow-related measures, mouth extension, or mouth narrowing.

The results of the facial appearance movement showed that in the negative condition, the movement related to the brows, eyes, nose, and lips was higher in the emotional condition than in the neutral condition. In the positive emotion condition, the movement of the lower jaw and left cheekbone was higher in the emotional condition than in the neutral condition.

The results of the facial muscle movement showed that in the negative condition, the movement of the eyes, nose, mouth, chin, cheekbone, and philtrum was higher in the emotional condition than in the neutral condition. In the positive emotion condition, the movement of the eyes, nose, mouth, and chin was higher in the emotional condition than in the neutral condition.

Finally, we confirmed that there was a difference in the facial muscle intensity between the participants in the different empathic capability groups. The intensity of AU45 (Blink) was lower in the higher empathy group and vice versa. AU45 consists of the relaxation of the levator palpebrae and the contraction of the orbicularis oculi. The low intensity of these muscles indicates a low number of eye blinks. This implies that the higher the empathic capability, the more likely the participant is to engage in an empathic process, resulting in fewer eye blinks. Future studies may analyze the differential weight of AU45's components. Additionally, there was a significant difference in AU1 (Inner brow raiser), AU6 (Cheek raiser), and AU25 (Lips part) between the low and high-level empathy groups. Specifically, in the high-level empathy group, the muscle intensity was significantly higher. The results imply that the high-level empathy group utilized facial muscles such as the frontalis, orbicularis oculi, and depressor labii more than the low-level empathy group. The low empathy group had a lower use of the zygomatic major than the other two empathy groups.

Empathic responses initiate when perceptual information about the observer's environment is sent to the superior temporal sulcus (STS). This information is used to determine whether the observer is in danger, an unconscious response known as neuroception [78]. If the observer's external environment is perceived as safe, the nucleus ambiguus (NA) becomes activated, suppressing the observer's defense mechanism. The observer is then in a state of social engagement, controlling the facial muscles responsible for pro-social behavior. That is, the observer controls the muscles to hear the target's voice better and orients the gaze toward the target. A more fluid facial expression is now possible [79]. Because the high-empathy group would be more likely to transition to a pro-social state when interacting with the virtual agent, with a more fixed eye gaze, the number of eye blinks found in our experiment would be lower.

In summary, we demonstrated that humans can synchronize their facial expressions with a virtual agent in a shared emotion context with an emotional virtual agent, evident in a significant increment in nearly all dependent measures (movement and intensity) of facial expressions. We also found that such measures differed as a function of empathic capability. Based on these findings, we suggest two evaluation criteria to assess whether a human user empathizes with the virtual agent (a minimal implementation sketch follows the list):

1. Criterion for facial mimicry: the synchronization level (cosine similarity) is at or above 0.5.

    2. Criterion for empathic capability: the facial muscle intensity of AU45 is within the following range:

    (1) Low empathy group: 0.18 ± .05

    (2) Mid empathy group: 0.16 ± .08

    (3) High empathy group: 0.13 ± .04
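A minimal sketch of how these two criteria might be packaged as an assessment module is shown below; the threshold values are taken from the list above, while the function and variable names are illustrative rather than part of the study's implementation.

```python
# AU45 facial-muscle-intensity ranges per empathic capability (from the criteria above).
AU45_RANGES = {"low": (0.13, 0.23), "mid": (0.08, 0.24), "high": (0.09, 0.17)}

def is_mimicking(cosine_similarity: float) -> bool:
    """Criterion 1: facial mimicry if synchronization is at or above 0.5."""
    return cosine_similarity >= 0.5

def empathic_capability(au45_intensity: float) -> list[str]:
    """Criterion 2: empathy groups whose AU45 intensity range contains the observation."""
    return [g for g, (lo, hi) in AU45_RANGES.items() if lo <= au45_intensity <= hi]

# Example usage with hypothetical measurements.
print(is_mimicking(0.62))          # True
print(empathic_capability(0.14))   # groups consistent with the observed intensity
```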

These criteria can be used as a modular assessment tool that can be implemented in any interactive system involving human users to validate whether the user empathizes with the system, including a virtual agent. The application of such modules is significant. The gaming and content media industry is heading toward interactive storytelling based on the viewer's input or response. So far, it has mostly been the user's explicit feedback that drives the story, but with the implementation of an empathic recognition system, the result can be a more fluid and seamless interactive storytelling experience.

Since the introduction of voice recognition systems such as the Amazon Echo, we have been experiencing AI devices with a mounted camera. For example, the Amazon Echo Show has a camera to detect, recognize, and follow the user's face during a dialog. The system can now tap into the user's response in real time, determine whether the content is congruent with the user's emotional state, and change the response or service accordingly.

    The implications of this study extend to social robots.They typically have a camera mounted on their head for eye contact and indicate their intention through their gaze.The social robot’s emotional expression can now be amended according to the facial feed from the camera.

We acknowledge the limitations of this study. For ecological validity, we manipulated the virtual agent's emotional expression at three different levels (facial expression, voice, and behavioral gestures). Although we found an effect, we cannot attribute the results to a single one of the three manipulations because of the limitations of the experimental design. Further studies may dissect each manipulation and determine which modality of the virtual agent has a differential impact on the participant's behavioral mimicry.

The study is also limited to statistical test results. We conducted targeted t-tests and ANOVAs to validate the hypotheses established from the integrated literature review. However, future studies may train a model with modern methods (machine learning, deep learning, fuzzy logic) [80,81] using the dependent measures (movement and intensity) of facial expressions and the output (emotional or neutral). Such an approach may provide weights and parameters that accurately predict the participant's behavioral synchronization. In such a model, the context identification module for an empathic virtual agent is critical because empathy is affected by interaction context and task.

Acknowledgement: The authors thank those who contributed to this article and provided valuable comments.

Funding Statement: This work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korean government (MSIT) (NRF-2020R1A2B5B02002770, Recipient: Whang, M.). URL: https://english.msit.go.kr/eng/index.do.

    Conflicts of Interest:The authors declare that they have no conflicts of interest to report regarding the present study.
