
    MDNN: Predicting Student Engagement via Gaze Direction and Facial Expression in Collaborative Learning


    Yi Chen, Jin Zhou, Qianting Gao, Jing Gao and Wei Zhang

    1 School of Computer Science, Central China Normal University, Wuhan, 430079, China

    2 Computer Science, School of Science, Rensselaer Polytechnic Institute, Troy, 12180, USA

    3 National Engineering Laboratory of Educational Big Data Application Technology, Central China Normal University, Wuhan, 430079, China

    ABSTRACT Prediction of students’ engagement in a collaborative learning setting is essential to improve the quality of learning. Collaborative learning is a strategy of learning through groups or teams. When cooperative learning behavior occurs, each student in the group should participate in teaching activities. Researchers showed that students who are actively involved in a class gain more. Gaze behavior and facial expression are important nonverbal indicators that reveal engagement in collaborative learning environments. Previous studies require the wearing of sensor devices or eye-tracker devices, which have cost barriers and technical interference for daily teaching practice. In this paper, student engagement is automatically analyzed based on computer vision. We tackle the problem of engagement in collaborative learning using a multi-modal deep neural network (MDNN). We combine facial expression and gaze direction as two individual components of the MDNN to predict engagement levels in collaborative learning environments. Our multi-modal solution was evaluated in a real collaborative environment. The results show that the model can accurately predict students’ performance in the collaborative learning environment.

    KEYWORDS Engagement; facial expression; deep network; gaze

    1 Introduction

    Collaborative learning is a mode of knowledge construction based on the group [1]. According to psychosocial developmental theory, interactions between learners improve their mastery and understanding of crucial task concepts. However, the traditional classroom often emphasizes teaching content while ignoring students’ feedback and interaction. Collaborative learning activities pay attention to student interaction; students complete learning tasks in a relaxed, happy, democratic learning atmosphere. The interactive nature of collaborative learning allows learning tasks to be completed efficiently and promotes the development of learners’ knowledge and ability.

    Collaborative learning needs group interaction. The success of collaborative learning is closely related to the state, intensity, content, and emotion of interpersonal interaction between group participants. Students need to maintain a positive mood and a positive, enterprising mentality. However, in the current collaborative learning situation, students’ communication, interaction, and other learning behaviors cannot be collected, analyzed, and evaluated effectively. Therefore, effective intervention and guidance are impeded.

    Different from the traditional classroom teaching model, the object of learning analysis is expanded from the individual to group members in the collaborative learning model. It is necessary to study learners’ behaviors in interactive social activities within the group.

    Students’ performance is often reflected in their facial expressions and mutual eye contact. Sinha et al. [2] used four aspects (behavioral engagement, social engagement, cognitive engagement, and conceptual-to-consequential engagement) to describe the level of engagement in collaborative learning among group members. Engagement is closely related to learning efficiency.

    1.1 Research Challenges

    Although the measurement of learning engagement has been around for decades, collaborative learning environments are very different from traditional classrooms. Several key issues remain unresolved for collaborative learning scenarios:

    1) There is a lack of an automatic evaluation and analysis model for collaborative learning scenarios. The previous examination or teacher-scoring methods cost manpower and material resources. In addition, they are susceptible to subjective factors.

    2) Computer-aided studies rely heavily on facial expressions, ignoring the information on gaze behavior between learners in collaborative learning.

    3) In university science and engineering courses, learners’ emotional fluctuations are subtle. Relying only on facial emotional changes to identify engagement leads to a poor prediction effect.

    4) Wearing gaze-tracking glasses interferes with the learners’ learning state. The post-processing technology is also complicated.

    5) In collaborative learning scenarios, eyes or faces are partially occluded when learners lean toward other members, which makes gaze estimation difficult.

    6) There is also a lack of corresponding open datasets in collaborative environments.

    Given the above problems, this paper mainly considers the detection of facial expression and gaze in real collaborative learning scenarios to predict students’ engagement in class.

    The difference between our approach and previous gaze-tracking studies is that learners do not need gaze-tracking glasses; instead, cameras capture images unobtrusively, so the approach has the potential for wider application with fewer constraints.

    The main contributions of this paper are summarized as follows.

    1.2 Contributions of Our Approach

    1. A new perspective. In collaborative learning, communication and interaction among group members reflect the inner state of learners’ engagement. In this respect, we propose a method to detect learners’ engagement by joint analysis of facial expressions and gaze.

    2. New idea. The difference between this paper and classical gaze-tracking/eye-movement analysis is that there are fewer constraints on the learners. Learners do not need to wear gaze-tracking glasses or sit in front of a screen. Moreover, learners do not need to face the camera directly; a camera at a distance captures learners’ gaze behavior with little interference. In cooperative learning, communication or movement between members results in partial occlusion of the eyes, which makes gaze prediction difficult. Our method can output an estimated gaze direction based on the visible head features even when the eyes are completely occluded.

    3. New technology. In this paper, deep neural networks in computer vision are used to analyze learners’ engagement through gaze behavior automatically instead of manual coding, which has the advantages of promptness and wider applicability.

    4. New solution. For some courses (such as science), learners show few obvious facial expression changes, so the accuracy of manual or automatic facial analysis is not ideal. We propose a solution that tackles this problem through a fuzzy logic that joins facial expression and gaze so that the two modalities complement each other.

    Mining student behaviors in collaborative mode can be used to construct student portraits. It is of great significance and value for identifying learning risks and proposing intervention measures. Given the above problems, this paper mainly considers the estimation of facial expression and gaze behaviors in collaborative learning scenarios to detect student engagement.

    2 Related Works

    2.1 Analysis of Methods for Teaching in a Classroom

    The quality of classroom teaching is a core measure of schools’ and teachers’ teaching level. However, no unified standard has been formed for evaluating teaching quality. Traditional classes use tests, questions, manual observations, self-reports, and after-school surveys, which cost manpower and time. The collected data are often not comprehensive enough and involve too many human factors, and there is a certain lag, so they are of little use for establishing a real-time, objective classroom teaching evaluation system. Therefore, more and more researchers have taken up the study of classroom teaching evaluation.

    The classroom has always been the most important site for teachers and students to learn and communicate, so it has attracted wide attention from educational researchers. However, the traditional classroom often lacks collaboration and interaction, the evaluation of teaching quality is usually limited to the teacher’s teaching level, and the feedback and interaction of students do not receive enough attention.

    With the profound changes in education, education technology, and talent training, the engagement of classroom learning members has become an important index for evaluating the quality of the classroom.

    2.2 Student Engagement

    The research on student engagement began with educational psychologist Ralph Tyler [3]. Engagement is used to represent the active, effective, and continuous state of learners in the learning process. Existing studies showed a positive correlation between student engagement and teaching quality or academic performance. The literature emphasizes the central role of engagement in classroom learning [4].

    For individual behavior, Fredricks proposed an indicator to evaluate individual learning engagement: a three-dimensional structure of behavior, cognition, and emotion [5]. These three dimensions reflect the internal dynamic interrelation of an individual student. For group learning, the social dimension is added, mainly concerning the interaction direction, interaction content, and emotional state between partners. For example, Li et al. [6] used behavior, cognition, emotion, and social interaction to analyze the engagement levels of group members.

    Both individual behavior and group collaboration involve internal cognitive processes that are not directly observable. The level of behavioral and emotional engagement in the classroom can usually be observed from external performance, using two methods: the self-report method or the expert scoring method.

    1) Self-report method. Self-report is simple, practical, and widely used. However, it depends on whether there is a deviation in the understanding of the requirements or any concealment in the content. 2) Expert scoring method. The expert scoring method has high quality and strong reliability. However, due to the small number of experts, it is impossible to scale up the observation samples in the classroom, and the experts may be influenced by personal factors, so there are deviations and inconsistencies. Moreover, both of these methods have lag problems and scale limitations, and thus cannot solve real classroom teaching problems in a timely manner [7].

    With the development of wireless connectivity and the Internet of Things (IoT), researchers have made efforts to sense information in the classroom environment and predict student engagement through explicit behaviors and learning data. Gao et al. [8] detected multimodal data such as physiological signals and physical activities of students through IoT sensors, recording students’ physiological responses and changes in activities. The student engagement level was then inferred from the emotion, behavior, and cognition aspects. It was verified that the classroom engagement level of high school students is highly correlated with the measured physiological data, which can be used to predict students’ engagement.

    However, fitting every student with a physical sensor like a wristband would be expensive and impractical for the average school. With the proliferation of cameras in classrooms, computer vision offers a cheaper, non-invasive, and unobtrusive alternative for sensing. In recent years, computer vision technology has been used to detect student engagement [9,10].

    2.3 Research on Engagement Based on Computer Vision

    2.3.1 Single-Modal Student Engagement Detection

    In recent years, computer vision and deep learning techniques have been widely used to detect student engagement. Gupta et al. [11] directly labeled the engagement of images and people in the DAiSEE dataset, employing InceptionNet [12], C3D [13], and Long-term Recurrent Convolutional Networks (LRCN) [14]. After training and testing, the engagement accuracies of the different networks are 46.4%, 56.1%, and 57.9%, respectively.

    To address the serious imbalance between positive and negative sample ratios in the dataset, Geng et al. [15] replaced the cross-entropy loss function with focal loss, improving engagement accuracy to 56.2% on the DAiSEE dataset. Then, Zhang et al. [16] proposed a weighted cross-entropy loss function to reduce the performance degradation caused by the imbalance of negative and positive samples. In combination with the Inflated 3D network (I3D), the parameters of the 2D network were extended, gaining a wider view field in the time domain, with an accuracy rate of 52.35% on the DAiSEE dataset. Zhang et al. [16] stated that the root cause of the sample-proportion imbalance was the small scale of the low-engagement data. Therefore, they simplified the original four categories (high engagement, low engagement, very low engagement, and non-engagement) into two categories (engagement and non-engagement) to increase the number of cases in each set. Experimental results showed an accuracy of 98.82%.
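    To make the imbalance-handling ideas above concrete, here is a minimal PyTorch sketch of class-weighted cross-entropy and focal loss; the weight values and gamma are illustrative placeholders, not the settings used in [15] or [16].

```python
import torch
import torch.nn.functional as F

# Per-class weights that up-weight rare low-engagement classes.
# These values are placeholders, not the ones used in the cited works.
CLASS_WEIGHTS = torch.tensor([4.0, 2.0, 1.0, 1.0])

def weighted_ce(logits: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
    """Cross-entropy with per-class weights to counter class imbalance."""
    return F.cross_entropy(logits, labels, weight=CLASS_WEIGHTS)

def focal_loss(logits: torch.Tensor, labels: torch.Tensor,
               gamma: float = 2.0) -> torch.Tensor:
    """Focal loss: down-weights well-classified examples by (1 - p_t)^gamma."""
    ce = F.cross_entropy(logits, labels, reduction="none")
    p_t = torch.exp(-ce)                  # model probability of the true class
    return ((1.0 - p_t) ** gamma * ce).mean()
```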

    In 2021, Liao et al. [17] proposed a network structure based on spatio-temporal features of images, which effectively utilizes the spatial and temporal information of images, improving the accuracy of engagement prediction to 58.84%. Their design combines a pretrained SE-ResNet-50 (SENet) with an LSTM network with a global attention mechanism. Although SE-ResNet-50 has strong spatial feature extraction capability, it still suffers from low classification accuracy due to the unbalanced class distribution in the dataset.

    To further improve the classification performance, Abedi et al. [18] proposed an end-to-end network architecture detecting engagement levels directly from video data. It is a hierarchical structure composed of a ResNet and a Temporal Convolutional Network (TCN). First, a 2D ResNet extracts spatial features from successive video frames; then the TCN analyzes temporal features to detect engagement. Because the spatial feature vector extracted from the continuous frames (via the ResNet) is input to the TCN, the TCN retains more feature information than an LSTM. The prediction accuracy of engagement is improved to 63.9%.

    2.3.2 Multi-Modal Student Engagement Detection

    There are three main categories of multi-modal data: (1) learning process data; (2) physiological data; (3) image data. Learning data include test results, response time, online time, etc., and suffer from limitations and lag. Physiological data need special sensors; for economic reasons and because of their invasiveness, they are difficult to implement in general classrooms with many students.

    Image data have attracted increasing attention from researchers in recent years due to their non-contact nature and instantaneity. Existing studies on engagement are mainly based on facial expression analysis or gaze tracking [19].

    Although studies based on facial expression and gaze tracking have been carried out for a long time, previous studies were limited to individuals rather than groups because the subject was required to wear an eye tracker to locate their current learning content [20]. Therefore, only non-invasive, image-based facial expression and gaze recognition techniques are discussed below.

    Facial Expression Classification

    Facial expression is a language of emotion, an expression of physiological and psychological state. Psychologists have defined six basic categories of human expressions: surprise, sadness, happiness, anger, disgust, and fear. By analyzing the facial expression information of learners in images or videos, we can understand their inner psychological and emotional states.

    At present, some international enterprises have tried to apply facial recognition technology in the classroom. SensorStar Lab uses cameras to capture learners’ smiles, frowns, and sounds, using its EngageSense technology to determine whether students are distracted or engaged in class [21].

    Wu et al. [22] used LSTM and GRU networks to extract facial and upper-body features from video, classifying the level of students’ engagement in the course. Dhall et al. [23] adopted an improved GRU model to analyze students’ engagement levels; by using attention weighting, the training of the model on the EmotiW dataset was accelerated. Huang et al. [24] added an attention mechanism to the LSTM network to train on extracted facial features, reaching 60% accuracy for students’ engagement on the DAiSEE dataset. Wang et al. [25] used a CNN to extract facial features and classify student engagement levels.

    Murshed et al. [26] proposed a two-level engagement detection model trained on face images extracted from videos in the DAiSEE dataset. They used the Local Directional Pattern (LDP) for subject-independent edge feature extraction, kernel principal component analysis (KPCA) for nonlinear correlation analysis of the extracted features, and a deep belief network (DBN) for engagement-level classification. The two-level engagement detection accuracy is 91%, and the three-level accuracy is 87%.

    Gaze Detection

    A learner’s gaze behavior is an important real-time indicator of learning status. Gaze, as a basic form of communication, is of great significance in the study of human behavior, emotional expression, and social interaction [27,28]. Gaze tracking refers to automatically detecting the direction of the gaze and mapping it accurately to a real-world target.

    In 1948, the first modern head-mounted eye tracker appeared [29]. It can record the complex process of the line of sight without being affected by head movements. In recent years, the Microsoft Kinect camera has been widely applied in many fields, including gaze tracking, due to its low cost, small size, and depth perception in 3D scenes. Gaze-tracking glasses also use infrared light to estimate gaze direction through corneal reflection, tracking the target at which the eye is gazing. But these technologies rely on hardware support; they are expensive and patented, which limits their popularity.

    However, the emergence of AI has revolutionized the field of gaze tracking. Computer vision and intelligent learning have made considerable progress. Devices such as optical camera sensors have become cheap, which has prompted researchers to automatically extract knowledge from images or videos.

    Understanding where one is looking and analyzing the behavior behind the gaze is the goal of this field of research. The results of this research can provide an implicit way to study the dynamics of human interaction, as well as the interaction between people [30].

    Chong et al. [31] sent the scene image and the face segment cut from the original image into two separate convolutional branches to extract features respectively, and determined the in-scene target of the person’s actual gaze after feature fusion. This was the first work to solve the gaze estimation problem from a third-person perspective.

    Researchers in education have been exploring the importance of gaze fixation, aversion, and following in the classroom [32]. Researchers found that intimacy and positive feelings between teachers and students in class were positively correlated with the frequency of eye contact between teachers and students [33].

    In class, students who are gazed at more tend to be more active [34]. The frequency of gaze is generally thought to be associated with interest or indifference [35].

    Gaze, therefore, expresses a common interest between interlocutors and reflects a positive response to each other [36]. In cooperative learning situations, students who made frequent eye contact with a teacher or partner were more likely to participate effectively in class learning than students who made little eye contact.

    3 Model Formulation

    The goal of our study is to predict student engagement in learning scenarios based on computer vision. This study attempts to add gaze detection to facial expression recognition to improve the accuracy of student engagement classification.

    In collaborative learning, students’ gaze direction and facial expression are unconscious behaviors while communicating with each other. Studying this nonverbal behavior in the classroom can provide an important indicator for teachers. Previous studies required the wearing of sensor devices or eye-tracking devices, which are expensive and technically intrusive for general applications. This study uses cameras to capture learners’ facial expressions and gazes in the wild and adopts AI technology to detect student engagement. We adopt fuzzy control to integrate the two modal features and output the engagement value.

    3.1 Overview of the Proposed Method

    Fig. 1 shows the overall network structure of this method. It includes three parts: face detection, facial expression identification, and gaze estimation. The sentiment in the facial expression is set at two levels, and the gaze is divided into four levels (high, a little bit high, a little bit low, low). Combining facial expression and gaze estimation jointly, the final student engagement is classified into four levels (high engagement, engagement, a little bit low engagement, low engagement), as shown in Table 1.

    Figure 1: Overview of the proposed network for students’ engagement level prediction

    Table 1: Student engagement table

    3.2 Facial Expression Processing

    1) Video preprocessing

    FFmpeg [37] is used to process the video recorded by the camera in class, and the corresponding frames are extracted according to the time sequence.
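    As an illustration, frame extraction can be scripted as in the minimal sketch below; the 1-fps sampling rate, file paths, and naming scheme are our assumptions, not specified in the paper.

```python
import subprocess

def extract_frames(video_path: str, out_dir: str, fps: int = 1) -> None:
    """Sample frames from a classroom recording at a fixed rate via FFmpeg."""
    subprocess.run(
        ["ffmpeg",
         "-i", video_path,              # input video recorded in class
         "-vf", f"fps={fps}",           # keep `fps` frames per second
         f"{out_dir}/frame_%06d.jpg"],  # zero-padded sequential file names
        check=True,
    )

extract_frames("class_recording.mp4", "frames", fps=1)
```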

    2) Face detection using MTCNN

    MTCNN is a cascaded multi-task convolutional neural network. In this model, three cascaded networks are used, and candidate bounding boxes plus classifiers are used to detect face areas quickly and efficiently.

    Using these cascaded CNNs, the face information in the corresponding frames is extracted through the P-NET, R-NET, and O-NET networks in MTCNN.

    The MTCNN algorithm performs a multi-scale transformation on the image to form an image pyramid, which contains 10 scaled images of different sizes. These are each sent to the network for detection, so that the MTCNN algorithm can automatically adapt to the detection of large or small targets.

    P-NET (Proposal Network) is a network structure mainly used to obtain candidate bounding boxes of local face areas. A fully convolutional network detects faces in the input image while bounding-box regression calibrates the positions of the candidate boxes. Finally, non-maximum suppression (NMS) merges highly overlapping candidate boxes, optimizing the quantity and quality of the candidates, as sketched below.
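    Since NMS recurs in each stage, here is a minimal NumPy sketch of greedy IoU-based NMS; the 0.5 threshold is an illustrative assumption.

```python
import numpy as np

def nms(boxes: np.ndarray, scores: np.ndarray, iou_thr: float = 0.5) -> list:
    """Greedy non-maximum suppression over [x1, y1, x2, y2] boxes."""
    areas = (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])
    order = scores.argsort()[::-1]            # best-scoring boxes first
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(int(i))
        rest = order[1:]
        # intersection of the kept box with all remaining candidates
        xx1 = np.maximum(boxes[i, 0], boxes[rest, 0])
        yy1 = np.maximum(boxes[i, 1], boxes[rest, 1])
        xx2 = np.minimum(boxes[i, 2], boxes[rest, 2])
        yy2 = np.minimum(boxes[i, 3], boxes[rest, 3])
        inter = np.clip(xx2 - xx1, 0, None) * np.clip(yy2 - yy1, 0, None)
        iou = inter / (areas[i] + areas[rest] - inter)
        order = rest[iou <= iou_thr]          # drop heavily overlapping boxes
    return keep
```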

    R-NET (Refine Network) is a network structure that further optimizes the face candidate boxes obtained by P-NET, which are input into R-NET as parameters. Compared with the P-NET structure, a fully connected layer is added; the candidate boxes are further adjusted by face bounding-box regression, and candidates are again discarded by non-maximum suppression, further suppressing wrong candidate boxes.

    O-NET (Output Network) has the main task of outputting facial feature points. Compared with R-NET, it has an additional convolution layer, which makes the face candidate boxes more accurate. This layer provides more supervision over the candidate boxes and can also locate and output the nose, left eye, right eye, left mouth corner, and right mouth corner within the face candidate boxes.

    As shown in Fig. 2, many candidate bounding boxes are generated by P-NET. The number of bounding boxes is reduced after non-maximum suppression. They are then sent to R-NET, which further reduces the face candidate boxes. Finally, the single face prediction bounding box is output by O-NET.

    Figure 2: The output of predicted face bounding boxes through MTCNN
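    For reference, a minimal sketch of running such a cascade with the facenet-pytorch package; the library choice is ours, not the paper’s.

```python
from facenet_pytorch import MTCNN
from PIL import Image

# keep_all=True returns every detected face, since a collaborative-learning
# frame usually contains several students.
mtcnn = MTCNN(keep_all=True)

frame = Image.open("frames/frame_000001.jpg")   # assumed file name
boxes, probs = mtcnn.detect(frame)              # bounding boxes + confidences
faces = mtcnn(frame)                            # cropped face tensors for the
                                                # downstream expression network
```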

    3) Identify effective students’ facial expressions

    A CNN-based lightweight neural network further processes the face information extracted by the MTCNN network to obtain facial expression information. The network is trained on the RAF-DB dataset. The network structure is a 6-layer CNN in which the convolution layers use lightweight 3 × 3 operators; stacking multiple convolution layers yields effective kernels of 5 × 5 and 7 × 7.

    The designed network model structure is shown in Table 2:

    Table 2: Network for face expression identification
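    Since the contents of Table 2 did not survive extraction, the following PyTorch sketch shows one plausible 6-layer 3 × 3 CNN of this kind; the channel widths, pooling placement, and classifier head are illustrative assumptions.

```python
import torch.nn as nn

class ExpressionCNN(nn.Module):
    """Lightweight 6-layer CNN with 3x3 convolutions for 7-way expression
    classification; the channel widths below are assumptions."""
    def __init__(self, num_classes: int = 7):
        super().__init__()
        layers, c_in = [], 3
        for c_out in (32, 32, 64, 64, 128, 128):      # six 3x3 conv layers
            layers += [nn.Conv2d(c_in, c_out, 3, padding=1),
                       nn.BatchNorm2d(c_out),
                       nn.ReLU(inplace=True)]
            if c_out != c_in:                         # downsample as width grows
                layers.append(nn.MaxPool2d(2))
            c_in = c_out
        self.features = nn.Sequential(*layers)
        self.head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                  nn.Linear(128, num_classes))

    def forward(self, x):
        return self.head(self.features(x))
```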

    For the network training, we used the RAF-DB dataset to train and test the network parameters. The dataset contains about 30,000 face images, each labeled with an expression class: surprise, happiness, neutral, fear, disgust, sadness, or anger. In the actual training process, we selected 12,271 pictures as the training set for training network parameters and 3,068 pictures as the test set. When testing the training results, we used the cross-entropy loss function and measured the accuracy of the network.
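    A minimal sketch of one training epoch under these settings follows; the Adam optimizer and the data loader are our assumptions (the paper specifies only the cross-entropy loss).

```python
import torch
from torch.utils.data import DataLoader

model = ExpressionCNN(num_classes=7)             # sketch defined above
criterion = torch.nn.CrossEntropyLoss()          # loss named in the text
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)  # assumed optimizer

def train_epoch(loader: DataLoader) -> float:
    """Run one pass over the RAF-DB training split, returning the mean loss."""
    model.train()
    total = 0.0
    for images, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
        total += loss.item()
    return total / len(loader)
```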

    4) Classify the student status based on the seven classification results

    According to the seven-class classification, the corresponding emotional states are surprise, happiness, fear, disgust, sadness, anger, and neutral. Fig. 3 shows the engagement level based on facial expression in collaborative learning activities.

    Figure 3: Facial expression-based engagement level

    The emotional result is computed as follows:

    $E_{face} = \sum_{i=1}^{7} \omega_i o_i$

    where $\omega_i$ is the weight of the emotion network output $o_i$. Since students rarely show extreme facial expressions, such as sadness or fear, $E_{face}$ is formulated as a weighted sum over the seven emotions.

    3.3 Gaze Estimation Model

    Gaze is a continuous signal, so we use bidirectional long short-term memory (BiLSTM) capsules to capture temporal information. A 7-frame image sequence is used to predict the gaze of the central frame. Fig. 4 shows the gaze estimation architecture. For each frame, a CNN first extracts high-level features of dimensionality 256. These features are then input to a two-layer bidirectional LSTM. Finally, all feature vectors are input to a fully connected layer, which outputs the gaze prediction and an error-quantile estimate.

    Fig. 4 shows the gaze prediction for a subject standing directly in front of the camera. An ImageNet-pretrained ResNet-18 is first used to extract the high-level features from the crop of each frame. The model is trained using the Adam optimizer with a 0.0001 learning rate.
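    A minimal PyTorch sketch of this architecture follows; the BiLSTM hidden size and the way per-frame outputs are flattened into the head are assumptions beyond what the text states.

```python
import torch.nn as nn
from torchvision.models import resnet18

class GazeNet(nn.Module):
    """ResNet-18 features + 2-layer BiLSTM over a 7-frame window, predicting
    the central frame's gaze (theta, phi) plus a quantile offset sigma."""
    def __init__(self, seq_len: int = 7, feat_dim: int = 256):
        super().__init__()
        backbone = resnet18(weights="IMAGENET1K_V1")    # ImageNet-pretrained
        backbone.fc = nn.Linear(backbone.fc.in_features, feat_dim)
        self.cnn = backbone
        self.lstm = nn.LSTM(feat_dim, feat_dim, num_layers=2,
                            bidirectional=True, batch_first=True)
        self.fc = nn.Linear(2 * feat_dim * seq_len, 3)  # (theta, phi, sigma)

    def forward(self, frames):                          # (B, 7, 3, H, W)
        b, t = frames.shape[:2]
        feats = self.cnn(frames.flatten(0, 1)).view(b, t, -1)
        out, _ = self.lstm(feats)                       # (B, 7, 2*feat_dim)
        return self.fc(out.flatten(1))                  # (B, 3)
```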

    Since the collaborative learning setting is an unconstrained environment, the prediction is likely to degrade when the eyes in the image deviate from facing the camera. To model the error bounds, a pinball loss function is used to predict error quantiles. For an image, we estimate the expected gaze direction (as shown in Fig. 5) as well as the cone of error bounded by the 10%–90% ground-truth quantiles.

    The output of the network is the gaze direction in spherical coordinates, computed from the gaze vector $g = (g_x, g_y, g_z)$ as

    $\theta = -\arctan(g_x / g_z), \quad \varphi = \arcsin(g_y)$
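    For concreteness, this conversion can be sketched as follows, assuming a unit gaze vector expressed in the eye coordinate system of Fig. 5.

```python
import numpy as np

def gaze_to_spherical(g: np.ndarray) -> tuple:
    """Convert a 3D gaze vector (gx, gy, gz) to spherical (theta, phi)."""
    gx, gy, gz = g / np.linalg.norm(g)        # normalize defensively
    theta = -np.arctan2(gx, gz)               # horizontal angle (yaw)
    phi = np.arcsin(np.clip(gy, -1.0, 1.0))   # vertical angle (pitch)
    return theta, phi
```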

    Figure 4: Gaze estimation model architecture

    Figure 5: The coordinate system of the eye

    Then we use the pre-known ground-truth gaze vector in the eye coordinate system to compute the loss. $\sigma$ is the offset from the expected gaze for the quantile $\tau$, so $\theta + \sigma$ and $\varphi + \sigma$ represent the 90% quantile, while $\theta - \sigma$ and $\varphi - \sigma$ represent the 10% quantile.

    If $y = (\theta_{gt}, \varphi_{gt})$, the loss $L_\tau$ for the quantile $\tau$ corresponding to $\theta$ is:

    $L_\tau(\theta) = \max\big(\tau (\theta_{gt} - (\theta + \sigma)),\ (\tau - 1)(\theta_{gt} - (\theta + \sigma))\big)$

    with an analogous term using $\theta - \sigma$ for the 10% quantile and corresponding losses for $\varphi$.
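    A minimal PyTorch sketch of this two-sided pinball loss for one angle; treating tau = 0.9 as the upper quantile mirrors the text, while the exact reduction is an assumption.

```python
import torch

def pinball_loss(theta: torch.Tensor, sigma: torch.Tensor,
                 theta_gt: torch.Tensor, tau: float = 0.9) -> torch.Tensor:
    """Quantile (pinball) loss with theta + sigma as the tau quantile and
    theta - sigma as the (1 - tau) quantile."""
    e_hi = theta_gt - (theta + sigma)             # upper-quantile residual
    e_lo = theta_gt - (theta - sigma)             # lower-quantile residual
    loss_hi = torch.maximum(tau * e_hi, (tau - 1.0) * e_hi)
    loss_lo = torch.maximum((1.0 - tau) * e_lo, -tau * e_lo)
    return (loss_hi + loss_lo).mean()
```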

    In the collaborative learning scene, the position of the camera is fixed relative to the subjects. For simplicity, the camera in this study is set in front of a group of students at a constant distance. The faces, the notebooks on the table, and the front screen are designated as sensitive target areas in advance. By calculating each student’s gaze direction and its relative proximity to the targets, we can identify whether or not the student’s gaze is fixed on a target. The engagement of students is judged from statistics of these data over a period of time, as sketched below.
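    One simple way to implement the target test is an angular-proximity check, as in the sketch below; the target directions and the 15° tolerance cone are illustrative assumptions.

```python
import numpy as np

# Pre-registered unit directions from a student's head toward each sensitive
# target area; the coordinates here are illustrative placeholders.
TARGETS = {
    "screen":   np.array([0.0, 0.1, 1.0]),
    "peer":     np.array([0.8, 0.0, 0.6]),
    "notebook": np.array([0.0, -0.9, 0.4]),
}

def gazed_target(gaze: np.ndarray, max_angle_deg: float = 15.0):
    """Return the target angularly closest to the gaze direction,
    or None if nothing falls inside the tolerance cone."""
    gaze = gaze / np.linalg.norm(gaze)
    best, best_angle = None, max_angle_deg
    for name, direction in TARGETS.items():
        d = direction / np.linalg.norm(direction)
        angle = np.degrees(np.arccos(np.clip(gaze @ d, -1.0, 1.0)))
        if angle < best_angle:
            best, best_angle = name, angle
    return best
```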

    4 Experiment

    4.1 Experiment Setup

    The subjects of this study are second-year undergraduate students at a Chinese university, with an age range of 18–20. The course is a seminar for computer science students. The experimental tools are the fixed cameras in the classroom: the classroom is equipped with 8 cameras to record the course scene from different directions. The collection tool is an automatic recording system connected to a cloud platform. A 45-min video of the class was recorded. All participants gave written consent for the images and videos to be used for research.

    4.2 The Experimental Process

    In the seminar, students were grouped into teams of 3–5. They were asked to analyze a sequence of problems shown on the screen. One person in each group was selected randomly by the teacher to answer questions related to the topic discussed, and the other members’ scores were graded by that person’s answers. Given this requirement, collaborative learning among group members meets everyone’s needs.

    When the learners’ positions were fixed, the effective targets for their gaze were: the faces of their peers, the study materials on the desk, and the two projection screens at the front of the classroom (which displayed the requirements and prompts for discussion). In the experiment, we randomly captured a representative video frame, or multiple adjacent frames, at time intervals and analyzed them. Using these data, we can obtain the direction of students’ gazes and count the frequency and duration of learners’ gazes.

    Fig. 6 shows an example of a collaborative learning setting. We found that group members spent most of their time looking at the screen to analyze the problem, or gazing at their peers’ faces to discuss it. Fig. 6a shows two members looking at one of the two screens, with the other person looking at the face of a member outside the image. Fig. 6b shows two members looking at each other in discussion while the third member looks at the screen.

    Figure 6: An example of collaborative learning

    4.3 Classification of Effective Facial Expression

    Since the members in Fig. 6b are wearing masks or have parts of the face occluded, we adopted the three members on the left of Fig. 6a for the experiment illustration.

    After the face bounding boxes are detected by the MTCNN network, the corresponding face images are extracted. They are then sent to the facial expression network to detect the students’ expressions. The experimental result is shown as follows.

    Table 3 shows the emotional result of Fig. 7.

    Table 3: The emotion result of Fig. 7

    Figure 7: Face detection by the MTCNN network

    4.4 Gaze Direction Estimation

    This experiment obtains the 3-dimensional rotation (yaw, pitch, and roll), so the gaze direction and gaze targets can be estimated. Fig. 8 presents a result of gaze directions; the corresponding values are given in Table 4.


    Figure 8: Gaze direction estimation of the students

    Table 4: The gaze direction of Fig. 8

    Fig. 9 shows an example of the gaze target detection result.


    Figure 9: The gaze target prediction

    4.5 Joint Facial Expression and Gaze Direction for Engagement Evaluation

    Fuzzy logic refers to a way of reasoning that imitates the human brain’s judgment and reasoning under uncertainty. For systems whose models are unknown or uncertain, fuzzy sets and fuzzy rules are applied to express qualitative experience, simulate the way the human brain works, and implement comprehensive fuzzy judgment and reasoning, solving regular fuzzy-information problems that are difficult to handle with conventional methods. Fuzzy logic is good at expressing qualitative knowledge and experience with unclear boundaries. Using the concept of a membership function, fuzzy logic distinguishes fuzzy sets, deals with fuzzy relations, and simulates the human brain in implementing rule-based reasoning.

    We use two fuzzy logics, A and B, to fuse the estimates of facial expression and gaze, as shown in Fig. 10. The average value of logic A was 0.74 with a Pearson correlation of 0.97; the average value of logic B was 0.88 with a Pearson correlation of 0.99, which describes students’ final learning effectiveness well.

    After the multi-modal deep neural network (MDNN) is employed, the students’ engagement can be predicted. To evaluate the difference from the real results, the teacher gave 20 questions after the course was completed. The questions are highly relevant to the course’s collaborative content, and their difficulty levels are evenly distributed. We make statistical comparisons of the average and individual scores of the whole class.


    Figure 10: Fuzzy logic

    The segment cross point of membership for expression recognition is 50%. The membership function boundaries of fuzzy logic A for engagement (low, a little low, a little high, high) are (20%, 55%, 80%), and those of logic B are (30%, 50%, 70%).
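    To illustrate, the sketch below turns the Logic-A boundaries into per-level memberships; the piecewise-linear shape and shoulder handling are our assumptions, since the paper gives only the boundary points.

```python
import numpy as np

LEVELS = ("low", "a little low", "a little high", "high")

def memberships(x: float, bounds=(0.20, 0.55, 0.80)) -> dict:
    """Piecewise-linear memberships over [0, 1] built from the boundaries."""
    edges = (0.0, *bounds, 1.0)
    centers = [(edges[i] + edges[i + 1]) / 2 for i in range(4)]
    pts = [0.0, *centers, 1.0]               # interpolation grid
    out = {}
    for i, name in enumerate(LEVELS):
        # profile peaking at this level's band center...
        profile = [1.0 if j == i + 1 else 0.0 for j in range(6)]
        # ...with flat shoulders at the extremes for "low" and "high"
        if i == 0:
            profile[0] = 1.0
        if i == 3:
            profile[5] = 1.0
        out[name] = float(np.interp(x, pts, profile))
    return out

print(memberships(0.60))   # strongest membership: "a little high" (0.75)
```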

    The result is shown in Table 5. Our results show that fuzzy logic B is more correlated with the post-test values, which indicates that people’s gaze frequency has a concentrated distribution.

    Table 5: The result of engagement

    1) Comparative study

    To compare the MDNN network with other works, we report results from the following methods in Table 6. It shows our results in comparison with other methods for detecting student engagement on our class video.

    Facial Expression [38] predicts engagement using only facial expressions.

    Liao et al. [17] directly judged engagement from students’ facial expressions.

    Abedi et al. [18] combine ResNet + TCN for end-to-end engagement prediction.

    Gaze Only [39] uses the gaze in a single frame to predict engagement.

    Gaze + LSTM [39] uses the gaze in a sequence of frames to predict engagement.

    Table 6: Quantitative result over the real dataset

    Facial Expression [38] has a relatively high accuracy rate on datasets with frontal faces. However, its classification of facial expression is not ideal when the body is turned sideways during discussion.

    Liao et al. [17] predicted engagement directly from students’ facial expressions. However, the accuracy of the engagement was low because the students could not keep their faces toward the camera.

    Abedi et al. [18] judged students’ engagement levels mainly based on frontal faces. But mistakes can occur when students look down or turn sideways to discuss.

    Gaze Only [39] can identify the target of attention through gaze detection. Because it cannot detect expressions, useful information contained in facial expressions, such as a smile, is ignored.

    Gaze + LSTM [39] uses the temporal information of gaze to predict the target that people are paying attention to. However, it cannot judge engagement from facial expressions directly.

    MDNN uses both facial expression and gaze estimation, and integrates the temporal information of gaze. The results show that this method performs best.

    2) Ablation study analysis

    To demonstrate and better understand the importance of each module in the proposed MDNN, we test the performance of the different modules of MDNN.

    These constructed networks are as follows:

    1) Weighted Facial Expression: only introduces different weights in the classification of emotions for different facial expressions, without gaze estimation. 2) Gaze + LSTM: only uses the effective gaze, without facial expressions. 3) Weighted Facial Expression + Gaze + LSTM + Fuzzy Fuse A: first uses facial expression and gaze estimation to detect emotions and the gazed targets, then fuses the results with fuzzy logic A.

    We compiled statistics over the total number of video frames and used the different modules to detect them. As can be seen from Table 7, the detection effect is best when the facial expression and gaze estimation modules are integrated.

    Table 7: Ablation study on collaborative learning dataset

    5 Discussion

    Effective gaze ratio. In collaborative learning, the ratio of gaze on targets is defined as the Effective Gaze Ratio, which reveals valuable information about learners, such as gazing at teammates’ faces, teammates’ sketches, the screen in front, or the teacher [40,41]. Few studies have investigated computer-vision-based approaches that measure this well in unconstrained scenarios such as collaborative learning. Compared with other methods using traditional high-cost eye-tracking equipment [42], our method can automatically capture effective gaze ratios through a simple camera throughout the learning process.
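    As defined above, the metric reduces to a per-frame hit count; here is a minimal sketch reusing the hypothetical gazed_target helper from Section 3.3.

```python
def effective_gaze_ratio(frame_targets: list) -> float:
    """Fraction of frames whose gaze lands on a predefined target.

    `frame_targets` holds per-frame outputs of a detector such as the
    hypothetical `gazed_target` above (None when no target is hit).
    """
    if not frame_targets:
        return 0.0
    hits = sum(t is not None for t in frame_targets)
    return hits / len(frame_targets)

# Example over a short clip: three of four frames hit a target.
print(effective_gaze_ratio(["screen", "peer", None, "screen"]))  # 0.75
```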

    Target gaze ratio and post-test scores. The study showed a significant positive linear relationship between the target gaze ratio and students’ post-test scores. Students who focus on effective targets for longer periods during the learning process are more likely to score higher on post-tests. This finding is consistent with previous research on the positive effects of gaze on learning outcomes [43,44].

    6 Limitations and Future Work

    There is some space for improvement in the future. 1) We only counted the gaze frequency of students on certain targets. However, due to individual differences, some students may look up while thinking, which would be identified as disengagement. How to judge the learning effect in light of individual differences is a future direction. 2) Turning the body and blocking the face caused a decrease in expression and gaze judgment accuracy. In the future, we will combine more effective AI computer vision algorithms for facial expression recognition [38,45,46] and head/body posture estimation [47,48] to achieve a more comprehensive explanation of the learning process. This study provides an assessment tool for collaborative learning environments based on facial and gaze information, and provides implications for the field of educational technology.

    7 Conclusion

    In a collaborative learning environment, students interact with each other and share ideas with their partners. Students’ gaze direction and facial expression are unconscious behaviors when communicating with each other. Studying this nonverbal behavior in the classroom can provide important feedback to teachers. It is important to analyze student behavior and understand one’s focus.

    In this paper, we proposed an automatic assessment method for student engagement based on computer vision. The method uses gaze and facial expression information to predict gaze targets and identify emotions. We tested the proposed method by extracting gaze and facial features to assess learning achievements. The results showed that students with higher gaze ratios and positive expressions performed better on tests, as determined by our automated assessment method.

    Funding Statement: This work is supported by the National Natural Science Foundation of China (No. 61977031) and XPCC’s Plan for Tackling Key Scientific and Technological Problems in Key Fields (No. 2021AB023-3).

    Conflicts of Interest: The authors declare they have no conflicts of interest to report regarding the present study.
