
    Eye Gaze Detection Based on Computational Visual Perception and Facial Landmarks

    2021-12-11 13:32:14
    Computers, Materials & Continua, 2021, Issue 8

    Debajit Datta, Pramod Kumar Maurya, Kathiravan Srinivasan, Chuan-Yu Chang, Rishav Agarwal, Ishita Tuteja and V. Bhavyashri Vedula

    1School of Computer Science and Engineering, Vellore Institute of Technology (VIT), Vellore, 632014, India

    2School of Information Technology and Engineering, Vellore Institute of Technology (VIT), Vellore, 632014, India

    3Department of Computer Science and Information Engineering, National Yunlin University of Science and Technology, Yunlin, 64002, Taiwan

    Abstract: The pandemic situation in 2020 brought about a 'digitized new normal' and created various issues within the current education systems. One of these issues is the monitoring of students during online examinations. A system that determines the student's eye gaze during an examination can help to eradicate malpractices. In this work, we track the users' eye gazes by incorporating twelve facial landmarks around both eyes in conjunction with computer vision and the HAAR classifier. We aim to implement eye gaze detection by considering facial landmarks with two different Convolutional Neural Network (CNN) models, namely the AlexNet model and the VGG16 model. The proposed system outperforms the traditional eye gaze detection system, which only uses computer vision and the HAAR classifier, in several evaluation metric scores. The proposed system is accurate without the need for complex hardware. Therefore, it can be implemented in educational institutes for the fair conduct of examinations, as well as in other instances where eye gaze detection is required.

    Keywords: Computer vision; convolutional neural network; data integrity; digital examination; eye gaze detection; extraction; information entropy

    1 Introduction

    The COVID-19 pandemic was one of the worst global biological disasters of 2020 and significantly affected everything around the world. The traditional delivery of education came to a standstill as educational institutions had to be shut down due to the virus. As new, digitized education systems came into effect, new problems began to present themselves. While most educational institutions have started conducting online classes, the examinations are still problematic due to the lack of effective administration and invigilation approaches. At present, online examinations are insufficiently monitored to detect malpractices by the students. A flawed examination system cannot accurately measure the students' abilities or be used to educate the students.

    In this work, image processing and computer vision techniques are applied with the HAAR classifier to locate the eyes in an image. Once the eyes have been located, gray-scaling and thresholding are applied to detect and extract both pupils from the image. Two Convolutional Neural Network (CNN) models, the AlexNet model and the VGG16 model, are applied to identify facial landmarks while maintaining data integrity. There are 68 facial landmarks on a face, with six landmarks around the left eye and six around the right eye. For eye gaze detection, the Euclidean distances are used along with the twelve landmark points around both eyes. The system is validated by comparing the results to those from the traditional approach, which only considers eye detection using the HAAR classifier. The comparison is made using globally accepted evaluation metrics, namely the precision, recall, F1 score, specificity, sensitivity, and mutual information entropy scores. The proposed system can be used for online invigilation processes to automatically determine whether or not students are participating in any malpractices, thus allowing for a fair examination process.

    The implementation of this work can provide a smart examination system for both teachers and students. The work presented here aims to ensure that the education process receives the recognition it deserves even when the world is in turmoil. The proposed eye gaze detection system may be applied to detect malpractices during examinations. It can also be applied wherever eye gaze detection is required, such as analyzing a user's interest level in an online course by tracking the person's eye gaze. The proposed system can also be applied to recommendation systems [1], where the eye gaze plays a vital role in understanding the users' interests. The eye gaze data can also be combined with web data mining concepts and technologies like clickstream analysis to determine the most relevant part of a web page and facilitate effective website design. The system also helps in analyzing the effectiveness of a steganographic image by capturing the region of interest for different users.

    The following sections describe our work in detail, explaining the implementation along with the experimental results. Section 2 discusses related research works by several researchers. Section 3 briefly discusses the proposed system. Section 4 discusses the execution of each module in detail, as well as the concepts that have been used or proposed. Section 5 discusses the results and observations, along with visualizations through charts and plots for better understanding. Finally, Section 6 provides the conclusions based on the results and proposes future enhancements.

    2 Related Works

    The work of Qiu et al. [2] dealt with recognizing facial expressions based on the 68 different landmarks on the face. They successfully determined seven expressions: neutral, happy, fearful, sad, surprised, disgusted and angry. They used distance vectors over the coordinates to design the algorithm for predicting the expressions accurately. The performance of their method was comparable to that of well-known CNN models like the VGG-16 and ResNet models. According to the work of Dhingra et al. [3], non-verbal communication was detected by tracking the eye gaze. Their system detected non-verbal communication at a distance of around 3 meters for 28 testers. Using OpenFace, iView, and SVM, they created a system that provided decent accuracies. Park et al. [4] used facial landmarks to provide insights into facial expression differences and determine whether a person smiles spontaneously or fakes a smile. They used Singular Value Decomposition (SVD) on the obtained distances to find the optimal transition for detecting a spontaneous smile. The robustness of the system could be optimized further for real-time expression detection.

    Su et al. [5] created an eye gaze tracking system that targets the pupil and the corners of the eyes to trace the eye movements. They used the inner corner-pupil center vector calculated in Euclidean distances. The model was based on a Deep Neural Network (DNN), and ReLU was deployed as the activation layer. The system can be made to provide real-time responses. Iannizzotto et al. [6] provided a remote eye-tracking system to cope with interactive school tasks amid the COVID-19 pandemic. They used OpenCV to implement video conferencing software in their model. In addition, the system is scaled to capture more faces in a single frame.

    Sáiz Manzanares et al. [7] dealt with eye gaze tracking and data mining techniques for a sustainable education system. They used statistical analysis together with a hierarchical clustering algorithm based on BIRCH, and the Expectation-Maximization algorithm was deployed to test the system. The patterns obtained after data analysis were used to develop the education system.

    According to the work of Tran et al. [8], the Interpersonal-Calibrating Eye gaze Encoder (ICE) was effective in extracting eye gaze movements from a video. The paper used the ICE dynamic clustering algorithm and validated it using an infrared gaze tracker. Nevertheless, due to the errors caused by the clustering algorithm, the data was not sampled appropriately and discrepancies were not removed. Dahmani et al. [9] developed a motorized wheelchair with an eye-tracking system. The paper described the CNN as the best choice for gaze estimation. Ujbanyi et al. [10] tracked eye movements to determine where a person is looking. Their work captured the eye movements by performing human-computer interaction-based motoric operations. They focused on how the eye-tracking methodology affected the cognitive processes and did not elaborate on the landmarks used to detect eye movements.

    Zhu et al. [11] investigated a gaze tracking system with superior accuracies. They developed new algorithms to identify the inner eye corner and the focal point of the iris with sub-pixel precision. Their method established a sub-pixel feature tracking algorithm that upgraded the precision of head pose estimation. However, their work was only accurate for high-resolution images. According to Chennamma et al. [12], there are four different oculographic methods: electro-oculography, scleral search coils, infrared oculography, and video oculography. Depending on the hardware used for tracking the eye movements, video oculography was further classified into single-camera and multi-camera eye trackers. Moreover, there are two gaze estimation methods, namely feature-based gaze estimation and appearance-based gaze estimation. The feature-based approaches are further categorized into model-based and interpolation-based approaches. Their work mainly highlighted the need for standardizing the metrics of eye movements. Datta et al. [13,14] provided a better way of implementing the Convolutional Neural Network (CNN) models for image classification. The traditional method of implementing CNNs was time-consuming; moreover, the performance and hardware utilization could be further improved. By utilizing parallel computation with Ray, the authors reduced the operating time and increased efficiency.

    3 Proposed Work

    Over the years, several advancements and much research have been made in computer vision and image processing. With the growing number of technologies and the expanding scientific knowledge base, there have been many research works on predicting the eye gaze direction. The traditional eye gaze detection method was based purely on image processing, which provided decent evaluation scores. Infrared was often used for eye gaze detection, but several research works consider it to be harmful [15,16]. In this paper, we propose an approach that is more effective than the traditional eye gaze detection method. The proposed method calculates the distances between facial landmarks to provide more accurate eye gaze detection.

    The architecture diagram in Fig. 1 shows the different eye gaze detection steps described in this work. Image processing and computer vision techniques, along with the HAAR classifier, are used to detect the pupils and mark each of them with a red circle, referred to as a blob. The twelve facial landmarks around the eyes are also detected and extracted from the dataset to train the Convolutional Neural Network (CNN) models, namely the AlexNet and VGG16 models. Multiple CNN models were chosen in order to compare the accuracies of the models. Next, the output gaze is predicted from the live video capture data using the Euclidean distances. This work compares the proposed system's evaluation metric scores to those of the traditional system, which uses only image processing and computer vision techniques to predict the eye gaze.

    Figure 1: Architecture of the eye gaze detection system

    4 Implementation

    4.1 Dataset, Hardware and Background Description

    The dataset for the proposed eye gaze detection system has been used to locate the twelve facial landmarks around the eyes. The facial landmark dataset consists of the 68 facial landmarks with their x and y coordinates in Comma Separated Value (CSV) format, along with a total of 5770 colored images of different faces.

    The work has been carried out on an HP Spectre x360 Convertible 15-ch0xx workstation with an x64-based Intel® Core™ i7-8550U processor. Additionally, the system configuration includes 16 GB RAM, a 64-bit operating system, as well as touch and pen support.

    The environment in which this work has been implemented is illuminated by a white Philips Light Emitting Diode (LED) tube light.

    4.2 Facial Landmark Detection Using the CNN

    A human can quickly identify a face or other facial parts like the eyes or nose in an image or video, but a computer needs more information to do the same. To identify a face in an image, face detection methods are used to determine the location of the human face in the image [17,18]. The location is often returned as a bounding box or as coordinate values. To find smaller features such as the eyes and lips, facial landmarks are used to localize and extract the required coordinates of the facial parts from the bounding box. There are 68 landmarks on the face representing its salient features, such as the eyes (both left and right), the eyebrows (both left and right), the nose, the mouth, and the jawline. These landmarks represent the points that are considered to be essential in the localization and extraction of facial features. Each of the 68 landmarks can be represented uniquely by an index number from 1 to 68, and each has unique coordinates represented as (x, y) values.

    The Convolutional Neural Network (CNN) is a branch of Deep Neural Networks used for image recognition and classification [19,20]. In a CNN, an image is taken as an input and assigned numerical weight and bias values to enhance certain image features. At the end, the images are classified into particular groups based on probabilistic values. The process is carried out by passing the image into a convolution layer, pooling, and flattening the obtained output through a fully connected layer. In the convolution layers of the CNN, the image RGB values are updated [14] according to Eq. (1).

    In Eq. (1), f is the input image and h represents the filter. The dimensions of the resultant matrix are m by n. For image classification, CNNs are usually preferred over other neural networks due to their feature extraction ability and minimal pre-processing requirements. The AlexNet and VGG16 architectures are used to train the model for facial landmark detection.
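    As an illustration of the convolution step described above, the following sketch applies a small filter to an image in "valid" mode. The single-channel input, the example values, and the function name are assumptions made for illustration, not the exact operation used inside the AlexNet or VGG16 layers.

```python
# Minimal sketch of the discrete 2-D convolution referenced as Eq. (1):
# each output cell is the sum of element-wise products between the
# (flipped) filter h and the patch of the input image f it overlaps.
# A single channel and "valid" padding are simplifying assumptions.

def conv2d(f, h):
    """Convolve image f (list of lists) with filter h, 'valid' mode."""
    fm, fn = len(f), len(f[0])
    hm, hn = len(h), len(h[0])
    out_m, out_n = fm - hm + 1, fn - hn + 1
    out = [[0.0] * out_n for _ in range(out_m)]
    for i in range(out_m):
        for j in range(out_n):
            acc = 0.0
            for a in range(hm):
                for b in range(hn):
                    # flip the kernel in both axes for true convolution
                    acc += f[i + a][j + b] * h[hm - 1 - a][hn - 1 - b]
            out[i][j] = acc
    return out

image = [[1, 2, 3],
         [4, 5, 6],
         [7, 8, 9]]
kernel = [[0, 1],
          [1, 0]]   # simple anti-diagonal filter, illustrative only
print(conv2d(image, kernel))
```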

    4.3 Evaluation of the Models

    The CNN models used for detecting the facial landmarks are evaluated based on several performance evaluation metrics, namely the precision score as shown in Eq. (2), the recall and sensitivity scores as shown in Eq. (3), the F1 score as shown in Eq. (4), and the specificity score as shown in Eq. (5).

    In these equations, TP, TN, FP and FN respectively represent the true positive, true negative, false positive and false negative values. A true positive represents that the predicted and the actual classifications are both positive; a true negative represents that the predicted and the actual classifications are both negative. A false positive is where the predicted value is positive but the actual value is negative, and vice versa for a false negative. For all of these evaluation metrics, higher score values are better than lower ones.
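    The metrics in Eqs. (2)-(5) can be computed directly from the four confusion-matrix counts. The sketch below is a minimal illustration; the count values are made up and do not come from the experiments in this work.

```python
# Standard confusion-matrix metrics, per Eqs. (2)-(5):
# precision = TP / (TP + FP), recall = sensitivity = TP / (TP + FN),
# F1 = harmonic mean of precision and recall, specificity = TN / (TN + FP).

def precision(tp, fp):
    return tp / (tp + fp)

def recall(tp, fn):          # identical to sensitivity, as the text notes
    return tp / (tp + fn)

def f1_score(tp, fp, fn):
    p, r = precision(tp, fp), recall(tp, fn)
    return 2 * p * r / (p + r)

def specificity(tn, fp):
    return tn / (tn + fp)

# Illustrative (made-up) counts.
tp, tn, fp, fn = 80, 90, 10, 20
print(precision(tp, fp))      # 80 / 90
print(recall(tp, fn))         # 80 / 100
print(f1_score(tp, fp, fn))
print(specificity(tn, fp))    # 90 / 100
```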

    4.4 Pupil Detection Using Computer Vision and Image Processing

    In this work, image processing techniques are implemented to detect the pupil in a live video stream, which is composed of a series of images. Since this sub-module's main goal is to extract the pupil from an image, the pupil needs to be detected first. For pupil detection, the first step is to identify the face and the eyes. The HAAR classifiers are used for these identifications. The HAAR classifiers are XML files for facial feature detection. After the eyes are detected, pupil extraction is performed, followed by thresholding. After thresholding, the pupils are extracted and the blobs are drawn around the pupils. The proposed system is implemented using Python in conjunction with the OpenCV library.

    For eye detection using OpenCV, the HAAR classifiers are used to detect the objects in the given image or video. OpenCV has built-in classifiers, haarcascade_frontalface_default.xml and haarcascade_eye.xml, for face and eye detection, respectively. The classifiers perform the detection process based on certain properties [21,22]. These HAAR classifiers are built into OpenCV and are trained using many positive (images containing faces or eyes) and negative (images without faces or eyes) datasets. If a face is detected using the HAAR face classifier, a Region of Interest (ROI) is created for the face and eye detection is applied to the ROI. The ROI is specified by coordinates which represent the bounding regions of the face. For eye detection, the surrounding areas also need to be considered, such as areas containing the eyelids, eyelashes, and so on. Therefore, accurate demarcation is required before eye detection.

    4.5 Gaze Prediction Using Euclidean Distances

    The Euclidean distance is the length of the segment between any two points in space. The calculation of the Euclidean distance is shown in Eq. (6).

    As shown in Eq. (6), the distance between any two points on a plane, p(x1, y1) and q(x2, y2), is calculated as dist(p, q).
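    The distance in Eq. (6) translates directly into code. The following minimal sketch is an illustration; the function name dist simply mirrors the notation above.

```python
# Euclidean distance between two points, per Eq. (6):
# dist(p, q) = sqrt((x2 - x1)^2 + (y2 - y1)^2).
from math import sqrt

def dist(p, q):
    """Length of the segment between points p = (x1, y1) and q = (x2, y2)."""
    (x1, y1), (x2, y2) = p, q
    return sqrt((x2 - x1) ** 2 + (y2 - y1) ** 2)

print(dist((0, 0), (3, 4)))   # the classic 3-4-5 triangle: 5.0
```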

    The conceptual representation of the eye can be seen in Fig. 2. Once the blob is formed using OpenCV, the center of the blob is calculated as the center of the pupil, ep(xep, yep), as shown in Fig. 2. The eye landmarks are detected using facial landmark detection. Each eye has six landmark points around it: e1(xe1, ye1), e2(xe2, ye2), e3(xe3, ye3), e4(xe4, ye4), e5(xe5, ye5) and e6(xe6, ye6). An assumed center ec(xec, yec) is determined based on the six landmark locations according to Eqs. (7) and (8), as shown in Fig. 2.

    In Eqs. (7) and (8), xec is the x-coordinate and yec is the y-coordinate of the assumed center. After the assumed center is calculated and the pupil's center is located, the projections of ep on the axes, epx and epy, are considered. As shown in Fig. 2, a threshold is decided; in this work, the threshold is selected as 40%. The threshold determines the part of the eye region that is considered as the center. The distances from ec to e1, e4, et, and eb are calculated using Eq. (6), as dist(e1, ec), dist(e4, ec), dist(et, ec), and dist(eb, ec), respectively. The x coordinate of et can be calculated using Eq. (9), the y coordinate of et using Eq. (10), the x coordinate of eb using Eq. (11), and the y coordinate of eb using Eq. (12).

    In Eqs. (9)-(12), et and eb are the assumed top and bottom points of the eye, as shown in Fig. 2. Once these values are calculated, the system computes the distances from ec to the projections on the axes, dist(epx, ecx) and dist(epy, ecy), to detect the eye gaze using Euclidean distances. The obtained distances are compared with the threshold applied to the distances from the assumed center of the eye, ec. The following scenarios are considered:

    (1) epx > ecx and dist(epx, ecx) <= 40% of dist(et, ec)

    (2) epx < ecx and dist(epx, ecx) <= 40% of dist(eb, ec)

    (3) epy > ecy and dist(epy, ecy) <= 40% of dist(e4, ec)

    (4) epy < ecy and dist(epy, ecy) <= 40% of dist(e1, ec)

    (5) epx > ecx and dist(epx, ecx) > 40% of dist(et, ec)

    (6) epx < ecx and dist(epx, ecx) > 40% of dist(eb, ec)

    (7) epy > ecy and dist(epy, ecy) > 40% of dist(e4, ec)

    (8) epy < ecy and dist(epy, ecy) > 40% of dist(e1, ec)

    If any one of conditions 1 to 4 holds true, the eye gaze is considered to be at the "center". Otherwise, if conditions 5 and 7 are true, the gaze is considered to be at the "top right"; if 5 and 8 are true, at the "top left"; if 6 and 7 are true, at the "bottom right"; and if 6 and 8 are true, at the "bottom left".
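    The classification rules above can be sketched as follows. Since the exact pairing of each axis with its reference distance is ambiguous in the listed conditions, this sketch simply compares each projection offset against 40% of a per-axis reference distance; the mean-of-landmarks center (per Eqs. (7) and (8)), the sign conventions, the helper names, and all example values are assumptions made for illustration.

```python
# Hedged sketch of the gaze-classification rules: the pupil center ep is
# compared against the assumed eye center ec; offsets within 40% of the
# per-axis reference distance count as "center", larger offsets pick a
# quadrant label.

def assumed_center(landmarks):
    """Mean of the six eye landmarks, in the spirit of Eqs. (7) and (8)."""
    xs = [x for x, _ in landmarks]
    ys = [y for _, y in landmarks]
    return sum(xs) / len(xs), sum(ys) / len(ys)

def classify_gaze(ep, ec, ref_x, ref_y, threshold=0.40):
    """Label the gaze from pupil center ep and assumed eye center ec."""
    dx, dy = ep[0] - ec[0], ep[1] - ec[1]
    within_x = abs(dx) <= threshold * ref_x
    within_y = abs(dy) <= threshold * ref_y
    if within_x and within_y:
        return "center"
    vert = "top" if dy > 0 else "bottom"       # sign convention assumed
    horiz = "right" if dx > 0 else "left"
    return f"{vert} {horiz}"

# Illustrative landmark coordinates for one eye.
eye = [(10, 5), (12, 8), (15, 8), (17, 5), (15, 2), (12, 2)]
ec = assumed_center(eye)
print(ec)                                                    # (13.5, 5.0)
print(classify_gaze((13.6, 5.0), ec, ref_x=3.5, ref_y=3.0))  # center
print(classify_gaze((16.0, 7.0), ec, ref_x=3.5, ref_y=3.0))  # top right
```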

    Figure 2: Conceptual representation of the eye in the eye gaze detection

    5 Results and Discussion

    The eye gaze tracking system is implemented with image processing functions from OpenCV. Two different Convolutional Neural Network (CNN) models are used for facial landmark detection, namely the AlexNet and VGG16 models. There are 68 distinct facial landmarks, and each landmark is defined by a pair of x and y coordinates. In this work, we consider the twelve landmarks around the eyes. The indices allocated to these landmarks are 37 to 48. According to the dataset, the relevant columns range from 73 to 84 for the right eye and from 85 to 96 for the left eye, since each landmark has its associated x and y values.
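    Under the assumption that the dataset stores each landmark as a consecutive (x, y) column pair, with landmark 37's x value in column 73, the index-to-column mapping described above can be sketched as follows; the helper name is hypothetical.

```python
# Hedged sketch of the landmark-index -> CSV-column mapping: assumes
# (x, y) columns are stored consecutively and landmark 37's x sits in
# column 73, which reproduces the ranges 73-84 (right eye, landmarks
# 37-42) and 85-96 (left eye, landmarks 43-48) stated in the text.

def landmark_columns(index):
    """Column numbers of the x and y values for an eye landmark 37..48."""
    assert 37 <= index <= 48, "only the twelve eye landmarks are mapped"
    x_col = 73 + 2 * (index - 37)
    return x_col, x_col + 1

print(landmark_columns(37))   # (73, 74) -> first right-eye landmark
print(landmark_columns(42))   # (83, 84) -> last right-eye landmark
print(landmark_columns(48))   # (95, 96) -> last left-eye landmark
```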

    The created models are benchmarked with several performance evaluation metrics, namely the precision, recall, F1 score, specificity and sensitivity scores [23]. The true positive, true negative, false positive and false negative scores are calculated using a confusion matrix. The two CNN models for facial landmark detection are evaluated using these metrics. The models are trained and tested using the same dataset, which has been adapted from the Kaggle platform.

    The system is validated based on the five evaluation metrics. Tab. 1 shows the evaluation metric scores obtained by the AlexNet CNN model for the twenty-four labels, twelve x-axis and twelve y-axis labels, which represent the twelve facial landmarks around both eyes. Tab. 2 shows the various evaluation metric scores obtained by the VGG16 CNN model.

    Table 1: Evaluation metric scores of the AlexNet CNN model

    The values in the tables are plotted using the Python library matplotlib for better visualization and insights into the inherent patterns of the results.

    Table 2: Evaluation metric scores of the VGG16 CNN model

    The plots in Fig. 3 visualize the comparison of the different evaluation metric scores between the AlexNet model and the VGG16 model. From Fig. 3a, it can be inferred that for most of the class labels, the classification for the VGG16 has better precision than that of the AlexNet. The comparison of recall (sensitivity) scores between the AlexNet model and the VGG16 model can be observed in Fig. 3b. The two scores are always equal, thus they are plotted in the same figure. From Fig. 3b, it can be seen that in most cases, the classification has better recall and sensitivity scores for the VGG16 than for the AlexNet, irrespective of the class labels. The F1 scores of the AlexNet model and the VGG16 model are visualized in the plot shown in Fig. 3c. It can be seen from Fig. 3c that for most of the class labels, the classification for the VGG16 has better F1 score values than that of the AlexNet. The specificity scores of the AlexNet model and the VGG16 model are compared and plotted in Fig. 3d. Again, it can be observed that the VGG16 has better specificity scores than the AlexNet for most of the class labels.

    The evaluation metric scores of the AlexNet and the VGG16 models are respectively aggregated and plotted in Figs. 4a and 4b for better visualization. It can be seen from Figs. 4a and 4b that the scores for the two models fluctuate across the different class labels. Since the VGG16 model has comparatively better results, we use the VGG16 CNN model in this work.

    Figure 3: Visualization of the evaluation metric scores obtained by the AlexNet and VGG16 models: (a) the precision scores; (b) the recall (sensitivity) scores; (c) the F1 scores; (d) the specificity scores

    Figure 4: Visualizations of the aggregated evaluation metric scores obtained by the two CNN models: (a) the AlexNet model and (b) the VGG16 model

    While this module is dedicated to facial landmark determination using the CNN models, the other module of this system uses image processing, along with computer vision and the HAAR classifier, to determine the pupil and extract it from the live videos.

    The HAAR classifiers implemented in the provided XML files are combined with Python's OpenCV library to determine the face and eyes, as shown in Fig. 5. The HAAR classifier is able to detect the face and eyes with high accuracy. Using Python's OpenCV library, the face image is converted to gray-scale, and the face and eyes are detected using the HAAR classifier. Once the eyes are detected, the gray-scale image undergoes thresholding to enhance the pupils. The blobs, represented as red circles, are drawn around the detected pupils and displayed in the image.

    Figure 5: Face and eye detection using the HAAR classifier

    Figure 6: The image processing techniques are implemented for the images from a live video: (a) the colored image of a face and (b) the gray-scale image after image processing

    The original, colored face image is shown in Fig. 6a and the resultant gray-scaled image is shown in Fig. 6b. Gray-scaling is applied to the colored image to obtain the image in Fig. 6b. The 4-tuple coordinate values obtained for both eyes are used to determine the left and right eyes after eye detection. The sub-images of the eyes undergo gray-scaling to remove colors and improve the outcomes of thresholding.

    Thresholding is required to extract the pupil. Different threshold values are tested to select the most suitable value. The gray-scaled image undergoes thresholding to obtain a segmented image for better feature extraction, as shown in Fig. 7. In Fig. 7, three different threshold values are used: 135, 90 and 45. It can be seen from Fig. 7 that the rightmost image, with a threshold value of 45, has the best pupil extraction among the tested values. Once both eyes have been detected, one-fourth of the image is removed from the top to eliminate the eyebrows. The image is eroded to remove unwanted boundary pixels such as the eye corners, and then dilated to restore the pixels lost in erosion and expand the pupils' features. The salt-and-pepper noise created by erosion and dilation is removed using a median filter, which also smoothens the image.
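    The thresholding and erosion steps described above can be sketched on a toy gray-scale patch. The pixel values, the fixed 3x3 structuring element, and the binary convention (dark pupil pixels become foreground) are assumptions for illustration, not the exact OpenCV operations used in this work.

```python
# Toy sketch of pupil segmentation: gray values at or below the threshold
# (dark pupil pixels) become foreground (1), the rest background (0);
# a 3x3 erosion then strips boundary pixels, mimicking the cleanup step.

def threshold(gray, t):
    return [[1 if px <= t else 0 for px in row] for row in gray]

def erode(binary):
    """3x3 erosion: keep a pixel only if its whole neighborhood is 1."""
    h, w = len(binary), len(binary[0])
    out = [[0] * w for _ in range(h)]
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            out[i][j] = int(all(binary[i + a][j + b]
                                for a in (-1, 0, 1) for b in (-1, 0, 1)))
    return out

# Bright background (200) around a dark pupil region; values are made up.
gray = [[200, 200, 200, 200, 200],
        [200,  30,  30,  30, 200],
        [200,  30,  10,  30, 200],
        [200,  30,  30,  30, 200],
        [200, 200, 200, 200, 200]]

mask = threshold(gray, 45)    # the dark pupil region survives
print(mask)
print(erode(mask))            # only the pupil core remains
```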

    Figure 7: Images obtained using different threshold values

    The left and right pupils extracted after thresholding are shown in Figs. 8a and 8c, respectively. The blobs are drawn around the extracted pupils' outlines, as shown in Figs. 8b and 8d.

    After the blobs have been determined using the CNN models by following the steps described in the previous sections, the Euclidean distances are calculated to perform gaze prediction. Image processing algorithms for eye gaze prediction have been investigated in several works [24-30]. In this work, we compare the performance of the traditional eye gaze detection technique, which uses only image processing, with that of the proposed system using the VGG16 CNN model.

    The evaluation metric scores obtained by the traditional eye gaze detection method, using only image processing techniques, are shown in Tab. 3. In addition to the precision, recall, F1 score, specificity, and sensitivity, this work also compares the mutual information (MI). The MI is calculated according to Eq. (14), where the entropy function, H, is given by Eq. (13).

    In Eq. (13), fX represents the probability density function of the variable X, and E[·] denotes the expected value function, which is negated to provide the entropy. In Eq. (14), H(X, Y) is the total entropy of the joint variables (X, Y). The mutual information of the variables X and Y is denoted by I(X; Y). A lower MI value is preferred since it denotes higher independence of the variables.
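    The relationship between Eqs. (13) and (14) can be illustrated with a discrete estimate: the entropy is the negated expected log-probability, and the mutual information follows as I(X; Y) = H(X) + H(Y) - H(X, Y). Estimating over paired discrete samples is an assumption made for illustration; the text states the equations over probability density functions.

```python
# Hedged discrete sketch of Eqs. (13) and (14): H is the negated expected
# log-probability over sample frequencies, and the mutual information is
# I(X; Y) = H(X) + H(Y) - H(X, Y). Sample values are illustrative only.
from collections import Counter
from math import log2

def entropy(samples):
    n = len(samples)
    counts = Counter(samples)
    return -sum((c / n) * log2(c / n) for c in counts.values())

def mutual_information(xs, ys):
    # joint entropy is estimated over the paired samples
    return entropy(xs) + entropy(ys) - entropy(list(zip(xs, ys)))

xs = [0, 0, 1, 1]
print(mutual_information(xs, xs))            # fully dependent: 1.0 bit
print(mutual_information(xs, [0, 1, 0, 1]))  # independent here: 0.0
```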

    The evaluation metric scores obtained by the proposed method are shown in Tab. 4. Comparing Tabs. 3 and 4, it can be seen that the scores obtained by the proposed method are higher than those of the traditional method.

    Figure 8: Pupil extraction and blob formation using the HAAR classifier: (a) left pupil extraction after thresholding, (b) blob formation on the left pupil, (c) right pupil extraction after thresholding and (d) blob formation on the right pupil

    Table 3: Evaluation metric scores for eye gaze detection using the traditional method

    Table 4: Evaluation metric scores for eye gaze detection using the proposed method

    Figure 9: Visualization of the evaluation metrics for the systems implemented using the traditional and the proposed methods: (a) the precision score, (b) the recall score, (c) the F1 score, (d) the specificity score, (e) the mutual information score

    Figs. 9a-9e show the superior performance of the proposed system over the traditional system. The evaluation scores for precision, recall, F1 score, specificity, and sensitivity of the proposed system are all higher than those of the traditional system. The proposed system's mutual information is lower than that of the traditional system, indicating that its variables are more independent, which is more desirable.

    As demonstrated by the results, the proposed system outperforms the traditional approach for eye gaze detection. However, this system has not yet taken into account the effects of varied light sources, which may be a topic for future investigation. Additionally, the proposed system may also be extended to consider users with spectacles or various expressions involving the eyes, such as squinting or winking.

    6 Conclusions

    The pandemic caused by COVID-19 in 2020 gave rise to a new, digitized normal, which drastically changed people's lives. In particular, the education and examination systems around the world were seriously affected. This work has proposed an eye gaze detector to monitor the user's eye motions, which can be implemented in academic institutions as well as other locations where eye gaze tracking is required.

    The system has been implemented by considering specific facial landmarks with the Convolutional Neural Network (CNN) models, in addition to the application of image processing and computer vision techniques. According to various evaluation metrics, namely the precision, recall, F1 score, specificity, sensitivity, and mutual information, the proposed system outperforms traditional eye gaze detectors, which only use image processing and computer vision methods. The gaze detection system is able to determine pupil positions classified as center, top-left, top-right, bottom-left, and bottom-right.

    The proposed system facilitates education and conforms to the new digitized normal that might persist indefinitely after 2020. The work also discusses prospective improvements that can be made to increase the evaluation metric scores. In the future, the system can be further optimized towards better performance and more generalized applications, such as adapting to scenarios with various light sources, or to users with glasses or different facial expressions. The work can be further integrated with research from other domains for more extended applications.

    Funding Statement: This research was partially funded by the "Intelligent Recognition Industry Service Research Center" from The Featured Areas Research Center Program within the framework of the Higher Education Sprout Project by the Ministry of Education (MOE) in Taiwan. Grant Number: N/A; the APC was funded by the aforementioned Project.

    Conflicts of Interest: The authors declare that they have no conflicts of interest regarding the present study.

真人做人爱边吃奶动态| 成人亚洲精品一区在线观看| 最近最新中文字幕大全电影3 | 国产精品久久久久久精品电影 | 国产高清有码在线观看视频 | 亚洲中文av在线| 伦理电影免费视频| 人人妻,人人澡人人爽秒播| 嫁个100分男人电影在线观看| 亚洲一区中文字幕在线| 国产精品久久久久久人妻精品电影| 搡老熟女国产l中国老女人| videosex国产| 亚洲中文日韩欧美视频| 90打野战视频偷拍视频| 看片在线看免费视频| 久久精品影院6| 久久久久久久久免费视频了| 99国产精品免费福利视频| 成人18禁高潮啪啪吃奶动态图| av视频免费观看在线观看| 亚洲avbb在线观看| 久久婷婷人人爽人人干人人爱 | 十八禁网站免费在线| 九色国产91popny在线| 两个人免费观看高清视频| 亚洲男人的天堂狠狠| 一级a爱片免费观看的视频| 国产区一区二久久| 丁香欧美五月| 亚洲男人的天堂狠狠| 日韩 欧美 亚洲 中文字幕| 亚洲第一欧美日韩一区二区三区| 亚洲国产看品久久| 午夜久久久在线观看| 国产又色又爽无遮挡免费看| 欧美中文综合在线视频| 久久婷婷人人爽人人干人人爱 | 亚洲精品粉嫩美女一区| 成人精品一区二区免费| 亚洲人成77777在线视频| 色综合婷婷激情| 露出奶头的视频| 看片在线看免费视频| 亚洲中文av在线| 免费一级毛片在线播放高清视频 | 日韩国内少妇激情av| 少妇裸体淫交视频免费看高清 | 欧美色欧美亚洲另类二区 | 国产片内射在线| 国产免费av片在线观看野外av| 国产亚洲欧美精品永久| 久热这里只有精品99| 久久伊人香网站| 在线观看免费视频网站a站| 自拍欧美九色日韩亚洲蝌蚪91| 成人三级黄色视频| 亚洲国产中文字幕在线视频| 国产欧美日韩一区二区三| 亚洲精品国产色婷婷电影| 在线永久观看黄色视频| 精品久久久久久久久久免费视频| 啦啦啦 在线观看视频| 丁香六月欧美| 极品教师在线免费播放| 99国产精品一区二区三区| 亚洲七黄色美女视频| 日韩精品中文字幕看吧| 国产精品久久视频播放| 久久精品91蜜桃| 如日韩欧美国产精品一区二区三区| 久久精品91蜜桃| 亚洲色图 男人天堂 中文字幕| 妹子高潮喷水视频| 日本一区二区免费在线视频| 久久久久亚洲av毛片大全| 大香蕉久久成人网| 岛国在线观看网站| 1024香蕉在线观看| 99re在线观看精品视频| 亚洲成av人片免费观看| 欧美绝顶高潮抽搐喷水| 国产一区二区在线av高清观看| 中文字幕最新亚洲高清| 国产精品一区二区三区四区久久 | 久久人人爽av亚洲精品天堂| 国产精品九九99| 91精品国产国语对白视频| 免费高清在线观看日韩| 欧美国产精品va在线观看不卡| 欧美成人性av电影在线观看| 精品不卡国产一区二区三区| tocl精华| 老熟妇仑乱视频hdxx| 国产国语露脸激情在线看| 精品人妻1区二区| 国产单亲对白刺激| av视频免费观看在线观看| 久久影院123| 老汉色av国产亚洲站长工具| 色综合婷婷激情| 免费观看人在逋| 又黄又爽又免费观看的视频| 高清毛片免费观看视频网站| 午夜福利高清视频| 夜夜夜夜夜久久久久| 最好的美女福利视频网| av福利片在线| 国产成人av激情在线播放| 国产1区2区3区精品| 午夜福利18| 久久久久久久久久久久大奶| 国内精品久久久久精免费| 国产野战对白在线观看| 欧美在线一区亚洲| 真人做人爱边吃奶动态| 99国产精品免费福利视频| 日日摸夜夜添夜夜添小说| 国产成人精品在线电影| 国产精品久久久av美女十八| 男人操女人黄网站| 一二三四在线观看免费中文在| 久久中文字幕一级| 91精品国产国语对白视频| 国产成年人精品一区二区| 国产高清videossex| 亚洲成人国产一区在线观看| 丁香六月欧美| 亚洲av第一区精品v没综合| 777久久人妻少妇嫩草av网站| 久久久久九九精品影院| 自线自在国产av| 少妇的丰满在线观看| 午夜福利一区二区在线看| 两个人看的免费小视频| 亚洲中文av在线| 成人永久免费在线观看视频| 18美女黄网站色大片免费观看| 欧美亚洲日本最大视频资源| 女人精品久久久久毛片| 日韩欧美一区二区三区在线观看| 高潮久久久久久久久久久不卡| 一个人免费在线观看的高清视频| 美女国产高潮福利片在线看| 男人的好看免费观看在线视频 | 黄片播放在线免费| 亚洲狠狠婷婷综合久久图片| 亚洲专区字幕在线| 又大又爽又粗| 韩国精品一区二区三区| 91麻豆av在线| 久久天堂一区二区三区四区| 成人精品一区二区免费| 日韩av在线大香蕉| 丝袜在线中文字幕| 可以免费在线观看a视频的电影网站| 国产蜜桃级精品一区二区三区| 
50天的宝宝边吃奶边哭怎么回事| av在线天堂中文字幕| 免费无遮挡裸体视频| 两性夫妻黄色片| 精品人妻1区二区| 亚洲国产中文字幕在线视频| 国产一区二区三区综合在线观看| 欧美 亚洲 国产 日韩一| 这个男人来自地球电影免费观看| 老熟妇仑乱视频hdxx| 国产成人欧美在线观看| 亚洲激情在线av| 丁香六月欧美| cao死你这个sao货| 久久久久久免费高清国产稀缺| 国产国语露脸激情在线看| 欧美在线黄色| 怎么达到女性高潮| 午夜亚洲福利在线播放| 在线观看www视频免费| 老司机午夜十八禁免费视频| 亚洲片人在线观看| 久久青草综合色| 亚洲九九香蕉| 日韩中文字幕欧美一区二区| 性少妇av在线| 亚洲视频免费观看视频| 亚洲av成人一区二区三| 麻豆av在线久日| 精品一区二区三区视频在线观看免费| e午夜精品久久久久久久| 国产精品久久久久久精品电影 | 黑人巨大精品欧美一区二区蜜桃| 亚洲国产中文字幕在线视频| av有码第一页| 国语自产精品视频在线第100页| 欧美人与性动交α欧美精品济南到| 亚洲国产精品sss在线观看| 成人18禁高潮啪啪吃奶动态图| 亚洲狠狠婷婷综合久久图片| 亚洲欧美精品综合一区二区三区| 1024视频免费在线观看| 黄色成人免费大全| 日韩 欧美 亚洲 中文字幕| 亚洲av第一区精品v没综合| av电影中文网址| 91av网站免费观看| 曰老女人黄片| 日韩大码丰满熟妇| 中文亚洲av片在线观看爽| 日本a在线网址| 国产精品乱码一区二三区的特点 | 九色国产91popny在线| 男女做爰动态图高潮gif福利片 | 欧美日韩精品网址| 午夜免费激情av| 国产欧美日韩一区二区三区在线| 中出人妻视频一区二区| 欧美黑人精品巨大| 法律面前人人平等表现在哪些方面| 97人妻精品一区二区三区麻豆 | 99热只有精品国产| 真人做人爱边吃奶动态| 香蕉丝袜av| 黄色成人免费大全| 国产成人精品无人区| 91九色精品人成在线观看| 十八禁网站免费在线| 九色国产91popny在线| 男女下面进入的视频免费午夜 | 午夜精品在线福利| 老熟妇仑乱视频hdxx| 很黄的视频免费| 免费看a级黄色片| 可以免费在线观看a视频的电影网站| 国产成人系列免费观看| 中出人妻视频一区二区| 99re在线观看精品视频| 亚洲男人的天堂狠狠| 黄色女人牲交| 九色国产91popny在线| 男女床上黄色一级片免费看| 欧美最黄视频在线播放免费| 女性生殖器流出的白浆| 色播在线永久视频| 丝袜美腿诱惑在线| 欧美精品啪啪一区二区三区| 夜夜爽天天搞| 一区二区三区激情视频| 欧美成狂野欧美在线观看| 亚洲 国产 在线| 十分钟在线观看高清视频www| 亚洲伊人色综图| 大码成人一级视频| 久久人人精品亚洲av| 久久久国产成人精品二区| 中文字幕人成人乱码亚洲影| svipshipincom国产片| 人人妻人人爽人人添夜夜欢视频| 成人精品一区二区免费| 免费观看人在逋| 亚洲精品一卡2卡三卡4卡5卡| 十八禁人妻一区二区| 12—13女人毛片做爰片一| 亚洲成a人片在线一区二区| av欧美777| 亚洲天堂国产精品一区在线| 久久香蕉国产精品| 亚洲一区二区三区不卡视频| 亚洲精品国产区一区二| 国产成人精品无人区| 国产精品国产高清国产av| 久久久久久久精品吃奶| 久热爱精品视频在线9| 久久久国产成人免费| 亚洲av成人不卡在线观看播放网| 欧美色欧美亚洲另类二区 | 嫁个100分男人电影在线观看| 国产高清视频在线播放一区| 桃色一区二区三区在线观看| 在线观看一区二区三区| 久久人妻av系列| 色综合站精品国产| 咕卡用的链子| 桃红色精品国产亚洲av| 97人妻精品一区二区三区麻豆 | 精品午夜福利视频在线观看一区| 两个人免费观看高清视频| 久久人妻av系列| 宅男免费午夜| 一级毛片高清免费大全| 色综合站精品国产| 国产精品乱码一区二三区的特点 | 色老头精品视频在线观看| 在线观看午夜福利视频| 午夜福利高清视频| www国产在线视频色| 一进一出好大好爽视频| 久久 成人 亚洲| 中文字幕另类日韩欧美亚洲嫩草| 女性被躁到高潮视频| 亚洲色图 男人天堂 中文字幕| 国产高清激情床上av| 我的亚洲天堂| 最近最新免费中文字幕在线| 久久人人97超碰香蕉20202| 黄片播放在线免费|