
    Gaze Estimation via a Differential Eyes’ Appearances Network with a Reference Grid

    Engineering, 2021, Issue 6

    Song Gu, Lihui Wang*, Long He, Xianding He, Jian Wang

    a Chengdu Aeronautic Polytechnic, Chengdu 610100, China

    b Department of Production Engineering, KTH Royal Institute of Technology, Stockholm 10044, Sweden

    Keywords:

    ABSTRACT A person’s eye gaze can effectively express that person’s intentions. Thus, gaze estimation is an important approach in intelligent manufacturing to analyze a person’s intentions. Many gaze estimation methods regress the direction of the gaze by analyzing images of the eyes, also known as eye patches. However, it is very difficult to construct a person-independent model that can estimate an accurate gaze direction for every person due to individual differences. In this paper, we hypothesize that the difference in the appearance of each of a person’s eyes is related to the difference in the corresponding gaze directions. Based on this hypothesis, a differential eyes’ appearances network (DEANet) is trained on public datasets to predict the gaze differences of pairwise eye patches belonging to the same individual. Our proposed DEANet is based on a Siamese neural network (SNNet) framework which has two identical branches. A multi-stream architecture is fed into each branch of the SNNet. Both branches of the DEANet, which share the same weights, extract the features of the patches; then the features are concatenated to obtain the difference of the gaze directions. Once the differential gaze model is trained, a new person’s gaze direction can be estimated when a few calibrated eye patches for that person are provided. Because person-specific calibrated eye patches are involved in the testing stage, the estimation accuracy is improved. Furthermore, the problem of requiring a large amount of data when training a person-specific model is effectively avoided. A reference grid strategy is also proposed in order to select a few references as some of the DEANet’s inputs directly based on the estimation values, thereby further improving the estimation accuracy. Experiments on public datasets show that our proposed approach outperforms state-of-the-art methods.

    1. Introduction

    The eye gaze is informative in human communication. When working in a noisy shared space, people prefer to express their intentions through non-verbal behaviors such as eye gaze and gesture. The eye gaze carries a considerable amount of information that allows for task completion. A person’s intention can be effectively perceived by estimating her or his gaze direction. Many researchers have investigated the “intention reading” ability based on gaze cues [1,2]. For example, in Ref. [1], a robot held a block in each of its hands, while a human controlled the robot successfully to make it give the human one of the blocks when the human gazed at the robot’s hand. This experiment demonstrates that the rich information carried by eye gaze has a significant impact on collaboration. Gaze estimation has been applied in many domains, such as human–robot collaboration (HRC) [1,2], virtual reality [3], and mobile-device controllers [4]. In HRC in particular, gaze estimation systems will be adopted as an additional modality to control robots through multimodal fusion in addition to gestures, speech commands, and body motion [5,6]. The addition of eye gaze will extend the scale of application in HRC and help improve the reliability of multimodal robot control.

    In intelligent manufacturing, humans are part of the process loop in intelligent and flexible automation [7,8] and play important roles in collaboration with robots. The range of tasks that robots can deal with is increasing [9], and humans generally prefer to communicate with robots through natural methods. For example, it would be preferable to give orders to robots through a gesture or gaze rather than by using a remote controller. Furthermore, people are often unwilling to use invasive solutions, such as wearing special glasses [10] that can estimate their gaze direction. Instead, a camera can be installed in a nearby location to observe the operator, and the operator’s gaze direction can be estimated by analyzing the digital image captured by the camera. This is a common noninvasive solution based on computer vision technology. The operator does not perceive the existence of the system when his or her gaze direction is estimated.

    Noninvasive vision-based solutions can generally be divided into two types: model-based methods and appearance-based methods [11]. In model-based methods, geometric models of parts of the eye, such as the radius and the center of the pupil, are evaluated by analyzing the image, and the gaze direction is estimated based on the geometric models [12,13]. In appearance-based methods, the gaze direction is directly regressed by analyzing images of the eyes, known as eye patches. On the one hand, compared with appearance-based methods, the accuracy of the estimated direction in model-based methods depends on the quality of the captured image, such as the image resolution and illumination, because certain edges or feature points must be extracted accurately. In contrast, appearance-based methods do not require feature points. Ref. [14] evaluates popular gaze estimation methods and demonstrates that appearance-based methods achieve better performance than model-based ones. On the other hand, it is a challenging task with model-based methods to obtain a good model based on prior knowledge in order to estimate the gaze direction accurately [15]. However, deep neural networks can effectively identify the intrinsic features of the data. The successful application of deep neural networks in appearance-based methods increases the estimation accuracy dramatically. Thus, appearance-based methods have attracted a great deal of attention in recent years [16–18]. Refs. [19,20] propose video-based gaze estimation systems, which are model-based methods. It is possible to enhance the performance of such a system by means of a deep neural network, such as a recurrent neural network or a long short-term memory network. However, such usage is beyond the scope of this paper.

    With appearance-based methods, the key step is to determine the relationship between the input images and the gaze directions. Many researchers have constructed varying models to fit this relationship. These models are trained and tested on data from different persons, in what are referred to as cross-person evaluations. The corresponding model is denoted as a person-independent model. Because a person-independent model does not contain information about the tested person, individual differences in appearance will affect the estimation accuracy. If certain conditions from the testing process, such as the tested person’s appearance, the level of illumination that will be present at the testing site, and so on, are involved when models are constructed in the training process, the system’s performance will be improved. A common method is to collect labeled data belonging to the tested person for model training. This is referred to as a person-specific model. However, learning a person-specific model requires a large amount of labeled data. Collecting person-specific training data is a time-consuming task, which limits the applicability of such methods. Although some technologies, such as those discussed in Refs. [21,22], have been proposed to decrease the complexity of the collecting phase, these methods still require a great deal of training data. Inspired by Refs. [23–25], we propose that the input images and output directions be replaced with differential ones. Once the relationship between the difference of both input images and the difference of both gaze directions is constructed, only a few labeled images of the new person are required, and these can be treated as one of the inputs in the testing stage. Using this method, the gaze direction will also be estimated accurately.

    In this paper, we propose a differential eyes’ appearances network (DEANet) to estimate gaze direction based on a deep neural network learning framework. The proposed network is based on a Siamese neural network (SNNet) [26], which has two identical branches. A pair of sample-sets are fed into both of the network’s branches simultaneously. Each sample-set includes both the left eye patch and the right eye patch in an image. Both patches are fed into one of the branches as a part of the multi-stream architecture [27]. The features are extracted from all the patches by each branch of the network, which contains two VGG16 networks [28] with different parameters. The outputs of both branches, in combination with the head pose information, are concatenated. The output of the network is the differential gaze of the pairwise sample-sets, followed by some fully connected layers. In the testing stage, a labeled sample-set belonging to the tested person, which is taken as the reference sample-set, is fed into one of the network’s branches. The tested sample-set is fed into the other network branch, and the output of the network is the gaze difference between the reference sample-set and the tested one. Because the gaze direction of the reference sample-set is labeled, the estimated gaze direction is equal to the network’s output plus the labeled gaze direction corresponding to the reference sample-set. Moreover, a reference selection strategy can be adopted to enhance the system’s performance if a few reference sample-sets are labeled. Our proposed approach assumes that the difference in the appearance of each of a person’s eyes is related to the difference in the corresponding gaze directions. Because the information of the tested person is embedded into the trained models in the testing stage, the estimation accuracy is improved. Furthermore, only a few labeled images of the tested person are needed when estimating the gaze direction of that person. The proposed network does not need a large amount of data for training a person-specific model. Evaluations on many popular datasets show that our proposed algorithm performs favorably against other state-of-the-art methods.

    Our contributions can be summarized as follows:

    (1) This work provides a new formulation for differential gaze estimation that is integrated with both eye images and the normalized head pose information. A multi-stream architecture is fed into each of the branches in an SNNet. The SNNet-based framework not only incorporates information about the tested person in the testing stage, but also does not require the collection of a large amount of data for training a person-specific model.

    (2) A reference selection strategy is provided. In this paper, a novel approach for a reference sample-set selection strategy is proposed to improve the estimation accuracy. A reference grid is constructed in the gaze space, and valid reference sample-sets are directly selected by the estimation values, which simplifies the computation of the system.

    The rest of this paper is organized as follows. Related works are introduced in Section 2. Our proposed approach is then demonstrated in detail in Section 3. Experimental results and discussions are presented in Section 4. Finally, a conclusion and a future research plan are highlighted in Section 5.

    2. Related work

    This section provides a brief overview of recent works in appearance-based gaze estimation, person-specific estimation, and SNNets.

    2.1. Appearance-based gaze estimation

    Most appearance-based algorithms for gaze estimation are regarded as regressive solutions. The estimated gaze direction is a function of the input image. Intuitively, eye patches carry the greatest amount of information on the gaze direction (of the left and right eye) and should be sufficient to estimate the gaze direction. Zhang et al. [29] proposed a method for in-the-wild appearance-based gaze estimation based on a multimodal convolutional neural network (CNN). Lian et al. [30] presented a shared CNN to estimate the gaze direction in multi-view eye patches captured from different cameras. Liu et al. [23,25] demonstrated the direct training of a differential CNN to predict the gaze difference between a pair of eye patches. Park et al. [31] proposed a novel pictorial representation in a fully convolutional framework to estimate the gaze direction. However, aside from eye patches, many other elements also affect the estimation accuracy, such as the head position, the scale of the eyes in the image, the head pose, and so forth. Some of this information should be embedded in the system. Liu et al. [32] used both the eye patches and an eye grid to construct a two-step training network to improve the estimation accuracy on mobile devices. Krafka et al. [4] took the eye patches, full-face patch, and the face grid as their system’s input, and obtained a promising performance. Wong et al. [33] proposed a residual network model that incorporated the head pose and face grid features to estimate the gaze direction on mobile devices. In Ref. [34], the gaze was divided into three regions based on the localization of the pupil centers, and an Ize-Net network was constructed to estimate the gaze direction using an unsupervised learning technique. Yu et al. [17] introduced a constrained landmark-gaze model to achieve gaze estimation by integrating the eye landmark locations. Funes-Mora and Odobez [35] proposed a head pose invariance algorithm for gaze estimation based on RGB-D cameras and evaluated its performance on a low-resolution dataset [36]. Zhang et al. [16] analyzed the effects of all of the above information based on their own models. In Ref. [37], full-face images were used as the system’s input, and an AlexNet [38] network with spatial weights was shown to significantly outperform many eye-images-input algorithms. These experiments suggest that the full-face appearance is more robust against head pose and illumination than eye-only methods. However, the full-face approach dramatically increases the computational complexity because the size of the input data is much larger than in the eye-only approach. Compression methods, such as those in Ref. [39], have been proposed in order to compress the image efficiently while preserving the estimation accuracy. It is still an open question whether the full-face approach or the eye-only approach will obtain a better performance.

    Feeding raw images into the system without any pre-processing will increase the complexity of the regressive network. Some information can be normalized in the pre-processing stage in order to decrease the network’s complexity. Sugano et al. [40] proposed a novel normalization method in the pre-processing stage to align the images before they were fed into the network. All kinds of data, including the images and gaze directions, were transformed into the normalized space as well. The object’s scale did not need to be considered when learning or testing the network. In Ref. [40], a virtual camera was constructed by transforming or rotating the camera to a fixed position from the person’s eye. The input images and gaze directions were derived in the virtual camera coordinates. Zhang et al. [41] analyzed the normalization method in detail, and extended the original normalization method to full-face images in Ref. [37].

    2.2. Person-specific estimation

    The goal of most gaze estimation algorithms is to train a person-independent model and to achieve a good cross-person evaluation performance. A person-independent model is constructed to describe the correlation between the input image and the gaze direction. However, according to the analysis proposed in Ref. [25], the difference between the visual axis and the optical axis varies for each person. A person-independent model cannot describe the correlation between the visual axis and the optical axis accurately, but a person-specific model can accurately estimate the gaze direction. A good performance of a person-specific model was demonstrated in Ref. [16], provided that there were sufficient training samples.

    The collection of samples is a time-consuming task. Many methods for simplifying sample collection have been proposed in recent papers. Sugano et al. [42] proposed an incremental learning method to update the estimation parameters continuously. In Ref. [43], many kinds of data collected from different devices were fed into a single CNN composed of shared feature extraction layers and device-specific encoders/decoders. Huang et al. [22] built a supervised self-learning algorithm to train the gaze model incrementally. Moreover, a robust data validation mechanism could distinguish good training data from noisy data. Lu et al. [21] also proposed an adaptive linear regression to adaptively select an optimal set of samples for training. The number of required training samples was significantly reduced, while a promising estimation accuracy remained. Although the above methods simplify the process of data collection, many labeled samples are still required to train a person-specific model. Yu et al. [44] designed a gaze redirection framework to generate large amounts of labeled data based on a few samples. Liu et al. [23] proposed a new idea for person-specific estimation based on only one eye patch. The difference in gaze direction was estimated by an SNNet according to the corresponding images as input. A few labeled samples were required in the testing stage after the SNNet was trained.

    2.3. Siamese neural network

    An SNNet was first introduced in Ref. [26] to verify signatures written on a pen-input tablet. One of the characteristics of an SNNet is its two identical branches. Instead of a single input, a pair of inputs with the same type and different parameters are fed into the SNNet. Consequently, the output of the network is the difference of the corresponding inputs. This method has many applications in numerous fields. Venturelli et al. [24] proposed an SNNet framework to estimate the head pose in the training stage. A differential item was added to the loss function in order to improve the learning of the regressive network. Veges et al. [45] introduced a Siamese architecture to reduce the need for data augmentation in three-dimensional (3D) human pose estimation. The closest works to ours are Refs. [23,25]. However, the SNNet proposed in Refs. [23,25] does not consider the influences of both the eyes and the head pose. Moreover, it was demonstrated in both algorithms that the reference samples affected the estimation accuracy. However, the reference selection strategy was not discussed systematically in Refs. [23,25]. It should be noted that pairwise input will dramatically increase the number of pairwise training samples. The selection of a subset of training samples is analyzed in Refs. [46–48].

    3. Differential eyes’ appearances network

    3.1. Definitions

    3.2. Pre-processing and normalization

    As proposed in Refs. [37,40], the raw images should be normalized for gaze estimation in order to alleviate the influences caused by different cameras and the original head pose information, thereby decreasing the network’s complexity. The normalization process is a series of perspective transformations such that the normalized patch is the same as a picture captured from a virtual camera looking at the same reference point. The normalization procedure and its performance have been demonstrated in detail in Refs. [40,41]. Some key steps are introduced in this section.

    Initially, a single face image like the tested face image in Fig. 1 is provided. Facial landmarks, such as the corner points of the eyes and mouth, are detected by popular algorithms [49]. A left eye center point, a right eye center point, and a mouth center point, which are computed from the corner points, are used to construct a plane. The line from the right eye center to the left eye center is the x-axis, and the y-axis is perpendicular to the x-axis inside the plane, pointing from the eyes to the mouth. The z-axis is the normal of the plane conforming to the right-hand rule. Taking the left eye center or the right eye center as the origin, the three axes construct the normalized space of each eye. According to the detected landmarks and the generic mean facial shape model [16], the normalized head pose information can then be computed by the efficient perspective-n-point (EPnP) algorithm [50]. It should be noted that both the original head pose information and the camera’s intrinsic parameters are provided by the popular datasets whose performances are evaluated in Section 4. All patches that are fed into the DEANet are normalized in the normalized space. After normalization, histogram equalization is applied to all normalized patches in order to alleviate the influences caused by illumination.
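    To make the axis construction above concrete, the following is a minimal NumPy sketch of the rotation that defines the normalized space. The function name and the assumption that the three landmark centers are given as 3D points in camera coordinates are ours, not the authors'.

```python
import numpy as np

def normalized_space_rotation(right_eye_c, left_eye_c, mouth_c):
    """Build a rotation whose rows are the normalized-space axes.

    x-axis: from the right eye center to the left eye center.
    y-axis: perpendicular to the x-axis, inside the eye-mouth plane,
            pointing from the eyes to the mouth.
    z-axis: normal of the plane, following the right-hand rule.
    """
    x_axis = left_eye_c - right_eye_c
    x_axis = x_axis / np.linalg.norm(x_axis)

    eyes_to_mouth = mouth_c - 0.5 * (left_eye_c + right_eye_c)
    z_axis = np.cross(x_axis, eyes_to_mouth)     # plane normal
    z_axis = z_axis / np.linalg.norm(z_axis)

    y_axis = np.cross(z_axis, x_axis)            # x, y, z form a right-handed frame
    return np.stack([x_axis, y_axis, z_axis])
```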

    The DEANet has two advantages for normalization.

    (1) Normalization, as an image-aligning operation, decreases the network’s complexity, alleviating the influences on the eye patches caused by different camera distances, different camera intrinsic parameters, and different original head pose information. Normalized images can be simultaneously fed into the Siamese network, whose branches share the same weights.

    (2) Normalization simplifies the computation of the differential gaze. All parameters are in the normalized space, so the gaze difference can be computed directly from both gaze vectors, without any coordinate transformation. The proposed reference selection strategy is demonstrated for simplification in Section 3.4.

    3.3. Training phase of the DEANet

    After normalization, all patches are aligned in the normalized space regardless of the camera’s intrinsic parameters and the size of the images. The normalized patches are fed into the network to improve the system’s performance, because they make the network learning more efficient than un-normalized ones. Our hypothesis is that the difference in the appearance of each of a person’s eyes is related to the difference in the corresponding gaze directions. Moreover, this correlation is independent of the person. To this end, a DEANet is proposed based on an SNNet for appearance-based gaze estimation. The architecture and configurations of the network are illustrated in Fig. 2.

    During training, the inputs of our DEANet are a pair of sample-sets, Pt and Pf. Each of them includes a left eye patch, a right eye patch, and the normalized head pose information. The components of the sample-set, acting as three streams, pass through a branch of the SNNet, whose parameters are shared by both branches. In one of the Siamese branches, all patches fed into the network are fixed-size 36 × 60 RGB or gray images. When the input patch is gray, it is treated as an RGB image with the same intensity value in all three channels. The normalized head pose information is a vector with a length of 2. The left eye patch and the right one are fed separately into VGG16 networks that extract the features of both patches, each resulting in a vector with a length of 512. Each VGG16 network is followed by sequential operations: a fully connected (FC) layer with a size of 1024, a batch normalization (BN), and a rectified linear unit (ReLU) activation. The feature maps computed by each Siamese pair are concatenated (CAT), followed by another FC layer with a size of 512. After appending the normalized head pose information, other sequential operations follow, including a BN, a ReLU activation, an FC layer with a size of 256, and another ReLU activation. Lastly, the feature maps computed from both Siamese branches are concatenated, and two more FC layers with sizes of 256 and 2 follow. To avoid overfitting, a dropout layer is added before the last FC layer.
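    The layer layout just described can be summarized in code. Below is a minimal PyTorch sketch of one possible realization, assuming torchvision's VGG16 convolutional backbone (a 36 × 60 input yields a 512-dimensional feature) and a dropout rate of 0.5, which the paper does not specify; it illustrates the structure in Fig. 2 rather than reproducing the authors' implementation.

```python
import torch
import torch.nn as nn
from torchvision.models import vgg16


class EyeStream(nn.Module):
    """One eye stream: VGG16 conv features -> FC 1024 -> BN -> ReLU."""

    def __init__(self):
        super().__init__()
        self.backbone = vgg16(pretrained=True).features  # 36x60 input -> 512x1x1
        self.fc = nn.Linear(512, 1024)
        self.bn = nn.BatchNorm1d(1024)

    def forward(self, x):
        f = self.backbone(x).flatten(1)
        return torch.relu(self.bn(self.fc(f)))


class Branch(nn.Module):
    """One Siamese branch: left/right eye streams plus head pose -> 256-d feature."""

    def __init__(self):
        super().__init__()
        self.left, self.right = EyeStream(), EyeStream()  # separate VGG16 parameters
        self.fc_cat = nn.Linear(2048, 512)
        self.bn = nn.BatchNorm1d(512 + 2)                 # after appending head pose
        self.fc_out = nn.Linear(512 + 2, 256)

    def forward(self, left_eye, right_eye, head_pose):
        f = torch.cat([self.left(left_eye), self.right(right_eye)], dim=1)
        f = self.fc_cat(f)
        f = torch.cat([f, head_pose], dim=1)
        f = torch.relu(self.bn(f))
        return torch.relu(self.fc_out(f))


class DEANet(nn.Module):
    """Siamese network predicting the differential gaze of two sample-sets."""

    def __init__(self):
        super().__init__()
        self.branch = Branch()                            # shared weights for both inputs
        self.head = nn.Sequential(
            nn.Linear(512, 256), nn.Dropout(0.5), nn.Linear(256, 2))

    def forward(self, sample_t, sample_f):
        ft = self.branch(*sample_t)                       # (left, right, head_pose)
        ff = self.branch(*sample_f)
        return self.head(torch.cat([ft, ff], dim=1))
```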

    Fig. 2. DEANet configurations (from top to bottom). The left and right eye patches of both sample-sets are RGB images with a size of 36 × 60. Ht and Hf are the normalized head pose information corresponding to the Siamese pairs. Gd is the predicted differential gaze. All are vectors with a length of 2. VGG16 is a 16-layer Visual Geometry Group network. FC is the fully connected layer, BN is the batch normalization layer, and Dropout is the dropout layer. The layers’ names are followed by their parameters. CAT is the operation that concatenates both vectors into one vector. The layers that share the same weights are highlighted by the same colors.

    3.3.1. Siamese pair for the training phase

    According to the hypothesis in this paper, a pair of labeled training samples belonging to the same person are fed into the network. Considering a dataset of N training samples, there are N² possible pairs that can be used for network training. Compared with single-input algorithms [4,16,37], our proposed approach has a large number of samples for training because of the different framework. Since this is a huge number, a subset of training pairs is adopted in the training phase. Strategies for selecting such a subset have been proposed in Refs. [47,48]. These are used for a classification framework in which there are positive and negative pairs as inputs. However, our proposed approach is a regressive solution that does not use explicitly positive and negative pairs. In our solution, K < N² pairs of training samples selected randomly are adopted in the training process.
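    As an illustration of the random pair selection, the short helper below draws K same-person pairs per person (10 000 per person in Section 4.1); the function and data-structure names are ours, not the authors'.

```python
import random

def sample_training_pairs(samples_by_person, pairs_per_person=10000):
    """Randomly draw K same-person pairs of sample indices (K < N^2).

    `samples_by_person` maps a person id to that person's sample indices;
    both members of every pair belong to the same person.
    """
    pairs = []
    for person, indices in samples_by_person.items():
        for _ in range(pairs_per_person):
            i, j = random.choice(indices), random.choice(indices)
            pairs.append((person, i, j))
    return pairs
```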

    3.3.2. Loss function
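    The definition of the loss function is missing from the extracted text. As a hedged placeholder, the sketch below assumes an l2 loss between the predicted differential gaze and the labeled gaze difference, consistent with the l2-norm metric of Eq. (1) referenced in Section 4.3; both the sign convention and the function name are assumptions.

```python
import torch

def differential_gaze_loss(pred_diff, gaze_t, gaze_f):
    """Assumed l2 loss: predicted differential gaze vs. labeled difference.

    pred_diff, gaze_t, gaze_f are (B, 2) tensors; the target difference is
    taken as gaze_t - gaze_f (sign convention assumed, not stated in the text).
    """
    target_diff = gaze_t - gaze_f
    return torch.norm(pred_diff - target_diff, p=2, dim=1).mean()
```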

    3.4. Reference grid for the inference phase

    As illustrated in Fig. 1, a gaze direction is estimated with a labeled reference sample-set in the inference phase. The selection of the reference sample-sets will affect the estimation accuracy. Intuitively, in a good reference selection strategy, the difference between the adopted reference patches and the tested ones should not be large. A large difference will result in large errors during estimation. Moreover, a few reference sample-sets adopted in the inference phase are better than a single reference sample-set in terms of the estimation accuracy. A demonstration of the above will be discussed in Section 4.3. According to the above rules, a reference grid is then constructed in the whole gaze space, which is spanned by both dimensions of the gaze directions, as shown in Fig. 3. When the difference between the input patches is small, the output of the DEANet is small as well, and vice versa. As a result, the output of the DEANet, the differential gaze, can be a metric of the distance between the reference patches and the tested ones. The evenly distributed references, as shown in Fig. 3, make the differences between some of the adopted reference patches and the tested ones so small that a promising accuracy will be achieved if the step of the grid is small enough. For example, the 12 red points are the candidates for the reference gazes, denoted as Gf,j, j = 0, 1, ..., 11. A testing gaze is marked by a blue point, which is denoted as Gt. Obviously, Gt should be computed from Gf,3, Gf,4, Gf,6, and Gf,7, rather than from the other reference gazes, because the distance between Gt and any one of these four reference gazes is smaller than the distance between Gt and the other reference gazes. Meanwhile, because the distance between the testing gaze and a reference gaze in the gaze space can be predicted by the differential gaze in our proposed DEANet, reference gazes whose corresponding differential gazes are smaller than a certain threshold are adopted to estimate the testing gaze. To avoid empirical parameters, the four reference gazes whose corresponding differential gazes are smaller than the other differential gazes are adopted in this paper. After that, the testing gaze is predicted by adding each reference gaze to the corresponding differential gaze. The average value is then the final estimation. In experiments, this strategy was shown to be a good choice for all test sets.
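    The inference procedure described above can be sketched as follows. This is an illustrative rendering, not the authors' code: the model is assumed to return a length-2 differential gaze per forward pass, with the convention that the testing gaze equals the reference gaze plus the predicted difference.

```python
import torch

@torch.no_grad()
def estimate_gaze(model, test_set, reference_sets, reference_gazes, k=4):
    """Reference-grid inference: keep the k references with the smallest
    predicted differential gaze, add each reference gaze to its differential
    gaze, and average the resulting estimates."""
    # One forward pass per grid reference; each is assumed to yield a (2,) tensor.
    diffs = torch.stack([model(test_set, ref).reshape(-1) for ref in reference_sets])
    gazes = torch.stack([g.reshape(-1) for g in reference_gazes])
    nearest = torch.norm(diffs, dim=1).topk(k, largest=False).indices
    estimates = gazes[nearest] + diffs[nearest]   # Gt ~ Gf + Gd for each kept reference
    return estimates.mean(dim=0)
```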

    Fig. 3. An example of a reference grid in gaze space. Twelve reference gazes (marked with red points) are distributed in the gaze space. The blue point represents a testing gaze. The distance between Gf,i and Gt in the gaze space is predicted by the corresponding differential gaze, Gd,i.

    In Ref. [25], the averaging weights are determined by comparing both feature maps extracted from the input patches. According to the construction of the DEANet, the output of the network is related to the difference of both patches. Using the differential gaze as the criterion for reference selection simplifies the computation, compared with using the feature maps proposed in Ref. [25].

    4. Experiments

    4.1. Implementation details

    Our proposed DEANet was implemented in the PyTorch framework. It was trained by randomly selecting 10 000 pairs of training samples for each person. Transfer learning was utilized, and the weights of the VGG16 models were initialized from the pre-trained model [28]. A stochastic gradient descent (SGD) optimizer was adopted with a momentum of 0.9 and a weight decay of 0.0001. The batch size was 512. The initial learning rate was 0.1, and it decayed by a factor of 0.1 every 5 epochs. A single GTX 1080 Ti GPU was used for the network, with 20 epochs for each person.
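    A minimal training-loop sketch with the hyperparameters listed above is given below; `DEANet`, `differential_gaze_loss`, and `train_loader` refer to the earlier sketches and to an assumed data loader of same-person pairs, so this is illustrative rather than the authors' script.

```python
import torch

model = DEANet()                                   # architecture sketch from Section 3.3
optimizer = torch.optim.SGD(model.parameters(), lr=0.1,
                            momentum=0.9, weight_decay=1e-4)
# "Decayed by 0.1 every 5 epochs" maps onto a StepLR schedule.
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=5, gamma=0.1)

for epoch in range(20):                            # 20 epochs per person
    for sample_t, sample_f, gaze_t, gaze_f in train_loader:   # batches of 512 pairs
        optimizer.zero_grad()
        pred_diff = model(sample_t, sample_f)
        loss = differential_gaze_loss(pred_diff, gaze_t, gaze_f)
        loss.backward()
        optimizer.step()
    scheduler.step()
```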

    Three experiments are reported in this section. The first experiment (Section 4.3) evaluated the DEANet on the MPIIGaze dataset to demonstrate the reference selection strategy. The second experiment (Section 4.4) assessed the DEANet’s performance in cross-person and cross-dataset evaluations. The third experiment (Section 4.5) evaluated the DEANet against variation.

    4.2. Datasets and protocol

    The performance of the DEANet was evaluated on two public datasets, MPIIGaze and UT-Multiview. MPIIGaze was first introduced in Ref. [16]. It comprises 213 659 images from 15 participants of different ages and genders. The images were collected over different periods. To evaluate our proposed DEANet on RGB images, the eye patches and annotated gaze directions in the MPIIGaze dataset were normalized by ourselves, although some labeled gray patches and gaze directions are provided in the MPIIGaze dataset. It should be noted that the original head pose information and the target position provided by the dataset were used directly in our normalization process. UT-Multiview was initially introduced in Ref. [40]. It comprises 64 000 raw images from 50 different people. This dataset allows large amounts of synthesized eye images to be constructed by means of 3D eye shape models. UT-Multiview has a greater distribution of gaze angles than MPIIGaze. Because our normalization was based on Ref. [40], the normalized patches were the same size as those in UT-Multiview. All gray patches in UT-Multiview were adopted as the DEANet’s training samples to evaluate the network’s performance.

    In the experiments, a leave-one-person-out protocol was applied for the MPIIGaze dataset, while a three-fold cross-person validation protocol was used for the UT-Multiview dataset. The protocols adopted in this section are the same as those of other state-of-the-art algorithms [4,16,18,25,37,40].

    4.3. Selection of reference sample-sets

    In our proposed approach, the performance of the reference sample-sets will affect the estimation accuracy of the system, making the sample-sets a critical element in the DEANet. In this experiment, 500 references were adopted randomly for each person in the MPIIGaze dataset. Each reference sample-set and every sample belonging to the same person made up the Siamese pairs for testing. To demonstrate the influence of the reference sample-sets on the estimation accuracy, Fig. 4 illustrates the average angular error for each person in terms of references. All the Siamese pairs of each person were fed into the DEANet for gaze estimation, and the average angular error At for each reference was formulated as follows:

    Fig. 4. Average angular error for each reference in the MPIIGaze dataset for different reference selection strategies: a random selection strategy, where 500 reference sample-sets were adopted randomly; and a reference grid strategy, where 12 reference sample-sets were adopted by a reference grid.

    where M is the number of samples for each person in the dataset and ω(·,·) is the function computing the angular difference between two vectors. It should be noted that the ω function is another metric of estimation error that is equivalent to the l2-norm function in Eq. (1). The ω function is intuitively adopted as the metric in the experiments rather than the l2-norm function for a fair comparison with other algorithms adopting the same metric. As the blue bars in Fig. 4 show, every person had a different estimation accuracy. Some people, such as persons No. 0, No. 1, and No. 2, had smaller angular errors than others. However, the average angular errors for other persons, such as No. 3, No. 7, No. 8, and No. 9, were much worse than those of the above persons. For example, some of the eye patches of person No. 7 included glasses, while other patches did not. If the adopted reference sample-sets did not include glasses, and the test sample-sets included glasses, their different appearances would result in large estimation errors, because the glasses induce a significant amount of noise in the appearance computations. Although it is demonstrated in Ref. [16] that a generic mean facial shape model used in the normalization stage is sufficiently accurate for estimating the gaze direction, an inaccurately normalized eye patch will obviously lead to large errors in the inference stage if it is treated as a reference sample-set. Some examples are illustrated in Fig. 5.
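    For reference, the angular-error metric ω can be computed as below; this sketch assumes the gaze directions are available as 3D unit vectors (2D pitch/yaw outputs would first be converted) and is only an illustration of the metric, not the authors' evaluation code.

```python
import numpy as np

def angular_error_deg(g1, g2):
    """Angle in degrees between two 3D gaze vectors, i.e. the omega metric."""
    cos = np.dot(g1, g2) / (np.linalg.norm(g1) * np.linalg.norm(g2))
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

def average_angular_error(estimated, ground_truth):
    """Mean angular error over the M samples of one person (A_t for one reference)."""
    return float(np.mean([angular_error_deg(e, g)
                          for e, g in zip(estimated, ground_truth)]))
```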

    A good reference selection strategy contributes to the improvement of the system. A key element of a reference selection strategy is to determine which patches are candidates for reference sample-sets and which are not. This is related to the distribution of the tested samples. Fig. 6 illustrates the distribution of the 500 randomly selected reference sample-sets for persons No. 0, No. 5, and No. 7 in the gaze space. Each reference gaze can be represented by a point in the gaze space. When the average angular error of reference i, At,i, is smaller than the mean over all the references, the corresponding reference is identified as a “good” reference (marked in red in Fig. 6). Conversely, when At,i is greater than the mean over all the references, the corresponding reference is identified as a “bad” reference (marked in blue in Fig. 6). The gray points are all the samples, which are used to represent the whole distribution for each person. In Fig. 6, bad references are almost all located at the periphery of the whole distribution, especially for person No. 7, while good references are evenly distributed in the whole space. Some sample-sets that include large gaze directions cannot be selected as reference sample-sets. Moreover, a single-reference strategy is not sufficient for accurate estimation.

    Fig. 5. Examples of normalized patches that result in large errors. (a, b) Inaccurately normalized eye patches (p03-day54-0097-left and p08-day31-0301-left). (c) Noise induced by glasses (p09-day12-0158-left). (d, e) An image without glasses as the reference sample-set (p07-day24-0046-left) and an image with glasses as the test one (p07-day25-0255-right). The name of each patch comes from the MPIIGaze dataset.

    4.4. Cross-person and cross-dataset evaluations

    Fig. 6. Distributions of gaze angle for persons No. 0, No. 5, and No. 7 in MPIIGaze. Any reference sample-set can be represented by a point in the gaze angle dimensions in terms of its labeled gaze direction. The red points are good reference sample-sets, whose value At is smaller than the mean value of all the references; blue points are bad reference sample-sets, whose value is greater than the mean value of all the references. Gray points are all the samples for each person. Green points are the adopted references according to the reference grid in our experiments.

    The proposed DEANet is a person-independent model that can estimate the gaze direction for a new person. Information for the new person is incorporated into the network as reference sample-sets in the testing stage. Thus, the problem of a person-independent model being irrelevant to a new person is effectively avoided. In order to evaluate how well the DEANet addresses this challenge, a cross-person evaluation was performed on both public datasets. Table 1 lists the mean angular errors of the proposed algorithm and of other approaches based on the MPIIGaze and UT-Multiview datasets. Our proposed algorithm achieves favorable results on both datasets. Although the same SNNet framework was adopted by both Ref. [25] and our proposed approach, the performance of our proposed approach is better than that in Ref. [25] because our approach involves more information, including the information of both eyes and the head pose. Compared with MPIIGaze, the UT-Multiview dataset includes more people, so all algorithms evaluated on UT-Multiview perform better than those evaluated on MPIIGaze. For data-driven models, the diversity of the training data increases the performance of the pre-trained models, and our proposed DEANet outperforms the other algorithms on both datasets.

    To demonstrate the robustness of our proposed approach, a cross-dataset evaluation was performed as well. The model was trained on the UT-Multiview dataset and then tested on the MPIIGaze dataset. Fig. 8 illustrates the mean angular errors of all the evaluated algorithms for the cross-dataset evaluation [16,29,40,51,52]. Because the gaze distribution of the training samples is different from the distribution of the testing ones, all algorithms performed worse in the cross-dataset evaluation than in the cross-person evaluations. However, our proposed DEANet is a differential network, in which the input and output of the network are replaced with differential ones. Our proposed approach is therefore more robust against gaze distributions than other traditional methods. The mean angular error of our proposed approach is 7.77 degrees, with a standard deviation of 3.5 degrees.

    4.5. Performance against variation

    Fig. 7. The relationship between the estimation error (y-axis) and the difference between both sample-sets (x-axis) for persons No. 0, No. 5, and No. 7.

    Table 1 Gaze directional results on two popular datasets with mean angular error (in degrees).

    In the previous evaluations, our proposed DEANet achieved a good performance in gaze estimation. In this section, the performance against variation, such as the influence of the head pose information and the image resolution, is further investigated. To deal with arbitrary head poses in our proposed DEANet, normalized head pose information was adopted. To demonstrate the performance of the DEANet against variation, a cross-person evaluation on the MPIIGaze dataset without the head pose information was performed. In this experiment, a new network without the head pose information was retrained on the MPIIGaze dataset. The mean angular error evaluated over all persons was 4.46 degrees, which is slightly higher than that of the network with the head pose information (4.38 degrees), as reported in Table 1. The network’s performance is thus only slightly degraded without the head pose information; the head pose information is marginal for a deep network such as the DEANet. However, it is still important for a shallower network, such as MnistNet [53], which is evaluated in Ref. [16]. Shallower networks are usually adopted in order to save computation resources, especially on remote devices.

    Moreover, the influence of the image resolution on gaze estimation was investigated in this experiment. The same network parameters were adopted as those proposed in Section 4.4, and the cross-person evaluation was performed. The protocols were the same as those described in Section 4.4. In the evaluation, all the input patches were resized to 18 × 30, 9 × 15, and 5 × 8. It should be noted that the resized patches needed to be restored to the original size (36 × 60) by interpolation in order to be fed into the DEANet. The DEANet’s performance at different image resolutions was compared with that of GazeNet [16] on both the MPIIGaze and UT-Multiview datasets, as shown in Table 2. Our proposed DEANet outperforms GazeNet in this experiment.
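    The resolution degradation used in this experiment can be reproduced with a simple down-/up-sampling step; the sketch below assumes bilinear interpolation, which the paper does not specify.

```python
import torch.nn.functional as F

def degrade_resolution(patches, low_size):
    """Simulate a low-resolution input: downsample, then restore to 36 x 60.

    `patches` is an (N, 3, 36, 60) tensor; `low_size` is e.g. (18, 30),
    (9, 15), or (5, 8) as in Table 2.  Interpolation mode is assumed.
    """
    low = F.interpolate(patches, size=low_size, mode='bilinear', align_corners=False)
    return F.interpolate(low, size=(36, 60), mode='bilinear', align_corners=False)
```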

    5. Conclusions

    Fig. 8. Mean angular error for a cross-dataset evaluation with training on the UT-Multiview dataset and testing on the MPIIGaze dataset.

    Table 2 The influence of image resolution. Mean angular errors were evaluated on the MPIIGaze and UT-Multiview datasets with different image resolutions.

    This paper presented a novel DEANet for appearance-based gaze estimation. Three streams, including both eye patches and the head pose information, are fed into the network, and a person-independent model is trained based on an SNNet framework. Because the differential gaze is adopted, person-specific information can be used in the testing stage. A reference grid is constructed for the reference candidates, and the proposed strategy selects good references to improve the estimation accuracy. Our approach was evaluated on two public datasets: MPIIGaze and UT-Multiview. The extensive experimental evaluations showed that our approach achieves a more promising performance than other popular methods.

    All experiments were conducted and analyzed on public datasets. Our proposed approach will be incorporated as a modality for HRC robot control with multimodal fusion, which will be investigated carefully in our future work.

    Acknowledgements

    This work was supported by the Science and Technology Support Project of the Sichuan Science and Technology Department (2018SZ0357) and China Scholarship.

    Compliance with ethics guidelines

    Song Gu, Lihui Wang, Long He, Xianding He, and Jian Wang declare that they have no conflict of interest or financial conflicts to disclose.
