
    Vision Based Hand Gesture Recognition Using 3D Shape Context

IEEE/CAA Journal of Automatica Sinica, 2021, Issue 9

Chen Zhu, Jianyu Yang, Zhanpeng Shao, and Chunping Liu

Abstract—Hand gesture recognition is a popular topic in computer vision and makes human-computer interaction more flexible and convenient. The representation of hand gestures is critical for recognition. In this paper, we propose a new method to measure the similarity between hand gestures and exploit it for hand gesture recognition. The depth maps of hand gestures captured via the Kinect sensor are used in our method, where the 3D hand shapes can be segmented from the cluttered backgrounds. To extract the pattern of salient 3D shape features, we propose a new descriptor, 3D Shape Context, for 3D hand gesture representation. The 3D Shape Context information of each 3D point is obtained in multiple scales because both local shape context and global shape distribution are necessary for recognition. The description of all the 3D points constructs the hand gesture representation, and hand gesture recognition is explored via the dynamic time warping algorithm. Extensive experiments are conducted on multiple benchmark datasets. The experimental results verify that the proposed method is robust to noise, articulated variations, and rigid transformations. Our method outperforms state-of-the-art methods in comparisons of accuracy and efficiency.

    I. INTRODUCTION

HAND gesture recognition is a popular topic in computer vision and plays an important role in human-computer interaction (HCI) [1]–[4]. Its research and development affect the communication of intelligent devices and humans, which makes HCI more flexible. Nowadays, hand gesture recognition has been widely applied to many aspects of human life and industry [5], e.g., virtual reality, sign language recognition, visual tracking, video indexing, and automated surveillance.

The development of hand gesture recognition is based on three main types of hand gestures: static 2D hand gestures, dynamic 2D hand gestures, and 3D hand gestures. The static 2D hand gesture [6] is the simplest form of hand gesture data for recognition. The 2D shape information is acquired to identify static hand gestures, such as a fist or open fingers. This technique can only recognize static gestures without perceiving continuous changes of hand gestures. The dynamic 2D hand gesture recognition methods [7], [8] are more complex and track the movements of hands [9], [10]. Researchers usually extract different types of features of the hand trajectories and use various combinations of them to represent the dynamic 2D hand gesture. Features extracted from dynamic gestures include richer information than those extracted from static gestures. With the appearance of depth cameras, e.g., the Kinect sensor [11], 3D hand gesture recognition [12]–[14] has developed rapidly. The output of a depth camera is a depth map, which is sometimes referred to as a depth image or depth frame depending on context. A depth map records, at each pixel, the distance from the surface of the object to the sensor. Compared with a conventional color image, a depth map provides several advantages. First, depth maps generated by a structured-light or time-of-flight depth camera are insensitive to changes in lighting conditions. Second, the 3D information of the object is hardly affected by a cluttered background. Third, depth maps provide 3D structure and shape information, which makes several problems easier to deal with, such as segmentation and detection. Hand shapes can be detected and segmented robustly from the RGB-D images captured by the Kinect sensor. Therefore, this new type of data provides information that has rarely been exploited by traditional hand gesture recognition research on color or gray-scale videos.

The effectiveness of hand gesture recognition is greatly improved by 3D hand gesture recognition methods. In recent years, a vast body of literature has reported promising results [15]–[21]. However, many methods [15]–[17] cannot recognize hand gestures robustly under severe articulated variations and rigid transformations. Ren et al. [22] propose a part-based method to solve these problems, but it cannot capture the global features of the hand shape, so its description of hand gestures is incomplete, missing global shape information. Furthermore, some 3D hand gesture recognition methods, e.g., those based on hand motion trajectories [23], [24], usually have high computational cost. These methods are not efficient enough for real-time applications. Thus, effective and efficient hand gesture recognition is still a challenging task.

Fig. 1. Overview of the proposed method. The depth maps are captured via the Kinect sensor and the 3D hand shapes are segmented from the cluttered backgrounds. The 3D-SC information of each 3D point is extracted in multiple scales and summarized in a respective histogram. A hand gesture is thus represented by the combination of the histograms of all the contour points. Finally, the improved DTW algorithm is used for hand gesture recognition.

Additionally, based on the way features are extracted, hand gesture recognition methods can broadly be divided into two categories: global-feature-based methods [25]–[27] and local-feature-based methods [28]–[31]. Global methods process the hand gesture as a whole in recognition. Global features are defined to effectively and concisely describe the entire 3D hand gesture; however, details of the hand shape are missed. On the other hand, local-feature-based methods extract only the local 3D information around specific selected key points. Local methods are generally better at handling occlusion than global methods, but they cannot capture the global information of hand gestures. Therefore, in order to utilize both the local and global 3D information of hand gestures, the features should be extracted in multiple scales. This motivates us to design a discriminative descriptor which considers both the local and global 3D information of hand gestures. Moreover, a hand gesture recognition method with both high accuracy and efficiency is required.

In this paper, we propose a new hand gesture recognition method, 3D shape context (3D-SC), which sufficiently utilizes both the local and global 3D information of hand gestures. The proposed method is inspired by the shape context (SC) method [32], while our method extracts rich and discriminative features based on the 3D structure and context of the hand shape in multiple scales, which are robust to noise and articulated variations, and invariant to rigid transformations. An overview of the proposed method is illustrated in Fig. 1. Firstly, the depth map is captured via the Kinect sensor and the hand gesture is segmented from the cluttered background. Then an edge detection algorithm is used to obtain the hand contour. After that, we use the set of vectors originating from one contour point to all the rest to express the spatial structure of the entire shape relative to the reference point. The 3D-SC information of each 3D point is extracted in multiple scales and summarized in a respective histogram. The description of all the 3D points makes up the hand gesture representation. Finally, we improve the dynamic time warping (DTW) algorithm [33] with the Chi-square Coefficient method [34] to measure the similarity between the 3D-SC representations of hand gestures for recognition.

    Our contributions in this paper can be summarized in three aspects as follows:

    1) A new method for 3D hand gesture representation and recognition is proposed. Both the local and global 3D shape information in multiple scales are sufficiently utilized in the feature extraction and representation of our method. The 3D shape context information of hand gestures is extracted at salient feature points to obtain a discriminative pattern of hand shape. This method is invariant to geometric transformations and nonlinear deformations, and is robust to noise and cluttered background.

    2) We improve the DTW algorithm with the Chi-square Coefficient method, where the Euclidean distance is replaced when calculating the similarity between hand gestures.

3) Our method achieves state-of-the-art performance on five benchmark hand gesture datasets, including large-scale hand gesture datasets. Our method also outperforms recently proposed methods in efficiency and is fast enough for real-time applications.

The invariance and robustness of the proposed descriptor are evaluated through extensive experiments. The proposed method is invariant to geometric transformations and nonlinear deformations, especially to articulated variations. The experimental results validate the invariance of our method to these variations. Our method is also verified to be capable of capturing the common features of hand gestures with large intra-class variations. We also validate the robustness of our method to cluttered backgrounds and noise. The effectiveness of our method is evaluated with experiments on five benchmark RGB-D hand gesture datasets: the NTU Hand Digit dataset [22], Kinect Leap dataset [35], Senz3d dataset [36], ASL finger spelling (ASL-FS) dataset [37], and ChaLearn LAP IsoGD dataset [38]. All the data are captured by depth cameras and the last two datasets are large-scale. Only depth maps are used in our method, without the RGB images. Experimental results show that the proposed method outperforms state-of-the-art methods and is efficient enough for real-time applications.

The remainder of this paper is organized as follows. We begin by reviewing the relevant works in the next section. Section III describes the hand gesture representation of the proposed method in detail and shows how it can be used to obtain rich 3D information of hand gestures. Hand gesture recognition is introduced in Section IV. Section V gives the experimental evaluation of the invariance and robustness of our method as well as the performance of hand gesture recognition on five benchmark datasets. This paper is concluded in Section VI.

    II. RELATED WORK

    Fig. 2. An illustration of hand shape segmentation.

According to surveys of hand gesture recognition approaches [39], [40], these approaches can be broadly divided into two classes, i.e., approaches for static hand gestures and approaches for dynamic hand gestures. In static hand gesture recognition, only the “state” of hand gestures can be recognized; the “continuous change” of hand gestures cannot be perceived. Keskin et al. [41] use pixel-level features to describe the hand gesture at high computational cost. In [42], a histogram of 3D facets is proposed to describe the shape information of the 3D hand gesture. Also, high-level features can be extracted from the contour of the hand gesture. Yang et al. [43] propose a finger-emphasized multi-scale descriptor which incorporates three types of parameters in multiple scales to fully utilize hand shape features. Generally speaking, by analyzing the hand gesture images and comparing them with preset image patterns, the meanings of hand gestures are understood. These approaches are sensitive to illumination and background interference.

In dynamic hand gesture recognition, the temporal information of the motion of hand gestures can be captured: when a hand gesture starts and when it ends are recorded. This task is extended to recognize more complex and meaningful hand gestures. Also, visual tracking and online understanding of hand gestures can be realized based on approaches for dynamic hand gestures. In [44], a novel feature fusion approach fuses 3D dynamic features by learning a shared hidden space for hand gesture recognition. Elmezain et al. [23] realize a hidden Markov model (HMM)-based continuous gesture recognition system for real-time dynamic hand gesture recognition. Additionally, an effective representation of the 3D motion trajectory is important for capturing and recognizing complex motion patterns. In [24], a view-invariant hierarchical parsing method for free-form 3D motion trajectory representation is proposed. The parsing approach represents long motion trajectories well and can also support online gesture recognition.

In the past few years, many traditional learning algorithms [45], [46] have been applied to hand gesture classification. Pang et al. [47] extracted optical flow and movement features and used the traditional HMM to recognize hand gestures. Keskin et al. [48] use random decision forests (RDF) to train the hand model and design an application based on a support vector machine (SVM) for American Sign Language digit recognition. In [49], a set of pre-processing methods is discussed and a DTW-based method is proposed, which can emphasize the joints that are more important for gesture recognition by giving them different weights. These machine learning techniques are often complicated due to the large-scale training data essential for feature extraction and classifier training. Thus, Keskin et al. [50] propose a clustering technique which can significantly reduce the complexity of the optimal model and improve the recognition accuracy at the same time.

    III. HAND GESTURE REPRESENTATION

    A. Hand Shape Segmentation
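The segmentation step is illustrated in Fig. 2. The exact procedure is not reproduced here; below is a minimal sketch of a common depth-based heuristic, assuming the hand is the closest valid surface to the sensor (the function name, the depth band, and the millimeter units are illustrative assumptions, not the paper's specification).

```python
import numpy as np

def segment_hand(depth_map, band_mm=150, min_valid=1):
    """Segment a hand from a depth map, assuming the hand is the
    closest valid surface to the sensor (a common heuristic; the
    paper's exact segmentation procedure may differ).

    depth_map : 2D array of depth values in millimeters (0 = invalid).
    band_mm   : depth band kept behind the closest point.
    """
    valid = depth_map >= min_valid
    nearest = depth_map[valid].min()          # closest surface ~ hand
    mask = valid & (depth_map <= nearest + band_mm)
    return mask
```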

To extract shape features and weed out redundant contour points, the ADCE method [52] is employed for contour evolution. The evolution results of a sample hand shape are shown in Fig. 3 with the numbers of remaining contour points. We can see that the evolved shape preserves the salient shape features of the original shape with fewer redundant points. Additionally, the computational cost of subsequent operations is largely reduced by the contour evolution.
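For illustration, the following sketch implements the greedy vertex-removal idea behind discrete contour evolution, on which ADCE [52] builds: the vertex whose removal changes the shape least, as measured by its turn angle and the lengths of its adjacent segments, is deleted repeatedly. The relevance measure follows the standard DCE form; ADCE's adaptive stopping criterion is not reproduced, so the stopping count n_keep here is a stand-in.

```python
import numpy as np

def contour_evolution(points, n_keep):
    """Greedy contour simplification in the spirit of discrete contour
    evolution: repeatedly delete the least relevant vertex.

    points : (N, 2) array of ordered vertices of a closed contour.
    n_keep : number of vertices to retain (ADCE chooses this adaptively).
    """
    pts = list(map(np.asarray, points))
    while len(pts) > n_keep:
        n = len(pts)
        relevances = []
        for i in range(n):
            prev_pt, cur, nxt = pts[i - 1], pts[i], pts[(i + 1) % n]
            s1, s2 = cur - prev_pt, nxt - cur
            l1, l2 = np.linalg.norm(s1), np.linalg.norm(s2)
            # turn angle at the vertex
            cosang = np.clip(np.dot(s1, s2) / (l1 * l2 + 1e-12), -1.0, 1.0)
            beta = np.arccos(cosang)
            # standard DCE relevance: small for flat, short-segment vertices
            relevances.append(beta * l1 * l2 / (l1 + l2 + 1e-12))
        del pts[int(np.argmin(relevances))]   # remove least relevant vertex
    return np.array(pts)
```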

Fig. 3. The ADCE evolution process of the hand shapes with the numbers of remaining contour points.

    B. 3D Shape Context Descriptor

    Fig. 4. An illustration of the direction and length of a vector.

    Fig. 5. An illustration of the mesh division with k1=12 and k2=5.

After the hand shape segmentation and contour evolution, we obtain the contour of the hand shape with 3D information, which is represented as a sequence S of contour points. For each reference point, the vectors to all the other contour points are assigned to the corresponding bent-blocks of a log-polar mesh according to their directions and lengths. We divide the vectors based on the logarithms of their lengths. Thus, the area of each bent-block far from the reference point is larger than it would be under a linear division of lengths, which avoids weakening the role of these bent-blocks; the area of each bent-block near the reference point is smaller, which avoids these bent-blocks having too large an effect. The logarithm of the maximum length is used as the maximum radius for the length division so that all the vectors can be assigned to the mesh. Additionally, both local 3D shape context and global 3D shape distribution information in multiple scales are sufficiently utilized in the 3D-SC descriptor, which increases the reliability and discrimination of the hand gesture representation.

The parameter values of k1 and k2 are set to 12 and 5, respectively, based on the following theoretical analysis. Obviously, a bigger value of K indicates that the division of the 3D hand shape is more detailed and the descriptor is more discriminative. However, the computational cost will be higher with meaningless calculations. Thus, to improve the efficiency of our method, we choose the smallest parameter values which can sufficiently represent hand gestures. The criterion is that different fingers should not be included in the same bent-block. Take the hand gesture in Fig. 5 as an example: if the reference point is on the end of the thumb or little finger, i.e., on either side of the hand, k2 should be set to at least 5 to satisfy the criterion. On the other hand, the maximum angle between the thumb and little finger is generally smaller than 150°, and each finger is distributed approximately within a range of 30°. Thus, k1 should be set to at least 12. As shown in Fig. 5, the value of K is 60 and different fingers do not appear in the same bent-block. To verify the above analysis, we conduct experiments with different parameter values, and this parameter setting achieves the best performance.
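As an illustration of this mesh division, the sketch below assigns the vectors from one reference contour point to k1 × k2 bent-blocks: uniform angle bins and bins that are uniform in the logarithm of length, with the logarithm of the maximum length as the maximum radius. The function name and the exact normalization (e.g., the +1 inside the logarithm) are assumptions made to keep the example self-contained.

```python
import numpy as np

def bin_indices(ref, points, k1=12, k2=5):
    """Assign vectors from a reference contour point to k1 x k2
    bent-blocks: k1 uniform direction bins and k2 bins uniform in log
    length, so blocks near the reference point are finer.

    ref    : (2,) reference point (x, y).
    points : (N, 2) remaining contour points.
    Returns an (N,) array of flat bin indices in [0, k1*k2).
    """
    vec = points - ref
    # direction bin: angle in [0, 2*pi) split into k1 sectors
    ang = np.mod(np.arctan2(vec[:, 1], vec[:, 0]), 2 * np.pi)
    a_bin = np.minimum((ang / (2 * np.pi) * k1).astype(int), k1 - 1)
    # length bin: uniform in log scale; log of the max length is the
    # maximum radius so every vector falls inside the mesh
    length = np.linalg.norm(vec, axis=1)
    logr = np.log(length + 1.0)
    r_bin = np.minimum((logr / logr.max() * k2).astype(int), k2 - 1)
    return a_bin * k2 + r_bin
```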

The sum of the relative depth values of all the contour points in each bent-block is calculated as the respective column value of the log-polar histogram. Taking the log-polar histogram of contour point i as an example, the value of its column k is calculated as

$$h_i(k) = \sum_{j \neq i,\; (p_j - p_i)\,\in\, \mathrm{bin}(k)} \big(d(j) - d_{\min} + 1\big) \tag{8}$$

where d(j) is the absolute depth value of contour point j and d_min is the minimum absolute depth value over all contour points. Considering that the distances between hand gestures and the Kinect sensor are different, we calculate the sum of the relative depth values rather than the absolute depth values. The purpose of adding one in (8) is to include the contour points with the minimum absolute depth value in the description of 3D hand shapes, which makes the description more comprehensive.
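A direct implementation of (8) under the reconstruction above, reusing the hypothetical bin_indices() helper from the previous sketch:

```python
import numpy as np

def shape_context_histogram(ref_idx, points, depths, k1=12, k2=5):
    """Log-polar histogram of one contour point per (8): each
    bent-block accumulates the relative depth values d(j) - d_min + 1
    of the contour points that fall into it."""
    others = np.delete(np.arange(len(points)), ref_idx)
    bins = bin_indices(points[ref_idx], points[others], k1, k2)
    rel_depth = depths[others] - depths.min() + 1  # the "+1" in (8)
    hist = np.zeros(k1 * k2)
    np.add.at(hist, bins, rel_depth)               # sum per bent-block
    return hist

# The full gesture representation is the collection of histograms over
# all contour points, e.g.:
# descriptor = [shape_context_histogram(i, pts, d) for i in range(len(pts))]
```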

The 3D-SC information of each 3D point is summarized in a log-polar histogram, as shown in Fig. 6. A deeper grid color indicates a larger value of the corresponding column. A hand gesture is finally represented by the combination of the log-polar histograms of all the contour points.

    IV. HAND GESTURE RECOGNITION

Fig. 6. The sum of the relative depth values of all the contour points in each bent-block is calculated as the respective column value of the log-polar histogram. (a) The 3D-SC information of a 3D point. (b) The corresponding log-polar histogram.

After the hand gesture representation, we improve the DTW algorithm [33] with the Chi-square Coefficient method [34] to measure the similarity between the 3D-SC representations of hand gestures for recognition. The original DTW algorithm uses the Euclidean distance to calculate the matching cost between two points; however, the correlation between components is not considered. In this work, the proposed 3D shape context descriptor is represented as histograms, and the Chi-square Coefficient is capable of capturing the correlations among histogram components. Therefore, we choose the Chi-square Coefficient to measure the similarity between hand gestures for better accuracy.

Given two contour point sequences of hand gestures A and B, the DTW algorithm finds the optimal alignment between the two sequences, using the Chi-square Coefficient between the corresponding log-polar histograms as the local matching cost; the accumulated cost along the optimal warping path gives the DTW distance between the two gestures.

The training sample with the smallest value of R in each class, i.e., the smallest accumulated DTW distance to the other training samples of that class, is selected as the hand gesture template of the respective class. Hand gestures can finally be recognized based on the DTW distances between testing samples and templates.
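A minimal sketch of this matching scheme, assuming a standard unconstrained DTW recursion and a common 0.5-normalized Chi-square distance (the paper's exact coefficient normalization and any warping-path constraints are not specified here):

```python
import numpy as np

def chi_square(h1, h2, eps=1e-12):
    """Chi-square distance between two histograms, replacing the
    Euclidean point cost of the original DTW."""
    return 0.5 * np.sum((h1 - h2) ** 2 / (h1 + h2 + eps))

def dtw_distance(seq_a, seq_b):
    """DTW distance between two gestures, each given as a sequence of
    per-point 3D-SC histograms (textbook recursion, no constraints)."""
    m, n = len(seq_a), len(seq_b)
    D = np.full((m + 1, n + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = chi_square(seq_a[i - 1], seq_b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[m, n]

# Recognition: assign a test gesture to the class of its nearest template,
# e.g. (with a hypothetical list of per-class template descriptors):
# predicted = min(templates, key=lambda t: dtw_distance(test, t.descriptor))
```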

    V. EXPERIMENT

In this section, we evaluate the capability of our method in four aspects: 1) demonstrate the invariant properties of the proposed 3D-SC descriptor under articulated deformation, rotation, and scale variation; 2) evaluate the representative and discriminative power by hand gesture recognition experiments on five benchmark datasets, including the NTU Hand Digit dataset [22], Kinect Leap dataset [35], Senz3d dataset [36], ASL finger spelling (ASL-FS) dataset [37], and ChaLearn LAP IsoGD dataset [38]; 3) test the robustness to cluttered backgrounds and noise in recognition; 4) conduct a detailed comparison of efficiency. The experimental results show that our method outperforms state-of-the-art methods.

    A. Invariant Properties of 3D-SC Descriptor

    In this experiment, we evaluate the invariant properties of the 3D-SC descriptor to rotation, scale variation, and articulated deformation. We select two hand gestures from the same class in the NTU Hand Digit dataset [22] with salient intra-class variations including articulated deformation, and calculate their 3D-SC descriptors as shown in Fig. 7.

In the first column of Fig. 7, the first and the second rows show the original hand gestures with salient intra-class variations. Between these two gestures, the angles between fingers are different, which indicates large articulated deformation. The third and the fourth rows are the corresponding rotated gestures, the fifth and the sixth rows are scaled gestures, and the seventh and the eighth rows are gestures with both rotation and scale variations. The second and the third columns are, respectively, the log-polar histograms and line charts of the points in the same position, i.e., the thumb fingertip. From the figures we can find significant correspondences and similarities among the log-polar histograms and line charts of different gestures. The strong correspondence of the line peaks among gestures in the plots indicates the invariance of our method to intra-class variation. These similarities provide a reliable basis for recognition. In summary, our method is invariant to articulated deformation, rotation, and scale variation.

Additionally, the log-polar histograms of different contour points of one hand gesture are shown in the second column of Fig. 8. The positions of the three contour points are quite different and there are large differences among their log-polar histograms. The third and fourth columns of Fig. 8 show that contour points in the same position of different hand gestures also have different log-polar histograms. This means that the 3D-SC descriptor can distinguish contour points in different locations and represent the 3D hand gesture effectively.

    B. Recognition Experiments

1) NTU Hand Digit Dataset: The NTU Hand Digit dataset [22] includes 10 classes of hand gestures captured by the Kinect sensor with cluttered backgrounds. Each class has 100 gestures performed by 10 subjects. The gestures are performed with variations in hand scale, orientation, articulation, etc., which is challenging for hand gesture recognition. As shown in Fig. 9, the first row includes samples of the ten classes, the second shows the corresponding depth maps, and the third shows the hand shapes. We can see that the hand shapes are segmented accurately from the cluttered backgrounds.

We test our method with DTW matching, using half of the hand gestures for training and half for testing. The experiment is repeated over 100 times while changing the training and testing data. Different parameters (k1 and k2) are tested in our experiments. As shown in Table I, the best performance we get is 98.7% when k1 = 12 and k2 = 5, which validates the theoretical analysis of the parameter setting in Section III-B. The recognition accuracy of our method outperforms the state-of-the-art methods listed in Table II. Our method shows an impressive performance with the highest recognition rate of 98.7% compared to other methods, e.g., convolutional neural networks (CNN); we use the same CNN structure as that in [55], i.e., AlexNet. Our method has an obvious improvement over the SC algorithm, which shows that 3D information is more discriminative than 2D information for hand gesture recognition. The efficiency of our method is higher than that of the state-of-the-art methods, as discussed in Section V-D, and our method is suitable for real-time applications.

Fig. 7. In the first column, the first and the second rows are the original gestures with salient intra-class variations. The third and the fourth rows are the corresponding rotated gestures, the fifth and the sixth rows are scaled gestures, and the seventh and the eighth rows are gestures with both rotation and scale variations. The second and the third columns are the log-polar histograms and line charts, respectively, corresponding to the gestures in the first column.

Fig. 8. The first column is an illustration of different contour points on the same hand gesture (Points A, B, and C). The second column is the log-polar histograms corresponding to the three points in the first column, respectively. The third column is an illustration of the contour points in the same position of different hand gestures (Points D1, D2, and D3), and the fourth column is the log-polar histograms corresponding to the three points in the third column, respectively.

    Fig. 9. Hand gesture samples of ten classes (G1 to G10 from left to right) in the NTU Hand Digit dataset.

    TABLE I RECOGNITION ACCURACIES WITH DIFFERENT PARAMETERS ON THE NTU HAND DIGIT DATASET

The confusion matrix of the recognition results on the NTU Hand Digit dataset is shown in Fig. 10. Each row of the matrix represents the distribution of the samples in the corresponding class of hand gestures recognized as the respective gesture classes. The numbers in the black grids represent the proportions of correct recognitions, while the numbers in the gray grids represent the proportions of error recognitions. Although G2 is very similar to G9, with each hand gesture showing only one extended finger, the recognition accuracies of these two classes are still more than 98%. We should note that it is impossible for any descriptor to extract all the gesture features and achieve a 100% recognition rate; nevertheless, improving the capability of the descriptor is important for hand gesture recognition. The experimental results validate the superior representative and discriminative power of the proposed method for hand gesture recognition.

2) Kinect Leap Dataset: The Kinect Leap dataset [35] consists of 10 classes of hand gestures, each of which has 140 gestures performed by 14 subjects. As shown in Fig. 11, the first row includes samples of the ten classes, the second shows the corresponding depth maps, and the third shows the hand shapes. We can see that accurate shape contours and depth information can be obtained for later operations.

    TABLE II RECOGNITION ACCURACY COMPARISON ON THE NTU HAND DIGIT DATASET

    Fig. 10. Confusion matrix of the recognition results on the NTU Hand Digit dataset. Each row of the matrix represents the distribution of the samples in the corresponding class of hand gestures recognized as respective gesture class. The numbers in the black grids represent the proportions of correct recognitions, while the numbers in the gray grids represent the proportions of error recognitions.

    Fig. 11. Hand gesture samples of ten classes (G1 to G10 from left to right) in the Kinect Leap dataset.

    Fig. 12. Confusion matrix of the recognition results on the Kinect Leap dataset. Each row of the matrix represents the distribution of the samples in the corresponding class of hand gestures recognized as respective gesture class. The numbers in the black grids represent the proportions of correct recognitions, while the numbers in the gray grids represent the proportions of error recognitions.

The hand gestures are captured from 14 different subjects, which increases the complexity and recognition difficulty of the dataset. We use the same setup as that for the NTU Hand Digit dataset. Experimental results show that our method achieves a high recognition rate which is superior to the other methods listed in Table III. The confusion matrix of the recognition results on the Kinect Leap dataset is shown in Fig. 12. The recognition accuracies of all the classes are high.

    TABLE III RECOGNITION ACCURACY COMPARISON ON THE KINECT LEAP DATASET

3) Senz3d Dataset: The Senz3d dataset [36] consists of 11 classes of hand gestures, each of which has 120 gestures performed by 4 subjects; each subject performs each class of hand gesture 30 times. The backgrounds are cluttered and there is considerable noise in the depth maps. Therefore, we process the depth maps with a morphological erosion operation to obtain clean hand shapes. As shown in Fig. 13, the first row includes samples of the eleven classes, the second shows the corresponding depth maps, and the third shows the hand shapes.

We make a 70%/30% split between the training and testing sets. The experiment is repeated over 100 times while changing the training and testing data. The recognition accuracy of our method outperforms the state-of-the-art methods listed in Table IV. The confusion matrix of the recognition results on the Senz3d dataset is shown in Fig. 14. We can see that the recognition accuracies of six classes of gestures reach 100%.

4) ASL-FS Dataset: The ASL-FS dataset [37] contains about 65000 samples of 24 classes of hand gestures performed by 5 subjects. The 24 classes of hand gestures stand for 24 letters in American Sign Language, selected from all 26 letters by excluding the 2 letters that require hand motion. The subjects were asked to make each sign facing the Kinect and to move their hand around while keeping the hand shape fixed, in order to collect a variety of backgrounds and viewing angles. An illustration of the variety in size, background, and orientation is shown in Fig. 15.

In our experiments, we perform leave-one-subject-out cross validation, the same criterion as used in [37]. It is the most relevant accuracy criterion, as it tests on unseen users. The average recognition rate is 87.1%. As shown in Table V, our method compares favorably with the state-of-the-art methods and shows an impressive performance with the highest recognition rate.
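For reference, a minimal sketch of the leave-one-subject-out protocol (the classify callable and the array layout are illustrative; any classifier, such as the DTW template matching above, can be plugged in):

```python
import numpy as np

def leave_one_subject_out(samples, labels, subjects, classify):
    """Leave-one-subject-out cross validation: each subject's samples
    form the test set once while the rest train. `classify` is any
    train-then-predict callable; all names here are illustrative."""
    accs = []
    for s in np.unique(subjects):
        test = subjects == s
        preds = classify(samples[~test], labels[~test], samples[test])
        accs.append(np.mean(preds == labels[test]))
    return float(np.mean(accs))   # average accuracy over held-out subjects
```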

    Fig. 13. Hand gesture samples of eleven classes (G1 to G11 from left to right) in the Senz3d dataset.

    TABLE IV RECOGNITION ACCURACY COMPARISON ON THE SENZ3D DATASET

Fig. 14. Confusion matrix of the recognition results on the Senz3d dataset. Each row of the matrix represents the distribution of the samples in the corresponding class of hand gestures recognized as the respective gesture classes. The numbers in the black grids represent the proportions of correct recognitions, while the numbers in the gray grids represent the proportions of error recognitions.

5) ChaLearn LAP IsoGD Dataset: ChaLearn LAP IsoGD [38] is a large-scale dataset derived from the ChaLearn gesture dataset (CGD) [61], which has a total of more than 50000 gestures. The ChaLearn LAP IsoGD dataset includes 47933 RGB-D gesture videos. Each RGB-D video represents one gesture only, and there are 249 classes of hand gestures performed by 21 subjects. This dataset targets user-independent recognition, namely recognizing gestures without considering the influence of the performers. Details of the dataset are shown in Table VI.

Only the depth videos are used in our experiment, without the RGB videos. We combine the 3D-SC descriptors of the hand gesture in each frame to form the representation of the whole gesture. The performance of our method is compared with state-of-the-art methods in Table VII. We can see that our method achieves an accuracy of 60.12%, which outperforms the state-of-the-art methods.

    C. Robustness of 3D-SC Descriptor

    In this experiment, we test the robustness of the proposed 3D-SC descriptor to cluttered backgrounds and noise.

1) Robustness to Cluttered Backgrounds: Given depth maps with cluttered backgrounds, we need to segment the hand shape from the background. The hand shape segmentation method we use is discussed in Section III-A. As shown in Fig. 16, although the backgrounds are cluttered, the hand shapes are segmented accurately. Thus, the influence of cluttered backgrounds can be avoided.

    Fig. 15. An illustration of the variety of the ASL-FS dataset. This array shows one image from each user and from each letter, displayed with relative size preserved. The size, orientation and background can change to a large extent. The full dataset contains approximately 100 images per user per letter.

Our method is superior to skin-color-based methods [62], [63] in hand gesture recognition because we exclude the interference of color, and our method is insensitive to changes in lighting conditions. Additionally, redundant information is excluded by the hand shape segmentation, while what remains retains all the 3D information of the hand gesture. In summary, the proposed 3D-SC descriptor is robust to cluttered backgrounds.

2) Robustness to Noise: This experiment is carried out to evaluate the robustness of our method against noise. Gaussian noise is added to the original hand shapes: each hand shape is perturbed by a Gaussian random function with zero mean and deviation σ in both the x and y directions. The noisy hand shapes with different deviations are shown in Fig. 17. The log-polar histograms and line charts of the original hand shapes (Fig. 17(a)) and the noisy hand shapes with σ = 0.4 and σ = 0.8 (Figs. 17(c) and 17(e)) are shown in Fig. 18. From the figure we can find that the proposed 3D-SC descriptor preserves its invariance under noise, and increasing σ has very little effect on our method.
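The perturbation itself is straightforward; a minimal sketch (the scale of σ relative to the contour coordinates is an assumption):

```python
import numpy as np

def add_contour_noise(points, sigma, rng=None):
    """Perturb a hand contour with zero-mean Gaussian noise of standard
    deviation sigma in both the x and y directions, as in the
    robustness experiment described above."""
    rng = np.random.default_rng() if rng is None else rng
    return points + rng.normal(0.0, sigma, size=points.shape)
```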

To further demonstrate the robustness of the proposed 3D-SC descriptor to noise, we add Gaussian noise to the hand shapes in the NTU Hand Digit dataset with different σ and conduct hand gesture recognition experiments on these noise-deformed hand shapes. This test is implemented in the same manner as before to calculate the recognition rate. The experimental results are listed in Table VIII, which shows that the recognition rates are rarely affected at noise levels from 0.2 to 0.4, and that our method can still maintain a high recognition rate even at noise levels of 0.6 to 0.8 with large deformations of the boundary. This verifies the robustness of the proposed 3D-SC descriptor to noise in hand gesture recognition.


    D. Efficiency Comparison

Besides the outstanding recognition accuracies on five benchmark datasets, the proposed method also has excellent efficiency. In this experiment, the efficiency of our method is tested and compared with other important methods. Our experiment is repeated with different training and testing samples over 100 times and the mean performance is reported. The efficiencies of different methods are listed in Table IX. From the comparison we can find that the efficiency of our method outperforms the state-of-the-art methods. The average running time of the proposed method is only 6.3 ms for each query on a general PC (without GPU), which fully supports real-time applications.

    VI. CONCLUSION

In this paper, a new method for 3D hand gesture representation and recognition is proposed. Both local and global 3D shape information in multiple scales are sufficiently utilized in feature extraction and representation. The proposed 3D-SC descriptor is invariant to articulated deformation, rotation, and scale variation. It is also robust to cluttered backgrounds and noise. Experiments on five benchmark datasets, including large-scale hand gesture datasets, demonstrate that the proposed method outperforms state-of-the-art methods. Also, our method has excellent efficiency and fully supports real-time applications.

Fig. 17. Hand shapes with Gaussian noise of different levels. (a) σ = 0. (b) σ = 0.2. (c) σ = 0.4. (d) σ = 0.6. (e) σ = 0.8.

Fig. 18. The log-polar histograms and line charts of the hand shapes with Gaussian noise, where σ = 0, 0.4, and 0.8 in rows 1–3, respectively.

    TABLE VIII RECOGNITION ACCURACIES UNDER DIFFERENT NOISE LEVELS ON THE NTU HAND DIGIT DATASET

    TABLE IX EFFICIENCIES OF DIFFERENT METHODS ON THE NTU HAND DIGIT DATASET
