
    Japanese Sign Language Recognition by Combining Joint Skeleton-Based Handcrafted and Pixel-Based Deep Learning Features with Machine Learning Classification


Jungpil Shin, Md. Al Mehedi Hasan, Abu Saleh Musa Miah, Kota Suzuki and Koki Hirooka

1School of Computer Science and Engineering, The University of Aizu, Aizuwakamatsu, 965-8580, Japan

2Department of Computer Science & Engineering, Rajshahi University of Engineering & Technology, Rajshahi, 6204, Bangladesh

ABSTRACT Sign language recognition is vital for enhancing communication accessibility among the Deaf and hard-of-hearing communities. In Japan, approximately 360,000 individuals with hearing and speech disabilities rely on Japanese Sign Language (JSL) for communication. However, existing JSL recognition systems have faced significant performance limitations due to inherent complexities. In response to these challenges, we present a novel JSL recognition system that employs a strategic fusion approach, combining joint skeleton-based handcrafted features and pixel-based deep learning features. Our system incorporates two distinct streams: the first stream extracts crucial handcrafted features, emphasizing the capture of hand and body movements within JSL gestures. Simultaneously, a deep learning-based transfer learning stream captures hierarchical representations of JSL gestures in the second stream. We then concatenated the critical information of the first stream with the hierarchical features of the second stream to produce multi-level fusion features, aiming to create a comprehensive representation of the JSL gestures. After reducing the dimensionality of the features with a feature selection approach, a kernel-based support vector machine (SVM) was used for classification. To assess the effectiveness of our approach, we conducted extensive experiments on our lab JSL dataset and a publicly available Arabic sign language (ArSL) dataset. Our results unequivocally demonstrate that our fusion approach significantly enhances JSL recognition accuracy and robustness compared to individual feature sets or traditional recognition methods.

KEYWORDS Japanese Sign Language (JSL); hand gesture recognition; geometric feature; distance feature; angle feature; GoogleNet

    1 Introduction

Japanese Sign Language (JSL) plays a pivotal role in enabling communication for the Deaf and hard-of-hearing communities in Japan [1–3]. Approximately 340,000 individuals rely on this visual-based language for their daily interactions [4]. Among them, 80,000 deaf people use JSL to communicate their daily basic needs, thoughts, expressions, and requirements [5–7]. However, learning JSL presents considerable challenges, both for the Deaf community and for non-deaf individuals, primarily due to its complexity. This complexity hampers effective communication between these two communities, often necessitating human sign language translators, who are both scarce and costly [3,8].

In response to these challenges, automatic JSL recognition systems have emerged as a potential solution [9]. These systems aim to accurately interpret and translate JSL signs into meaningful communication [10]. Furthermore, in the context of the COVID-19 pandemic, the demand for touchless communication interfaces has grown, underlining the urgency of developing reliable JSL recognition technologies and non-contact input interfaces that allow users to input data without hand contact.

JSL recognition aims to develop feature extraction techniques and classification approaches that recognize demonstrated signs and generate their equivalent meanings accurately [10]. It is essential to extract various features of sign language, such as hand orientation, hand shape, movement expression, and location. Despite significant progress in JSL recognition over the last decades, it still faces problems because hand gestures contain a high degree of freedom, specifically in the hand shape [9,11]. Moreover, individual differences in fingerspelling expression have a considerable influence. Studies have used various feature extraction techniques to capture this information from hand gestures, and such systems need to record the actual information of the sign in a dataset.

Past efforts have employed feature extraction techniques and classification methods relying on sensor-based and vision-based systems. However, sensor-based systems suffer from portability and cost limitations. In contrast, vision-based systems offer a more affordable and portable alternative, but they require effective feature extraction and classification algorithms to operate successfully. To date, researchers have explored model-based, template-based, and machine-learning approaches for hand gesture recognition. While some have employed pixel-based datasets, these approaches often grapple with challenges related to background interference, computational complexity, occlusions, and lighting conditions. Recently, skeleton-based approaches using tools like MediaPipe and OpenPose have gained attention for their potential in JSL recognition. However, achieving satisfactory performance remains an ongoing challenge, with room for improvement. We did not find any research that combined handcrafted features and deep learning for JSL recognition.

In this study, we propose an innovative approach to JSL recognition that capitalizes on the strengths of joint skeleton-based handcrafted features and pixel-based Convolutional Neural Network (CNN) features. The proposed method aims to enhance the accuracy and robustness of JSL recognition systems by leveraging the complementary nature of these two feature streams, which represents a significant advancement in the field of JSL recognition. We introduce an innovative fusion approach that enhances both accuracy and robustness, contributing to the state of the art in sign language recognition. Our contributions extend beyond mere methodology:

• Novelty: Our research introduces a new and unique JSL alphabet dataset, effectively addressing the scarcity of resources in the field. This dataset is thoughtfully designed, with diverse backgrounds and environmental conditions taken into account during data collection. Not only does it fill a critical gap, but it also paves the way for more comprehensive and inclusive studies in the domain of sign language recognition.

• Methodological innovation: Our research introduces an innovative fusion approach that combines joint skeleton-based handcrafted features with pixel-based deep learning features, presenting a novel solution for JSL recognition. This fusion method not only significantly enhances recognition performance but also showcases the potential of integrating diverse feature types in similar recognition tasks.

• In our methodology, we employed two distinct feature streams. The first stream extracted skeleton-based distance and angle features from joint coordinates, effectively capturing intrinsic joint relationships. Simultaneously, the second stream utilized GoogleNet transfer learning to extract pixel-based features from frames, enabling hierarchical representations of sign language gestures. Subsequently, we fused these two feature streams to strike a balance between interpretability and the discriminative power of deep learning, thereby enhancing the overall effectiveness of our approach. To further optimize our feature set, we implemented a feature selection algorithm, selectively retaining the most effective features while discarding irrelevant ones. Finally, we applied these reduced features as input to a support vector machine (SVM) machine-learning model equipped with multiple kernels for comprehensive evaluation. This combined approach not only represents a methodological innovation but also yields substantial improvements in JSL recognition performance.

• Empirical validation: Through extensive experimentation on both our new JSL dataset and a publicly available Arabic sign language (ArSL) dataset, we establish the effectiveness of our approach. Our results unequivocally demonstrate substantial enhancements in recognition performance compared to conventional methods or individual feature sets. Our work not only showcases the effectiveness of our proposed methodology but also contributes valuable insights to the broader field of sign language recognition research.

In summary, our research contributes to the advancement of JSL recognition by presenting an innovative fusion approach that improves accuracy and robustness. Our results, based on our new JSL dataset and a publicly available Arabic sign language (ArSL) dataset, demonstrate substantial enhancements in recognition performance compared to traditional methods or individual feature sets.

    2 Related Work

Many researchers have applied feature extraction approaches, machine learning, and deep learning models in various areas, such as electromyogram (EMG) and electroencephalogram (EEG) classification [12–15], hand gesture recognition, and many other sign language recognition tasks [16–19]. Most JSL recognition research has been done with image- and skeleton-based datasets. In addition, research on Japanese Sign Language has been conducted in various forms, including specialized devices such as Leap Motion [8] and Data Glove [20,21], as well as research using RGB cameras. MediaPipe and OpenPose software systems have recently been used to extract skeleton points from RGB images.

Kobayashi et al. proposed a JSL recognition system that extracts geometric formula-based features, specifically angle-based feature information calculated from the Cartesian coordinates of the skeleton joints, and achieved 63.60% accuracy with an SVM [7]. Hossoe et al. developed a JSL recognition system by recording a dataset of 5,000 samples [22]. They first applied an image generative model to increase the synthetic training data; after that, they applied a CNN directly to the image pixel information and reported 93.00% accuracy, an improvement over previous studies. In the same way, Funasaka et al. collected a JSL dataset using a Leap Motion controller, then employed a genetic algorithm and a decision tree for feature extraction and classification, reporting 74.04% accuracy [3]. Ikuno et al. developed a JSL recognition system by collecting a JSL dataset with a smartphone [23]. They extracted skeleton data from the RGB images, computed features from the skeleton information, and reported 70% accuracy with the random forest algorithm. The main drawback of these JSL recognition systems is their low recognition accuracy, which is not satisfactory for real-life applications.

Ito et al. developed a JSL recognition system by gathering multiple images into a single image using a CNN and extracting features from the generated image based on the maximum difference between blocks [2]. Finally, they employed an SVM for multi-class classification and achieved 84.20% accuracy on a dataset of ten JSL words. Kwolek et al. developed another RGB image-based JSL recognition system that followed an image generation, feature extraction, and classification pipeline [24]. They first used a Generative Adversarial Network (GAN) to enlarge the training set with synthetic data, then performed skin segmentation using ResNet34. Finally, they used an ensemble technique for classification, which achieved 92.01% accuracy. Although this method reported higher accuracy than existing methods, the GAN-based image generation leads to high computational complexity. To deploy a method in practice, it is important to ensure high accuracy, efficiency, and generalizability. We propose a hand skeleton-based feature extraction and classification method for developing a JSL recognition system that overcomes these problems.

    3 Dataset Description

Two datasets have been utilized in this research: our lab JSL dataset and a publicly available Arabic Sign Language (ArSL) dataset. JSL itself is one of the most widely used sign languages in the world.

    3.1 Our Lab JSL Dataset

To utilize the hand joint estimation technique, we built a new dataset that contains the same 41 Japanese sign characters used in the public dataset. In this new dataset, the size of the samples has been kept at 400 × 400 pixels. As shown in later sections, the use of such images made hand joint estimation easier and therefore yielded higher performance. Fig. 1 illustrates examples of input images and the estimated hand joints drawn on them for the new dataset. The new dataset contains 7,380 images, with 180 samples per class. The images were captured from 18 individuals, with ten images per person. Table 1 shows the distribution of each character in the JSL public dataset and the new JSL dataset. The signs (no), (mo), (ri), (wo), and (n) cannot be acquired with an RGB camera because they involve movement.

Table 1: Sample counts per character in our collected JSL dataset

Figure 1: Example input images and corresponding estimated hand joints for the new dataset

    3.2 Arabic Sign Language Dataset

The dataset was collected from 50 signers (volunteers), 27 males and 23 females. Among the signers, the oldest was 69 and the youngest 14; the average age was 27.98 with a standard deviation of 13.99. Fig. 2 shows a sample image for each of the 32 signs. The signers were instructed to capture nine images for each sign using their smartphones, with three shots at different angles for each of three distances, i.e., close, medium, and far. Thus, 288 images per signer were collected; however, several unqualified images were removed during the dataset's creation. Signers were free to use either hand to perform the signs. Due to different smartphone camera configurations, some images were non-square. Those images were padded with white pixels, and all images were then resized to a square resolution of 416 × 416 pixels; a preprocessing sketch follows the figure. The resulting dataset, ArSL21L, containing 14,202 images of 32 signs with a wide range of backgrounds, is annotated with bounding boxes in PASCAL VOC [3] format using the LabelImg program [25]. Fig. 2 demonstrates example Arabic sign images [12,25].

Figure 2: Arabic dataset examples [12]
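The described preprocessing (white padding to a square, then resizing to 416 × 416) can be reproduced with a few lines of Pillow. This is a minimal sketch: the function name and filename are hypothetical, and the dataset authors' exact tooling is not specified.

```python
from PIL import Image

def pad_and_resize(path, size=416):
    """Pad a non-square image to square with white pixels, then resize.

    Mirrors the preprocessing described for ArSL21L; the use of Pillow
    here is our own choice, not the dataset authors'.
    """
    img = Image.open(path).convert("RGB")
    w, h = img.size
    side = max(w, h)
    # Paste the original image onto a white square canvas, centred.
    canvas = Image.new("RGB", (side, side), (255, 255, 255))
    canvas.paste(img, ((side - w) // 2, (side - h) // 2))
    return canvas.resize((size, size), Image.BILINEAR)

square = pad_and_resize("sign_001.jpg")  # hypothetical filename
square.save("sign_001_416.jpg")
```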

    4 Proposed Methodology

Fig. 3 demonstrates the architecture of the proposed method. This study revolves around the fusion of pixel-based deep learning features and joint skeleton-based handcrafted features for JSL recognition. To begin, we extracted joint skeleton-based features from sign language gestures, capturing intricate details of hand and body movements. These features were crucial in capturing the nuances of JSL articulation and expression.

Concurrently, we harnessed the power of CNNs to extract pixel-based features from video frames containing sign language gestures. This approach allowed our model to learn hierarchical representations of these gestures, enriching the feature set with spatial information. Integrating handcrafted features with a deep learning model has proved excellent in various domains and tasks [25–27]. It can be impactful when the handcrafted features provide crucial domain-specific knowledge that might not be straightforward for the deep learning model to learn. Hand gesture recognition using skeleton datasets can greatly benefit from combining handcrafted features with deep learning: the skeleton data provides joint locations, which are essentially spatial coordinates representing parts of the hand or body, and this structured nature of the data offers the perfect opportunity to compute handcrafted features. The fusion of these two distinct feature sets took place at multiple levels within our JSL recognition system. By combining joint skeleton-based features with pixel-based CNN features, we aimed to create a holistic and robust representation of sign language gestures. This fusion strategy capitalizes on the interpretability of handcrafted features and the discriminative capabilities of deep learning, achieving a synergistic effect. After concatenating the features, we fed them into a feature reduction approach to select the effective features and discard the less relevant ones. Finally, we employed an SVM with various kernel functions for classification.

    Figure 3:Proposed method architecture

4.1 Joint Estimation with MediaPipe

MediaPipe estimates hand joints using a machine-learning model that analyzes input from a single RGB camera. The model first detects the presence of a hand in the camera image and then estimates the 3D positions of the hand joints. It does this through a complex series of calculations based on the input data, including the position and orientation of the hand, as well as other factors such as lighting and camera angle. The model then outputs a set of coordinates representing the estimated position of each hand joint in 3D space. In the first branch, we used MediaPipe to extract the hand joint skeleton from the JSL hand gesture RGB dataset. Fig. 4 illustrates the results after applying MediaPipe; a minimal usage sketch follows the figure.

Figure 4: Hand pose estimation using MediaPipe
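For concreteness, here is a minimal sketch of extracting the 21 hand-joint coordinates with the public mediapipe Python package. It reflects the library's documented Hands API rather than the authors' exact code, and the image filename is hypothetical.

```python
import cv2
import mediapipe as mp
import numpy as np

mp_hands = mp.solutions.hands

def extract_hand_joints(image_path):
    """Return a (21, 3) array of normalized x, y, z hand-joint
    coordinates, or None if no hand is detected."""
    image = cv2.imread(image_path)
    with mp_hands.Hands(static_image_mode=True, max_num_hands=1) as hands:
        # MediaPipe expects RGB input; OpenCV loads images as BGR.
        results = hands.process(cv2.cvtColor(image, cv2.COLOR_BGR2RGB))
    if not results.multi_hand_landmarks:
        return None
    lm = results.multi_hand_landmarks[0].landmark
    return np.array([[p.x, p.y, p.z] for p in lm])  # shape (21, 3)

joints = extract_hand_joints("jsl_sample.jpg")  # hypothetical filename
```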

    4.2 Feature Extraction

Extracting handcrafted features from joint skeleton points is essential for the recognition of Japanese Sign Language (JSL). It ensures precise gesture capture, critical for distinguishing signs with similar handshapes. Handcrafted features offer interpretability, aiding linguistic analysis. They reduce data dimensionality, improving computational efficiency. They also enhance robustness, as they can be invariant to certain variations. Combined with other feature types, they provide comprehensive sign representations, boosting recognition accuracy. Moreover, handcrafted features can be customized to JSL's linguistic and cultural context, making recognition systems more effective and inclusive for the Deaf and hard-of-hearing communities. In this study, after extracting the 21 joint coordinates (including x, y, and z for each joint), we computed handcrafted features to avoid hand position-related limitations. This matters because the raw coordinates would differ even if the hand made the same sign but was placed on the left or right side of the camera. Additionally, in Japanese Sign Language the same hand shape can represent different characters depending on its inclination, so it was necessary to extract features that work effectively even in such cases. To achieve this, features based on the distances between joint coordinates and the angles between joint coordinates were extracted.

    4.2.1 Distance-Based Feature Extraction

To extract features unaffected by the hand's position in the image, distances between the 21 joint coordinates were calculated. The distances between neighbouring joints were excluded since they are always constant. This resulted in 190 features per image. Using these inter-joint distances, the same feature values can be expected even if the position of the hand changes, because the distances between the joints are the same no matter where the hand is on the screen. The distance between two joints is calculated as in Eq. (1):

d = √(x² + y² + z²)    (1)

where x, y, and z are the relative distances between the two joints along the x-, y-, and z-coordinates, respectively.

Fig. 5 illustrates an example of extracting distance-based features. Table 2 lists the initial and end joints used for each distance feature. Distances can be calculated from each skeleton joint point to every subsequent joint; the sets for joint points 20 and 21 are empty because their pairings are already covered by the distance calculations of earlier joints. In summary, we calculated distance features from all joints except numbers 20 and 21, which contribute no new distances. A sketch of this computation follows Table 2.

Table 2: Distance calculation initial and end points
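The sketch below computes the distance features under the assumption that the 20 excluded neighbouring pairs follow MediaPipe's 21-joint hand topology (the paper does not spell out the adjacency list): C(21, 2) = 210 pairs minus 20 bones gives the 190 features.

```python
from itertools import combinations
import numpy as np

# MediaPipe hand skeleton bones (parent, child): the 20 adjacent pairs.
ADJACENT = {
    (0, 1), (1, 2), (2, 3), (3, 4),         # thumb
    (0, 5), (5, 6), (6, 7), (7, 8),         # index
    (0, 9), (9, 10), (10, 11), (11, 12),    # middle
    (0, 13), (13, 14), (14, 15), (15, 16),  # ring
    (0, 17), (17, 18), (18, 19), (19, 20),  # little
}

def distance_features(joints):
    """Eq. (1) over all non-adjacent joint pairs: 210 - 20 = 190 values."""
    feats = []
    for i, j in combinations(range(21), 2):
        if (i, j) in ADJACENT:
            continue  # bone lengths are constant, so they carry no information
        feats.append(np.linalg.norm(joints[i] - joints[j]))
    return np.array(feats)  # shape (190,)
```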

    4.2.2 Angle-Based Feature Extraction

The feature value describing how much the hand is tilted was calculated by computing the direction vectors between the coordinates of each pair of joints and then determining how much each vector is tilted from the X, Y, and Z directions. A total of 210 vectors can be created since 21 joints are estimated, and three values (one per axis) can be calculated for each vector. This results in 630 angle-related features. These features can increase the recognition rate of signs that have the same shape but different meanings depending on the inclination of the hand, such as the Japanese Sign Language characters for (na), (ni), (ma), and (mi). The angle of each vector was calculated from the cosine between the two space vectors, using the direction vector and the unit vectors along the x-, y-, and z-axis directions.

Table 3 lists all the angle-calculation joint vector scenarios, which generate the 630 diverse angle-based features. As with the distance calculation, the set for joint point 21 is empty because the earlier joint points already cover the expected pairs. Each row of Table 3 gives the initial point and the set of vectors for which that joint point can be considered the endpoint. A sketch of this computation follows Fig. 5.

Table 3: Angle calculation initial point and the set of vectors for which each joint point can be considered the endpoint

Figure 5: Example of extracting distance-based features
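The angle features can be sketched as follows. This is one plausible reading of the description above: take each of the 210 joint-pair direction vectors and compute its angle to the x-, y-, and z-axes via the cosine, giving 210 × 3 = 630 values.

```python
from itertools import combinations
import numpy as np

AXES = np.eye(3)  # unit vectors along the x-, y-, and z-axes

def angle_features(joints, eps=1e-8):
    """For each of the C(21,2) = 210 direction vectors between joints,
    compute its angle to the x, y, and z axes: 210 * 3 = 630 values."""
    feats = []
    for i, j in combinations(range(21), 2):
        v = joints[j] - joints[i]
        norm = np.linalg.norm(v) + eps  # guard against zero-length vectors
        for axis in AXES:
            cos_theta = np.dot(v, axis) / norm
            feats.append(np.arccos(np.clip(cos_theta, -1.0, 1.0)))
    return np.array(feats)  # shape (630,)
```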

    4.3 GoogleNet Features

In the second stream, we extracted deep learning features from the RGB images in the JSL dataset using the pre-trained GoogleNet model. Researchers have developed many deep learning-based transfer learning models to extract effective features when the available dataset is insufficient [28–31]. In addition, BenSignNet [32], CNN [33], DenseNet [34], deep residual models [35], very deep CNNs [36], ImageNet-pretrained models [37], and inverted residual networks [38] have also been used by many researchers for effective feature extraction and classification in vision-based hand gesture recognition systems. We removed the final classification layers and kept the feature extraction layers intact; each image is passed through the modified model, and the output of one of the intermediate layers is taken as the extracted features.

Machine learning offers a wide range of techniques, application-based models, and huge data sizes for processing and problem-solving. Deep learning, a subset of machine learning, is usually more complex, so thousands of images are needed to obtain accurate results. The GoogleNet architecture was proposed by researchers from Google in 2014 in the paper "Going Deeper with Convolutions" and won the ILSVRC 2014 image classification challenge [39]. It showed a significant decrease in error rate compared with the previous winner, AlexNet. GoogleNet uses techniques such as 1×1 convolutions in the middle of the architecture and global average pooling, shown in the second stream of Fig. 3, that differ from the AlexNet architecture. Using 1×1 convolutions as intermediate layers and global average pooling enables a deeper architecture: the network is 22 layers deep. The 1×1 convolutions are used to decrease the number of parameters (weights and biases) of the architecture. For a 5×5 convolution with 48 filters applied directly, the number of computations is (14×14×48)×(5×5×480) = 112.9M. On the other hand, with an intermediate 1×1 convolution with 16 filters, the number of computations is (14×14×16)×(1×1×480) + (14×14×48)×(5×5×16) = 5.3M.

The Inception architecture in GoogleNet also has intermediate classifier branches in the middle of the architecture, used only during training. These branches consist of a 5×5 average pooling layer with a stride of 3, one 1×1 convolution with 128 filters, and two fully connected layers with 1024 outputs. The architectural details can be described in the following steps: (a) a 1×1 convolution with 128 filters for dimension reduction and rectified linear unit (ReLU) activation; (b) an average pooling layer with filter size 5×5 and stride 3; (c) dropout regularization with a dropout ratio of 0.7; (d) a fully connected layer with 1024 outputs and ReLU activation. We did not use any classification module; instead, we used the output of the GoogleNet architecture as the feature of the second stream. This feature has dimension 1024 and is concatenated with the handcrafted features to produce the final feature.
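As a hedged illustration, the sketch below uses torchvision's ImageNet-pretrained GoogLeNet as a stand-in for the authors' unspecified implementation; dropping the final classifier exposes the 1024-dimensional pooled output described above.

```python
import torch
from torchvision import models, transforms
from PIL import Image

# Load ImageNet-pretrained GoogLeNet and drop the classifier so the
# 1024-dimensional pooled output becomes the feature vector.
model = models.googlenet(weights=models.GoogLeNet_Weights.IMAGENET1K_V1)
model.fc = torch.nn.Identity()
model.eval()

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def googlenet_features(image_path):
    """Return the 1024-d deep feature for one RGB frame."""
    x = preprocess(Image.open(image_path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        return model(x).squeeze(0).numpy()  # shape (1024,)
```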

    4.4 Feature Reduction

We obtained 819 features from the handcrafted first stream and 1024 from the deep learning-based second stream; after concatenating the two streams, we had 1843 features in total. High-dimensional features can carry the effective information of the JSL gesture, but they may also include irrelevant features, which increase computational complexity. Selecting the potentially useful features and discarding the irrelevant ones can improve both the accuracy and the efficiency of the system. Feature selection is a critical step in building machine learning models, as it helps improve model performance, reduces overfitting, and speeds up training. Here, we used a popular feature selection method, the Boruta algorithm, which leverages the power of Random Forest to identify the most relevant features for a given problem.

    4.4.1 The Boruta Algorithm

Boruta, inspired by the spirit of Darwinian evolution, mimics a competitive world of features to select those that truly matter. The procedure begins by duplicating the original features to create shadow features, whose values are randomly shuffled. Next, a Random Forest classifier is trained on the combined dataset encompassing the original and shadow features. During the algorithm's iterations, Boruta assesses the importance of each feature by comparing it with the performance of its corresponding shadow features. Features that consistently outperform their shadow counterparts are deemed significant and retained; those that fail to exhibit consistent superiority are rejected. The algorithm concludes when all features have either proved their significance or been eliminated from contention [19].
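A minimal sketch with the open-source boruta package (BorutaPy) and a random forest follows; the data shapes and hyperparameters here are illustrative assumptions, not the paper's reported settings.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from boruta import BorutaPy

# Stand-ins for the real 1843-d concatenated features and 41 labels.
rng = np.random.default_rng(0)
X = rng.random((300, 1843))
y = rng.integers(0, 41, 300)

rf = RandomForestClassifier(n_jobs=-1, max_depth=5)
selector = BorutaPy(rf, n_estimators='auto', random_state=42)
selector.fit(X, y)

X_selected = X[:, selector.support_]  # keep only the confirmed features
print(f"kept {selector.support_.sum()} of {X.shape[1]} features")
```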

    4.4.2 Random Forest’s Role

The strength of Boruta lies in its partnership with Random Forest, a versatile ensemble learning technique. Random Forest constructs a multitude of decision trees to make predictions, each tree emphasizing different subsets of features. By comparing a feature's performance against its shadow counterpart across multiple trees, Boruta mitigates the risk of overfitting, enhancing its reliability. The feature reduction method offers various advantages, such as reduced overfitting, comprehensive feature assessment, efficient computation, and robust performance [19].

    4.5 Classification with SVM

After selecting the potential features using the Boruta algorithm, we employed the conventional machine learning model SVM with various kernels for recognition and prediction. SVM is a powerful machine learning algorithm mainly used for classification and regression analysis. Its main objective is to find a hyperplane that separates data points of different classes in a high-dimensional space. In classification, SVM determines the best boundary between two classes by maximizing the margin between the two closest points of the different classes. SVM is especially effective on complex, non-linear datasets, as it can project data to a higher-dimensional space where a linear hyperplane can separate them. In this study, we used several kernel functions, such as the polynomial, radial basis function, and sigmoid kernels, to transform the data into a higher-dimensional space and make it more separable. Although SVM was originally designed for binary classification, it can be extended to multi-class problems such as classifying the 41 labels of Japanese Sign Language (JSL) through various techniques. We did this using the One-vs-Rest (OvR) approach, in which we train 41 separate binary classifiers, each distinguishing one JSL sign from the rest, i.e., one binary classifier per sign (41 classifiers in total). During prediction, we run all 41 classifiers on an input, and the sign corresponding to the classifier with the highest confidence score is chosen as the predicted sign [13].
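The sketch below illustrates this OvR setup with scikit-learn, continuing from the Boruta-selected features above; the kernel and parameter grids are illustrative choices (the paper only reports tuning C and γ).

```python
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.multiclass import OneVsRestClassifier
from sklearn.svm import SVC

# X_selected and y continue from the Boruta sketch above.
X_train, X_test, y_train, y_test = train_test_split(
    X_selected, y, test_size=0.30, stratify=y, random_state=42)

# One binary SVM per sign; the classifier with the highest decision
# score determines the predicted sign.
grid = GridSearchCV(
    OneVsRestClassifier(SVC()),
    param_grid={'estimator__kernel': ['rbf', 'poly', 'sigmoid'],
                'estimator__C': [1, 10, 100],
                'estimator__gamma': [5e-4, 1e-3, 1e-2]},
    cv=5, n_jobs=-1)
grid.fit(X_train, y_train)
print(grid.best_params_, grid.score(X_test, y_test))
```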

    5 Experimental Analysis and Results

Here we describe the accuracy of the proposed model after evaluating both the existing dataset and our JSL dataset. In this study, we employed two branches of features: the first branch consisted of handcrafted features, including distance and angle features, while the second branch included deep learning-based features. Additionally, we utilized augmentation techniques: random affine transformations with translation ranging from -0.1 to 0.1, rotations between -20 and 20 degrees, a 10 percent shift in both the vertical and horizontal directions, and scaling within the range of 0.95 to 1.05.
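One way to realize this augmentation policy, shown here with torchvision's RandomAffine as an assumed stand-in for the authors' implementation:

```python
from torchvision import transforms

# Rotation, translation, and scale ranges as stated above; RandomAffine
# bundles them into a single randomized transform per image.
augment = transforms.RandomAffine(
    degrees=20,            # sampled uniformly from (-20, 20) degrees
    translate=(0.1, 0.1),  # up to a 10% horizontal and vertical shift
    scale=(0.95, 1.05),    # scaling factor between 0.95 and 1.05
)

# Usage: augmented = augment(pil_image_or_tensor)
```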

We conducted experiments with four different configurations: a stream based solely on handcrafted features, a stream based solely on deep learning features, a combined stream with augmentation, and a combined stream without augmentation.

    5.1 Experimental Setting

We implemented the proposed model in Python with the TensorFlow, pandas, NumPy, and scikit-learn packages. To run the experiments, we used a CPU machine with 16 GB of RAM. In this research, both user-dependent and user-independent SVM evaluations were applied. In the user-dependent setting, a random 70% of the dataset was utilized for training and the remaining 30% was kept for testing. In the user-independent setting, data from 17 people were used for training and data from the remaining person for testing. In both cases, the parameters C and γ are adjusted during training, and the best combination is applied to the test set.
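Both evaluation protocols map onto standard scikit-learn utilities. The sketch below uses stand-in data, with LeaveOneGroupOut expressing the train-on-17, test-on-1 signer split.

```python
import numpy as np
from sklearn.model_selection import LeaveOneGroupOut, train_test_split

# Stand-ins for the real feature matrix, labels, and signer IDs.
rng = np.random.default_rng(0)
X = rng.random((7380, 777))
y = rng.integers(0, 41, 7380)
groups = rng.integers(0, 18, 7380)  # which of the 18 signers made each sample

# User-dependent protocol: a random 70/30 split over all samples.
X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.30, stratify=y, random_state=42)

# User-independent protocol: train on 17 signers, test on the held-out one.
for train_idx, test_idx in LeaveOneGroupOut().split(X, y, groups):
    X_tr, X_te = X[train_idx], X[test_idx]
    y_tr, y_te = y[train_idx], y[test_idx]
    # ... fit the SVM on (X_tr, y_tr) and evaluate on (X_te, y_te)
```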

    5.2 Experimental Result with JSL Dataset

Table 4 shows the accuracy of the proposed model, including the number of features, the selected features, and the SVM kernel, along with the corresponding performance accuracies. The table displays the performance for four distinct configurations. The first configuration relies exclusively on handcrafted features, achieving an accuracy of 93.70%. The second is based solely on deep learning features and achieves 90.08%. The third is a combined stream that incorporates augmentation techniques and attains 96.30%. The fourth is identical to the combined stream but without augmentation, achieving a remarkable accuracy of 98.53%.

Table 4: Performance results of the proposed method with the lab JSL dataset

As mentioned earlier, for the JSL dataset we extracted 819 handcrafted features based on distances and angles. Subsequently, we obtained 1024 features using the deep learning-based method, for a total of 1843 features when all feature extraction methods were combined. To identify the most relevant features, we employed the Boruta method with random forests, which resulted in the selection of 777 potential features from the pool of 1843. Ultimately, the selected features were fed into the machine learning-based SVM classifier. The study yielded a remarkable accuracy of 98.53% through the use of SVM with hyperparameter tuning.

Table 5 shows the label-wise precision, recall, and F1-score, offering insight into the model's effectiveness in correctly identifying positive instances and minimizing false negatives. Examining the results reveals remarkable performance across multiple fronts. Several labels, such as 0, 3, 4, 5, 11, 17, 19, and 40, exhibit flawless precision, recall, and F1-scores of 100.00%, suggesting an exceptional ability to predict instances belonging to these categories accurately. Furthermore, various labels show high precision coupled with near-perfect recall, indicating a propensity not only to predict positives correctly but also to capture nearly all actual positive instances. Notably, label 24 achieves a perfect 100.00% across all metrics, highlighting the model's ability to attain precision, recall, and F1-score harmoniously. Throughout the table, the F1-scores reflect the balance between precision and recall, illuminating the model's overall effectiveness by considering both false positives and false negatives. The weighted nature of the F1-score ensures a comprehensive evaluation, especially for imbalanced datasets where certain labels might be underrepresented. In summary, the table captures the classification model's prowess in a granular manner, depicting its capacity to discern intricate patterns across diverse labels. The stellar precision, recall, and F1-scores for numerous labels underscore the model's proficiency in accurate classification and thorough identification of relevant instances. While some labels exhibit slightly lower metrics, the overall results testify to the model's robustness and efficacy in handling a multifaceted classification task. Fig. 6 shows the confusion matrix for the proposed JSL dataset.

Table 5: Precision, recall, and F1-score on the proposed JSL dataset

Figure 6: Confusion matrix for our lab JSL dataset

    5.3 Experimental Result with a Benchmark Public Dataset

Table 6 presents a detailed breakdown of the precision, recall, and F1-score metrics for the Arabic dataset. Noteworthy observations include exceptional scores for some classes: class 1 achieves a perfect 100.00% in all metrics, and class 15 attains 100.00% precision and recall. While some classes exhibit slightly lower F1-scores, the model demonstrates strong performance overall. A few classes, such as class 18, show a precision-recall imbalance, while others, such as class 20, showcase well-balanced performance. The results underscore the model's effectiveness in classification, with varying degrees of accuracy and completeness across different classes. Fig. 7 shows the confusion matrix of the proposed model.

Table 6: Precision, recall, and F1-score on the Arabic dataset

Table 7 gives the accuracy of the proposed model and a state-of-the-art comparison, where the proposed model achieved 95.84% accuracy. Among the other state-of-the-art models, ResNet1 transfer learning and ResNet2 transfer learning gave 35.00% and 91.04% accuracy, respectively, while another study employed a CNN-based approach and achieved 92.00% accuracy. In summary, our proposed model outperformed all the previous methods. The table provides a concise overview of the performance of different models on the ArSL21L Arabic dataset, highlighting their respective features, feature selection, SVM parameters, and resulting accuracy. The first three models, labelled ResNet1, ResNet2, and CNN, have unspecified handcrafted and deep learning features, total features, selected features, and SVM parameters. ResNet1 achieves an accuracy of 35.00%, while ResNet2 and CNN achieve significantly higher accuracies of 91.04% and 92.00%, respectively. The proposed model, tailored for the same ArSL21L Arabic dataset, employs both handcrafted features (819) and deep learning features (1024), combining to form a total of 1843 features; after feature selection, 820 features are chosen, and the SVM parameters C = 100 and γ = 0.0005 are utilized. This model demonstrates an impressive accuracy of 95.84%, surpassing the other models. Overall, the table underlines the efficacy of the proposed model by showcasing its ability to leverage a combination of handcrafted and deep learning features, resulting in considerably higher accuracy on the specified dataset compared to alternative approaches.

Table 7: Performance results of the proposed method with the public Arabic dataset and state-of-the-art comparison

Figure 7: Confusion matrix for the Arabic dataset

These results demonstrate the significant impact of augmentation techniques on the model's performance. The configuration without augmentation achieves the highest accuracy, indicating that the model can make accurate predictions based solely on the inherent features of the data. On the other hand, the combined stream that includes augmentation still performs exceptionally well, suggesting that the augmentation techniques help improve the model's ability to generalize and handle variations in the data. These findings highlight the versatility of the model and its adaptability to different feature sets and preprocessing strategies. Further analysis could explore the specific ways in which augmentation techniques contribute to performance improvements and whether they have varying effects on handcrafted features vs. deep learning features.

    6 Conclusions

Our study introduces an innovative approach to Japanese Sign Language (JSL) recognition, leveraging the strengths of joint skeleton-based handcrafted features and pixel-based Convolutional Neural Network (CNN) features. This fusion not only significantly enhances recognition accuracy but also contributes to improving the accessibility and inclusivity of sign language communication for the Deaf and hard-of-hearing communities. The integration of these two feature streams results in a large feature dimension, leading to increased computational complexity. To manage this complexity, we employed a feature selection algorithm to identify and retain the most relevant features while discarding irrelevant ones. This dimensionality reduction optimized the efficiency and effectiveness of our system. Our choice of a machine learning-based classification algorithm, the Support Vector Machine (SVM), as the classifier yielded high accuracy. This achievement underscores the efficiency and effectiveness of the proposed system, particularly in its ability to outperform previous studies that relied on expensive devices: we successfully employed a relatively inexpensive webcam, making our system more accessible and cost-effective. Looking ahead, our future work will focus on extending our approach to word-level JSL recognition systems, further advancing the field of sign language recognition and enhancing communication accessibility for the Deaf and hard-of-hearing communities.

Acknowledgement: The authors wish to express their appreciation to the reviewers for their helpful suggestions, which greatly improved the presentation of this paper.

Funding Statement: This work was supported by the Competitive Research Fund of the University of Aizu, Japan.

Author Contributions: The authors confirm their contributions to the paper as follows: study conception and design: Jungpil Shin, Md. Al Mehedi Hasan; data collection: Abu Saleh Musa Miah, Kota Suzuki; analysis and interpretation of results: Jungpil Shin, Koki Hirooka, Abu Saleh Musa Miah; draft manuscript preparation: Md. Al Mehedi Hasan, Kota Suzuki, Koki Hirooka. All authors reviewed the results and approved the final version of the manuscript.

Availability of Data and Materials: Arabic Dataset: https://www.kaggle.com/datasets/ammarsayedtaha/arabic-sign-language-dataset-2022.

Conflicts of Interest: The authors declare that they have no conflicts of interest to report regarding the present study.
