
Sign Language to Sentence Formation: A Real Time Solution for Deaf People

Computers, Materials & Continua, 2022, Issue 8

Muhammad Sanaullah, Muhammad Kashif, Babar Ahmad, Tauqeer Safdar, Mehdi Hassan, Mohd Hilmi Hasan and Amir Haider

1 Department of Computer Science, Bahauddin Zakariya University, Multan, 60000, Pakistan

2 Department of Computer Science, Air University, Multan, 60000, Pakistan

3 Department of Computer Science, Air University, Islamabad, 44000, Pakistan

4 Centre for Research in Data Science, Department of Computer and Information Sciences, Universiti Teknologi PETRONAS, Seri Iskandar, 32610, Perak, Malaysia

5 Department of Intelligent Mechatronics Engineering, Sejong University, Seoul, 05006, Korea

Abstract: Communication is a basic need of every human being for exchanging thoughts and interacting with society. Hearing people usually converse through different spoken languages, whereas deaf people cannot do so. Therefore, Sign Language (SL) is the communication medium such people use for conversation and interaction with society. In SL, every word is expressed by a specific gesture, and a gesture consists of a sequence of performed signs. Hearing people normally observe these signs to understand the difference between single and multiple gestures, used for singular and plural words respectively. The signs for singular words such as I, eat, drink and home are unlike those for plural words such as school, cars and players. Special training is required to gain sufficient knowledge and practice so that people can differentiate and understand every gesture/sign appropriately. Innumerable studies have articulated computer-based solutions for understanding a single gesture performed with a single hand. A complete understanding of such communication is possible only if a computer-based SL solution can make this differentiation of gestures and cope with the real-world environment. Hence, there is still a demand for an automated communication solution for interacting with this type of special people. This research focuses on facilitating the deaf community by capturing gestures in video format, mapping and differentiating them as single or multiple gestures used in words, and finally converting them into the respective words/sentences within a reasonable time. This provides a real-time solution for deaf people to communicate and interact with society.

Keywords: Sign language; machine learning; convolutional neural network; image processing; deaf community

    1 Introduction

Communication is the primary means of sharing and transferring knowledge in any society. Generally, humans do this by speaking, whereas deaf people cannot hear or talk. Therefore, the medium of communication for such persons is Sign Language (SL), through which they interact and participate in society. The statistics on such people are alarming: according to the World Health Organization (WHO), around 466 million people worldwide [1,2] and 0.6 million people in the United States alone [3] are affected. SL is based on hand gestures for transferring and sharing ideas with society. Although commonly used gestures such as hungry, drink, go and study are easy to understand, formal or professional talk requires proper knowledge of SL. Such talk contains sentences like: What is your name? How old are you? What is the specification of this mobile phone?

Hence, deaf people face many difficulties sharing their thoughts in their professional careers and, as a result, they feel lonely and become isolated. A technical solution is needed to overcome this information-transfer barrier, and it must address the following concerns.

In SL, hand-gesture movements can be divided into two categories: "Single Sign" (e.g., I, eat, drink), which contains the sign of a single gesture, and "Multiple Sign" (e.g., she, hot, outside), which accommodates multiple gestures for a single concept. This categorization is further divided into "Single Hand" and "Both Hands" gestures on the basis of hand movement. For example, the gestures of "I", "you", "go" and "drink" are performed by a single hand and have a single sign. Similarly, the gestures of "she", "hot" and "outside" are performed by a single hand and have multiple signs. Some gestures are performed by both hands and have single signs, e.g., "home" and "love", while others are performed by both hands and have multiple signs, e.g., "play", "football", "car" and "drive".

According to the categorization and grouping mentioned above, a summary of the retrieved literature is shown in Tab. 1. Most of the literature concerns "single sign by single hand"; unfortunately, no literature was found on "multiple signs with both hands". Capturing and recognizing hand movement and rotation adds complexity to the design of computed solutions; therefore, existing research mainly focuses on the primary category.

    Table 1:State-of-the-art of gesture recognition

In Fig. 1, the left gesture is from British Sign Language (BSL), in which the right hand with the first finger pointing upward performs "What". The middle gesture is from American Sign Language (ASL), in which both hands with open palms perform "What". The right gesture is from Pakistan Sign Language (PSL), in which the right hand with the first two fingers pointing outward performs "What". Hence, a holistic solution for all cultural societies is not possible, and each country has to design and develop its own solution according to its society's gestures.

Figure 1: Sign of "What" in different sign languages

Moreover, the proposed solution also addresses the following concerns:

• Mobility Issue: Most researchers use a Kinect 3D camera, which can easily detect hand movements. In the real world, however, it is difficult to carry such hardware to every place the special person moves (e.g., market, school, hospital). Another solution is a sensory glove with flex sensors, in which contact sensors and an accelerometer detect hand movement and rotation, but this solution also requires carrying extra hardware at all times. The proposed methodology offers a solution without any extra hardware, preserving the mobility and movement of such special persons.

• Sign Capturing Issue: In a real environment, a person can wear any color and type of shirt, and detecting the hands and their movement is not an easy task (skin segmentation of face, arms and hands). Therefore, most researchers assume colored gloves or white gloves with different colored strips attached. Wearing such gloves everywhere and at all times is difficult. The proposed solution resolves this issue by identifying signs directly from captured video.

• Special Environment Issue: For person recognition, researchers design a special environment requiring a specific colored dress against a specific colored background. They used a black dress with a black background (for easy skin detection), but in a real environment it is quite challenging to maintain such a setup everywhere. Hence, a real-world solution is needed in which person and sign detection is much easier.

• Sign Recognition Issue: Most researchers work on recognizing words with "single sign" gestures, but in real life we speak complete sentences. SL sentences consist of a sequence of gestures, and recognizing each word from a single sign is also a difficult task.

• Human Experimental Issue: Experimental issues are also found in the literature, where solutions are validated with a limited number of persons and a recorded dataset of videos.

Convolutional Neural Networks (CNN) and advanced image processing techniques make it possible to translate SL without any special environment or fixed hardware. They also facilitate recognizing the gestures of sentences involving both hands with multiple signs. The proposed solution is validated with ten different males and females in different real environments. After resolving all the mentioned issues, 94.66% accuracy is achieved.

In the rest of the paper, Section 2 presents the literature review, Section 3 explains the proposed solution, and the results are presented in Section 4. The discussion of the results is presented in Section 5. The conclusion and future work are given in Section 6.

    2 Literature Review

In this section, a literature review of existing research is presented; due to space limitations, only the most cited papers are covered. The literature is evaluated on the basis of the following parameters: research assumptions, considered gestures, used hardware, number of verified signs, number of participants, learning and testing techniques, and accuracy. A tabular summary of the evaluation is presented in Tabs. 2 and 3.

    Table 2:Different SL translation techniques and their limitations

    Table 2:Continued

    Table 3:Different sign recognition techniques

Bukhari et al. [4] designed a sensory glove with flex sensors for capturing finger movement, an accelerometer for capturing hand rotation, and contact sensors for palm bending. They used 26 gestures for recognition, each recorded 20 times, and achieved 92% accuracy. This work has mobility, sign recognition and human experimental issues.

Helderman et al. [5] designed a sensory glove for SL translation, using a flex sensor, contact sensor and gyroscope to capture contact between fingers and rotation of the hand. An Arduino controlled the sensors, and a Bluetooth module transmitted signals from the Arduino to a smartphone. They recognized only two ASL signs, "apple" and "party", testing each 20 times. The glove recognized "apple" 19 times and "party" 14 times, giving 95% accuracy for "apple" and 70% for "party". This work has mobility, sign recognition and human experimental issues.

Wadhawan et al. [6] proposed a deep-learning-based SL recognition system. They used 100 static words for recognition and achieved 99.90% accuracy. Their system has mobility and real-environment limitations.

Kanwal et al. [7] designed a sensory glove for translating Pakistani SL. They tested ten PSL signs, of which the glove recognized nine accurately, giving 90% accuracy. This work has mobility, sign recognition and human experimental issues.

Ambar et al. [8] designed a sensory glove to recognize American SL words, using sensors to capture the movement of fingers and hands, and achieved 74% accuracy for translating SL. This work has mobility, sign recognition and human experimental issues.

Lungociu [9] proposed a neural network approach for SL recognition using 14 finger spellings, achieving 80% accuracy. A webcam was used for data acquisition, capturing only hand shapes. This work has mobility, sign recognition, human experimental, sign capturing and special environment issues.

Kang et al. [10] used a CNN for SL recognition, recognizing ASL alphabets and digits. They used a 3D sensor for sign capturing and achieved 85.49% accuracy. This work has mobility, sign recognition, human experimental, sign capturing and special environment issues.

Cui et al. [11] proposed a framework for SL recognition. They recognized 603 recorded sentences of German SL, achieving 91.93% accuracy using a deep NN. This work has human experimental issues.

Akmeliawati et al. [12] translated Malaysian SL using image processing. A webcam was used for data capturing, with colored gloves for sign capture. The authors translated the A-Z alphabet, the numbers 0-9 and ten words, achieving 90% recognition accuracy. This work has mobility, sign recognition, human experimental, sign capturing and special environment issues.

Maraqa et al. [13] proposed a system for Arabic SL recognition. A digital camera captured 900 images of 30 signs for the training dataset and 300 more images for testing. They used white gloves with different color patches on the fingertips and a colored wrist band, achieving 95% accuracy for sign recognition. This work has mobility, sign recognition, human experimental, sign capturing and special environment issues.

Bantupalli et al. [14] recognized American SL using an RNN and a CNN. Videos were recorded with an iPhone 6 against the same background. They tested 150 signs and achieved 91% accuracy. Their work has a specific-background issue.

Mittal et al. [15] used a Leap Motion sensor to capture Indian SL, with a CNN for recognition. They tested both words and sentences: the CNN was trained on 35 isolated words and the model was tested on 942 sentences, with average accuracies of 89.50% for words and 72.30% for sentences. This work has mobility issues.

Hassan et al. [16] produced three datasets of Arabic SL containing words and sentences, and applied different recognition techniques. Two datasets were produced using a motion detector and camera, and one using a sensory glove. The tools used for SL classification were MKNN, RASR and GT2K.

Ullah et al. [17] used WiSee technology, which detects gestures using multiple antennas, and used the gestures to control the movement of a car. This work has mobility, sign recognition, human experimental, sign capturing and special environment issues.

    3 Proposed Solution

The proposed solution is divided into three components: Image Processing, in which video signs are captured and key frames are extracted; Classification, in which gestures are classified using a CNN; and Sentence Formation, in which the words are arranged in a semantic form using Natural Language Processing (NLP). The framework of the proposed solution is presented in Fig. 2, and each component is explained in the following subsections:

    Figure 2:Framework of the proposed solution

    3.1 Real Time Sign Capturing

This component identifies, from the real-time recording, the frames in which gestures are performed, and extracts them. Key frames are then filtered from the extracted frames; the key frames contain the useful information of a gesture sign, so instead of working on all extracted frames, only the filtered key frames are considered for further processing. This component consists of the following subcomponents.

3.1.1 Signs' Frame Extraction

Real-time signs are captured in video format using any digital camera. The video is accessed frame by frame and parsed to detect movement of the focused person by comparing consecutive frames. On detecting movement of the focused person, this component stores the frame identity and continues working until it detects the frame in which the movement stops. The set of frames from the start to the end of the detected movement is sent to the "Skin Segmentation" component for further processing; a sketch of this step is given below.
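The following is a minimal sketch of this frame-differencing step, assuming OpenCV is used for capture; the function name extract_motion_frames and the diff_thresh value are illustrative, not taken from the paper.

import cv2

def extract_motion_frames(video_path, diff_thresh=5.0):
    """Collect the frames between the start and end of detected movement."""
    cap = cv2.VideoCapture(video_path)
    frames, prev_gray, recording = [], None, False
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        if prev_gray is not None:
            # Mean absolute difference between consecutive frames
            score = cv2.absdiff(gray, prev_gray).mean()
            if score > diff_thresh:
                recording = True        # movement detected: start collecting
            elif recording:
                break                   # movement stopped: end of the sign
            if recording:
                frames.append(frame)
        prev_gray = gray
    cap.release()
    return frames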

    3.1.2 Skin Segmentation

The skin segmentation process identifies the hands of the focused person. For this purpose, the Otsu threshold method is used, which iterates through all possible threshold values and calculates the spread of the pixels falling in the foreground or background. The LAB color space is selected because it is effective for Otsu thresholding. The activities performed for this purpose are sequenced in Algorithm 1, which takes as input the set of RGB frames extracted in the previous section, as shown in Fig. 3 part (A).

Algorithm 1: Skin Detection
Input: RGB color space image
Output: Skin-segmented binary image
1. Get the RGB sign image
2. Convert the RGB image to the LAB color space
3. Apply the Otsu threshold
4. Apply the gray threshold
5. Binarize the image so that background pixels become black and foreground pixels become white
6. Return the segmented image
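A minimal sketch of Algorithm 1 with OpenCV is shown below. The paper states only that the LAB color space and Otsu thresholding are used; the choice of the 'a' channel is an assumption for illustration.

import cv2

def segment_skin(rgb_frame):
    """Binarize a frame: foreground (skin) pixels white, background black."""
    lab = cv2.cvtColor(rgb_frame, cv2.COLOR_RGB2LAB)
    a_channel = lab[:, :, 1]            # assumed channel; separates skin tones well
    _, binary = cv2.threshold(a_channel, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return binary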

    Figure 3:Image of signer showing hand and face detection

    3.1.3 Key Frame Extraction

A set of these binary images consists of many frames. Most frames merely represent the sequence of moving hands and do not contain the factual information required for gesture recognition. Including them increases time and space complexity along with the error rate, so they need to be excluded. The remaining frames, which carry specific information, are considered key frames. The steps performed for key frame identification are presented in Algorithm 2.

Firstly, the entropy of the frame under consideration is calculated from the frame's histogram values, and these values are then compared. The base value against which the comparison is made is the entropy of the base frame, in which the hands are in the rest position. The comparison is performed based on Eq. (1), in which α represents the entropy of the first frame, β_j represents the entropy of the j-th subsequent frame and μ represents the threshold value:

|(α - β_j) / (α + β_j)| × 100 ≥ μ    (1)

Many experiments showed that frames with a threshold value less than 0.8 merely contain hand movements and can therefore be excluded, while frames with a threshold value greater than 0.8 are considered key frames of the sign. Consider the sign "I", shown in Fig. 3: the entropy and threshold values for each frame are presented in Tab. 4, where the entropy of the first/base frame is 6.8423, which remains the same throughout the process. This "I" sign consists of 77 frames; after the key frame identification process, only 38 frames are kept as key frames and the rest are excluded. These 38 frames are passed to the classification component for further processing.

Algorithm 2: Key Frame Extraction
A ← Acquire first image (A ∈ Segmented Images)
I ← CalculateEntropy(A)
for each B ∈ Segmented Images do
    J ← CalculateEntropy(B)
    M ← CalculateDifference(I, J)
    N ← CalculateSum(I, J)
    Threshold ← CalculateDivision(M, N)
    if Absolute(Threshold × 100) ≥ 0.8 then
        KeyFrames ← KeyFrames ∪ {B}
    end if
end for
return KeyFrames
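A minimal sketch of Algorithm 2 follows, with entropy computed from the frame histogram as described; NumPy and OpenCV usage is assumed. Given the reported base entropy of 6.8423, the histogram is presumably taken over grayscale intensities.

import cv2
import numpy as np

def entropy(gray_frame):
    """Shannon entropy of a frame's intensity histogram."""
    hist = cv2.calcHist([gray_frame], [0], None, [256], [0, 256]).ravel()
    p = hist / hist.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def select_key_frames(frames):
    base = entropy(frames[0])                 # base frame: hands at rest
    key_frames = []
    for frame in frames[1:]:
        e = entropy(frame)
        threshold = (base - e) / (base + e)   # Eq. (1)
        if abs(threshold * 100) >= 0.8:       # the paper's 0.8 criterion
            key_frames.append(frame)
    return key_frames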

    Table 4:Keyframe selection based on entropy and threshold

    Table 4:Continued

    3.2 Classification

The classification component takes the key frames as input and predicts the label of the gesture to which those key frames belong. The overall working of the classification component is explained in the following subsections.

    3.2.1 Sign Repository

The PSL case study is used to implement the developed methodology. A repository of PSL signs was created using pre-recorded gestures performed by people of different age and gender groups at different places. Some videos from the official PSL website [18] are also included in the repository. Firstly, a database of 300 daily-life sentences was created, in which it was found that some words are used repeatedly; 21 high-frequency words were selected and are shown in Tab. 5 with their occurrence frequencies. Each word's gesture was recorded three times by three different people, and the videos were stored in the sign repository. Moreover, 15 gesture videos were downloaded from the PSL website and also stored in the repository. Hence, there are 204 recorded videos in total, from which 1882 key frames were extracted. The key frames of each gesture are labeled with the appropriate name (e.g., we, today, drive).

    3.2.2 Training and Testing with CNN Model

A CNN is a type of deep learning algorithm belonging to machine learning. CNNs are well suited to recognition because they learn features implicitly, whereas other techniques require features to be calculated explicitly. The input image layer is 200×200 pixels in size, and there are 1882 images in our sign repository. Convolution filters are applied to the input image by a 2-D convolution layer, which convolves the image vertically and horizontally by sliding the filters and computing the dot product of the CNN weights with the input. The convolutional layer has 20 filters of size 5×5 and is followed by a ReLU layer, which rectifies the activations. A max-pooling layer then down-samples the input by dividing it into rectangular regions and computing the maximum value of each region; it has a pool size of 2×2 and a stride of 2, and is followed by the fully connected layers. The architecture of the designed neural network is shown in Fig. 4 and sketched below. From the sign repository, 75% of the data is used for training and 25% for testing.
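A minimal sketch of the described architecture in Keras is given below: a 200×200 input, one convolutional layer with 20 filters of size 5×5, a ReLU activation, 2×2 max pooling with stride 2, and fully connected layers. The input channel count, the hidden layer width, the optimizer and the 21 output classes (one per high-frequency word in Tab. 5) are assumptions.

from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Input(shape=(200, 200, 1)),             # grayscale key frames assumed
    layers.Conv2D(20, (5, 5), activation="relu"),  # 20 filters of size 5x5 + ReLU
    layers.MaxPooling2D(pool_size=(2, 2), strides=2),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),          # FC width is an assumption
    layers.Dense(21, activation="softmax"),        # one class per gesture label
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])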

    3.2.3 Recognizer

After training and testing, recognition is done through the CNN. The extracted key frames of a gesture are given to the Recognizer component, which works with the trained CNN model discussed in Section 3.2.2. The recognizer identifies the total number of gestures in the given set of key frames and returns the gesture labels. In the case of gesture "I", 38 frames are passed to the recognizer, which takes 4 s to recognize and label them. In the case of "she" and "beautiful", it takes 13 s.

    Table 5:Bag of Words

    3.3 Sentence Formation

The identified labels are sent to this component. Firstly, the nature of each label (subject, verb or object) is identified using NLP and a dictionary in which most terms are classified by their nature. The labels are then arranged in subject-verb-object order, as sketched below. Although this does not fully satisfy English grammar rules, to some extent it conveys the meaning of the sentence. For example, when a person performs the gestures shown in Fig. 5, after all the processing the sentence formation component returns "I go home".
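A minimal sketch of this arrangement step is shown below; the role dictionary and the function name are illustrative stand-ins for the paper's NLP dictionary, populated with words that appear in the paper's examples.

ROLE = {"I": "subject", "we": "subject", "she": "subject",
        "go": "verb", "drink": "verb", "drive": "verb",
        "home": "object", "milk": "object", "car": "object"}

def form_sentence(labels):
    """Arrange recognized gesture labels in subject-verb-object order."""
    order = {"subject": 0, "verb": 1, "object": 2}
    return " ".join(sorted(labels, key=lambda w: order[ROLE[w]]))

print(form_sentence(["home", "I", "go"]))   # -> "I go home"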

    Figure 4:CNN architecture

    Figure 5:Frames from the sentence“I go to home”

    4 Results

To measure the validity of the proposed solution, a set of 300 daily-used sentences is considered. A "bag of words" file is generated from these sentences, containing the different words with their occurrence frequencies; the 21 high-frequency words are shown in Tab. 5. The dataset for training and testing is created using these 21 gestures, each performed three times by three different males and females of different age groups at different locations. Moreover, 15 videos of an SL expert from the Pakistan Sign Language website [18] are added to the repository. Overall, there are 204 videos; processing them through the key frame extraction component yields 1882 frames. The CNN uses 75% of the data for training, which takes approximately 4 min (3 min and 37 s).

Real Time Sign Capturing identifies the focused person's body movement and parses it for key frame extraction. These key frames are then passed to the recognizer component, which recognizes the gesture(s) based on the trained model and returns the label(s) of the gesture(s).

The time required for key frame identification from real-time video (movement identification, frame-by-frame parsing and key frame extraction) is given in Tab. 8 under the capturing time parameter. The recognition time is the time taken to recognize and label the gestures, and the accuracy is the value provided by the CNN model for each sentence. Overall, the achieved accuracy is 94.66%.

Tab. 7 shows the time spent in the case of single words, where gesture capturing time is the time required for key frame extraction from the video of a gesture, and recognition time is the time the recognizer takes to recognize a gesture.

Precision, recall, false negative rate, false discovery rate and F-score are the measures used to evaluate a classification algorithm's performance; the standard formulas are given below. Here, True Positive (TP) means the actual value is positive and the predicted value is also positive; False Negative (FN) means the actual value is positive but the predicted value is negative; True Negative (TN) means the actual value is negative and the predicted value is also negative; and False Positive (FP) means the actual value is negative but the predicted value is positive.

Precision = TP / (TP + FP)
Recall = TP / (TP + FN)
False Negative Rate = FN / (FN + TP)
False Discovery Rate = FP / (FP + TP)
F1-score = 2 × (Precision × Recall) / (Precision + Recall)

These formulas are defined for binary classification. Our dataset has multi-class labels, so the results are calculated from the confusion matrix given in Fig. 6, as sketched below. Tab. 6 displays the calculated values of precision, recall, false positive rate, false negative rate, F1-score and accuracy for the above-mentioned gestures.
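A minimal sketch of deriving these per-class measures from a multi-class confusion matrix is given below, following the paper's convention of rows as predicted classes and columns as actual classes; NumPy usage is assumed.

import numpy as np

def per_class_metrics(cm):
    """cm[i, j] = count of samples predicted as class i whose actual class is j."""
    tp = np.diag(cm).astype(float)
    fp = cm.sum(axis=1) - tp    # predicted as the class, actually another
    fn = cm.sum(axis=0) - tp    # actually the class, predicted as another
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    fnr = fn / (fn + tp)        # false negative rate
    fdr = fp / (fp + tp)        # false discovery rate
    return precision, recall, f1, fnr, fdr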

The rows present the predicted class and the columns the actual class. The diagonal cells correspond to correctly classified observations, and the off-diagonal cells to incorrectly classified observations. Each cell shows both the number of observations and the percentage of the total number of observations. The last column shows the percentages of all examples predicted to belong to each class that are correctly and incorrectly classified; these metrics are often called the precision and the false discovery rate, respectively. The bottom row shows the percentages of all examples belonging to each class that are correctly and incorrectly classified; these metrics are often called the recall and the false negative rate, respectively. The cell in the bottom right of the plot shows the overall accuracy.

    5 Discussion

In Tab. 7, the first column displays the gesture label and the second column the gesture capturing time, which is the total time taken by the three subcomponents of the "Real Time Sign Capturing" component. This time depends on the number of key frames in a gesture, i.e., on the gesture performing time: as performing time increases, the number of key frames increases, and so does the capturing time. It is also observed that gestures consisting of multiple signs have larger capturing times, while single-sign gestures have the lowest. As shown in the table, the "Lahore" gesture has the highest capturing time, 171 s, because it consists of multiple gestures and has a longer performing time. Conversely, the gesture "I" has the lowest capturing time because it consists of a single sign, has the fewest key frames and has the shortest performing time. This time could be reduced by a method that finds the minimum number of key frames for a gesture.

    Figure 6:Confusion matrix

    Table 6:Performance measures for trained model

    Table 6:Continued

    Table 7:Recognized bag of words

    Table 7:Continued

The third column in Tab. 7 displays the gesture recognition time. The trained model recognizes a single frame in milliseconds, but our recognizer takes some seconds per gesture because it processes all the key frames extracted from it, so the recognition time depends on the number of key frames given to the recognizer. As shown in the table, the "I" gesture takes 4 s, the lowest recognition time in our case, because it has the fewest extracted key frames; the "Lahore" gesture has the highest recognition time, 15 s, due to having the most key frames. After the key frames are recognized, the highest-frequency label is taken as the final label of the gesture, as sketched below. This approach is adopted because many gestures have similar key frames (e.g., drink and milk, drive and car). Similarly, Tab. 8 shows the capturing and recognition times of the sentence gestures.
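A minimal sketch of this highest-frequency labeling follows: each key frame receives a per-frame prediction and the most common label wins. The example labels are illustrative.

from collections import Counter

def label_gesture(per_frame_labels):
    """Return the most frequent label among the key-frame predictions."""
    return Counter(per_frame_labels).most_common(1)[0][0]

print(label_gesture(["drink", "milk", "drink", "drink"]))  # -> "drink"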

    Table 8:Sentence capturing and recognition time

    6 Conclusion and Future Work

Deaf people are part of society and have the right to live in it and participate in every aspect of life. They need a way to communicate and interact with other people in society. In this research, an automated gesture/sign recognition system is designed and developed. The computer-based Sign Language (SL) recognition solution is efficiently implemented to help the deaf people of Pakistan. We removed the mobility and gesture limitations of single- vs. multiple-sign recognition without requiring a special environment for SL translation in the proposed computer-based solution. The proposed solution even allows a signer to communicate with deaf people through multiple gestures; for this purpose, no extra hardware, such as special gloves, needs to be worn for SL translation. The proposed solution recognizes the multiple signs/gestures of individual words as well as complete sentences, as it works on sign videos captured from a real environment and translates them into text. The results are verified on 204 videos comprising 1882 key frames, with an accuracy of 94.66%. In the future, computational optimization for smartphones is recommended; on the other hand, the computational power of mobile technology can be enhanced to enable the complex image processing and machine learning tasks.

    Acknowledgement:The work presented in this paper is part of an ongoing research funded by Yayasan Universiti Teknologi PETRONAS Grant(015LC0-311 and 015LC0-029).

    Funding Statement:The authors received no specific funding for this study.

    Conflicts of Interest:The authors declare that they have no conflicts of interest to report regarding the present study.
