
    A Real-Time Oral Cavity Gesture Based Words Synthesizer Using Sensors

Computers, Materials & Continua, 2022, Issue 6

Palli Padmini, C. Paramasivam, G. Jyothish Lal, Sadeen Alharbi and Kaustav Bhowmick

1 Department of Electronics & Communication Engineering, Amrita School of Engineering, Bengaluru, Amrita Vishwa Vidyapeetham, India

2 Center for Computational Engineering and Networking (CEN), Amrita School of Engineering, Coimbatore, Amrita Vishwa Vidyapeetham, India

3 Department of Software Engineering, College of Computer and Information Sciences, King Saud University, Riyadh, Saudi Arabia

4 Department of Electronics and Communication Engineering, PES University, Bengaluru, India

Abstract: The present system experimentally demonstrates the synthesis of syllables and words from tongue maneuvers in multiple languages, captured by only four oral sensors. A prototype tooth model was used for an experimental demonstration of the system in the oral cavity. Based on the principle developed in a previous publication by the author(s), the proposed system has been implemented using oral cavity (tongue, teeth, and lips) features alone, without the glottis and the larynx. The positions of the sensors in the proposed system were optimized based on articulatory (oral cavity) gestures estimated by simulating the mechanism of human speech. The system has been tested for all English alphabets and several words with sensor-based input, along with an experimental demonstration of the developed algorithm in which limit switches, a potentiometer, and flex sensors emulate the tongue in an artificial oral cavity. The system produces the sounds of English vowels, consonants, and words, along with the pronunciation of the meanings of their translations in four major Indian languages, all from oral cavity mapping. The experimental setup also caters to gender mapping of voice. The sound produced by the hardware has been validated by a perceptual test in which listeners verified the gender and the word of each speech sample, with ~98% and ~95% accuracy, respectively. Such a model may be useful to interpret speech for those who are speech-disabled because of accidents, neuron disorders, spinal cord injury, or larynx disorders.

Keywords: English vowels and consonants; oral cavity; proposed system; sensors; speech-disabled; speech production; vocal tract model

    1 Introduction

Communication is essential in modern society, in every environment and social aspect of people's lives, private or public. Statistics show that there are 400 million disabled people in the developing world [1]. More than nine million people in the world have voice and speech disorders [2–4]. Speech impediments are conditions in which normal speech is interrupted, due to vocal cord paralysis, vocal cord damage, accidents, brain damage, laryngeal disease [5–7], laryngeal disorders [8], dysarthria, cerebral palsy [9], neuron disorders, old age, oral cancer, muscle weakness, respiratory weakness [10], etc.

Assistive technologies (AT) and speech synthesis systems help the severely disabled to communicate their intentions to others and effectively control their environments, enabling them to pursue self-care, educational, vocational, and recreational activities. The present work aims at the development of a mechanism to benefit patients with speech disorders based on oral cavity maneuvering only, without glottal or laryngeal intervention. Several techniques for synthesizing speech for the speech-disabled are currently available in the commercial market, each with its own merits and demerits. A comprehensive summary of the state of the art in these speech synthesis techniques is given in Tab. 1.

    Table 1: Description of speech synthesis techniques


The summary drawn from the aforesaid literature is that speech synthesis systems have mostly used electrodes, sensors, or visual feature information captured from the lips or hands for speech production. The tongue drive system has been explored for recognizing oral cavity activities and controlling remote devices; although promising, it has not been explored for speech production. Based on tongue gestures, the author(s) have previously estimated optimized formants of the oral cavity for the English alphabet, to prove that the tongue plays a key role in speech production [20]. Starting from the basics of phonetics, an articulatory-system-based method (including tongue, teeth, and lips) was initially proposed by the author(s), with accurate gestures listed for each English alphabet, and was tested for the English alphabet using LabView [21]. However, an initial practical demonstration was not published by the author(s) until later [20], and even then not for complete words.

The main objective of the proposed work is to synthesize sounds/words based on the movements of the oral cavity (tongue, lips, and teeth) during the production of human speech, especially the articulation of English vowels, consonants, and sequences of letters, i.e., words. The tongue, which has thus far only been characterized clinically, is considered the main player in English vowel and consonant production; References [20,21] by the author(s) previously established this, leading to the present work. In this paper, a tongue-based system has been conceptualized for producing human speech, to create a demonstrable device that can produce up to whole words. Thus, the solution demonstrated here can potentially help people with speech disabilities not only in social life, but also in their healthcare, professional life, family life, etc.

    The main contributions of the work are as follows:

· Identification of the oral cavity gestures for each vowel and consonant during the production of human speech.

· Optimization of the number and positions of the main sensors needed to implement tongue-driven speech synthesis. The optimized setup is implemented via an in-house built device, and its performance is experimentally demonstrated.

· Implementation of the concept of the proposed system in hardware for use in real-time applications, allowing speech-disabled people to produce words in English and major Indian languages.

The remainder of the paper is organized as follows: Section 2 explains the system of human speech production, followed by the proposed system in Section 3. Section 4 describes the experimental hardware setup, followed by the conclusion and expectations for future research in Section 5.

    2 Oral Cavity Gestures Identification Based on Human Speech Production System

In the speech production model, air is forced from the lungs by the contraction of muscles around the lung cavity. The pitch of the sound is determined by the vibration of the vocal cords. The excitation of the vocal tract results from periodic puffs of air. The vocal tract (between the vocal cords and the lips) acts as a resonator that spectrally shapes the periodic input so that the respective sound is produced at the lips [10].

The parts of the vocal tract that form sounds during speech production are called articulators, and include the glottis, larynx, jaw, tongue, lips, and teeth. The articulatory gestures of existing speech production models involve the glottis, larynx, tongue, lips, teeth, and palate for producing specific sounds. The first step was to understand and analyze the existing mechanism of speech production in the vocal tract using the VocalTractLab 2.3 software (VTL 2.3) [22]. It identifies the articulators that play an important role in improving acoustic simulation in VTL 2.3. This model represents the surfaces of the articulators and the vocal tract walls. The shape and/or position of the articulators is defined by vocal tract parameters for the glottis, larynx, jaw, tongue, teeth, and lips.

The shapes of the vocal tract during the articulation of a few English vowels and consonants in the existing model, obtained using the VTL 2.3 software, are shown in Fig. 1. This makes it easy to visually compare the vocal tract model shapes for the pronunciation of English syllables, especially when they are displayed as 2D or 3D contour images. The tongue and teeth parameters define the tongue height (h) and tongue frontness (l), which specify the tongue shape for each English alphabetic sound [20]. The velum is the soft tissue constituting the back of the roof of the mouth (the soft palate); the degrees of lip rounding and velum opening are specified by the lip-rounding and velum-position parameters. When a parameter (tongue, teeth, or lips) is changed, the corresponding change in the vocal tract shape can be observed in Fig. 1 for a few English vowels and consonants.

Figure 1: 3D representation of the vocal tract model while pronouncing English (a) Vowels and (b) Consonants using the VTL 2.3 software [22]

Again, the vocal tract (VT) animation with the glottal area, larynx height, jaw height (which resembles tongue displacement), tongue position, tongue tip/apex, and lips from the vocal tract acoustics demonstrator (VTDemo) software [23] is shown in Fig. 2 for the articulatory synthesizer.

VTDemo shows the vocal tract positions during sound synthesis. When the articulatory parameters change, the sound that is heard changes. By varying the height of the jaw, the displacement of the tongue body, the tongue position, the tongue tip, and the lips in the VTDemo articulatory synthesizer software, researchers can observe the production of different vowel sounds [23]. The shapes of the articulatory gestures during the production of English vowels and consonants using VTDemo are shown in Fig. 2, which helps to identify the oral cavity gestures of each alphabet.

Figure 2: Oral cavity gestures while pronouncing a few English (a) Vowels and (b) Consonants

The observation and analysis of the existing vocal tract speech production systems using VTL and VTDemo, as shown in Figs. 1 and 2, for both vowels and consonants help us to estimate the correct locations/gestures at which to capture the oral cavity parameters, i.e., the tongue, teeth, and lip position data, for the production of each English alphabet without the use of the glottis and larynx. Thus, the focus of the study is to use only oral cavity movements (tongue, teeth, lips, and jaw), with the tongue being the most important [20], to produce sound. This requires building the proposed sensor-based system for speech production for the speech-disabled using only oral cavity gestures, as described in the following sections.

    3 Proposed System to Synthesize Speech Using Matlab GUI

From the literature review, we found that there are not many studies that focus only on the oral cavity articulatory gestures for speech production, except the author(s)' previous work [21]. Thus, the proposed system models the human speech production mechanism using only the gestures of the oral cavity, i.e., jaw height, lips, tongue body displacement, tongue position, and tongue tip, without using the glottis and larynx. The present work concentrates mainly on the functionality of the lips and tongue (in the oral cavity) in the production of English vowels and consonants. The production of each sound depends on the degree of lip opening (lip movements), the tongue tip, the tongue body displacement, and the tongue-teeth position.

The front and side views of the oral cavity are shown in Figs. 3a and 3b, respectively. The four estimated sensor positions of the proposed system for speech production are highlighted by red dots in Fig. 3b.

Moreover, a set of gestural descriptors is used to represent the contrastive ranges of gestural parameter values discretely [24]. Each descriptor specifies the set of articulators involved in a given gesture and the numerical values of the dynamic parameters that characterize the gesture. Every gesture can be specified by distinct descriptor values for the degree of constriction. Fig. 4 shows the gestural dimensions and their descriptors in detail, including a comparison with current proposals of feature geometry [24].

Figure 3: Estimated four sensor positions in the oral cavity (a) Front view (b) Side view

Figure 4: Inventory of articulator sets and associated parameters [24]

Oral gestures involve pairs of tract variables that specify the degree of constriction of the tongue. For simplicity, we refer to the sets of articulators involved in oral gestures, viz., the lips, tongue tip (TP), tongue-teeth position (TTP), and jaw height (JH), for gestures involving constriction of the tongue body. These gestures help to differentiate each English alphabet easily. The individual gestures at each sensor position, with different articulatory gestures, are shown successively in Fig. 5.

Figure 5: Schematic representation of (a) Tongue-teeth position (b) Tongue displacement (c) Tongue tip (d) Lips while producing English vowels and consonants [10]

Data from all four sensor positions, which capture the gestures of the tongue, teeth, and lips in the oral cavity, along with the side and front views of the oral cavity during the production of each English vowel, are shown in Fig. 6. For example, to produce the sound /a/, the tongue tip is neutral, the tongue displacement is narrow, the lips are open, and the tongue-teeth position is back, as highlighted by red dots. Generalized sensor-based inputs (i.e., jaw height, tongue body, tongue tip, and lips) for a few English vowels and consonants are given in Fig. 6.

    Figure 6: Four sensor positions along with a front view of the oral cavity gestures for a few English vowels and consonants

Likewise, we can produce the sound of any English vowel or consonant using the captured four sensor positions of the oral cavity gestures. Different combinations of the four articulatory gestures produce different English sounds. If the four articulatory gesture inputs do not match any of these combinations, no sound is produced.

The articulatory gestures conveyed through the sensor input are monitored and captured continuously to synthesize sequences of vowels and consonants, i.e., words. The gestural score for the utterance of each word is based on articulatory phonology, as shown in Fig. 7 [25]. The rows correspond to distinct organs (JH = "Jaw Height," TD = "Tongue Body," TP = "Tongue Tip," Lips). The labels in the boxes stand for the gesture's goal specification for that organ; for example, "alveolar" stands for a tongue tip constriction. Within each syllable, gestures are critically coordinated, or phased, with respect to one another, with lines representing greater bonding strengths between coordinated gestures [25]. The gestural scores for uttering a few words, with boxes and tract variable motions as generated by the computational model (coordinated pairs of gestures), are shown in Fig. 7.

Figure 7: Schematic gestural scores. (a) "add" (b) "had" (c) "bad" (d) "pad" (e) "pan" (f) "dad" (g) "span" (h) "palm" (i) "team"

Fig. 7 can be substantiated by displaying the gestural scores while reciting each particular word. Phonological items contrast gesturally in that a given gesture may be present or absent (e.g., "add" vs. "had," Figs. 7a, 7b; "add" vs. "bad," Figs. 7a, 7c; "bad" vs. "pad," Figs. 7c, 7d; "pad" vs. "pan," Figs. 7d, 7e; "pan" vs. "span," Figs. 7e, 7g). These combinations are pointed out and highlighted with different red shapes. In speech mode, "had" and "bad" would typically be considered to differ from "add" by the presence of a segment, while "bad" and "pad," or "pad" and "pan," differ only in a single feature, voicing or nasality, respectively. Another kind of contrast is that in which gestures differ in their assembly, i.e., by involving different sets of articulators and tract variables, such as lip closure vs. tongue tip closure (e.g., "bad" vs. "dad," Figs. 7c, 7f). All these differences are categorically distinct.
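To make the notion of a gestural score concrete, the sketch below (in Python) encodes each gesture as an organ, a goal label, and a normalized activation interval, in the spirit of Fig. 7a for "add". The specific timing values and goal labels are illustrative assumptions, not data from the computational model.

    # Hypothetical encoding of the gestural score for "add" (cf. Fig. 7a):
    # each gesture = (organ, goal specification, start, end), with times
    # normalized to the utterance; the values below are illustrative only.
    gestural_score_add = [
        ("TD", "vowel /ae/ (tongue body)", 0.0, 0.6),
        ("TP", "alveolar closure (/d/)",   0.5, 1.0),
    ]

    def active_gestures(score, t):
        """Return the gestures whose activation interval contains time t."""
        return [(organ, goal) for organ, goal, start, end in score if start <= t <= end]

    # In the overlap region, the two gestures are phased with respect to each other:
    print(active_gestures(gestural_score_add, 0.55))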

The response of the complete proposed system for speech production was validated by creating a GUI in Matlab. In our proposed system, four sensor data inputs, i.e., tongue body displacement or jaw height, tongue position, tongue tip, and lip position, were used for the speech synthesis system. We made a list of the positions of each sensor, as shown in Fig. 6, for each English alphabet. If the combination of the four sensor positions matches the pre-defined look-up table as in Fig. 6, the GUI displays the respective letter and produces the corresponding sound. The four sensor inputs are selected manually using the drop-down arrow; the GUI for speech production is built for English vowels and consonants using Matlab, as shown in Fig. 8. The steps to build the GUI in Matlab are as follows:

1. First, select the sensor inputs manually using the drop-down button.

2. The sensor inputs are checked against the pre-defined table; on a match, the respective letter is displayed in the text box.

3. Text-to-speech conversion in Matlab is used to produce the speech sound.

4. For the sound produced, the pitch is calculated and the sound waveform is displayed on the screen.

Thus, the GUI shows the four sensor inputs, the respective letter of the alphabet displayed in the text box, the speech signal waveform, the pitch of the sound, and the equivalent sound. For example, if the sensor inputs open, front, narrow, and neutral are selected, then according to the predefined table this combination should produce the /c/ sound, as shown in Fig. 8b.
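The matching step behind this GUI can be sketched as follows. The original implementation is in Matlab; the Python version below, with a two-entry look-up table in the style of Fig. 6 and the pyttsx3 text-to-speech package standing in for Matlab's TTS call, is only an illustrative assumption.

    import pyttsx3  # offline text-to-speech engine; an assumed stand-in for Matlab TTS

    # Partial look-up table: (lips, tongue-teeth position, tongue displacement,
    # tongue tip) -> letter, following the style of Fig. 6 (entries illustrative).
    LUT = {
        ("open", "back",  "narrow", "neutral"): "a",
        ("open", "front", "narrow", "neutral"): "c",
    }

    def synthesize(lips, ttp, td, tp):
        letter = LUT.get((lips, ttp, td, tp))
        if letter is None:
            return None              # no match in the table: no sound is produced
        engine = pyttsx3.init()
        engine.say(letter)           # produce the corresponding letter sound
        engine.runAndWait()
        return letter

    print(synthesize("open", "front", "narrow", "neutral"))  # -> "c", as in Fig. 8b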

Figure 8: GUI of the proposed system using Matlab for speech synthesis based on the sensor input of (a) Sequence of vowels (AEIOU) (b) Consonant production

The sensor-based input continuously monitors the positions of the tongue, tongue-teeth, and lips during the articulation of a sequence of vowels and consonants using text-to-speech synthesis (TTS), whose GUI is shown in Figs. 8a and 8b, respectively.

In our proposed system, speech is synthesized by capturing the different articulatory gestures of the oral cavity (tongue, lips, teeth) through sensor-based input data. This leads to a hardware design that captures the role of each articulator in speech production using appropriate sensors and electronic devices. Furthermore, we implemented a sensor-based hardware experimental version of the proposed system. The hardware setup is described and discussed in the following section.

    4 Real-Time Speech Production System for Speech-Disabled Using Tongue Movements

A speech-impaired person can produce/pronounce a letter/word distinctly using tongue movements. In our proposed hardware system, speech is produced using four sensors, placed inside a prototype tooth model, that capture tongue movements. The bending and rolling of the tongue at various degrees, and its subsequent touches, are recorded. The estimated look-up table (LUT) is coded into the microcontroller's memory. The produced letter is played over a speaker using parallel communication with a micro-SD card. This section describes the hardware components, the flow of the algorithm, and the features of the proposed hardware system. It also analyzes the output of the proposed hardware system, which is validated by a perception test.

    4.1 Proposed Hardware Experimental Setup

The proposed system was initially limited to the pronunciation of English letters and words, using a prototype model of teeth perceived as a stand-in for the biological system, and was extended to other languages using a language translator. In our proposed system, speech production happens only from movements of the tongue inside the oral cavity. Based on knowledge of existing speech production systems, we concluded that four sensor positions are required for speech production using oral cavity movements, as discussed in Section 3.

The hardware prototype is devised to sense jaw movements and tongue flexibility. In the proposed system, a setup to analyze the speech sound from a laptop and an LCD screen is shown in Fig. 9a. To capture the tongue and jaw movements (including the tongue-teeth position and lips), we use two limit switches, a potentiometer, and a flex sensor, as shown in Fig. 9b. One can track the oral cavity movements during articulation and produce the respective sounds on the LCD of the speech assistant device and through an electric speaker.

The components required to set up the mini-prototype to synthesize words comprise an Arduino UNO, a Bluetooth module, a flex sensor, a potentiometer, connecting wires, a breadboard, and a rechargeable 9 V DC battery with connectors, as shown in Fig. 9b.

Two limit switches were placed in the upper and lower palates to indicate whether the tongue touches the palate. The potentiometer is placed on the tooth set to sense the jaw movement and lip positions, and the flex sensor is placed on the tongue to identify the twist and roll positions of the tongue, as shown in Fig. 9b.

Figure 9: (a) The hardware experimental setup (b) Hardware components of the proposed system for sound/word production

It is assumed that the vocal tract is not functional because the person has a larynx disorder or a voice-box problem. The algorithm is designed to observe the tongue, teeth, and lip functions and capture them through tongue movement, tongue touches, and jaw movement. A unique algorithm was created to pronounce every letter of the alphabet distinctly. A LUT is formed from the gestures made during the articulation of each vowel or consonant for digital data acquisition. The code is optimized with a calibration for every sensor so that the algorithm runs on sensible data. The prototype of the speech assistant system has been designed to fit a small-scale model of human dummy teeth with minimal circuitry. The flow diagram for the real-time speech production algorithm is shown in Fig. 10.

Figure 10: Flow diagram of the proposed system for sound production
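A compact way to express this flow is the loop below, written in Python for illustration. The serial port name, the comma-separated frame format from the Arduino, the LUT entries, and the playsound call for the stored letter sounds are all assumptions; the +/-0.5 tolerance on the analog sensors follows Section 4.1.

    import serial                     # pyserial; wired link to the Arduino (assumed)
    from playsound import playsound   # assumed playback of letter sounds stored on SD

    # Illustrative LUT rows (cf. Tab. 3): S1, S2 are palate limit switches (0/1);
    # S3, S4 are flex/potentiometer readings in cm. Values are placeholders.
    LUT = {"a": (1, 0, 2.0, 1.5), "d": (0, 1, 1.0, 2.5)}

    port = serial.Serial("/dev/ttyUSB0", 9600)      # hypothetical port name
    while True:
        frame = port.readline().decode().strip()    # e.g. "1,0,2.0,1.5" (assumed format)
        s1, s2, s3, s4 = (float(v) for v in frame.split(","))
        for letter, (r1, r2, r3, r4) in LUT.items():
            # exact match on the switches, +/-0.5 cm tolerance on the analog sensors
            if (s1, s2) == (r1, r2) and abs(s3 - r3) <= 0.5 and abs(s4 - r4) <= 0.5:
                print(letter)                        # display the letter on the screen
                playsound("sounds/%s.wav" % letter)  # play the stored letter sound
                break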

There are two independent aspects of the hardware (the dummy tooth/jaw set and the speech assistant device) and one interdependent aspect (the interface circuitry). They are treated in what follows.

· Dummy tooth/jaw set: Such a set is used by dentists to demonstrate how teeth, cavities, and gums interrelate; here it serves as a dummy human tooth/jaw set. In this work, the tooth set is used as a medium to fix the sensors and acquire data. We placed sensors in different positions to capture the relevant oral cavity gestures during the production of each alphabet, and retained the sensors in the optimal positions that give good gestural information for identifying the produced sound, which improves the accuracy/performance of the system. Thus, during the experimental trials, we consider four sensors (S1-S4) affixed to the dummy tooth/jaw set to acquire the data/values that are sufficient/required to produce a particular letter or word.

· Interface circuitry: This circuitry enables the communication between the dummy tooth set and the speech assistant device. The input sensor values from the dummy tooth set are sent to the speech assistant device.

· Speech assistant device: This is the final and most important part of the hardware. Its basic features are:

    o It stores letters and words.

o It has a user-friendly graphical user interface (GUI).

    o A control system allows users to choose words from lists and announce them to fellow listeners.

o With the use of a dummy tooth/jaw set, it can save the pronounced word for future use.

o Machine learning and natural language processing libraries were incorporated for the synthesis of words in any language that Google Translate supports.

    o Offline and online modes are available for the user’s benefit.

The hardware components required for the experimental setup are enumerated in Tab. 2.

    Table 2: The hardware components required for the experimental setup


Two limit switches, a flex sensor, and a potentiometer were placed in the oral cavity to capture the oral cavity gestures. Based on these gestures, we capture and obtain data from the sensors affixed to the dummy tooth set. The oral cavity gestures at different positions during the articulation of different sounds are shown in Fig. 11; they are the same as the oral cavity gestures discussed in Section 3 and shown in Fig. 6. The dummy tooth/jaw setup is actuated to exhibit the various gestures associated with different letters and words of the English language, a few of which are shown in Fig. 11.

Figure 11: (a) Multiple degrees of jaw gestures (b) Various tongue height and advancement gestures

The input sensor data values captured from the different oral cavity gestures were sent to the interface circuitry and the speech assistant device to pronounce a sound. The respective output speech is heard from the speaker when the four sensor values match an entry of the LUT, within their allowed variability. Tab. 3 shows the LUT values, in centimeters, of the four input sensors for a few English letters. The LUT is used to interface the sensor output values in accomplishing the speech synthesis of our proposed hardware setup.

Table 3: The LUT values of the four sensors for a few English letters

The input sensor data was tested against the LUT (see Tab. 3). The LUT values of sensors 1 and 2 (S1, S2) define, as 0 or 1, whether the tongue touches the upper and lower palates. Sensors 3 and 4 (S3, S4) define the tongue-teeth position and the tongue tip using the flex sensor and potentiometer, as shown in Tab. 3. As the sensors we consider are electrical devices, a tolerance/error must be considered. We use tolerance values of +/-0.5 on the sensor values to get effective results. These tolerance values help us extract effective information from the sensors even when the electrical sensors/devices are affected by temperature/heating issues.

If the input sensor data match the LUT, the respective letter is displayed on the screen; if not, no letter is printed. The displayed letter is passed to the function called for audio play through serial-parallel interface communication with the SD card (flash memory). The stored letter sound is then heard through the electric speaker. These steps are repeated while taking the sensor data continuously to produce a letter or a sequence of letters (a word), and the relevant letter is printed on the LCD screen.

    4.2 Features of the Proposed Hardware System

The proposed system is built keeping in view the original idea of pronouncing letters and extrapolating it to synthesize words. The features of the experimental setup are enumerated as follows.

· Prototype of a small-sized dummy teeth model: The speech assistant system is designed to fit the small-scale human teeth model with minimal circuitry. It has been designed to follow the same LUT. The model is now more realistic and easier to use.

· Interface of the dummy teeth model with the GUI-based speech assistant: The dummy teeth model interfaces with the Raspberry-Pi-enabled speech assistant system wirelessly. The data collected from the sensors are sent via Bluetooth to the speechstant system, and the user can see the letters being printed on the display.

· User-friendly design of the speech assistant system: The user can see the letters on the display of the speech assistant system. The user can store and delete these letters to make meaningful words and store them permanently in a predefined list of words to be pronounced.

· Application-oriented approach for enlisting favorite words: The user is given a list of frequently used words as part of a favorites list, which one uses daily to make announcements through the speech assistant. This intuitive approach makes the system a credible asset for a user who faces a speech disability disorder.

· Translation of words into any language: The user can save any meaningful word to the list. The default setting is English, and four language settings are added: Hindi, Kannada, Tamil, and Telugu. In the online mode, the word in the selected language is announced/synthesized by the Google speech assistant, which closely matches the native accent in which the user desires the word to be announced for fellow listeners (a minimal sketch of this online translation step follows this list). This brings a new dimension to the project.

· Interrupt-oriented switching from online to offline mode: This feature is provided particularly for a user who is in online mode but may not have strong Internet connectivity. In such a case, an interrupt-based dedicated program allows the user to switch to offline mode and announce the chosen word to fellow listeners immediately. The device can pronounce the word in online mode again when it finds strong Internet connectivity.
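As a rough sketch of the translation-and-announcement path in online mode (referred to in the list above), the snippet below uses the gTTS package as an assumed stand-in for the Google speech service on the device; the word, language codes, and output file are illustrative.

    from gtts import gTTS  # Google text-to-speech client; an assumed stand-in
                           # for the device's online announcement path

    # Language codes for the four added settings plus the English default.
    LANGS = {"English": "en", "Hindi": "hi", "Kannada": "kn", "Tamil": "ta", "Telugu": "te"}

    def announce(word, language="English", out_file="announce.mp3"):
        """Synthesize the word in the selected language (requires Internet)."""
        gTTS(text=word, lang=LANGS[language]).save(out_file)
        return out_file

    announce("ghar", language="Hindi")  # e.g., the Hindi word shown in Fig. 15e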

    4.3 Basic GUI Features

The speech assistant (speechstant) device has a user-friendly GUI through which the user can choose the gender of the voice sample and the language, choose words from predefined lists, and store letters/words by operating a joystick. The chosen or stored letter or word is then announced to fellow listeners. The speech assistant device uses machine learning and natural language processing libraries for the pronunciation of words in any language that Google Translate supports. It has offline and online modes for the user's benefit. The GUI features are described in Tab. 4. The block diagram and hardware setup of the joystick module and the GUI features are shown in Fig. 12.

    Table 4: Basic GUI features description


The hardware setup is connected via Bluetooth before the joystick is operated. The input sensor values are analyzed, and the output based on the oral cavity movements is displayed on the LCD screen, which is operated through the joystick of the speechstant device. A block diagram of the joystick module is shown in Fig. 12a.

    Figure 12: (a) Joystick module (b) Speechstant device (c) Language setting page (d) Scan page(e)Assistant page(f)Assistant page in basic mode

The blue light at point 6 indicates that a switch press has been read, and it displays the processing status. If point 6 shows no sign, the user must first connect to the dummy tooth set, as shown in Fig. 12b. The joystick up button selects the online mode and the language setting page, in which the up and down buttons of the joystick are used to select the preferred language (Kannada, Tamil, Telugu, or Hindi; the default is English), as shown in Fig. 12c. Similarly, the center and left buttons are used to confirm a selection or to move back, with English set as the default language. The left switch engages the scan mode when the user wants to store a letter to make a sentence, communicating with the dummy tooth set over Bluetooth. Fig. 12d shows the scan page. Pointer 1 shows that the data are coming continuously, and the user can store a letter within 4 s using the right switch; after that, it appears at pointer 2. At that point, the user can delete the last letter from the word using the left switch. The down switch selects either the male or the female voice, depending on the disabled person's choice (the default is female). The right switch is used to select what the user wants to speak from a predefined list. The center switch (point 5) of the joystick is not functional.

When a word is ready, the user stores it to a basic file by clicking the center button, after which the saved message appears and the page refreshes. To go to the home page, the up switch is used. Fig. 12e shows the assistant page, where a user can choose the favorites file (predefined words that are used frequently or for emergency purposes) or the basic file (words already stored via the dummy tooth set) with the up and down buttons, and select it by pressing the center button. Fig. 12f shows the second page of the assistant mode, where a user can choose a word with the up/down buttons; the arrow points to the word the user is deciding on. If the user wants to speak that word, he or she just presses the center button. If the user wants to delete any line, the user must hold the right switch for 2 s. When the user has finished speaking, he or she moves to the left switch and holds it to go to the home page. The program must be restarted when the user wants to move from offline mode to online mode.
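The page-and-button behavior just described can be summarized as a small state machine. The sketch below is a hypothetical rendering of that logic in Python; the page names, button names, and transitions paraphrase the text and are not the device firmware.

    # Hypothetical sketch of the joystick-driven menu logic described above.
    def handle(page, button, state):
        if page == "home":
            if button == "up":    return "language"    # online mode + language page
            if button == "left":  return "scan"        # store letters over Bluetooth
            if button == "right": return "assistant"   # speak from a predefined list
            if button == "down":                       # toggle voice gender
                state["voice"] = "male" if state["voice"] == "female" else "female"
        elif page == "scan":
            if button == "right":                      # store the incoming letter (4 s window)
                state["word"] += state["incoming_letter"]
            if button == "left" and state["word"]:     # delete the last stored letter
                state["word"] = state["word"][:-1]
        return page

    state = {"voice": "female", "word": "", "incoming_letter": "a"}
    page = handle("home", "left", state)   # -> "scan"
    page = handle(page, "right", state)    # stores the letter "a"
    print(page, state["word"])             # scan a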

    4.4 Hardware Experimental Results

The readings of the sensors, i.e., the variations of the limit switches, the flex sensor, and the potentiometer, were displayed on the laptop screen with the help of the Arduino board, as shown in Fig. 13a. The respective output is displayed on the LCD through the I2C module, as shown in Fig. 13b. The produced output sound was heard from the electric loudspeaker according to the sensor data, based on matching with the LUT (see Tab. 3), or from a predefined list stored on the SD card.

Figure 13: The system for speech production, which displays the values of the sensor inputs on (a) Laptop (b) LCD screen

When the acknowledgment switch was ON {1}, the hardware setup read the input sensor data. Then, when the acknowledgment switch was OFF {0}, the setup displayed the corresponding English letter based on the match of the input sensor data with the LUT, and the same letter was produced aloud by the electric speaker.

The speechstant device displays the sensor values based on the input captured by the sensors placed in the oral cavity during oral cavity movements. Output screenshots of the LCD screen during the experiment, from the initial page to the displayed output letters/words, are shown in Fig. 14.

The initial page of the speech assistant device is shown in Fig. 14a. A user moves the joystick down to choose the male voice (the default is the female voice), as shown in Fig. 14b. The sensor input data from the tooth set are matched with the LUT, and the respective letter is displayed on the LCD once the store button (Str) is pressed. The user can press the delete button (Dlt) to avoid storing it and producing the sound. The word is then saved in basic mode, and the sound is produced through the speaker in the voice of the chosen gender. The outputs for the single letter "a" and the three-letter word "add" are shown in Figs. 14c and 14d. The favorites and basic file modes are shown in Figs. 14e and 14f.

Figure 14: The LCD screen (a) Initial page (b) Gender selection (c) One-letter output (d) Three-letter word output (e) Frequently used word list (f) Predefined list

The output sound waveforms produced from the sensor input values interfaced with the proposed hardware system are shown in Fig. 15 for both female and male voices. In general, female voices have a higher pitch (fundamental frequency) than male voices, and the same difference can be observed in Fig. 15. The comparison between natural speech and the corresponding synthesized speech waveforms is shown in Fig. 16; we observed that they are almost similar.

Figure 15: The output waveforms of female and male voices (a) letter "a" (b) "add" [English] (c) "baguna" [Telugu] (d) "banni" [Kannada] (e) "ghar" [Hindi] (f) "panam" [Tamil]
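The pitch difference visible in Fig. 15 can be checked numerically. The sketch below estimates the fundamental frequency of a voiced frame by autocorrelation, using NumPy; the synthetic test tones (~220 Hz and ~120 Hz) are stand-ins for recorded female and male samples.

    import numpy as np

    def estimate_pitch(x, fs, fmin=50.0, fmax=500.0):
        """Estimate the fundamental frequency (Hz) of a voiced frame by autocorrelation."""
        x = x - np.mean(x)
        ac = np.correlate(x, x, mode="full")[len(x) - 1:]  # non-negative lags only
        lo, hi = int(fs / fmax), int(fs / fmin)            # plausible pitch-period lags
        return fs / (lo + np.argmax(ac[lo:hi]))

    fs = 16000
    t = np.arange(fs) / fs
    female_like = np.sin(2 * np.pi * 220 * t)  # synthetic stand-in, ~220 Hz
    male_like   = np.sin(2 * np.pi * 120 * t)  # synthetic stand-in, ~120 Hz
    print(estimate_pitch(female_like, fs), estimate_pitch(male_like, fs))  # ~220, ~120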

The output sounds produced from the hardware setup based on the sensor inputs differ in production time. The production time varies with the length of the word (the number of letters), as shown in Tab. 5.

Figure 16: The comparison between natural and synthesized speech waveforms for (a) /a/ (b) /w/

Table 5: The output sound production time in seconds for the letter "a", "add" [English], "baguna" [Telugu], "banni" [Kannada], "ghar" [Hindi], and "panam" [Tamil]

The output sound produced from the sensor input based on oral cavity gestures is validated by a Mean Opinion Score (MOS), which is discussed in the following subsection.

4.5 Mean Opinion Score (MOS)

The opinion score [29] was collected from 20 listeners (10 males and 10 females), all native speakers of British English aged 17–42 years, who were recruited to participate in the experiment. The listeners had no known speaking or hearing impairments. The test was devised to evaluate the quality of the synthesized speech produced through voice conversion. The opinion score measures how correctly the listeners judge the produced sound. A five-point scale was used, with five as the best score. The scores from the evaluation test are shown in Tab. 6: Tab. 6a shows the mean scores for correctly identifying the gender of the voice sample, and Tab. 6b shows the mean scores for correctly identifying the stated words over the proposed hardware system.

The overall performance in correctly identifying the speech was quite good for listeners of all ages. Approximately 98% accurately identified the gender, and there was 95% accuracy in identifying the words in the voice samples. The pairs of voice samples for the words add/had and pan/span were similar and created some perception or quality issues that left some listeners confused; this contributed to the 95% accuracy in identifying the voice samples of all the words.
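For reference, the MOS itself is just the arithmetic mean of the listeners' five-point ratings, and the identification accuracy is the fraction of correct responses; the scores below are hypothetical placeholders, not the study's data.

    from statistics import mean

    # Hypothetical five-point ratings from 20 listeners (5 = best).
    gender_ratings = [5, 5, 4, 5, 5, 5, 4, 5, 5, 5, 5, 5, 4, 5, 5, 5, 5, 5, 4, 5]
    print("MOS = %.2f / 5" % mean(gender_ratings))

    # Identification accuracy: correct responses over total responses.
    responses, truth = ["male", "male", "female", "male"], "male"
    accuracy = sum(r == truth for r in responses) / len(responses)
    print("accuracy = %.0f%%" % (100 * accuracy))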

    Table 6: Mean opinion scores

    5 Conclusion and Future Work

This paper presents an approach for speech production using oral cavity gestures, especially movements of the tongue, lips, and teeth. Our motivation is to make communication easier for the speech-disabled. Speech disability can occur because of cancer of the larynx, spinal cord injury, brain injury, neuromuscular diseases, or accidents. The four positions of the sensors in the proposed system were based on appropriate articulatory (oral cavity) gestures estimated from the mechanism of human speech, using VocalTractLab and the vocal tract acoustics demonstrator (VTDemo). From the study and analysis of existing vocal tract speech production physiology, we observed that the tongue plays a crucial role in speech production. An initial experiment was carried out by listing the positions of the oral cavity gestures for the respective sound productions; it was tested by developing a GUI using Matlab.

With the knowledge gained from the initial experiment, the hardware system was verified using an experimental dummy tooth set with four sensors, and it produces speech. The tongue and jaw movements in the dummy tooth set were captured by two limit switches, a potentiometer, and a flex sensor. These sensor data from the oral cavity movements are translated into a set of user-defined commands by efficient algorithms that analyze what is intended and create a voice for those who cannot speak. The output sounds can be heard from an electric speaker, and they are displayed on the screen of the speech assistant device. The system was extended to other languages, namely Hindi, Kannada, Tamil, and Telugu, using a language translator. Based on their choice, speech-disabled users can select a male or female voice sample. The results were validated by a perceptual test, in which ~98% of listeners accurately identified the gender of the voice, and there was ~95% accuracy in identifying the words in the voice samples. Thus, this system can help those who are speech-disabled because of accidents, neuron disorders, spinal cord injury, or larynx disorders to communicate easily.

In future research, the time delays during sound production can be reduced. The present work could be extended to generate sequences of words and long/whole sentences. Using the basic facts demonstrated in this study, it might be possible to build a chip-based system that wirelessly tracks the movements of the tongue and transmits the sensor data through Bluetooth to a personal computer, where the data are displayed and saved for analysis. Another objective will be to include emotion in the output speech of the proposed system, so that users can express their thoughts with emotion.

Acknowledgement: The authors thank all the participants who enabled us to validate these output sounds/words.

Funding Statement: The authors would like to acknowledge the Ministry of Electronics and Information Technology (MeitY), Government of India, for financial support through the scholarship for Palli Padmini during this research work, under the Visvesvaraya Ph.D. Scheme for Electronics and IT.

Conflicts of Interest: The authors declare that they have no conflicts of interest to report regarding the present study.
