
    A Real-Time Oral Cavity Gesture Based Words Synthesizer Using Sensors

Computers, Materials & Continua, 2022, Issue 6

Palli Padmini, C. Paramasivam, G. Jyothish Lal, Sadeen Alharbi and Kaustav Bhowmick

1 Department of Electronics & Communication Engineering, Amrita School of Engineering, Bengaluru, Amrita Vishwa Vidyapeetham, India

2 Center for Computational Engineering and Networking (CEN), Amrita School of Engineering, Coimbatore, Amrita Vishwa Vidyapeetham, India

3 Department of Software Engineering, College of Computer and Information Sciences, King Saud University, Riyadh, Saudi Arabia

4 Department of Electronics and Communication Engineering, PES University, Bengaluru, India

Abstract: The present system experimentally demonstrates the synthesis of syllables and words from tongue manoeuvers in multiple languages, captured by only four oral sensors. A prototype tooth model was used for an experimental demonstration of the system in the oral cavity. Based on the principle developed in a previous publication by the author(s), the proposed system has been implemented using the oral cavity (tongue, teeth, and lips) features alone, without the glottis and the larynx. The positions of the sensors in the proposed system were optimized based on articulatory (oral cavity) gestures estimated by simulating the mechanism of human speech. The system has been tested for all English alphabets and several words with sensor-based input, along with an experimental demonstration of the developed algorithm in which limit switches, a potentiometer, and flex sensors emulate the tongue in an artificial oral cavity. The system produces the sounds of English vowels, consonants, and words, along with the pronunciation of their translations in four major Indian languages, all from oral cavity mapping. The experimental setup also caters to gender mapping of voice. The sound produced by the hardware has been validated by a perceptual test in which listeners verified the gender and the word of each speech sample, with ~98% and ~95% accuracy, respectively. Such a model may be useful for interpreting speech for those who are speech-disabled because of accidents, neuron disorders, spinal cord injury, or larynx disorders.

Keywords: English vowels and consonants; oral cavity; proposed system; sensors; speech-disabled; speech production; vocal tract model

    1 Introduction

Communication is essential in modern society, in every environment and social aspect of people's lives, private or public. Statistics show that there are 400 million disabled people in the developing world [1]. More than nine million people in the world have voice and speech disorders [2–4]. Speech impediments are conditions in which normal speech is disrupted due to vocal cord paralysis, vocal cord damage, accidents, brain damage, laryngeal disease [5–7], laryngeal disorders [8], dysarthria, cerebral palsy [9], neuron disorders, old age, oral cancer, muscle weakness, respiratory weakness [10], etc.

Assistive technologies (AT) and speech synthesis systems help the severely disabled communicate their intentions to others and effectively control their environments, enabling them to pursue self-care, educational, vocational, and recreational activities. The present work aims at the development of a mechanism to benefit patients with speech disorders based on oral cavity maneuvering alone, without glottal and laryngeal intervention. Different techniques to synthesize speech for the speech-disabled are presently available in the commercial market, each with its own merits and demerits. A comprehensive summary of the state of the art in these speech synthesis techniques is given in Tab. 1.

    Table 1: Description of speech synthesis techniques


The summary drawn from the aforesaid literature is that speech synthesis systems have mostly used electrodes, sensors, or visual feature information from the lips or hands, captured for speech production. The tongue drive system has been explored for recognizing oral cavity activities and controlling remote devices; although promising, it has not been explored for speech production. Based on tongue gestures, the author(s) have previously estimated optimized formants of the oral cavity for the English alphabet, to show that the tongue plays a key role in speech production [20]. Starting from the basics of phonetics, an articulatory-system-based method (including tongue, teeth, and lips) with accurate gestures listed for each English alphabet was initially proposed by the author(s) and tested for English alphabets using LabView [21]. However, an initial practical demonstration was not published by the author(s) until later [20], and not yet for complete words.

The main objective of the proposed work is to synthesize sounds/words based on the movements of the oral cavity (tongue, lips, and teeth) during the production of human speech, especially the articulation of English vowels, consonants, and sequences of letters, i.e., words. The tongue, which has thus far only been clinically characterized, is considered the main player in English vowel and consonant production; References [20,21] by the author(s) previously established this, leading to the present work. In this paper, a tongue-based system has been conceptualized for producing human speech, to create a demonstrable device that can produce up to whole words. Thus, the solution demonstrated here can potentially help people with speech disabilities not only in social life, but also in their healthcare, professional life, family life, etc.

    The main contributions of the work are as follows:

· Identification of the oral cavity gestures for each vowel and consonant during the production of human speech.

· Optimization of the number and positions of the main sensors needed to implement tongue-driven speech synthesis. The optimized setup is implemented via an in-house-built device and its performance is experimentally demonstrated.

· Implementation of the concept of the proposed system in hardware for use in real-time applications, allowing speech-disabled people to produce words in English and major Indian languages.

The remainder of the paper is organized as follows: Section 2 explains the system of human speech production, followed by the proposed system in Section 3. Section 4 describes the experimental hardware setup, followed by conclusions and expectations for future research in Section 5.

    2 Oral Cavity Gestures Identification Based on Human Speech Production System

In the speech production model, air is forced from the lungs by the contraction of muscles around the lung cavity. The pitch of the sound is caused by the vibration of the vocal cords. The excitation of the vocal tract results from periodic puffs of air. The vocal tract (between the vocal cords and the lips) acts as a resonator that spectrally shapes the periodic input so that the respective sound is produced at the lips [10].

The parts of the vocal tract used to form sounds during speech production are called articulators, and include the glottis, larynx, jaw, tongue, lips, and teeth. The articulatory gestures of speech production in existing speech production models involve the glottis, larynx, tongue, lips, teeth, and palate for producing specific sounds. The first step was to understand and analyze the existing mechanism of speech production in the vocal tract using the VocalTractLab 2.3 software (VTL 2.3) [22]. It identifies the articulators that play an important role in improving acoustic simulation in VTL 2.3. This model represents the surfaces of the articulators and the vocal tract walls. The shape and/or position of the articulators is defined by vocal tract parameters such as the glottis, larynx, jaw, tongue, teeth, and lips.

In general, the shapes of the vocal tract during the articulation of a few English vowels and consonants in the existing model, obtained using the VTL 2.3 software, are shown in Fig. 1. This makes it easy to visually compare the vocal tract model shapes for the pronunciation of English syllables, especially when they are displayed as 2D or 3D contour images. The tongue and teeth parameters define the tongue height (h) and tongue frontness (l), which specify the tongue shape for each English alphabetic sound [20]. The velum is the soft tissue constituting the back of the roof of the mouth, called the soft palate; the degrees of lip rounding and velum opening are specified by the lip rounding and velum position parameters. When a parameter (tongue, teeth, or lips) is changed, the corresponding change in the vocal tract shape can be observed in Fig. 1 for a few English vowels and consonants.

Figure 1: 3D representation of the vocal tract model while pronouncing English (a) vowels and (b) consonants using the VTL 2.3 software [22]

Again, the vocal tract (VT) animation with the glottal area, larynx height, jaw height (which resembles tongue displacement), tongue position, tongue tip/apex, and lips from the vocal tract acoustics demonstrator (VTDemo) software [23] is shown in Fig. 2 for the articulatory synthesizer.

VTDemo shows the vocal tract positions during sound synthesis. When the articulatory parameters change, the sound that is heard changes. By varying the height of the jaw or the displacement of the tongue body, the tongue position, the tongue tip, and the lips in the VTDemo articulatory synthesizer software, researchers can observe the production of different vowel sounds [23]. The shapes of the articulatory gestures during the production of English vowels and consonants using VTDemo are shown in Fig. 2, which helps to identify the oral cavity gestures of each alphabet.

Figure 2: Oral cavity gestures while pronouncing a few English (a) vowels and (b) consonants

The observation and analysis of the existing vocal tract speech production systems using VTL and VTDemo, shown in Figs. 1 and 2 for both vowels and consonants, help us to estimate the correct locations/gestures at which to capture the oral cavity parameters (tongue, teeth, and lips position data) for the speech production of each English alphabet without the use of the glottis and larynx. Thus, the focus of the study is to use only the oral cavity movements (tongue, teeth, lips, and jaw), with the tongue being the most important [20], to produce sound. This requires building the proposed sensor-based system for speech production for the speech-disabled using only oral cavity gestures, as described in the following sections.

    3 Proposed System to Synthesize Speech Using Matlab GUI

From the literature review, we found that there are not many studies that focus only on the oral cavity articulatory gestures for speech production, except the author(s)' work [21]. Thus, the proposed system is built on the human speech production mechanism using only the gestures of the oral cavity, i.e., jaw height, lips, tongue body displacement, tongue position, and tongue tip, without using the glottis and larynx. The present work concentrates mainly on the functionality of the lips and tongue (in the oral cavity) in the production of English vowels and consonants. The production of each sound depends on the degree of lip opening (i.e., lip movements), the tongue tip, the tongue body displacement, and the tongue-teeth positions.

The front and side views of the oral cavity are shown in Figs. 3a and 3b, respectively. The four estimated sensor positions of the proposed system for speech production are highlighted by red dots in Fig. 3b.

Moreover, a set of gestural descriptors is used to represent the contrastive ranges of gestural parameter values discretely [24]. Each descriptor points to a set of articulators involved in a given gesture and to the numerical values of the dynamic parameters that characterize the gesture. Every gesture can be specified by distinct descriptor values for the degree of constriction. Fig. 4 shows the gestural dimensions and their descriptors in detail, including a comparison with current proposals of feature geometry [24].

Figure 3: Estimated four sensor positions in the oral cavity (a) front view (b) side view

Figure 4: Inventory of articulator sets and associated parameters [24]

Oral gestures involve pairs of tract variables that specify the degree of constriction of the tongue. For simplicity, we refer to the sets of articulators involved in oral gestures, viz., the lips, tongue tip (TP), tongue-teeth position (TTP), and jaw height (JH), for gestures involving constriction of the tongue body. These gestures help to differentiate each English alphabet easily. Individual gestures of each sensor position with different articulatory gestures are shown successively in Fig. 5.

Figure 5: Schematic representation of (a) tongue-teeth position (b) tongue displacement (c) tongue tip (d) lips while producing English vowels and consonants [10]

Data from all four sensor positions, which capture the gestures of the tongue, teeth, and lips in the oral cavity, along with the side and front views of the oral cavity during the production of each English vowel, are shown in Fig. 6. For example, to produce the sound /a/, the tongue tip is neutral, the tongue displacement is narrow, the lips are open, and the tongue-teeth position is back, as highlighted by red dots. Generalized sensor-based inputs (i.e., jaw height, tongue body, tongue tip, and lips) for a few English vowels and consonants are given in Fig. 6.

    Figure 6: Four sensor positions along with a front view of the oral cavity gestures for a few English vowels and consonants

Likewise, we can produce the sound of any English vowel or consonant from the captured four sensor positions of the oral cavity gestures. Different combinations of the four articulatory gestures produce different English sounds. If the four articulatory gesture inputs do not correspond to any of these combinations, no sound is produced.
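
As an illustration of this discrete matching, a minimal Python sketch (not the authors' code) is given below; the key order and the single /a/ entry are patterned on the description of Fig. 6, and all labels are illustrative placeholders.

```python
# Minimal sketch of the gesture-combination-to-letter mapping.
# All labels are illustrative; the real table covers every English letter.
GESTURE_LUT = {
    # (lips, tongue-teeth position, tongue displacement, tongue tip)
    ("open", "back", "narrow", "neutral"): "a",  # the /a/ example from Fig. 6
}

def gesture_to_letter(lips, teeth_pos, displacement, tip):
    """Return the matching letter, or None when the combination
    matches no stored gesture (so no sound is produced)."""
    return GESTURE_LUT.get((lips, teeth_pos, displacement, tip))

print(gesture_to_letter("open", "back", "narrow", "neutral"))  # -> a
print(gesture_to_letter("open", "front", "wide", "up"))        # -> None
```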

The articulatory gestures obtained through the sensor input are monitored and captured continuously to synthesize sequences of vowels and consonants, i.e., words. The gestural score for the utterance of each word is based on articulatory phonology, as shown in Fig. 7 [25]. The rows correspond to distinct organs (JH = "Jaw Height," TD = "Tongue Body," TP = "Tongue Tip," Lips). The labels in the boxes stand for the gesture's goal specification for that organ; for example, "alveolar" stands for a tongue tip constriction at the alveolar ridge. Within each syllable, critically coordinated gestures are connected, or phased, with respect to one another; these connections represent greater bonding strengths between coordinated gestures [25]. The gestural score for uttering a few words, with boxes and tract variable motions as generated by the computational model (coordinated pairs of gestures), is shown in Fig. 7.

Figure 7: Schematic gestural scores. (a) "add" (b) "had" (c) "bad" (d) "pad" (e) "pan" (f) "dad" (g) "span" (h) "palm" (i) "team"

Fig. 7 can be substantiated by displaying the gestural scores while reciting each particular word. Phonological items contrast gesturally according to whether each gesture is present or absent (e.g., "add" vs. "had," Figs. 7a, 7b; "add" vs. "bad," Figs. 7a, 7c; "bad" vs. "pad," Figs. 7c, 7d; "pad" vs. "pan," Figs. 7d, 7e; "pan" vs. "span," Figs. 7e, 7g). Those combinations are pointed out and highlighted with different red shapes. In speech terms, "had" and "bad" would typically be considered to differ from "add" by the presence of a segment, while "bad" and "pad," and "pad" and "pan," differ only in a single feature, voicing or nasality, respectively. Another kind of contrast is that in which gestures differ in their assembly, i.e., by involving different sets of articulators and tract variables, such as lip closure vs. tongue tip closure (e.g., "bad" vs. "dad," Figs. 7c, 7f). All these differences are categorically distinct.
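
To make the contrast-by-presence idea concrete, the sketch below renders gestural scores as organ-to-gesture mappings and diffs two words; the entries are hypothetical simplifications for illustration, not a transcription of Fig. 7.

```python
# Hypothetical, simplified gestural scores: organ -> gesture
# (organ labels as in Fig. 7). Entries are illustrative only.
SCORES = {
    "add": {"TP": "alveolar closure"},
    "bad": {"TP": "alveolar closure", "Lips": "closure"},
    "dad": {"TP": "alveolar closure", "TD": "pharyngeal wide"},
}

def contrast(w1, w2):
    """Organs whose gestures are present in one word but not the other."""
    return sorted(set(SCORES[w1]) ^ set(SCORES[w2]))

print(contrast("add", "bad"))  # ['Lips'] -- contrast by a present/absent gesture
```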

The response of the complete proposed system for speech production was validated by creating a GUI in Matlab. In our proposed system, four sensor data inputs, i.e., tongue body displacement or jaw height, tongue position, tongue tip, and lips position, were used for the speech synthesis system. We made a list of the positions of each sensor, as shown in Fig. 6, for each English alphabet. If the combination of the four sensor positions matches the pre-defined look-up table of Fig. 6, the respective letter is displayed in the GUI and the same sound is produced. The four sensor inputs are selected manually using the drop-down arrow; the GUI for speech production of English vowels and consonants built in Matlab is shown in Fig. 8. The steps to build the GUI in Matlab are as follows:

1. First, select the sensor inputs manually using the drop-down button.

2. The sensor inputs are checked against a pre-defined table; on a match, the respective letter is displayed in the text box.

3. Text-to-speech conversion in Matlab is used to produce the speech sound.

4. For the sound produced, the pitch is calculated and the sound waveform is displayed on the screen.

Thus, the GUI shows the four sensor inputs, the respective letter of the alphabet displayed in the text box, the speech signal waveform, the pitch of the sound, and the equivalent sound. For example, if the sensor inputs open, front, narrow, and neutral are selected, then according to the predefined table this combination should produce the /c/ sound, as shown in Fig. 8b.
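
For readers who prefer code to the step list, the following Python sketch mirrors steps 2–4 under stated assumptions: the pyttsx3 package stands in for Matlab's text-to-speech, and a plain autocorrelation peak stands in for the GUI's pitch calculation.

```python
# Sketch of steps 2-4 in Python rather than Matlab (hypothetical
# stand-ins: pyttsx3 for text-to-speech, autocorrelation for pitch).
import numpy as np
import pyttsx3

def speak(letter):
    engine = pyttsx3.init()
    engine.say(letter)       # step 3: text-to-speech for the matched letter
    engine.runAndWait()

def estimate_pitch(x, fs):
    """Step 4: crude pitch estimate (Hz) from the autocorrelation peak,
    searching only lags that correspond to 50-400 Hz."""
    x = np.asarray(x, dtype=float)
    x -= x.mean()
    ac = np.correlate(x, x, mode="full")[len(x) - 1:]
    lo, hi = int(fs / 400), int(fs / 50)
    lag = lo + int(np.argmax(ac[lo:hi]))
    return fs / lag
```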

Figure 8: GUI of the proposed system using Matlab for speech synthesis based on the sensor input of (a) a sequence of vowels (AEIOU) (b) consonant production

The sensor-based input continuously monitors the positions of the tongue, tongue-teeth, and lips during the articulation of a sequence of vowels and consonants using text-to-speech synthesis (TTS), whose GUI is shown in Figs. 8a and 8b, respectively.

In our proposed system, speech is synthesized by capturing the different articulatory gestures of the oral cavity (tongue, lips, teeth) from sensor-based input data. This yields a hardware design in which the role of each articulator in speech production is captured by appropriate sensors and electronic devices. Furthermore, we implemented the sensor-based hardware experimental version of the proposed system. The hardware setup is described and discussed in the following section.

4 Real-Time Speech Production System for the Speech-Disabled Using Tongue Movements

A speech-impaired person can produce/pronounce a letter/word distinctly using tongue movements. In our proposed hardware system, speech is produced using four sensors, which are placed inside a prototype tooth model to capture tongue movements. We observed the bending and rolling of the tongue to numerous degrees, and the subsequent touches were recorded. The estimated look-up table (LUT) is coded into the microcontroller's memory. The produced letter is played over a speaker using parallel communication with a micro-SD card. This section describes the hardware components, the flow of the algorithm, and the features of the proposed hardware system. It also analyzes the output of the proposed hardware system, which is validated by a perception test.

    4.1 Proposed Hardware Experimental Setup

The proposed system was initially limited to the pronunciation of the English alphabet and words, using a prototype model of teeth perceived as a biological system, and was then extended to other languages using a language translator. In our proposed system, speech production happens only from movements of the tongue inside the oral cavity. Based on knowledge of existing speech production systems, we concluded that four sensor positions are required for speech production using oral cavity movements, as discussed in Section 3.

The hardware prototype is improvised to sense jaw movements and tongue flexibility. In the proposed system, a setup to analyze the speech sound from a laptop and an LCD screen is shown in Fig. 9a. To capture the tongue and jaw movements (including tongue-teeth position and lips), we use two limit switches, a potentiometer, and a flex sensor, as shown in Fig. 9b. One can track the positions of the oral cavity movements during articulation and produce the respective sounds on the LCD of the speech assistant device and through an electric speaker.

The components required to set up the mini-prototype to synthesize words comprise an Arduino UNO, a Bluetooth module, a flex sensor, a potentiometer, connecting wires, a breadboard, and a rechargeable 9 V DC battery with connectors, as shown in Fig. 9b.

Two limit switches were placed in the upper and lower palates to indicate whether the tongue touches the palate. The potentiometer is placed on the tooth set to sense the jaw movement and lip positions, and the flex sensor is placed on the tongue to identify the twist and roll positions of the tongue, as shown in Fig. 9b.

Figure 9: (a) The hardware experimental setup (b) Hardware components of the proposed system for sound/word production

It is assumed that the vocal tract is not functional because the person has a larynx disorder or a voice-box problem. The algorithm is designed to observe the tongue, teeth, and lips functions and to capture them through tongue movement, tongue touches, and jaw movement. A unique algorithm was created to pronounce every letter of the alphabet distinctly. A LUT is formed using the gestures made during the articulation of each vowel or consonant for digital data acquisition. Optimized coding is incorporated for every sensor calibration on sensible data to run the algorithm. The prototype of the speech assistant system has been designed to fit the small-scale model of human dummy teeth with minimal circuitry. The flow diagram for the real-time speech production algorithm is shown in Fig. 10.

Figure 10: Flow diagram of the proposed system for sound production
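
A compact way to read Fig. 10 is as a sense-match-output loop. The Python sketch below simulates that loop on a host machine; read_ack, read_sensors, lut_match, display, and play_letter are hypothetical stand-ins for the Arduino inputs, the I2C LCD, and the SD-card playback of the real device, where the LUT sits in the microcontroller's memory.

```python
# Host-side simulation of the Fig. 10 flow (not the device firmware).
import time

def run_loop(read_ack, read_sensors, lut_match, display, play_letter):
    reading = None
    while True:
        if read_ack():                 # acknowledgment switch ON {1}: sample
            reading = read_sensors()   # (S1, S2, S3, S4)
        elif reading is not None:      # switch OFF {0}: match and output
            letter = lut_match(*reading)
            if letter is not None:
                display(letter)        # letter shown on the LCD
                play_letter(letter)    # stored sound played from the SD card
            reading = None
        time.sleep(0.05)               # illustrative polling interval
```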

There are two independent aspects of the hardware (the dummy tooth/jaw set and the speech assistant device) and one interdependent aspect (the interface circuitry). They are treated in what follows.

· Dummy tooth/jaw set: Such a set is used by dentists to demonstrate how teeth, cavities, and gums interrelate; here it serves as a dummy human tooth/jaw set. In this work, the tooth set is used as a medium to fix the sensors and acquire data. We placed sensors in different positions to capture the relevant oral cavity gestures during the production of each alphabet, and retained the sensors in the optimal positions that give good gesture information for identifying the produced sound, which improves the accuracy/performance of the system. Thus, during the experimental trials, we consider four sensors (S1–S4) affixed to the dummy tooth/jaw set to acquire the data/values that are sufficient/required to produce a particular letter or word.

· Interface circuitry: This circuitry enables the communication between the dummy tooth set and the speech assistant device. The input sensor values from the dummy tooth set are sent to the speech assistant device.

· Speech assistant device: This is the final part of the hardware and also the most important part. Its basic features are:

    o It stores letters and words.

o It has a user-friendly graphical user interface (GUI).

    o A control system allows users to choose words from lists and announce them to fellow listeners.

o With the use of a dummy tooth/jaw set, it can save the pronounced word for future use.

    o Machine learning and natural language processing libraries were incorporated for the synthesis of words in any language that Google Translate can enlist.

    o Offline and online modes are available for the user’s benefit.

The hardware components required for the experimental setup are enumerated in Tab. 2.

    Table 2: The hardware components required for the experimental setup


Two limit switches, a flex sensor, and a potentiometer were placed in the oral cavity to capture the oral cavity gestures. Based on these gestures, we capture and obtain data from the sensors affixed to the dummy tooth set. The oral cavity gestures at different positions during the articulation of different sounds are shown in Fig. 11; they are the same as the oral cavity gestures discussed in Section 3 and shown in Fig. 6. The dummy tooth/jaw setup is activated to exhibit the various gestures associated with different letters and words of the English language, a few of which are shown in Fig. 11.

Figure 11: (a) Multiple degrees of jaw gestures (b) Various tongue height and advancement gestures

The input sensor data values captured from the different oral cavity gestures were sent to the interface circuitry and the speech assistant device to pronounce a sound. The respective output speech is heard from the speaker when the four sensor values match an entry of the LUT within their variability. Tab. 3 shows the LUT values, in centimeters, of the four input sensors for a few English letters. The LUT is used to interface the sensor output values in accomplishing the speech synthesis of our proposed hardware setup.

    Table 3: The LUT values of four sensors for a few English alphabets

The input sensor data were tested using the LUT (see Tab. 3). The LUT values of sensors 1 and 2 (S1, S2) define whether the tongue touches the upper palate and the lower palate, encoded as 0 or 1. Sensors 3 and 4 (S3, S4) define the tongue-teeth position and the tongue tip using the flex sensor and the potentiometer, as shown in Tab. 3. As the sensors we have considered are electrical devices, a tolerance/error must be accounted for. We have used tolerance values of +/-0.5 on the sensor values to get effective results. These tolerance values help us to extract effective information from the sensors, even when the electrical sensors/devices are affected by temperature/heating issues.
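
A minimal sketch of this tolerance-based matching, assuming placeholder LUT rows rather than the actual values of Tab. 3, is as follows.

```python
# Sketch of the +/-0.5 tolerance match against a Tab. 3-style LUT.
# The rows are placeholders: S1/S2 are the 0/1 palate switches,
# S3/S4 the flex and potentiometer readings in centimeters.
LUT = {
    (1, 0, 2.0, 3.5): "a",
    (0, 1, 1.0, 4.0): "b",
}
TOL = 0.5  # tolerance applied to the analog sensors S3 and S4

def match(s1, s2, s3, s4):
    for (r1, r2, r3, r4), letter in LUT.items():
        if (s1, s2) == (r1, r2) and abs(s3 - r3) <= TOL and abs(s4 - r4) <= TOL:
            return letter
    return None

print(match(1, 0, 2.3, 3.2))  # -> a (both analog readings within +/-0.5)
```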

If the input sensor data match the LUT, the respective letter is displayed on the screen; if not, no letter is printed. The displayed letter is passed to the function called for audio play, through serial-parallel interface communication with the SD card (flash memory), and the stored letter sound comes through the electric speaker. These steps are repeated while taking the sensor data continuously, to produce a letter or a sequence of letters (a word), with the relevant letters printed on the LCD screen.

    4.2 Features of the Proposed Hardware System

The proposed system is built keeping in view the original idea of pronouncing alphabets and extrapolating it to synthesize words. The features of the experimental setup are enumerated as follows.

· Prototype of a small-sized dummy teeth model: The speech assistant system is designed to fit the small-scale human teeth model with minimal circuitry. It has been designed to follow the same LUT. The model is now more realistic and easier to use.

· Interface of the dummy teeth model with the GUI-based speech assistant: The dummy teeth model interfaces with the Raspberry-Pi-enabled speech assistant system wirelessly. The data collected from the sensors are sent via Bluetooth to the speechstant system, and the user can see the letters being printed on the display.

· User-friendly design of the speech assistant system: The user can see the letters on the display of the speech assistant system. The user can store and delete these letters to make meaningful words that need to be pronounced, and store them permanently in a predefined list.

· Application-oriented approach for enlisting favorite words: The user is given a list of frequently used words as part of a favorites list, used daily for announcements through the speech assistant. This intuitive approach serves as a credible asset to a user who faces a speech disability disorder.

· Translation of words into any language: The user can save any meaningful word to the list. The default setting is English, and four language settings are added: Hindi, Kannada, Tamil, and Telugu. In the online mode, the word in the selected language is announced/synthesized by the Google speech assistant, which stays very close to the native accent in which the user desires the word to be announced for fellow listeners. This brings a new dimension to the project.

· Interrupt-oriented switching from online to offline mode: This step is taken particularly for a user who is in online mode and may not have strong Internet connectivity. In such a case, an interrupt-based dedicated program is developed and uploaded so that the user can switch to offline mode and announce the chosen word to fellow listeners immediately. The word is pronounced in online mode when strong Internet connectivity is available. (A sketch of this behavior follows this list.)
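
A minimal sketch of the online/offline behavior described in the last two items, assuming gTTS as a stand-in for the Google voice and pyttsx3 for the offline engine (translation of the word itself is assumed to have been done already):

```python
# Online announcement with immediate offline fallback (hypothetical
# stand-ins for the device's actual online and offline engines).
import pyttsx3
from gtts import gTTS

LANGS = {"English": "en", "Hindi": "hi", "Kannada": "kn",
         "Tamil": "ta", "Telugu": "te"}

def announce(word, language="English"):
    try:
        # Online mode: synthesize in the selected language's native accent.
        gTTS(text=word, lang=LANGS[language]).save("word.mp3")
        # ... play word.mp3 through the speaker ...
    except Exception:
        # Weak or no connectivity: switch to the offline engine at once.
        engine = pyttsx3.init()
        engine.say(word)
        engine.runAndWait()
```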

    4.3 Basic GUI Features

The speech assistant ("speechstant") device has a user-friendly GUI through which the user can choose the gender of the voice sample and the language, choose words from predefined lists, and store letters/words by operating a joystick. The chosen or stored letter or word is then announced to fellow listeners. The speech assistant device uses machine learning and natural language processing libraries incorporated for the pronunciation of words in any language that Google Translate can enlist. It has offline and online modes for the user's benefit. The GUI features are described in Tab. 4. The block diagram and hardware setup of the joystick module and the GUI features are shown in Fig. 12.

    Table 4: Basic GUI features description


The hardware setup is connected via Bluetooth before the joystick is operated. The input sensor values are analyzed, and the output based on the oral cavity movements is displayed on the LCD screen, which can be operated by the joystick of the speechstant device. A block diagram of the joystick module is shown in Fig. 12a.

Figure 12: (a) Joystick module (b) Speechstant device (c) Language setting page (d) Scan page (e) Assistant page (f) Assistant page in basic mode

The blue light at point 6 indicates that the switch has been read, and it displays the processing status. If point 6 shows no sign, the user must first connect with the dummy tooth set, as shown in Fig. 12b. The joystick up button selects the online mode and the language setting page, in which the up and down buttons of the joystick are used to select the preferred language (Kannada, Tamil, Telugu, or Hindi; the default is English), as shown in Fig. 12c. Similarly, the center and left buttons are used to confirm a selection or to move back, and the default language is English. The left switch engages the scan mode when the user wants to store a letter to make a sentence, communicating with the dummy tooth set over Bluetooth. Fig. 12d shows the scan page. Pointer 1 shows that the data are coming continuously, and the user can store a letter within 4 s using the right switch; after that, it appears at pointer 2. At that point, the user can delete the last letter of the word using the left switch. The down switch selects either the male or the female voice, depending on the disabled person's choice (the default is female). The right switch is used to select what the user wants to speak from a predefined list. The center switch (point 5) of the joystick is not functional.

When a word is ready, the user stores it in a basic file by clicking the center button, after which a saved message appears and the page refreshes. To go to the home page, the up switch is used. Fig. 12e shows the assistant page, where a user can choose the favorite file (predefined words that are used frequently or for emergency purposes) or the basic file (words already stored via the dummy tooth set) with the up and down buttons, and select it by pressing the center button. Fig. 12f shows the second page of the assistant mode, where a user can choose a word with the up/down button. The arrow is a pointer to the line the user is making decisions about. If the user wants to speak that word, he or she just presses the center button. If the user wants to delete any line, the user must hold the right switch for 2 s. When the user has finished speaking, he or she moves to the left switch and holds it to go to the home page. The program must be restarted when the user wants to move from offline mode to online mode.

    4.4 Hardware Experimental Results

The sensor readings, i.e., the variations of the limit switches, the flex sensor, and the potentiometer, were displayed on the laptop screen with the help of the Arduino board, as shown in Fig. 13a. The respective output is displayed on the LCD through the I2C module, as shown in Fig. 13b. The produced output sound was heard from the electric loudspeaker according to the sensor data, based on matching with the LUT (see Tab. 3), or from a predefined list stored on the SD card.

Figure 13: The system for speech production, which displays the values of the sensor inputs on (a) a laptop (b) the LCD screen

When the acknowledgment switch was ON {1}, the hardware setup read the input sensor data. Then, when the acknowledgment switch was OFF {0}, it displayed the corresponding English alphabet based on the match of the input sensor data with the LUT, and the same letter was produced aloud by the electric speaker.

The speechstant device displays the sensor values based on the input captured by the sensors placed in the oral cavity during oral cavity movements. Output screenshots of the LCD screen during the experiment with the proposed hardware setup, from the initial page to the displayed output letters/words, are shown in Fig. 14.

The initial page of the speech assistant device is shown in Fig. 14a. A user moves the joystick down to choose the male voice (the default is the female voice), as shown in Fig. 14b. The sensor input data from the tooth set are matched with the LUT, and the respective letter is displayed on the LCD once the store button (Str) is pressed. The user can press the delete button (Dlt) to avoid storing and producing the sound. The word is then saved in basic mode, and the sound is produced through the speaker in the voice of the chosen gender. The outputs for the single letter "a" and the three-letter word "add" are shown in Figs. 14c and 14d. The favorite and basic file modes are shown in Figs. 14e and 14f.

Figure 14: The LCD screen (a) Initial page (b) Gender selection (c) One-letter output (d) Three-letter word output (e) Frequently used word list (f) Predefined list

The output sound waveforms produced from interfacing the sensor input values with the proposed hardware system are shown in Fig. 15 for both female and male voices. In general, female voices have a higher pitch (fundamental frequency) than male voices; the same difference can be observed in Fig. 15. The comparison between natural speech and the corresponding synthesized speech waveforms is shown in Fig. 16; they are observed to be almost similar.

Figure 15: The output waveforms of female and male voices (a) letter "a" (b) "add" [English] (c) "baguna" [Telugu] (d) "banni" [Kannada] (e) "ghar" [Hindi] (f) "panam" [Tamil]

The output sounds produced by the hardware setup based on the sensor inputs differ in production time. The production time varies with the average length of the word (the number of letters), as shown in Tab. 5.

Figure 16: The comparison between natural and synthesized speech waveforms for (a) /a/ (b) /w/

Table 5: The output sound production time in seconds for the letter "a" and the words "add" [English], "baguna" [Telugu], "banni" [Kannada], "ghar" [Hindi], and "panam" [Tamil]

The output sound produced from the sensor inputs based on oral cavity gestures was validated by a Mean Opinion Score (MOS), which is discussed in the following subsection.

4.5 Mean Opinion Score (MOS)

The opinion score [29] was obtained from 20 listeners (10 males and 10 females), all native speakers of British English aged 17–42 years, who were recruited to participate in the experiment. The listeners had no known speaking or hearing impairments. The test was devised to evaluate the quality of the synthesized speech produced through voice conversion. The opinion score measures how correctly the produced sound is judged by the listeners. A five-point scale was used, with five as the best score. The scores from the evaluation test are shown in Tab. 6: Tab. 6a shows the means for correctly identifying the gender of the voice sample, and Tab. 6b shows the means for correctly identifying the stated words over the proposed hardware system.

The overall performance in correctly identifying the speech was quite good across all listener ages. Approximately 98% of listeners accurately identified the gender, and there was 95% accuracy in identifying the words in the voice samples. The voice samples of the word pairs "add"/"had" and "pan"/"span" were similar, creating some issues of perception or quality that left some listeners confused; this contributed to the 95% accuracy in identifying the voice samples of all the words.

    Table 6: Mean opinion scores
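
For concreteness, a small sketch of the bookkeeping behind such a table, with made-up ratings rather than the paper's data:

```python
# Mean opinion score on the five-point scale plus an
# identification-accuracy percentage (placeholder values only).
from statistics import mean

word_ratings = [5, 5, 4, 5, 4, 5, 5, 4]   # five-point opinion scores
gender_hits = [1] * 49 + [0]               # 49 of 50 trials correct

print(f"MOS = {mean(word_ratings):.2f}")
print(f"Gender accuracy = {100 * sum(gender_hits) / len(gender_hits):.0f}%")
```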

    5 Conclusion and Future Work

This paper presents an approach for speech production using oral cavity gestures, especially movements of the tongue, lips, and teeth. Our motivation is to make communication easier for the speech-disabled. Speech disability can occur because of cancer of the larynx, spinal cord injury, brain injury, neuromuscular diseases, or accidents. The four positions of the sensors in the proposed system were based on appropriate articulatory (oral cavity) gestures estimated from the mechanism of human speech, using VocalTractLab and the vocal tract acoustics demonstrator (VTDemo). From the study and analysis of existing vocal tract speech production physiology, we observed that the tongue plays a crucial role in speech production. An initial experiment was carried out by listing the positions of the oral cavity gestures for the respective sound production; it was tested by developing a GUI in Matlab.

With the knowledge gained from the initial experiment, the hardware system was verified using an experimental dummy tooth set with four sensors, and it produces speech. The tongue and jaw movements in the dummy tooth set were captured by two limit switches, a potentiometer, and a flex sensor. These sensor data from oral cavity movements are translated into a set of user-defined commands by efficient algorithms that analyze what is intended and create a voice for those who cannot speak. The output sounds can be heard from an electric speaker, and they are displayed on the screen of the speech assistant device. The system was extended to other languages, namely Hindi, Kannada, Tamil, and Telugu, using a language translator. Based on the choice of the speech-disabled user, the voice sample can be selected as male or female. The results were validated by a perceptual test, in which ~98% of listeners accurately identified the gender of the voice, and there was ~95% accuracy in identifying the words in the voice samples. Thus, this system can help those who are speech-disabled because of accidents, neuron disorders, spinal cord injury, or larynx disorders to communicate easily.

In future research, the time delays during sound production can be reduced. The present work could be extended to generate sequences of words and long/whole sentences. Using the basic facts demonstrated in this study, it might be possible to build a chip-based system that wirelessly tracks the movements of the tongue and transmits the sensor data through Bluetooth to a personal computer, where the data are displayed and saved for analysis. Another objective will be to include emotion in the output speech of the proposed system, so that users can communicate and express their thoughts with emotions.

Acknowledgement: The authors thank all the participants who enabled us to validate these output sounds/words.

Funding Statement: The authors would like to acknowledge the Ministry of Electronics and Information Technology (MeitY), Government of India, for financial support through the scholarship for Palli Padmini during the research work, under the Visvesvaraya Ph.D. Scheme for Electronics and IT.

Conflicts of Interest: The authors declare that they have no conflicts of interest to report regarding the present study.
