
    Recognition of Human Actions through Speech or Voice Using Machine Learning Techniques

Computers, Materials & Continua, 2023, Issue 11

Oscar Peña-Cáceres, Henry Silva-Marchan, Manuela Albert and Miriam Gil

1 Professional School of Systems Engineering, Universidad César Vallejo, Piura, 20009, Perú

2 Escola Tècnica Superior d’Enginyeria, Departament d’Informàtica, Universitat de València, Burjassot, Valencia, 46100, Spain

3 Department of Mathematics, Statistics and Informatics, Universidad Nacional de Tumbes, Tumbes, 24000, Perú

4 Valencian Research Institute for Artificial Intelligence, Universitat Politècnica de València, Valencia, 46022, Spain

ABSTRACT The development of artificial intelligence (AI) and smart home technologies has driven the need for speech recognition-based solutions. This demand stems from the quest for more intuitive and natural interaction between users and smart devices in their homes. Speech recognition allows users to control devices and perform everyday actions through spoken commands, eliminating the need for physical interfaces or touch screens and enabling specific tasks such as turning the lights on or off, heating, or lowering the blinds. The purpose of this study is to develop a speech-based classification model for recognizing human actions in the smart home. It seeks to demonstrate the effectiveness and feasibility of using machine learning techniques to predict categories, subcategories, and actions from sentences. A dataset labeled with relevant information about categories, subcategories, and actions related to human actions in the smart home is used. The methodology uses machine learning techniques implemented in Python, extracting features with CountVectorizer to convert sentences into numerical representations. The results show that the classification model is able to accurately predict categories, subcategories, and actions from sentences, with 82.99% accuracy for category, 76.19% for subcategory, and 90.28% for action. The study concludes that machine learning techniques are effective for recognizing and classifying human actions in the smart home, supporting their feasibility in various scenarios and opening new possibilities for advanced natural language processing systems in the field of AI and smart homes.

KEYWORDS AI; machine learning; smart home; human action recognition

    1 Introduction

In recent years, speech recognition of human actions through artificial intelligence has emerged as a promising and rapidly growing field of research. This area focuses on the development of techniques and algorithms that enable machines to understand and recognize the actions humans perform using only auditory information captured through recording devices. Speech recognition of human actions has applications in various domains, such as virtual assistants, smart homes, security systems, human-machine interaction [1], and healthcare. As AI evolves and improves its ability to understand natural language, speech recognition of human actions has become even more relevant and challenging. Research in this area has benefited greatly from advances in audio signal processing and machine learning [2]. Combining audio signal feature extraction techniques with machine learning algorithms has enabled the development of more accurate and robust models for recognizing human actions through speech. These models can analyze acoustic and linguistic patterns in speech data to identify the actions users perform. However, most of these systems are expensive and require a complete replacement of existing equipment [3]. Nowadays, people are eager to use and buy devices that make most tasks easier. According to Google, 27% of the world’s online population uses voice search on mobile [4], which has promoted AI as one of the most exciting fields today, with continuous activity to produce intelligent machines that can help simplify many tasks.

A voice assistant is an AI application that can be deployed in various locations. The emergence of the Internet of Things (IoT) has made it possible to maximize projects with minimal architectures and reasonable costs. Using IoT protocols, machines can communicate with humans and with other machines, making it possible to control them remotely. Many applications focused on the smart home environment communicate through the cloud. This simplifies basic tasks, such as when a person in a room uses only his or her voice to perform some action. This kind of capability greatly benefits many people, especially elderly people, people with disabilities, and pregnant women. In addition, a person with an injured leg, or who is carrying a heavy load, can use their voice to control elements in the room without the help of other people.

The development of autonomous systems in the smart home domain has begun to embrace the integration of voice-command modules as a home automation solution through which the user transmits voice prompts. Such a module continuously listens to and processes environmental sounds to detect a trigger word. Once the user pronounces the word, the voice module activates real-time voice command processing, recognizing and performing the action expressed by the user. In some cases, the operations performed may lead to vocal responses or text messages [5]. There are also offline speech recognition engines for this purpose [6] that could help in scenarios where connectivity is lacking.

To strengthen this type of study, it is fundamental to recognize that human beings can express their emotions in different ways depending on the occasion, which could generate some uncertainty or distortion when interpreting what is expressed, leading the system to perform other types of actions. Studies such as [7] pointed out that in some cases it is difficult to detect and understand users’ emotions because this depends on the context. Among the best practices for ensuring the quality of this type of system is to develop machine learning models that classify human emotions based on their previous speech or requests. Emotions can be classified into five categories: normal, anger, surprise, happiness, and sadness. Other findings, such as [8], pointed out that it is possible to use powerful packages such as pywhatkit, pyttsx3, pygame, and OpenCV-based speech recognition.

An AI-based virtual or personal assistant can be seen as a package of intelligent mechanisms that performs different tasks and services based on user queries. Smart technologies such as natural language processing enable voice recognition and play an essential role in security and other areas. The most common mechanisms to achieve these results combine machine and deep learning with natural language processing. Human-robot interaction, in turn, is an area of knowledge that relies on multidisciplinary technologies such as natural language processing and machine learning, and such tools can be integrated into various intelligent systems. Voice can be much more economical than typing on a keyboard. Recent research [9] indicated that augmented reality is becoming an expanding field in research and practice due to its ability to link virtual information with the real world. Significant advances in wearable devices and smartphone technology have drawn people’s attention to augmented reality and have led to the rapid development of applications, games, and environments where daily human activities can be executed [10]. Voice commands and speech technology have also been significant milestones of the 21st century, enabling human interaction with devices and applications through speech recognition and providing users with interface solutions that do not require physically touching or pressing buttons to perform a given action.

This study is distinguished by its focus on recognizing human actions through artificial intelligence and speech processing. In a context where natural human-device interaction gains relevance, this research addresses crucial challenges in understanding vocally expressed actions in practical scenarios such as virtual assistance, smart homes, security, and medical care. The combination of advances in audio signal processing and machine learning algorithms promises the creation of increasingly accurate and robust models. The vision for this solution is for it to be deployable on both everyday and assistive devices, which extends its utility and enhances the user experience. This research could help simplify tasks and improve accessibility, advancing the use of voice as an intuitive and powerful interface in the digital environment. Its relevance lies in addressing emerging challenges in the human-AI interface and improving everyday life.

Technology has advanced beyond our wildest dreams in recent years. It helps human beings in their daily lives, and there is still much to learn about it in the future. Some problems, such as the situation of disabled and elderly people who cannot enjoy the freedom of living alone, have yet to be solved. An autonomous system in the smart home domain with a voice module can help solve this issue. The interaction required for this environment to operate autonomously is a voice command issued through a smartphone application [11]. The objective of this study is to use machine learning techniques in the smart home domain, where the user expresses terms or sentences through voice or speech that lead to the identification and recognition of patterns inferring the action requested by the user. The proposal is expected to be applicable in other knowledge domains such as education, health, security, and food.

    2 Related Work

In recent times, significant advances have been achieved in the field of automated speech recognition through the application of various machine learning-based approaches. In this section, we describe some current state-of-the-art strategies, with special emphasis on the various techniques employed in machine learning. According to [12], controlling electronic devices remotely using voice and brain waves is one of the future techniques that can be used to control household appliances or wheelchairs remotely. This type of system is intended to be usable by everyone; its purpose is to assist and meet the needs of elderly and disabled people. Reference [13] indicated that recent technological advances allow the development of voice-machine interfaces, systems that can be used in a variety of potential applications such as security, smart homes, and centers for people with disabilities. That proposal focused on a frequency-domain speech recognition platform, where a spectral analysis of speech was performed to extract its characteristics. The extracted characteristics were then used to identify and perform actions such as issuing action commands, granting access to secure services, voice dialing, telephone banking, and accessing confidential databases. Reference [14] showed that exploring smart home environments generates knowledge with which a system can make intelligent decisions and control end devices by voice, based on the current resident.

Works such as [3] pointed out that these types of systems can run on an Android phone connected by Bluetooth to a local home automation node; depending on the connected devices, the node searches for keywords in the command and performs a control action. This framework is very useful for the needs of elderly and disabled patients due to its minimal technical knowledge requirements. In [15], the authors used natural language processing techniques to build a system based on the Internet of Things with the purpose of making home devices easier to use and control. Through a command or question, the system understands the user’s wishes and responds accordingly. Along the same lines, the authors in [16] indicated that automatic speech recognition is an effective technique that can convert human speech into text format or computer actions. Signal processing and machine learning techniques are required to recognize speech.

In [17], the authors addressed the improvement of the voice assistant in the smart home by providing context-aware capabilities. Currently, voice assistants can only receive clear voice commands from users without being able to meet individual needs automatically. To address this limitation, a system that uses walking sounds to identify the user and automatically provide personalized services was proposed. The system recognizes footstep sounds and detects the presence of a user through machine learning algorithms. Also, in [18], the authors designed a voice assistant to command robotic tasks in residential environments aimed at people with mobility limitations. In this case, a convolutional neural network trained on a database of 3600 audio recordings of keywords, such as paper, glass, or robot, was used, achieving an accuracy of 96.9% in the discrimination of categories. The results reached an accuracy of 88.75% in identifying eight different actions by voice commands, such as “robot brings to glass”. Reference [2] presented a speech and gesture signal translator focused on human-machine interaction. By using machine learning algorithms, they achieved 97% accuracy in speech recognition, even in noisy environments. These results show the power of machine learning algorithms and the potential of creating high-fidelity platforms for effective human-machine interaction.

Reference [19] described voice-enabled human-machine interfaces as essential to improving user interfaces. This work focused on employing deep learning for intelligent human-computer interaction. The results show that the combination of human-computer interaction and deep learning is widely applied in gesture recognition, speech recognition, emotion recognition, and intelligent robot steering. On the other hand, approaches based on machine learning techniques and their integration into robotics have shown that it is possible to generate scenario-based programming of voice-controlled medical robotic systems. For example, [20] reveals that human-machine voice communication is the subject of research in various fields (industry, social robotics), and that reasonable results on voice-controlled system functions can lead to a significant improvement in human-machine collaboration.

In the smart home domain, voice-based control devices have become a topic of great interest to major technology companies and researchers. The study introduced in [21] proposed a solution for detecting Vietnamese speech, since there was no prior research in this field and many previous solutions fail to provide adequate scalability for the future. Using machine learning algorithms, the authors achieved an average recognition accuracy of 98.19% on 15 commands to control smart home devices. Solutions such as [22] pinpointed the possibility of early detection of Parkinson’s disease through voice recordings in the smart home environment using machine learning prediction methods and metaheuristic algorithms. According to Hung et al. in [23], the recognition of emotions through speech can be achieved using machine learning and deep learning techniques, while Tanveer et al. in [24] reaffirmed that machine learning methods are widely used to process and analyze speech signals due to their performance improvements in multiple domains. Deep learning and ensemble learning are the two most commonly used techniques, providing benchmark performance in different downstream tasks.

These studies provide insight into the potential and effectiveness of machine learning in the automation and intelligent control of actions performed by users in smart home environments. The application of different strategies demonstrates the flexibility of machine learning to adapt to various situations and challenges in recognizing human actions, and testifies to its potential. As technology advances, these approaches are expected to continue to improve, providing new opportunities for creating more intuitive and personalized homes.

    3 Methodology

This section describes the procedures used to develop the module for recognizing human actions through speech or voice using machine learning in the smart home domain. Fig. 1 outlines the workflow from the stage of understanding the problem to the refinement of the proposal.

Figure 1: Methodological strategy of the proposal

    3.1 Understanding the Problem

As technological services have evolved, human beings have learned to adapt and take advantage of the benefits offered by these digital media. However, a fragment of the population still distrusts and does not empathize with this type of solution. These hesitations are due to the fact that many technological proposals leave aside the main actor, in this case the user. AI and IoT-based cloud services have sought to make disruptive changes in people’s daily activities. For example, in recent years special attention has been paid to the remote voice control of driverless vehicles for the future intelligent transportation system. In these vehicles, the remote intelligent car receives the user’s commands and controls the engine to move forward, backward, turn left or right, and stop [25]. Matching the user’s abilities still represents a significant gap, and therefore the community must reformulate the principle of an autonomous system, because such a system will always depend on textual, verbal, or gestural indications to perform some action.

It is necessary to pursue more empathetic solutions in which the user trusts the technologies in their mode, form, and quality. A good start for promoting autonomous solutions is to develop projects in the smart home environment. The ease of use and simplicity of a system are factors that build users’ confidence as they interact with such systems, whose purpose is to assist in everyday tasks such as turning the light on or off, opening the blinds, and turning on the air conditioning, among other household activities. Other recent findings used augmented reality with voice recognition capabilities, where a student and a teacher [26] performed joint tasks that complement the student’s knowledge [27].

It is common for people in advanced stages of life to face a greater propensity to develop chronic diseases and to experience a decline in their physical and cognitive abilities. These factors can trigger a progressive decline in their ability to lead an autonomous and independent life. In this context, voice recognition of actions in smart home systems emerges as an innovative technological intervention that can make a significant contribution. These systems allow older adults to control various functions and devices in their homes simply by voice commands, providing greater independence and autonomy in their daily lives. By allowing them to control their environment and access support services in a simple and natural way, this technology helps to strengthen their independence and general well-being. Such approaches could encourage users to take another look at autonomous solutions based on voice or speech recognition that contribute to domestic, academic, or work activities and improve the quality of life of the human being.

On the other hand, this study is based on the realization that, despite technological advances, there remains a huge gap between human capabilities and the expectations of autonomous systems. By enabling more natural interactions with technology, speech recognition technologies hold promise for bridging this gap. Furthermore, by improving the accuracy and efficiency of recognizing human actions through speech, machine learning research has the potential to improve people’s daily lives. It is clear that we need technical solutions that are more comprehensive and flexible. From this perspective, research on human action recognition through speech is justified as a means to meet the growing demand for inclusive and accessible technology that improves the quality of life of users in different domains.

    3.2 Characterize the Work Domain

In a smart home environment, a variety of activities can be performed through voice commands, providing convenience and control without the need to interact directly with devices. Virtual assistants, such as Amazon’s Alexa or Google Assistant, let users control home lighting, adjust room temperature, play music, open and close curtains or blinds, turn appliances such as the coffee maker or vacuum cleaner on and off, and manage home security by activating and deactivating alarms and surveillance cameras [28]. More complex tasks performed by these types of systems focus on programming customized routines, such as simulating presence at home during vacations by turning lights on and off and playing ambient sounds. It is also possible to perform online information queries to obtain weather forecasts, the latest news, or cooking recipes.

    3.3 Definition of Categories,Subcategories and Actions

This section sets out the categories, subcategories, and actions that the speech recognition module must infer in order to perform the requested actions; Table 1 describes them. Categories are the main elements of the smart home environment, and subcategories are the elements associated with the categories or compatible with their operation. The actions are based on the speech or voice terms used by the user. For example, given “Illuminate the kitchen today”, the system must recognize the category, subcategory, and action to be performed to fulfill the user’s request, which corresponds to turning on the lights right now. Another situation could be “It is cold in the library”: here the user is cold in the library environment, and the system should infer the context and activate the heating. This system could help elderly people, who develop multiple chronic diseases and experience a decline in some of their physical and cognitive functions, leading to a decrease in their ability to live independently [29]. This need is considered imperative because of the great contribution such a holistic technological ecosystem could make to improving people’s quality of life.

Table 1: Categories, subcategories, and actions

    3.4 Data Set Development

The dataset (https://github.com/oscarp-caceres/speech-recognition) consists of 663 records and required labeling of several variables, as shown in Table 2. These variables were “Category”, the category to which the action belongs; “Action needed”, whether the action is necessary for the user; “Question”, whether what the user expresses is a question or an instruction; the specific “Subcategory” to which the action belongs; the “Action” itself to be carried out; the “Time” or moment at which the request is made; and the “Sentence”, which describes precisely and concisely the request indicated through speech or voice by the user.

    Table 2: Vector of the dataset

The dataset was built by collecting samples of real interactions between users and the smart home system. Each sample was labeled with the aforementioned variables, which allowed the training and development of a speech recognition model that interprets and understands user requests efficiently and accurately. It is essential to recognize that a good model will always depend on the dataset to achieve acceptable accuracy, meeting the needs and preferences of users and providing a more personalized and satisfactory experience in the control and management of a smart home.
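To make the record structure concrete, the sketch below builds one hypothetical row with the seven labelled variables. The field names follow Table 2, but the example values are invented for illustration and may not match the published dataset.

```python
import csv
import io

# The seven labelled variables described in Section 3.4.
FIELDS = ["Category", "Action needed", "Question", "Subcategory",
          "Action", "Time", "Sentence"]

# One hypothetical record (values are illustrative, not from the real dataset).
rows = [
    {"Category": "lights", "Action needed": "yes", "Question": "no",
     "Subcategory": "kitchen", "Action": "turn on", "Time": "now",
     "Sentence": "Illuminate the kitchen today"},
]

# Serialize the record in CSV form, as a labelled dataset would be stored.
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=FIELDS)
writer.writeheader()
writer.writerows(rows)
print(buf.getvalue())
```

Each such row pairs a free-form "Sentence" with the structured labels the models must learn to predict from it.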

    3.5 Model Coding

The model’s functionality focuses on predicting the category, subcategory, and action from the text variable “Sentence” (Table 2 shows some examples). Machine learning and natural language processing techniques were used: the Naive Bayes and Random Forest algorithms were employed to build the classification models through the scikit-learn library in Python. The Multinomial Naive Bayes (MultinomialNB) algorithm is suited to text or data classification problems with a multinomial distribution, such as document classification, sentiment analysis, or topic categorization. The RandomForestClassifier algorithm, in turn, is versatile for both classification and regression problems; it is beneficial because it does not require complex hyperparameter tuning and is less prone to overfitting than a single decision tree. Among the main limitations of these algorithms, MultinomialNB may have difficulty capturing complex relationships between features or words in a text, resulting in reduced accuracy in tasks requiring more sophisticated natural language processing, while RandomForest may face difficulties in handling datasets with high dimensionality, which may affect its performance and increase the risk of overfitting [30].

Table 3 illustrates the Python code used to transform speech into text. The code starts by importing the “speech_recognition” library under the alias “sr”. It creates a Recognizer instance assigned to “r”, which allows speech recognition operations to be performed. It then defines a function called “transcribe_microphone()” to perform speech recognition and translation. Within this function, a “with” context is used to establish a connection to the microphone, and the message “Say something” is displayed to guide the user. The function captures audio from the microphone using the “listen()” method of the “r” object. An exception handling block addresses possible problems, printing “Speech could not be recognized” if recognition fails (“sr.UnknownValueError”) or an error message if there are problems with the Google service (“sr.RequestError”). If there are no errors, the “recognize_google()” method of “r” is used to translate the recognized audio into text in the desired language, storing the result in the variable “text”. The text is returned and, outside the function, “transcribe_microphone()” is called to perform the recognition process, storing the result in “transcribed_text”, which is then used as input data for the model.

    Table 3: Class to receive the user’s speech or voice and transform it into text
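A minimal reconstruction of the transcription routine described above is sketched below. It is not the authors’ exact code from Table 3: the SpeechRecognition package is a third-party dependency (imported lazily here so the sketch loads without it installed), and the language code passed to recognize_google() is an assumption.

```python
def transcribe_microphone(language="es-ES"):
    """Capture one utterance from the default microphone and return it as text.

    Requires the third-party SpeechRecognition package
    (pip install SpeechRecognition); the language code is an assumed default.
    Returns None if the audio could not be recognized or the service failed.
    """
    import speech_recognition as sr  # lazy import of the third-party library

    r = sr.Recognizer()
    with sr.Microphone() as source:
        print("Say something")
        audio = r.listen(source)  # block until a phrase is captured
    try:
        # recognize_google() sends the audio to Google's free web speech API.
        return r.recognize_google(audio, language=language)
    except sr.UnknownValueError:
        print("Speech could not be recognized")
    except sr.RequestError as e:
        print(f"Google service error: {e}")
    return None
```

In the paper's workflow the returned string (called "transcribed_text" in the prose) is then fed to the classification models as the "Sentence" input.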

Table 4 describes the use of the Multinomial Naive Bayes algorithm for category and subcategory classification and the Random Forest algorithm for action classification. The dataset was divided into training and test data. Then, “CountVectorizer” was used to create a matrix of text features from the label “Sentence”. Subsequently, classification models were built and trained for each target label (category, subcategory, and action). Finally, each model’s accuracy was evaluated on the test set, and predictions were made on new data, achieving acceptable results.

    Table 4: Model building in Python
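A self-contained sketch of this model-building pipeline is shown below. The sentences and labels are a tiny invented stand-in for the 663-record dataset (the real records and category names may differ), but the sequence of steps — vectorize, split, fit one model per label, predict on new text — follows the description above.

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import MultinomialNB

# Hypothetical stand-in for the "Sentence" column and two of its labels;
# repeated so each class has enough samples to split.
sentences = [
    "illuminate the kitchen today", "turn off the bedroom light",
    "it is cold in the library", "turn on the heating in the bathroom",
    "raise the shutters in the basement", "lower the blinds in the dining room",
    "switch on the toaster", "turn off the coffee maker",
] * 4
categories = ["lights", "lights", "heating", "heating",
              "blinds", "blinds", "appliances", "appliances"] * 4
actions = ["turn on", "turn off", "turn on", "turn on",
           "open", "close", "turn on", "turn off"] * 4

# CountVectorizer turns each sentence into a bag-of-words count vector.
vectorizer = CountVectorizer()
X = vectorizer.fit_transform(sentences)

# 80/20 split, reproducible via random_state, as in the paper.
X_train, X_test, y_cat_train, y_cat_test, y_act_train, y_act_test = \
    train_test_split(X, categories, actions, test_size=0.2, random_state=42)

# One model per target label: Naive Bayes for category, Random Forest for action.
category_model = MultinomialNB().fit(X_train, y_cat_train)
action_model = RandomForestClassifier(random_state=42).fit(X_train, y_act_train)

# Predict on a new, unseen sentence.
new = vectorizer.transform(["illuminate the dining room today"])
print(category_model.predict(new)[0], action_model.predict(new)[0])
```

A third model for the subcategory label would be trained the same way; it is omitted here only to keep the sketch short.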

Among the parameters set in Table 4, the y_action_test labels define the target actions to be recognized, test_size controls how the data are split into training and test sets, and random_state ensures that the split is reproducible, which facilitates comparison of results and consistency across different runs. In this case, the value of 0.2 indicates that 20% of the dataset was used as the test set, while the remaining 80% was used for training. This division is essential to assess the generalizability of the model. However, the ratio may vary depending on the size of the dataset and the balance between the number of samples in each class. If the dataset is small, a higher ratio can be considered so that the test set has enough evaluation data. In some cases, experimenting with different values and evaluating the model’s performance can help find the optimal combination for satisfactory results.
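The effect of these two parameters can be illustrated with a small stdlib-only sketch. This is a simplified stand-in for scikit-learn's train_test_split, not the authors' code: a fixed seed yields the same shuffle, and therefore the same 80/20 split, on every run.

```python
import random

def split(records, test_size=0.2, seed=42):
    """Shuffle-and-slice split. A fixed seed makes the split reproducible,
    mirroring what random_state does in scikit-learn's train_test_split."""
    rng = random.Random(seed)
    shuffled = records[:]          # copy so the caller's list is untouched
    rng.shuffle(shuffled)
    n_test = int(len(shuffled) * test_size)
    return shuffled[n_test:], shuffled[:n_test]   # (train, test)

data = list(range(100))
train_a, test_a = split(data)
train_b, test_b = split(data)
print(len(train_a), len(test_a))              # sizes of the 80/20 split
print(train_a == train_b, test_a == test_b)   # same seed -> identical splits
```

Changing the seed produces a different but equally valid partition, which is why reporting the random_state matters when comparing model runs.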

    3.6 Testing and Error Analysis

Testing plays a key role in machine learning models because of its importance in assessing performance, efficiency, and robustness; it provides a clear understanding of how the model behaves. For this case study, a test dataset of 25 records was constructed, of which 23 were correctly classified and recognized. The tests were carried out in different environments according to the categories offered by the model, mainly turning lights on and off, heating, opening and closing blinds, and using household appliances such as the toaster. Interactions were simulated at different times with requests such as “Illuminate the dining room today”, “Can you turn on the heating in the bathroom when it freezes outside?”, and “Raise the shutters in the basement”. In this sense, testing and error analysis are essential components in developing and continuously improving such systems. As systems integrate machine learning-based services, they will facilitate the daily lives of the residents of a home. On the other hand, it is necessary to recognize that many activities in smart homes need preparation time before they can be carried out [31].

These evaluations not only allow us to discern the model’s behavior in depth but also provide a comprehensive picture of its adaptive and predictive capabilities. The consistency and reliability of these results underline the importance of testing and error analysis as intrinsic components of the iterative advancement and ongoing optimization of systems of this nature. In summary, testing, as the backbone of validation and continuous improvement, plays an imperative role in the consolidation and effectiveness of machine learning systems for performance optimization in intelligent residential contexts.

Fig. 2 outlines the procedure used to simulate the proposal. The illustration runs from the user’s speech or voice to the inferred outputs that determine the category, subcategory, and action to be performed by the system. The tests were run on the Anaconda platform through the Spyder tool.

This type of testing has provided valuable feedback that has helped strengthen the outlook for developing future models in the smart home environment. This approach can become a powerful solution to improve people’s lives in smart home environments and provide assistance and autonomy to those with physical limitations or disabilities, making homes more accessible, intuitive, and personalized to meet the individual needs of users. As is typical of machine and deep learning techniques, the models make predictions aimed at inferring activities that could occur in the future. On the test data, the model obtained an accuracy of 79.70% in category prediction, indicating that it can classify correctly. For the subcategory, the model achieved an accuracy of 71.43%, meaning it is usually correct in the subcategorization of instances. The action recognition model achieved an accuracy of 89.47%, correctly identifying the actions associated with the texts in most cases. These results show promising performance for each classification and prediction model. Fig. 3 represents the confusion matrices of the three models. The unnormalized category confusion matrix shows that the model correctly assigned most instances to their original category, although slight confusion occurred between similar categories. The unnormalized subcategory confusion matrix shows that the model successfully assigned most of the instances to their corresponding subcategories; however, some confusion between similar subcategories also occurred, slightly affecting overall accuracy. As for the unnormalized action confusion matrix, most instances were correctly classified into the corresponding actions, indicating that the model has learned distinctive patterns and features to accurately identify actions.

    Figure 2:Simulation of model operation

    Figure 3:Model confusion matrix

    3.7 Model Refinement

    In data science and machine learning, refining classification models is fundamental. After analyzing the confusion matrices and evaluating the accuracy of the models in classifying categories, subcategories, and actions, the need arises to improve their performance and address the areas where confusion occurs. To achieve greater accuracy and discrimination capability, additional high-quality data could be collected to enrich the training set. This would improve the representation of categories, subcategories, and actions in the model, allowing better generalization and distinction between them. However, this process is left as a milestone for a future study that will extend the capacity and robustness of the models. In addition, with the advancement of AI and the substantial increase in machine learning research [32], new techniques are expected to become available that support this type of proposal [33], reducing the burden and increasing performance and robustness [34].

    4 Results and Discussion

    The development of machine learning models has made it possible to recognize human actions through voice in the smart home domain. This was achieved thanks to the quality of the dataset and the application of machine learning best practices. The multinomial Naive Bayes classifier proved suitable for classification with discrete features. Using the CountVectorizer, each word in the text was counted to construct a numerical vector representing the presence or absence of each word. This text processing technique is widely used in natural language processing; CountVectorizer is convenient in classification and topic modeling tasks, where a numerical representation of texts is required to apply machine learning algorithms. By converting text documents into numerical vectors, this representation can be used to train classification models or to perform topic analysis based on the frequency of words in the texts. As mentioned above, the classification models were evaluated using precision and confusion matrix metrics. In the accuracy evaluation, the category model achieved an accuracy of 79.70%, the subcategory model 71.43%, and the action model 89.47%. These results indicate promising performance for each of the classification models across the different labels. However, when analyzing the confusion matrices, areas of confusion were identified. In the category model, some instances were incorrectly classified into similar categories. In the subcategory model, there was also confusion between related subcategories, indicating the need for further distinction in specific themes or characteristics. The action confusion matrix, on the other hand, revealed high accuracy for most instances.
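    The CountVectorizer plus multinomial Naive Bayes pipeline described above can be sketched with scikit-learn. The sentences and category labels below are illustrative assumptions, not the study's labeled dataset:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

# Illustrative smart-home sentences and category labels (not the study's data)
sentences = [
    "turn on the living room light",
    "turn off the kitchen light",
    "raise the bedroom blinds",
    "lower the blinds in the office",
    "set the heating to 22 degrees",
    "turn off the heating",
]
categories = ["light", "light", "blinds", "blinds", "heating", "heating"]

# CountVectorizer builds the vocabulary and counts word occurrences,
# turning each sentence into a sparse numerical count vector
vectorizer = CountVectorizer()
X = vectorizer.fit_transform(sentences)

# Multinomial Naive Bayes is suited to these discrete count features
model = MultinomialNB()
model.fit(X, categories)

# Predict the category of a new spoken command
new_command = vectorizer.transform(["please turn on the light"])
print(model.predict(new_command)[0])  # -> "light"
```

    In the full system, analogous models would be trained for the subcategory and action labels over the same vectorized sentences.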

    It should be noted that these conclusions are supported by the confusion matrix analysis. While the models achieved generally high accuracy, the confusion matrices reveal some confusion between similar categories or subcategories. These findings highlight the importance of a more detailed analysis and of seeking opportunities for improvement in future iterations. Beyond the raw counts, the normalized confusion matrix provides a more detailed view of how misclassifications are distributed in relation to the true classes. Fig.4 represents the normalized confusion matrix for the label "category". It can be seen that, although the overall accuracy of the model is acceptable, the dataset still needs improvement for categories that introduce noise, such as other, shutters, times, and toaster.
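    Row normalization, as used in the normalized confusion matrices, divides each row by the number of true instances of that class, so each row sums to 1 and shows how a true class is distributed over the predictions. A minimal scikit-learn sketch, with placeholder action labels:

```python
from sklearn.metrics import confusion_matrix

# Placeholder true and predicted action labels, for illustration only
y_true = ["on", "on", "off", "off", "off", "none"]
y_pred = ["on", "off", "off", "off", "on", "none"]

# normalize="true" divides each row by the true-class count,
# so the diagonal directly reads as per-class recall
cm_norm = confusion_matrix(
    y_true, y_pred, labels=["none", "off", "on"], normalize="true"
)
print(cm_norm)
```

    This per-class view is what exposes the weaker categories (e.g., those with few training examples) even when the overall accuracy is acceptable.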

    Fig.5 shows the normalized confusion matrix for the "subcategory" label, where slightly more noise appears in the inference. This is because each subcategory is associated with a category. Nevertheless, the results represent a valid approximation in the identification of categories and subcategories, without generating a distortion rate high enough to affect the action that the system should perform in response to the user's speech or voice.

    Fig.6 illustrates the performance for the output variable "action", which yields the best results of the three classification models. The bias rate is minimal and affects only the down, none, and off labels. Although the model's overall accuracy is high, there is still room for improvement.

    Figure 4:Normalized confusion matrix of the category model

    Figure 5:Normalized confusion matrix of the subcategory model

    Figure 6:Normalized confusion matrix of the action model

    Research such as [20] specified that this type of method facilitates the creation of voice-controlled systems, fulfilling essential human requirements and taking into account possible scenarios of human-machine collaboration, achieving a significant improvement. Kumar et al. [35] described that their proposed algorithms offer better performance than existing technologies, recommending the support vector machine algorithm. The authors in [36] discussed and recommended seizure model-centered machine learning algorithms for action recognition, while reference [16] pointed out that signal processing and machine learning techniques must be used to recognize speech and that traditional systems exhibit low performance. The approach of [37] describes that mobile applications for speech recognition are a viable alternative for processing tasks remotely or in highly critical contexts for the user.

    Hung et al. [21] proposed a control unit for a smart home using voice commands in Vietnamese, also based on machine learning techniques. Its main limitation is that it can only recognize voice commands in the Vietnamese language. Compared with that work, the main advantage of the proposed solution is that its voice recognition is multilingual, which allows a broader range of applicability and demonstrates that machine learning techniques are not limited to a particular set of commands but can be extrapolated to different contexts in smart home environments. In [38], the authors used machine learning algorithms for human recognition from audio signals; their findings show that the Naive Bayes algorithm achieves an accuracy of 60%, whereas the proposed solution achieves an average accuracy of 80.2%. In [39], the authors used the Naive Bayes classification algorithm to classify tasks linked to an intelligent virtual assistant based on speech recognition, obtaining the best accuracy among k-nearest neighbors, logistic regression, random forest, decision tree, and support vector machine. This work reaffirms that the multinomial Naive Bayes algorithm is well suited to speech recognition-related tasks.

    The mobile application mySmartCart [40] focuses on transforming the traditional shopping list into a digitized smart list that implements speech recognition through the Google Speech cloud service, with costs starting at $0.036 per minute and support limited to specific languages. It uses the Naive Bayes algorithm for text classification, achieving 76% accuracy. Unlike that study, the proposed solution incurs no monetary cost for converting voice into text and identifying the user's request. It is essential to note that this research focused on the application of speech recognition-based classification and natural language processing in the context of smart homes. Although traditional approaches, such as regression analysis or rule-based techniques, exist to address similar issues, they often struggle to capture the complexity of human interactions in an environment as dynamic and diversified as the smart home. The choice of the Multinomial Naive Bayes and RandomForestClassifier algorithms was based on their ability to deal with nonlinear relationships and to handle multiple features efficiently. This approach is well suited to the inherent nature of the problem at hand, offering promising results both in accuracy and in its ability to adjust to variations in natural language and human actions in the context of smart homes.

    To improve the model's performance, the dataset should be expanded by collecting more examples for each label. This would strengthen the representation of categories, subcategories, and actions, allowing better model generalization. In addition, advanced natural language processing techniques, such as word embeddings or pre-trained language models, could be explored to better capture semantic and contextual relationships in the texts. Another area for improvement is the tuning of model hyperparameters. A common hyperparameter is the vocabulary size, which defines the number of unique words in the text representation; a large vocabulary can enrich the information but also increases computational complexity. When neural networks are used, the size of the hidden layer controls the number of units in the inner layers of the model; a larger size may allow the model to learn more complex features but could also lead to overfitting. The learning rate is another crucial hyperparameter, determining how the model weights are updated during training; a low learning rate may lead to slow convergence, while a high rate could cause the model to oscillate and fail to converge. Finally, the number of epochs, representing the number of times the model sees the entire training set, also affects performance: an adequate number of epochs is essential to balance the fit to the training set against the ability to generalize to new data. Careful selection and adjustment of these hyperparameters is critical to obtaining well-calibrated models with optimal performance on various tasks [41]. An exhaustive search over hyperparameter combinations, or other optimization techniques, can help find a configuration that improves accuracy and reduces confusion. The results obtained demonstrate promising performance of the classification model in predicting categories, subcategories, and actions based on the "Sentence" tag texts. Although high accuracy was achieved overall, the confusion identified in the confusion matrices highlights the need for further improvement and refinement of the model.
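    An exhaustive hyperparameter search of the kind described above can be sketched with scikit-learn's GridSearchCV over a vectorizer-plus-classifier pipeline. The sentences, labels, and candidate parameter values below are illustrative assumptions:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.model_selection import GridSearchCV
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import Pipeline

# Illustrative sentences and labels (not the study's dataset)
sentences = [
    "turn on the light", "turn off the light", "switch on the lamp",
    "lower the blinds", "raise the blinds", "open the shutters",
    "turn on the heating", "turn off the heating", "set the thermostat",
]
labels = ["light", "light", "light", "blinds", "blinds", "blinds",
          "heating", "heating", "heating"]

pipeline = Pipeline([
    ("vect", CountVectorizer()),
    ("clf", MultinomialNB()),
])

# Candidate hyperparameters: vocabulary size cap and smoothing strength
param_grid = {
    "vect__max_features": [None, 50],  # limits the vocabulary size
    "clf__alpha": [0.1, 0.5, 1.0],     # additive (Laplace) smoothing
}

# 3-fold cross-validated exhaustive search over all combinations
search = GridSearchCV(pipeline, param_grid, cv=3)
search.fit(sentences, labels)
print(search.best_params_, search.best_score_)
```

    The same pattern extends to neural models, where the grid would instead range over hidden-layer size, learning rate, and number of epochs.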

    5 Conclusion

    This research proposes recognizing human actions through speech or voice using machine learning. The accuracy obtained for the category model is 82.99%, for the subcategory model 76.19%, and for the action model 90.28%, while the precision values are 79.70%, 71.43%, and 89.47%, respectively. Each model performed well in classifying the categories, subcategories, and actions based on the input texts. However, it is essential to consider that these results are based on specific metrics and a specific dataset. For future work, it is recommended that the feature vector integrate information on user preferences, context, and level of criticality, so that the system's responses remain relevant and in context. It is also suggested to explore models based on rules and fuzzy logic, which could provide a logical and transparent structure for establishing patterns that relate voice commands to specific actions and improve the system's ability to interpret the user's intentions coherently. We also plan to investigate deep learning models, such as neural networks, to improve the ability to capture complex relationships and subtle patterns in human speech, and then to explore their adaptability in different contexts and environments, such as variations in accents, noisy environments, or users with special needs. Finally, systems that integrate this type of service help increase confidence in human-system interaction and keep the person as the main actor before, during, and after using an intelligent system.

    Acknowledgement:Thanks to the Universidad César Vallejo for their collaboration in this study.

    Funding Statement: This work was supported by Generalitat Valenciana with HAAS (CIAICO/2021/039) and the Spanish Ministry of Science and Innovation under the Project AVANTIA PID2020-114480RB-I00.

    Author Contributions: Oscar Peña-Cáceres: Data curation, Formal analysis, Software, Methodology, Writing, Original draft. Henry Silva-Marchan: Conceptualization, Fundraising. Manuela Albert: Conceptualization, Research, Methodology, Writing (original draft), Acquisition of funds. Miriam Gil: Conceptualization, Research, Methodology, Validation, Writing (original draft), Fundraising.

    Availability of Data and Materials:The data used in this paper can be requested from the corresponding author upon request.

    Conflicts of Interest:The authors declare that they have no conflicts of interest to report regarding the present study.
