
    Emotion Based Signal Enhancement Through Multisensory Integration Using Machine Learning

    Computers, Materials & Continua, 2022, Issue 6

    Muhammad Adnan Khan, Sagheer Abbas, Ali Raza, Faheem Khan and T. Whangbo

    1 Pattern Recognition and Machine Learning Lab, Department of Software, Gachon University, Seongnam, Gyeonggi-do, 13120, Korea

    2 Riphah School of Computing & Innovation, Faculty of Computing, Riphah International University Lahore Campus, Lahore, 54000, Pakistan

    3 School of Computer Science, National College of Business Administration and Economics, Lahore, 54000, Pakistan

    4 Department of Computer Engineering, Gachon University, Seongnam, 13557, Korea

    Abstract: Progress in understanding multisensory integration in humans suggests that integration may result in the enhancement or depression of incoming signals. Different psychological and behavioral experiments show that when stimuli arrive from different perceptual modalities at the same time or from the same place, the signal with greater strength, under the influence of emotions, shapes the response accordingly. Current research in multisensory integration has not studied the effect of emotions, despite their significance and natural influence on multisensory enhancement or depression. There is therefore a need to integrate the emotional state of the agent with incoming stimuli for signal enhancement or depression. In this study, two different neural network-based learning algorithms were employed to learn the impact of emotions on signal enhancement or depression. The performance of the proposed multisensory integration system increased when emotion features were present during the enhancement or depression of multisensory signals.

    Keywords: Multisensory integration; sensory enhancement; depression; emotions

    1 Introduction

    The brain is the most complex part of the human body. The nervous system inside the human brain is composed of different types of cells, whose primary functional unit is called the neuron. Every neuron passes signals which give rise to thoughts, actions, feelings, and memories [1]. The brain has specific areas to deal with sensory input, and its most significant capabilities include generating appropriate responses to sensory input and processing information [2]. The human brain has a remarkable capability to perceive information about the outer world through various sensors, process this information, and generate a response accordingly [3]. Recently, several attempts have been made to simulate these functions of the brain in cognitive agents [2,4,5].

    Cognitive agents are systems that sense the environment from time to time and generate appropriate actions in pursuit of their agenda [4,5]. Most cognitive agents either cannot perform multisensory integration or do not deal efficiently with the sensory data received from multiple sensors. The integration of multiple sensory data streams may result in the enhancement or depression of incoming signals [6,7]. Enhancement and depression of different sensory stimuli is a very significant characteristic of multisensory integration [8]. Interactions in the multisensory environment are normally restrained by the primary unimodal stimuli, so the role of the secondary auditory input is to enhance the response of multisensory integration to non-perceived visual inputs [8].

    When different sensors observe the same event, several questions arise: upon receiving multiple measurements of a single event, how does the system generate a consistent perception with high accuracy [9-11]? Another important question is how systems make optimal decisions when various perceptions are measured through different sensors [12,13]. Although every sensor has its inherent deficiencies and limits, the redundant data sensed through different sensors can be fused to provide precise and accurate perception thanks to the sensory enhancement and depression phenomenon [1,7]. The system may produce a more accurate and reliable decision by diminishing perceptual uncertainty through this phenomenon [11,14]. All cognitive agents are expected to perform optimally in integration tasks, particularly in enhancement and depression, in the sense that they take uncertainty into account when integrating information coming from different sources [15]. It is important to elucidate how neurons can produce such optimal probabilistic inference based on sensory representations, considering the structure of the brain [16]. It is also significant to study how neural networks can acquire the knowledge about their environment that is the basis of such computations [8,13].

    Studies on human emotion recognition typically use facial expressions and emotional voice expressions, which appear to be the most complex visual and auditory stimuli [14,15]. The vocal stimuli of emotions may affect the perception of facial expressions, in the way that people may visualize felt textures or feel touched by visualizing textures [15,16]. Although the data are not directly useful for understanding how and when both the face and the voice are present, the two information streams are integrated [17,18]. Multisensory integration contributes to a sense of self and to an intensified presence of the perceiver in his or her world [19]. This aspect of multisensory integration is particularly relevant for the multisensory perception of emotion [20]. A comprehensive computational methodology is proposed in this paper, with the intent of understanding the functional features and advantages of emotions for the multisensory enhancement and depression phenomenon. The prime focus of this research is to observe the orientation of integrated incoming stimuli under the enhancement and depression criteria, with the assumption that both stimuli appear concurrently in the superior colliculus (SC) along with emotions. This paper provides the basics for understanding the impact of emotions on multisensory enhancement and depression, in order to design a model for cognitive agents that may generate more appropriate responses while interacting in dynamic environments. In the forthcoming sections, a brief review of the literature outlines the work done so far, and then the proposed methodology is described along with its results and outcomes.

    2 Related Work

    Research in neuroscience and the cognitive sciences has studied the sensory enhancement and depression phenomenon in humans and animals in great detail. Stein [8,21] was the first to demonstrate visual enhancement based on this neurophysiological phenomenon, through the integrative processing of multisensory stimuli in nonhuman species. A very significant characteristic of multisensory integration is that it is the foundation of the enhancement of visual stimuli. Interactions in the multisensory environment are normally restrained by the primary unimodal stimuli, so the role of the secondary auditory input is to enhance the response of multisensory integration to non-perceived visual inputs [8].

    In recent years, the advantages of multisensory integration have also inspired research in different application areas [14]. Redundant and complementary sensory data coming from different sensory systems can be integrated using different multisensory integration methods to increase the capability of the agent [22-24]. To use sensory data in the integration process, it needs to be effectively modeled. The sensor model captures the errors and uncertainty in the sensory data and measures the quality of the sensory data that can be used in subsequent integration functions [20]. After modeling, the sensory data may pass through three different types of processing stages: fusion, separate operation, and guiding or cueing [25-27]. It is also an open research question to understand when and how the human brain decides to opt between fusion, separation, and cueing [28].

    Events that happen in the external environment are more easily perceived and localized when the cross-modal stimuli originate from a similar location. The neuronal responses generated by the SC are enhanced at the physiological level when stimuli coming from different modalities fall into the relevant receptive fields of overlapping unisensory neurons [9,29]. Similarly, the responses of the SC and the accuracy of localization are depressed when the same stimuli originate from different places or fall into the receptive fields of neurons surrounding inhibitory regions [30,31]. Another significant point is that the localization of a visual stimulus is not depressed when a blurred or vague visual target is analyzed together with auditory stimuli placed in exterior space [27]. It has also been observed in various psychological and behavioral studies of humans and animals that, upon receiving dissimilar stimuli of diverse strengths from different perceptual modalities at the same time or from the same place, the signal having more strength shapes the response accordingly [6,32,33].

    Multisensory enhancement and depression are primarily based on the distance between different perceptual stimuli and the strength of the incoming stimuli [34,35]. Many researchers believe that the SC accounts for the significance of the primary visual stimuli because of its direct interaction with the vision sensors. This phenomenon occurs only when the observed stimuli have different strengths at varied time intervals [1,36-38]. For example, if a human agent sitting on a revolving chair is rotated several times, then when the chair finally stops, the received visual stimulus will be superseded by the acoustic stimulus received through the ear [37,38]. Initially the optical stimulus is superseded by the acoustic stimulus, but at later stages the acoustic stimulus also loses effectiveness when the agent tries to integrate the visual and audio stimuli. This happens because of the ever-changing state of the sensors (in this case the eyes and ears) [39]. This scenario implies that both stimuli are equally prioritized during the integration process, yet for signal enhancement and depression the two stimuli may be treated differently [37].

    In multisensory enhancement, when strong and weak perceptual stimuli are received, the stronger perceptual stimulus can be enhanced [40]. When several strong and reasonably similar perceptual stimuli are received, the result is more likely the enhancement of the unified or integrated signal rather than of any individual stimulus. Likewise, when two weak and relatively close perceptual stimuli are received, the enhancement may occur in the integrated output of either or both of the received stimuli [37,41]. Depression is the phenomenon in which the current state of the agent is reported as a misperception [42]. This misperception arises while determining the angle for localization [43], particularly when strong perceptual stimuli are perceptually dissimilar from each other; in such a scenario the output will be more depressed. Similarly, if two weak perceptual stimuli that are far from one another are received, the output will again be strongly depressed, with the result that no output is generated [44]. Multisensory enhancement, on the other hand, may improve the ability of an organism to detect targets in the environment. Deep and machine learning arose over the last two decades from the increasing capacity of computers to process large amounts of data. Machine learning approaches such as swarm intelligence [45], evolutionary computing [46] including the genetic algorithm [47], neural networks [48], deep extreme machine learning [49], and fuzzy systems [50-56] are strong candidate solutions in the fields of smart cities [57-59], smart health [60,61], wireless communication [62,63], etc.

    3 Proposed Methodology

    This research paper focuses on developing a framework to study the impact of emotion on the enhancement and depression of integrated signals in machines. This study allows functionally cognitive agents to integrate multisensory stimuli based on the most salient features of the sensed audio-visual information by considering internal and external psychophysical states for sensory enhancement or depression. The system model is illustrated in Fig. 1 as a solution to achieve multisensory integration for functional cognitive agents; however, the scope of this paper is limited to the multisensory enhancement and depression module of the proposed system model. The various components of the proposed model are encapsulated and operate within the scope of different interconnected cognitive subsystems. The model in Fig. 1 consists of four modules: the Sensory System (SS), which senses different stimuli from the environment; Perceptual Associative Memory (PAM), which perceives important information from the sensory input; Working Memory (WM), a preconscious buffer; and the Emotion System (ES), which analyzes and generates the emotional state of the system given the sensory input. The focus of this paper is limited to sensory enhancement and depression; other parts of the SC, such as the sensory integration module, the event history module, and WM, are described only briefly to outline the scope of the generic system proposed for multisensory integration.

    The sensory system is subdivided into internal and external sensors. The internal sensor senses the emotional states of conscious agents, while the two external sensors sense audio and visual stimuli. These sensors transfer data temporarily to the sensory buffer for further processing. This buffered information moves on to the perceptual associative memory, which is further divided into two submodules: the perception module and the superior colliculus. The superior colliculus module is responsible for the signal enhancement and depression phenomenon that integrates multisensory stimuli.

    3.1 Sensory System(SS)

    The prime purpose of this research is to study the influence of emotions on enhancing or depressing multisensory inputs. These sensory inputs are sensed through the SS, which is built with internal and external sensors. This module can receive information from different sensors and save it in sensory memory. Only auditory and visual sensors have been used in the proposed system, to reduce the computational complexity of the SS. These sensors receive the audio and visual stimuli and forward them to PAM for further processing. The role of the internal sensor is to sense the emotional state of the agent; this state is sent to the emotion module.

    Figure 1: System model for emotion-based sensory enhancement and depression during multisensory integration using artificial neural networks

    3.2 Perceptual Associative Memory(PAM)

    The sensed input is transferred from sensory memory to PAM for further processing. PAM consists of two sub-modules: the Perception Module (PM) and the Superior Colliculus (SC).

    3.3 Perception Module(PM)

    PM perceives low-level features from the input stimuli after receiving the input from the sensory memories. Enhancement and depression of sensory stimuli primarily depend on the perception of these features. For instance, if the pitch and frequency of an audio stimulus are very high, the agent should depress these auditory features so that visual and other sensory modalities may be integrated with the auditory signals. PM contains two sub-modules, an audio feature extractor and a video feature extractor. The perceived information is transferred to the SC, where signals are enhanced or depressed if deemed necessary, and finally integrated.

    3.4 Audio Feature Extractor

    Feature extraction plays a very important role in this study; its intent is to analyze the raw sensory data and extract important features from the incoming stimuli. The audio feature extractor separates important and useful audio data, including features such as frequency, formant value, pitch, and timbre.

    Frequency is the rate of oscillation of a sound wave, measured in hertz (cycles per second). Frequency does not depend on the vibrating object used to create the sound wave; rather, it is the rate at which the particles vibrate back and forth. Fewer vibrations in a given amount of time are normally harder for the human ear to notice and hence require more effort to attract attention. Frequency may be expressed as in Eq. (1):

    F = 1/p                                                       (1)

    Here F represents frequency and p represents the period, the duration of one vibration in seconds. It is worth noting that higher frequencies tend to be directional and generally dissipate their energy quickly, whereas lower frequencies are likely to be multi-directional and are harder to localize than higher frequencies.

    Pitch depicts the subjective psycho-acoustical sensation of high and low tones. Pitch is normally defined as the perceptual property of acoustic vibrations that allows the ordering of sounds on a frequency-related scale. It is significant to note that pitch can be measured only in sounds having a clear and stable frequency that can be differentiated from noise. The higher the frequency, the higher the pitch, as indicated by Eq. (2).

    Timbre is another perceptual quality of sound that differentiates between acoustic constructions. Generally, timbre distinguishes one acoustic impression from another even when the pitch and loudness of both sounds are similar. The intensities of the component sounds and their frequencies determine the waveform from which the timbre is computed. For instance, if sounds with frequencies of 100, 300, and 500 hertz and relative amplitudes of 10, 5, and 2.5 are fused to form a complex sound, then the amplitude a of the waveform at any time t would be represented by Eq. (3):

    a(t) = 10 sin(2π·100t) + 5 sin(2π·300t) + 2.5 sin(2π·500t)    (3)

    This timbre will be easy to recognize and well differentiable from other sounds having different amplitudes and a fundamental frequency of 100 hertz.
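    The complex tone of Eq. (3) can be synthesized directly; the sample rate below is an illustrative choice, not a value from the paper.

```python
import numpy as np

# Sketch: the complex tone of Eq. (3), partials at 100, 300, and 500 Hz
# with relative amplitudes 10, 5, and 2.5 summed into one waveform.
def complex_tone(t):
    return (10.0 * np.sin(2 * np.pi * 100 * t)
            + 5.0 * np.sin(2 * np.pi * 300 * t)
            + 2.5 * np.sin(2 * np.pi * 500 * t))

fs = 8000                  # sample rate in Hz (assumed for this sketch)
t = np.arange(fs) / fs     # one second of samples
a = complex_tone(t)        # the waveform a(t) of Eq. (3)
```

    Because all three partials are multiples of the 100 Hz fundamental, the waveform repeats every 0.01 s, which is why the timbre is perceived as a single pitched tone.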

    A formant is a concentration of acoustic energy around a particular frequency in the sound wave. There can be multiple formants at different frequencies, roughly one in each 1000-hertz band. Every formant corresponds to a resonance of the vocal tract; stronger formants mean more energy in the vocal stimuli. The vocal tract producing different sounds can be roughly modeled as a pipe closed at one end. Because the vocal tract evolves in time to produce different sounds, the spectral characterization of the voice signal is also time-variant; this temporal evolution may be represented with a voice-signal spectrogram. Under the closed-pipe model, the formant frequencies follow Eq. (4):

    Tn = (2n − 1)v / (4L)                                         (4)

    It is clear from Eq. (4) that several formants can be calculated, one for each value of n, where n is the sequence number of the formant Tn; e.g., the second formant is written T2. In Eq. (4), v is the speed of sound (34000 cm/s) and L is the length of the vocal tube.
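    Eq. (4) can be computed directly; the 17 cm vocal-tract length below is an assumed typical value, not a figure from the paper.

```python
def formant(n, L, v=34000.0):
    """n-th resonance of a pipe closed at one end (Eq. 4):
    T_n = (2n - 1) * v / (4 * L), with v in cm/s and L in cm."""
    return (2 * n - 1) * v / (4 * L)

# For an assumed vocal tract of roughly 17 cm, the first three
# formants fall at 500, 1500, and 2500 Hz, spaced about 1000 Hz apart
# as the text describes.
f1, f2, f3 = (formant(n, L=17.0) for n in (1, 2, 3))
```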

    Entropy, in the analysis of visual stimuli, is a quantity describing the amount of information in an image that must be coded for compression. Low entropy means that the image contains a considerable amount of background of the same color; images with high entropy are highly textured, with considerable contrast from one pixel to the next.

    E(ι) = −Σk pk log2 pk                                         (5)

    Eq. (5) returns a scalar value giving the entropy of an image ι, where pk is the normalized count of the k-th bin of the image histogram h, as defined in Eq. (6):

    pk = hk / Σj hj                                               (6)
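    A minimal sketch of the histogram-based entropy of Eqs. (5) and (6); the bin count and the toy images are illustrative assumptions.

```python
import numpy as np

def image_entropy(img, bins=256):
    """Shannon entropy of an image (Eqs. 5-6): build the intensity
    histogram, normalise it to probabilities, and sum -p * log2(p)."""
    h, _ = np.histogram(img, bins=bins, range=(0, bins))
    p = h / h.sum()
    p = p[p > 0]                            # skip empty histogram bins
    return float(-np.sum(p * np.log2(p)))

flat = np.full((8, 8), 128)                 # uniform image -> entropy 0
varied = np.arange(256).reshape(16, 16)     # every level once -> entropy 8 bits
```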

    Mean color here is treated as a measure of the color temperature of an object radiating light of a comparable color, normally expressed in kelvins (symbol K), the unit of absolute temperature. Eq. (7) gives the formula to calculate the mean color, where i and j index the current pixel and ck represents the color channel, with k ranging from 1 to 3 for red, green, and blue:

    μk = (1 / (M·N)) Σi Σj ck(i, j),  k = 1, 2, 3                 (7)

    where M and N are the image dimensions.

    RMS is used to transform images into numbers for analysis; it helps refine images and detect errors. For instance, if two images must be compared to evaluate whether they belong to the same event, or if the quality of a de-noised image ι̂ must be assessed, the restored image ι̂ can be measured by the RMS error of Eq. (8), where x and y index the pixels:

    RMS = sqrt( (1/(M·N)) Σx Σy (ι(x, y) − ι̂(x, y))² )            (8)
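    The RMS error of Eq. (8) is straightforward to compute:

```python
import numpy as np

def rms_error(img, restored):
    """Root-mean-square difference between an image and its restored
    version (Eq. 8): the square root of the mean squared pixel difference."""
    diff = np.asarray(img, dtype=float) - np.asarray(restored, dtype=float)
    return float(np.sqrt(np.mean(diff ** 2)))
```

    An RMS of zero means the restored image matches the original exactly; larger values indicate a worse restoration.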

    Correlation is represented by the correlation coefficient, which ranges between −1 and +1. The correlation coefficient is computed as

    r = Σi (xi − xm)(yi − ym) / sqrt( Σi (xi − xm)² · Σi (yi − ym)² )   (9)

    where xi and yi are the intensity values at corresponding pixels in two different images, and xm and ym are the mean intensities of these images. The correlation yields r = 1 if the participating images are completely similar, r = 0 if they are uncorrelated, and r = −1 if they are inversely correlated. This comparison helps to determine whether the context of an event has changed, as measured by the correlation coefficient.
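    Eq. (9) can be sketched directly over flattened pixel arrays:

```python
import numpy as np

def image_correlation(x, y):
    """Correlation coefficient of Eq. (9) between two images: the covariance
    of pixel intensities divided by the product of their standard deviations."""
    x = np.asarray(x, dtype=float).ravel()
    y = np.asarray(y, dtype=float).ravel()
    xm, ym = x.mean(), y.mean()
    num = np.sum((x - xm) * (y - ym))
    den = np.sqrt(np.sum((x - xm) ** 2) * np.sum((y - ym) ** 2))
    return float(num / den)
```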

    3.5 Emotion Module

    Emotions are representations of feelings and can be sensed through the internal sensors of the agent. Many researchers believe [10,19,21] that internal emotions can play a vital role in multisensory enhancement and depression. Progress in the field of cognitive agents has led toward employing agents to perform various tasks in daily life. For this purpose, the agents need to interact with other agents and humans, and so they need to keep track of their psychological and physical states. The role of emotions in action selection is very important, and emotions may influence the perception of the outside environment. In this model, the role of the emotion module is to understand emotional cues from audio and visual data. Humans can express several primary and secondary emotions, but the scope of this study is concerned with only six basic emotions. Once the agent perceives an emotion from incoming stimuli, it can generate its own sentiments.


    The expression of emotions is an important and dynamically changing phenomenon, particularly for human beings. The strategy used in this research for the classification and analysis of facial emotion comprises three significant steps. First, the important region features and skin color are detected from the incoming stimuli using a spatial filtering approach. The next step is to locate the positions of the facial features, represented as a region of interest, and a Bezier curve is created from the feature map of the region of interest. Furthermore, illumination calibration is performed in this step as an essential preprocessing criterion for precise recognition of facial expressions, as in Eq. (10):

    Y′ = (Y − min1)(Kh − K1) / (max1 − min1) + K1                 (10)

    where min1 and max1 are the least and greatest values of the Y component of the current image, min2 and max2 are the bounds of the transformed space, and K1 and Kh are given the values 30 and 220, respectively.
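    A hedged sketch of the illumination calibration of Eq. (10), assuming it is a linear remapping of the observed Y range onto the target range [K1, Kh] = [30, 220]:

```python
import numpy as np

# Assumed reading of Eq. (10): min-max remap of the luminance channel Y
# from its observed range [min1, max1] onto the fixed range [K1, Kh].
K1, Kh = 30.0, 220.0

def calibrate_illumination(Y):
    Y = np.asarray(Y, dtype=float)
    min1, max1 = Y.min(), Y.max()
    return (Y - min1) * (Kh - K1) / (max1 - min1) + K1

Y = np.array([[0, 255], [100, 200]])
calibrated = calibrate_illumination(Y)
```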

    In Eqs. (11) and (12), the regions and positions of the eyes and lips are extracted from the incoming visual stimuli. Eyes have several basic properties, such as symmetry, so they can easily be detected and transformed into an eye map, as mentioned in Eq. (12).

    To analyze resemblance in the incoming stimuli for similarity comparison, the curve displacements are normalized. This normalization step regenerates each curve width with a threshold value of 100 to maintain the aspect ratio. The distance dh(p, q) between two curves p(s), s ∈ [a, b] and q(t), t ∈ [c, d] is calculated to compare and compute the shape metric with Eq. (13):

    dh(p, q) = max{ sup_s inf_t ‖p(s) − q(t)‖, sup_t inf_s ‖q(t) − p(s)‖ }   (13)

    It is apparent from Eq. (13) that other feature points may also be used for the distance calculation. The emotion on the face depends upon the distances between the different focus points.
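    Assuming dh is the Hausdorff distance between sampled curves (an interpretation suggested by the notation, since the original equation image is lost), a minimal sketch:

```python
import numpy as np

def hausdorff_distance(p, q):
    """Hausdorff distance between two sampled 2-D curves: the larger of
    the two directed maxima of point-to-curve distances (assumed reading
    of Eq. 13)."""
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    # pairwise Euclidean distances between every sample of p and every sample of q
    d = np.linalg.norm(p[:, None, :] - q[None, :, :], axis=-1)
    return float(max(d.min(axis=1).max(), d.min(axis=0).max()))
```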

    Extracting features from the audio signals was somewhat difficult for the selected videos, which contain music along with speech. Humans are capable of focusing on the person speaking to them while ignoring environmental noise. Therefore, the first step in extracting emotions from the auditory input was to remove noise from the incoming stimuli, which requires a separate discussion and is not part of this paper. Once noise was removed and audio features were extracted from the auditory stimuli, the Levenberg-Marquardt (LM) algorithm was used to train an artificial neural network, with the Berlin Database of Emotional Speech as the training dataset. This database consists of 500 utterances spoken in happy, angry, sad, disgusted, bored, fearful, and neutral ways by 10 different actors reading ten different German texts. Around 50 samples of each emotion were taken to create the training dataset. After training and validation, 96% accuracy was obtained, and this trained network was later used to extract emotions from our dataset.

    3.6 Superior Colliculus(SC)

    The superior colliculus is the region in the midbrain where multimodal stimulus processing and the integration of audio-visual stimuli take place. The SC consists of a signal enhancement and depression module, a history manager, and a sensory integration module. The scope of this research paper is limited to the signal enhancement and depression module; the other modules are described only briefly to outline the scope of the proposed system model.

    3.7 Enhancement and Depression Module

    This module is responsible for the enhancement and depression of incoming stimuli before the integration of the different sensory cues. Before the sensory inputs are actually integrated, they are synchronized so that all sensory stimuli may receive similar attention and appropriate responses may be generated. If the enhancement and depression phenomenon is not employed in the system, the situation becomes winner-take-all: the sensor with the most intense features in its incoming stimuli captures all of the attention, while other sensors are neglected, resulting in inappropriate actions and responses. In this module, we use artificial neural networks to train the system on how and when to enhance or depress incoming stimuli. Two different learning methods have been used to train the system: LM and Scaled Conjugate Gradient.

    The LM algorithm is designed for loss functions formulated as a sum of squared errors. It utilizes the gradient vector and the Jacobian matrix, which maximizes the speed of training neural networks. The algorithm has the limitation that for very big datasets the Jacobian matrix becomes large and requires a lot of memory, but for medium-size datasets it works well. The main notion behind the LM algorithm is that the training process alternates between two regimes. In the first, it behaves like the steepest descent algorithm around an intricate region of the error surface, until the local curvature is appropriate for a quadratic approximation. In the second, it switches to a Gauss-Newton-like step, which significantly speeds up convergence.
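    The two-regime behavior described above can be sketched on a toy least-squares problem; the model being fitted, the starting point, and the damping schedule below are illustrative assumptions, not the paper's actual network training setup.

```python
import numpy as np

# Minimal Levenberg-Marquardt sketch on a toy problem: fitting y = a*exp(b*x).
# The damping factor mu blends between steepest descent (large mu) and a
# Gauss-Newton step (small mu), matching the two regimes described above.
def lm_fit(x, y, w, mu=1e-2, iters=50):
    for _ in range(iters):
        a, b = w
        r = y - a * np.exp(b * x)                      # residual vector
        J = np.column_stack([-np.exp(b * x),           # d r / d a
                             -a * x * np.exp(b * x)])  # d r / d b
        step = np.linalg.solve(J.T @ J + mu * np.eye(2), -J.T @ r)
        a2, b2 = w + step
        r_new = y - a2 * np.exp(b2 * x)
        if r_new @ r_new < r @ r:          # error dropped: accept the step
            w, mu = w + step, mu * 0.1     # and move toward Gauss-Newton
        else:
            mu *= 10.0                     # reject; back off toward gradient descent
    return w

x = np.linspace(0, 1, 20)
y = 2.0 * np.exp(1.5 * x)                  # noiseless target: a = 2, b = 1.5
w = lm_fit(x, y, np.array([1.0, 1.0]))
```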

    For the derivation of the LM algorithm, the variables j and k are used as neuron indices ranging from 1 to nn, where nn is the number of neurons in the network topology. The index i of the inputs of a neuron ranges from 1 to ni; the number of inputs may vary depending on the feature set provided in the data. Suppose neuron j has ni inputs. If neuron j is an input-layer neuron, it lies in the first layer of the network topology and all of its inputs are linked to the inputs of the network. The node symbol φ plays a flexible role in the derivation: the i-th input of neuron j is written φi,j, while φj denotes the output of neuron j. Thus, if φ carries one index it denotes a neuron output node, and if it carries two indices it denotes a neuron input node, where the first index identifies the input and the second the neuron. Eq. (14) gives the output of neuron j, where fj is the activation function of neuron j and the net value ψj is the weighted sum of the input nodes of neuron j:

    φj = fj(ψj),   ψj = Σi ωj,i φi,j + ωj,0                       (14)

    where φi,j is the i-th input of neuron j, weighted by ωj,i, and ωj,0 is the bias weight of neuron j. The non-linear relationship between the m-th output om of the network and the node ψj is written Fm,j. Using Eq. (14), one may notice that the derivative of ψj with respect to a weight is

    ∂ψj/∂ωj,i = φi,j

    and the slope sj of the activation function fj is

    sj = ∂fj(ψj)/∂ψj

    There is a complex non-linear relationship Fm,j(φj) between the output of the network and the output φj of a hidden neuron j, as shown in Eq. (18), where om is the m-th output of the network:

    om = Fm,j(φj)                                                 (18)

    The complexity of this non-linear function Fm,j(φj) is determined by how many neurons other than j lie between the network output m and neuron j. If neuron j is itself the output neuron m, then F′m,j = 1, where F′m,j is the derivative of the non-linear relationship between the network output m and neuron j. As mentioned earlier, the LM algorithm makes use of the Jacobian matrix, whose elements can be calculated by the chain rule:

    ∂ep,m/∂ωj,i = (∂ep,m/∂φj)(∂φj/∂ψj)(∂ψj/∂ωj,i)

    where the derivative of the non-linear function acts between the network output m and neuron j, and ep,m is the training error at output m. When pattern p is applied, this training error is defined as

    ep,m = τp,m − φp,m

    where τp,m represents the desired output and φp,m the calculated output. The computation process of a backpropagation-based algorithm can be used for the Jacobian matrix, with one slight difference: traditional backpropagation requires only a single backpropagation pass per pattern, whereas to obtain the consecutive rows of the Jacobian matrix the LM algorithm must repeat the backpropagation process separately for every output. The concept of backpropagating the δ parameter also differs, since in traditional backpropagation algorithms the δ parameter additionally holds the output error.

    The δ parameter in the LM algorithm is computed separately for every neuron j and every output m, with the computed error replaced by a unit value during the backpropagation process, as shown in Eq. (22):

    δm,j = sj F′m,j                                               (22)

    The elements of the Jacobian matrix can finally be computed by Eq. (23):

    ∂ep,m/∂ωj,i = −δm,j φi,j                                      (23)
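    Once the Jacobian is assembled, with one row per (pattern, output) error and one column per weight, a single LM weight update is Δw = (JᵀJ + μI)⁻¹ Jᵀ e, subtracted from the current weights. A minimal sketch (sign conventions vary with how the error is defined; this follows e = τ − φ as above, and the values in the test are illustrative placeholders):

```python
import numpy as np

def lm_update(J, e, w, mu):
    """One LM weight update: w_new = w - (J^T J + mu I)^(-1) J^T e,
    where J has one row per (pattern, output) error and one column per weight."""
    n = J.shape[1]
    return w - np.linalg.solve(J.T @ J + mu * np.eye(n), J.T @ e)
```

    Large μ shrinks the step toward scaled gradient descent, while μ → 0 recovers the Gauss-Newton step, which is the mechanism that lets LM interpolate between the two regimes.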

    3.8 History Manager

    If the incoming stimuli match stored signals, the history manager generates a strong signal to integrate the incoming stimuli according to already stored experiences. The history manager not only recognizes individual objects but also manages the state of each object along with the relevant details of the event.

    3.9 Integration Module

    Once processing is completed, all audiovisual signals are combined in the integration module. The input to this module carries information regarding the direction of a person, their identity, and their emotional state. The purpose of this module is to associate these heterogeneous signals coherently. The resultant output is a unified, maneuvering, affective signal carrying all relevant information about the objects.

    3.10 Working Memory(WM)

    After integration, the integrated stimuli are transferred to working memory.WM provides temporary storage for manipulating the information needed for understanding, learning, and reasoning about the current environment.Information reaches working memory from two sources.One source is the perceptual associative memory, from which the initially integrated information is passed to working memory; the other is the internal sensory memory, from which the emotional state of the agent is transferred to WM.The signal from WM is further transferred to the actuators to perform an action, and the same signal is transferred to the internal sensors through internal actuators.

    4 Results and Discussion

    In this paper, it is hypothesized based on the literature survey (discussed in Section 2) that emotions play a significant role in multisensory enhancement or depression during the sensory integration process in the superior colliculus.A machine with this behavior not only integrates multisensory input but also mimics biology at both the neuron and behavioral levels.It is crucial to have a sufficient amount of training data to train the neural network, cover the full operating range of the network, and secure its applicability with sufficient statistical significance.For dataset preparation, twenty (20) different videos collected from various movies and documentaries on diverse topics were used in the experiment.There were no particular selection criteria for these videos.The persons appearing in them were of different age groups and included both males and females.Although the persons performing different actions belonged to different ethnic backgrounds, all of them were communicating in the English language.All video clips were processed and regenerated into an identical format with a standard duration of 30 s.All introductory titles and subtitles were removed during preprocessing.We first separated the audio from the visual data, and then each video segment was converted into images.Visual features were extracted for each video; similarly, audio features were extracted to prepare a dataset containing more than 10000 records.The data used for training, testing, and validation of the ANN is shown in Fig.2.

    Figure 2: Features of the pre-generated dataset used for training, testing, and validation of the ANN

    In the ANN architecture presented in Section 3.7, the results are obtained for a network trained with the MSE as the objective function (error measure).Different settings of the neural network are therefore compared, with 10, 15, 20, and 25 neurons in the hidden layer across separate sets of experiments, and ten input variables representing different sensory features.The training length of the neural network was fixed at 1000 iterations, but in most experiments the network converged much earlier with good accuracy.
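Under these settings (10 sensory input features, a single hidden layer of 10 to 25 neurons, a squared-error objective, and at most 1000 iterations), an equivalent topology can be sketched with scikit-learn. Note that `MLPRegressor` trains with Adam/SGD/L-BFGS rather than LM, so this mirrors only the architecture and error measure, not the paper's training algorithm; the synthetic data is a stand-in for the real audiovisual dataset:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 10))        # 10 sensory features per record
y = np.tanh(X @ rng.normal(size=10))   # placeholder target standing in for the labels

results = {}
for hidden in (10, 15, 20, 25):        # hidden-layer sizes compared in the paper
    net = MLPRegressor(hidden_layer_sizes=(hidden,),
                       max_iter=1000, random_state=0)
    net.fit(X, y)                      # minimizes a squared-error-based loss
    results[hidden] = net.loss_        # final training loss, for comparison
```

Comparing `results` across hidden-layer sizes mirrors the experimental protocol of sweeping the hidden-layer width while holding everything else fixed.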

    In every experiment, a portion of the pre-generated data was randomly selected for performance validation and testing of the trained neural network, and the performance and regression graphs were saved for comparison.These data are used to calculate a validation error, which measures the accuracy of the trained network.Fig.3 shows the development of the validation error during network training with different hidden-layer settings, using the pre-generated dataset with emotion features.It is apparent from Fig.3 that as the number of neurons in the hidden layer increases, the performance of the system also increases.Initially, the network was trained with 10 neurons in the hidden layer and converged in the 90th iteration, while with 15 neurons it converged earlier, at approximately iteration 48, with a significant decrease in error.When the number of neurons in the hidden layer was increased further, it was observed that although the system converged in the 70th iteration, the mean error decreased to approximately 10^-3, as shown in Fig.3d.Fig.4 shows the regression comparison of network training with the same network settings using the LM algorithm with emotion features.It was observed during the experiments that with 10 neurons in the hidden layer the system achieved approximately 95% accuracy, as shown in Fig.4a.When the number of neurons was increased to 15, the accuracy rose to approximately 96.7% (see Fig.4b); there was only a very slight change with 20 neurons (see Fig.4c), but with 25 neurons in the hidden layer a high accuracy of 98% was achieved (see Fig.4d).During all experiments, the network converged in fewer than approximately 90 iterations and achieved a maximum accuracy of 98%.It is therefore concluded that with 25 neurons in the hidden layer and emotion features, approximately 98% accuracy can be achieved with the LM algorithm.

    Figure 3: Performance of the proposed system using the LM algorithm with different numbers of neurons in the hidden layer, with emotion features

    Figure 4: Regression comparison of the proposed system using the LM algorithm with different numbers of neurons in the hidden layer, with emotion features

    In another set of experiments, the same pre-generated dataset was used to train the network with other ANN algorithms, but a significant decrease in performance was observed.Fig.5 summarizes the performance of scaled conjugate gradient backpropagation with a similar network setting and the same numbers of neurons in the hidden layer, using the same MSE error measure to give a consistent basis for comparison.It was observed that with 10 neurons in the hidden layer the network converged in approximately 200 iterations, as shown in Fig.5a, but the performance remained low compared to the same number of hidden neurons trained with the LM algorithm (see Fig.4a).

    Figure 5: Performance of the proposed system using the scaled conjugate gradient algorithm with different numbers of neurons in the hidden layer, with emotion features

    In further experiments, 15, 20, and 25 neurons were used in the hidden layer, but no significant change in performance was observed after increasing the number of neurons beyond 15, as shown in Fig.5b.Fig.6 gives the regression plots for a similar network setting.With 10 neurons in the hidden layer, although the network converged early, the overall validation and testing accuracy remained at 89% (see Fig.6a); increasing the number of neurons beyond 15 raised the accuracy of the network trained with emotion features to approximately 91% (see Fig.6b), which is still low compared to training with the LM algorithm.This indicates that changing the training algorithm brings no improvement in the performance and accuracy of the neural network, and it may therefore be concluded that the LM algorithm with 25 neurons in the hidden layer gives the best results for training the proposed system.

    It is hypothesized in this research, in accordance with different theories of human psychology (see Section 2), that emotions play a significant role in the enhancement and depression of sensory stimuli.It was therefore necessary to conduct another set of experiments to test whether the behavior of the trained network is consistent with this hypothesis.Consequently, another set of experiments was conducted by training with the LM algorithm on the same audio and visual stimuli, but without presenting the emotion features extracted from the visual and auditory stimuli.

    The computational complexity of the proposed system is given in Fig.7, which clearly shows that the complexity increases with the number of neurons in the hidden layer; however, increasing the number of hidden neurons also increases the accuracy of the neural network.

    Figure 6: Regression comparison of the proposed system using the scaled conjugate gradient algorithm with different numbers of neurons in the hidden layer, with emotion features

    Figure 7: Complexity factor of the neural network used in the proposed system with different numbers of neurons in the hidden layer

    Tab.1 compares the root mean square error (RMSE), and Tab.2 compares the regression, with and without the presence of emotion features.It is apparent from Tab.1 and Tab.2 that when emotion features are present in the dataset, the RMSE decreases significantly as the number of neurons is increased, up to a limit of 25 neurons in the hidden layer.Similarly, Tab.2 shows that the accuracy of the system increases when emotion features are present during the training of the proposed system for signal enhancement and depression.
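The two comparison metrics in Tab.1 and Tab.2 can be computed directly from network targets and predictions; a small sketch with illustrative values only:

```python
import numpy as np

def rmse(targets, predictions):
    """Root mean square error, the measure compared in Tab.1."""
    t, p = np.asarray(targets), np.asarray(predictions)
    return float(np.sqrt(np.mean((t - p) ** 2)))

def regression_r(targets, predictions):
    """Pearson correlation between targets and outputs (the regression R of Tab.2)."""
    return float(np.corrcoef(targets, predictions)[0, 1])

t = [1.0, 2.0, 3.0, 4.0]   # desired outputs (illustrative)
p = [1.1, 1.9, 3.2, 3.8]   # network outputs (illustrative)
print(rmse(t, p))          # small for close predictions
print(regression_r(t, p))  # close to 1 for a well-trained network
```

Lower RMSE and an R closer to 1 on held-out data correspond to the improvements reported when emotion features are included.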

    Table 1:Comparison of RMSE with and without emotion using LM algorithm for training proposed system

    Table 2: Comparison of regression with and without emotion using LM algorithm for training proposed system

    5 Conclusion

    In this paper, various significant features of audio and visual signals and their impact on signal enhancement and depression are demonstrated.This paper proposes an ANN-based system for signal enhancement and depression during the integration of the senses, i.e., audio and visual.It provides a deeper insight into the modules of which this architecture consists.The architecture gives the agent the ability to localize objects based on enhanced and depressed signals and to find the effect of emotions on enhanced or depressed data using the audio and visual senses.It was observed that the enhancement and depression phenomena may take place in a large number of combinations of weak and strong stimuli.It was also observed that emotion plays a significant role in the enhancement and depression phenomena, and that without the presence of emotions the accuracy of signal enhancement and depression decreases significantly.

    Acknowledgement: Thanks to our families and colleagues who supported us morally.

    Funding Statement:This work was supported by the GRRC program of Gyeonggi province.[GRRCGachon2020(B04),Development of AI-based Healthcare Devices].

    Conflicts of Interest:The authors declare that they have no conflicts of interest to report regarding the present study.
