
    Research on Driver’s Fatigue Detection Based on Information Fusion

    Computers, Materials & Continua, 2024, Issue 4

    Meiyan Zhang, Boqi Zhao, Jipu Li, Qisong Wang*, Dan Liu, Jinwei Sun and Jingxiao Liao*

    1 Harbin Institute of Technology, School of Instrument Science and Engineering, Harbin, 150036, China

    2 Department of Industrial and Systems Engineering, The Hong Kong Polytechnic University, Hong Kong, 100872, China

    ABSTRACT Driving fatigue is a physiological phenomenon that often occurs during driving. After a driver enters a fatigued state, attention becomes lax, responses slow down, and the ability to handle emergencies drops significantly, which can easily cause traffic accidents. Studying driver fatigue detection methods is therefore significant for ensuring safe driving. However, the fatigue state of actual drivers is easily interfered with by the external environment (glasses and light), which weakens the reliability of fatigue driving detection. Moreover, fatigue develops slowly: it manifests first in physiological signals and only later in facial images. To improve the accuracy and stability of fatigue detection, this paper proposed a driver fatigue detection method based on image information and physiological information, designed a fatigue driving detection device, built a simulated driving experiment platform, and collected facial and physiological information from drivers during driving. Finally, the effectiveness of the fatigue detection method was evaluated. Eye-movement feature parameters and physiological signal features reflecting drivers' fatigue levels were extracted, and a driver fatigue detection model was trained on these features to classify fatigue and non-fatigue states. Accuracy rates for the image, electroencephalogram (EEG), and blood oxygen signals were 86%, 82%, and 71%, respectively. Information fusion theory was applied to improve the detection effect; the fatigue features were fused using multiple kernel learning and canonical correlation analysis, increasing detection accuracy to 94%. The fatigue driving detection method based on multi-source feature fusion thus effectively detected the driver's fatigue state, with a higher accuracy rate than any single information source. In summary, fatigue driving monitoring has broad development prospects and can be used in traffic accident prevention and wearable driver fatigue recognition.

    KEYWORDS Driving fatigue; information fusion; EEG; blood oxygen

    1 Introduction

    Fatigue driving refers to the decline of a driver's physiological function due to long-term continuous driving or lack of sleep, which affects regular driving operation [1]. Behaviorally, it is mainly manifested by prolonged eye closure, yawning, and frequent nodding; physiologically, it is primarily indicated by a slowed heartbeat and slowed breathing. These characteristics can be used to identify fatigue driving behavior and warn drivers to avoid potential traffic accidents. It is therefore of great significance to study driver fatigue recognition algorithms. Driver fatigue is a factor in 27% of all major traffic accidents, and in railway traffic the proportion of train accidents caused by driver fatigue is as high as 30%–40% [2]. Although napping while driving is a severe violation of discipline, drivers still frequently experience fatigue due to factors such as nighttime driving, long driving hours, and short rest periods [3]. Many accidents are directly or indirectly related to drivers' states. Toyota investigated the causes of traffic accidents [4], as shown in Fig. 1: among the three factors considered, 92.9% of traffic accidents are directly or indirectly caused by humans. A driver's state directly affects the operation error rate and the ability to deal with emergencies, and driving fatigue has become an invisible killer behind traffic accidents [5].

    Figure 1: Statistics of accident factors

    The prevention of driver fatigue is a focus of the road traffic safety field, and the recognition of driver fatigue is the prerequisite for preventing it. Driver fatigue recognition methods have been extensively developed, and several techniques (lane departure detection, image recognition) have been applied to certain vehicles [6].

    (1) Driver fatigue detection based on facial features evaluates fatigue according to facial expression when the driver enters the fatigue state [7]. As a non-contact method, it does not interfere with driving. However, objective factors such as light and occlusion (glasses and hair) easily affect image information collected in the driving environment. Balasundaram et al. developed a method to extract specific eye features to determine the degree of drowsiness but did not consider the influence of blinking and other factors on fatigue driving [8]. Garg recognized a driver's sleepiness by measuring parameters such as blinking frequency, eye closure, and yawning [9]. Wang et al. monitored drivers' eye movement data and behavior and selected indicators such as the proportion of eye-closing time through experiments [10]. Ye et al. proposed a driver fatigue detection system based on a residual channel attention network and head pose estimation; the system used RetinaFace for face localization and output five facial landmarks [11].

    (2) Physiological information most truly reflects the fatigue state of drivers, and the data volume of physiological signals is much smaller than that of image information [12]. The disadvantage is that collecting various physiological signals is invasive to the human body. Physiological signals commonly used to detect driver fatigue include the EEG, electrocardiogram (ECG), electromyogram (EMG), respiration, pulse, etc. When the driver is tired, the driver's physiological parameters change relative to the non-fatigue state. Based on the detection model established by Thilagaraj, EEG signals were classified and fatigue detection was realized [13]. Lv et al. preprocessed the EEG signal, selected feature values, used a clustering algorithm to classify driver fatigue, and labeled the EEG feature data set according to driving quality [14]. Watling et al. conducted a comprehensive and systematic review of recent techniques for detecting driver sleepiness using physiological signals [15]. Murugan et al. detected and analyzed drivers' states by monitoring the electrocardiogram signal [16]. Priyadharshini et al. calculated the oxygen level in the driver's blood to assess the driver's sleepiness [17]. Sun et al. studied changes in blood oxygen saturation under mental fatigue [18]. During their experiment, the blood oxygen saturation of brain tissue was monitored in real time using near-infrared spectroscopy. Results showed that after participating in the task, mental fatigue increased and reaction speed decreased, while the blood oxygen saturation of brain tissue increased compared with non-participation. It can be concluded that mental fatigue affected performance, and that the blood oxygen saturation level of brain tissue was affected more by a subject's motivation and compensation mechanisms than by the resting level.

    (3) Fatigue driving detection based on vehicle driving characteristics indirectly predicts the fatigue state by measuring the vehicle's speed, curve size, and angle of deviation from the lane [19]. After entering a fatigued state, driving ability is significantly reduced and attention is low, which causes the vehicle to deviate from the lane. The advantage of this method is that it can ensure the safety of drivers to the maximum extent. Still, its disadvantages are low accuracy and poor detection at night, so it is rarely used. Mercedes-Benz's Attention Assist extracts speed, acceleration, steering angle, and other vehicle information and processes them comprehensively to detect the fatigue state [20].

    However, due to the complex mechanism of fatigue, its numerous influencing factors, and significant individual differences, many difficulties remain in the practical application of driver fatigue recognition, so the road traffic industry still mainly adjusts or limits driving time to avoid fatigue driving. In addition, a single feature generally has a high false detection rate and cannot accurately detect the fatigue state, while fatigue behavior is related to both facial image features and other physiological parameters. To improve the accuracy and stability of fatigue driving detection, this paper focused on fatigue feature extraction and information fusion, designed a comprehensive simulated driving experiment, established a fatigue driving sample dataset, and verified the detection effect. The rest of the paper is organized as follows: Section 1 introduces the research status of fatigue driving. Section 2 introduces methods of fatigue feature extraction based on drivers' facial images and biological signals. Section 3 fuses heterogeneous image information and physiological signals in fatigue scenarios to improve the accuracy and robustness of fatigue detection. Section 4 introduces the construction of a comprehensive simulated driving experiment platform, the design of the experimental process, and data collection, and verifies the various fatigue analysis algorithms on the data collected during the experiments. Section 5 concludes the paper and puts forward prospects.

    2 Research on Fatigue Feature Extraction Method

    When entering a fatigued state, drivers show varying degrees of physiological function decline [21], which can be judged by analyzing drivers' facial images and biological signals. This section introduces the extraction of fatigue-related feature parameters from the collected facial images and physiological signals.

    2.1 Fatigue Feature Extraction Based on Image Information

    Driver fatigue detection based on image information is the most widely applied method. When entering the fatigue state, drivers' eye features change noticeably, which can be used to evaluate the fatigue state.

    2.1.1 Driver Face Detection Algorithm Based on Histogram of Oriented Gradients (HOG)

    For a fatigue detection system built on machine vision, the image of the driver's face must be obtained first, so that facial key points and driver fatigue features can subsequently be extracted. Applying a fast and accurate face detection algorithm is therefore essential for a fatigue detection system.

    Compared with other image detection algorithms, HOG quantifies the gradient direction of the image cell by cell to characterize the structural features of object edges [22]. Because the algorithm quantizes position and orientation space, it reduces the influence of object rotation and translation. Moreover, the gradient information of the object is not easily affected by ambient light, and the algorithm consumes few computing resources, so it is well suited to a driver fatigue detection system with significant light variation and limited computational power. The face detection process based on the HOG algorithm is as follows:

    (1) Image normalization

    Since camera input images are mostly in RGB color space with high feature dimensionality, which is not conducive to direct processing, they must first be converted to grayscale images as in Eq. (1). The image was then normalized using Gamma compression to reduce the effects of light, color, and shadow in the actual driving scene, as shown in Eq. (2), where (x, y) denotes a pixel in the image:

    Gray(x, y) = 0.299R(x, y) + 0.587G(x, y) + 0.114B(x, y)   (1)

    I(x, y) = Gray(x, y)^γ   (2)

    (2) Image pixel gradient calculation

    Due to considerable gradient variation at an object's edges, contour information G(x, y) can be obtained by traversing the pixels and computing differences. Eqs. (3) and (4) give the gradients in the horizontal and vertical directions:

    Gx(x, y) = H(x + 1, y) − H(x − 1, y)   (3)

    Gy(x, y) = H(x, y + 1) − H(x, y − 1)   (4)

    where H(x, y) is the pixel value of the image. After obtaining Gx(x, y) and Gy(x, y) of each pixel through traversal, the gradient magnitude G(x, y) and gradient direction α(x, y) were calculated using Eqs. (5) and (6):

    G(x, y) = √(Gx(x, y)² + Gy(x, y)²)   (5)

    α(x, y) = arctan(Gy(x, y)/Gx(x, y))   (6)
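As a rough sketch of this step, the central-difference gradients of Eqs. (3) and (4) and the magnitude and direction of Eqs. (5) and (6) can be computed with NumPy; the function name and the zero-padded border handling are illustrative, not from the paper:

```python
import numpy as np

def pixel_gradients(H):
    """Per-pixel gradient magnitude and direction for a grayscale image H,
    using the central differences of Eqs. (3)-(6)."""
    H = H.astype(np.float64)
    Gx = np.zeros_like(H)
    Gy = np.zeros_like(H)
    # Horizontal and vertical central differences (borders left at zero).
    Gx[:, 1:-1] = H[:, 2:] - H[:, :-2]
    Gy[1:-1, :] = H[2:, :] - H[:-2, :]
    G = np.sqrt(Gx**2 + Gy**2)   # gradient magnitude, Eq. (5)
    alpha = np.arctan2(Gy, Gx)   # gradient direction, Eq. (6)
    return G, alpha
```

For a horizontal intensity ramp, interior pixels get magnitude 2 and direction 0, matching the analytic gradient.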

    (3) Gradient histogram construction

    The input image was evenly divided into several small regions of the same size, called cell units, each containing n × n pixels. Then, the gradient histogram of each cell was computed.

    (4) Merging and normalization of gradient histograms

    To cope with the large foreground-background light contrast in the driving environment, multiple cells were combined into overlapping unit blocks to reduce gradient intensity variation, and the gradient information in each block was then normalized to improve algorithm performance. Finally, the unit block was used as a sliding window to scan the cockpit image from the camera, extract HOG features, and input them to the classifier. A non-maximum suppression algorithm [23] was used to merge the multiple face candidate frames in the output, and the final face candidate boxes were obtained, as shown in Fig. 2.

    Figure 2: Schematic diagram of driver face position detection

    The driver's position is relatively fixed in the actual driving environment, with the face in the middle of the collected images, so the HOG algorithm rarely loses the detection target due to changes in the driver's facial expression or driving posture. The face detection algorithm based on HOG features therefore has excellent accuracy. Furthermore, since the HOG algorithm mainly computes image gradients, most of its operations are linear, giving it low time complexity and good real-time performance, which makes it well suited to onboard systems with limited CPU computing capacity.

    2.1.2 Eye Feature Points

    Face key point detection describes the facial geometric features of face images in machine vision. Given an image with face information, a face key point detection algorithm can identify and locate facial geometric features such as the eyes, mouth, and nose according to target requirements. In this section, based on the face position information output by the HOG face detection algorithm, a cascade regression tree algorithm was used to locate the key points of the driver's face, laying the foundation for subsequent fatigue feature extraction.

    Eye feature point localization belongs to the category of face alignment, which refers to finding facial landmarks such as the eyes, eyebrows, nose, mouth, and face outline from detected faces. The data set used in this paper is 300-W, which annotates 68 feature points per face and contains more than 3000 samples, meeting the requirements of the fatigue driving detection scenario. Table 1 lists the pose information of the dataset.

    Table 1: 300 W data set feature points

    The Ensemble of Regression Trees (ERT) algorithm used in this paper is a face alignment algorithm based on regression trees [24]. By cascading Gradient Boosted Decision Trees (GBDT) [25], the algorithm makes the face shape gradually regress from the initial feature point positions to the actual positions. The algorithm is implemented in the machine learning toolkit Dlib [26]. ERT was used to align the detected face area and locate the facial landmarks; the detection effect is shown in Fig. 3. Compared with traditional algorithms, ERT can handle missing or uncertain labels during training and minimizes the loss function while performing shape-invariant feature selection. The following introduces eye movement feature extraction based on eye feature points.

    Figure 3: Schematic diagram of eye feature point detection

    (1) Reference Fatigue Index

    The proportion of time the driver's eyes are open or closed per unit time can reveal the fatigue state. PERCLOS is defined as the proportion of eye-closed frames to the total number of detection frames per unit time. PERCLOS takes the ratio of the closed area of the eye to the total eye area as the reference standard; thresholds of 50%, 70%, and 80% correspond to the EM, P70, and P80 criteria, respectively.
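As an illustration, PERCLOS over a window of video frames can be computed from per-frame eyelid-closure ratios; the function name and the frame-wise closure input are hypothetical, and the default threshold corresponds to the P80 criterion:

```python
def perclos(closure_ratios, threshold=0.8):
    """PERCLOS: fraction of frames whose eyelid-closure ratio meets
    the criterion (P80: eye at least 80% closed).
    `closure_ratios` holds one closure value in [0, 1] per frame."""
    closed = sum(1 for c in closure_ratios if c >= threshold)
    return closed / len(closure_ratios)
```

For example, a window in which three of five frames exceed the 80% closure threshold gives PERCLOS = 0.6.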

    (2) Eyelid Ratio (ER)

    ER, the ratio of the eye's height to its width, was introduced to analyze blinking movements. When the eyes are open, ER is a relatively stable value, and the ratio changes when the eyes close. As shown in Fig. 4, each eye is calibrated by six coordinate points P1–P6, taking the right eye as an example.

    The numerator represents the longitudinal Euclidean distance between points, and the denominator the transverse Euclidean distance. The longitudinal points are weighted so that coordinates at different distances use different ratio factors and the computed ER values share the same scale.
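A minimal sketch of this computation, assuming the widely used eye-aspect-ratio form ER = (‖P2 − P6‖ + ‖P3 − P5‖) / (2‖P1 − P4‖) with the landmark order of Fig. 4; the exact weighting used in the paper may differ:

```python
import math

def eyelid_ratio(pts):
    """Eyelid ratio from the six eye landmarks P1..P6:
    ER = (|P2-P6| + |P3-P5|) / (2 * |P1-P4|).
    `pts` is a list of six (x, y) tuples in the order P1..P6."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])
    p1, p2, p3, p4, p5, p6 = pts
    return (dist(p2, p6) + dist(p3, p5)) / (2.0 * dist(p1, p4))
```

An open eye gives a roughly constant ER; as the eyelids close, the vertical distances and hence ER drop toward zero.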

    Figure 4: Four typical degrees of eyelid closure

    2.2 Fatigue Characteristics Based on Biological Signals

    In recent years, many studies have investigated the correlation between biological signals and cognitive learning tasks, driving fatigue, and related states [15]. However, the measurement of fatigued driving status has not been standardized, and further research is needed. Studies in cognitive science and neuroscience have shown that changes in fatigue state are closely related to physiological information, especially to the activity of the human cerebral cortex and the blood oxygen saturation level, laying a theoretical foundation for recognizing human fatigue from cerebral cortex activity and blood oxygen. Physiological electrical signal analysis is the most objective and effective way to monitor human fatigue. The physiological signal features are described as follows.

    2.2.1 Noise Reduction of EEG Signals

    EEG signals acquired in the driving environment carry multiple interfering components, including non-physiological and physiological artifacts [27]. Filtering methods for EEG artifact components must therefore be investigated to facilitate subsequent processing. Ocular artifacts caused by blinking and eye movement are the most common physiological artifacts in EEG signals. Typical ocular artifact waveforms are shown in Fig. 5, with the characteristic ocular disturbances marked in red.

    Figure 5: Typical electroocular artifact

    The amplitude of the ocular artifact is much larger than that of the EEG, and its frequency band of 0 to 16 Hz overlaps the EEG band of 0 to 32 Hz, so traditional filtering methods cannot remove ocular artifacts effectively. The core step of the Hilbert-Huang transform [28] is empirical mode decomposition (EMD), which adaptively decomposes a signal into a series of intrinsic mode functions (IMF) [29]. The Hilbert transform is then applied to each IMF to obtain the instantaneous frequency of each component, and the signal is filtered according to this instantaneous frequency. As shown in Fig. 6, multiple IMF components are obtained by EMD decomposition of the input signal X(t). In the first row, the original signal is the EEG before filtering physiological artifacts, and IMF1 to IMF8 are the component signals obtained after empirical mode decomposition. The processed signal is obtained by filtering the physiological artifacts out of the original signal.

    Figure 6: IMF component of EEG signal

    For each IMF, the instantaneous frequency f(t) was calculated. For a signal s(t) = a(t)cos(ψ(t)), the instantaneous frequency is:

    f(t) = (1/2π) · dψ(t)/dt

    The main frequency band of EEG signals is generally 1–32 Hz, so each IMF component was kept or discarded based on its instantaneous frequency: x_i(t) = imf_i(t) if f_i(t) lies within 1–32 Hz, and x_i(t) = 0 otherwise, where imf_i is the i-th order intrinsic mode function of the signal s_i(t) after EMD decomposition.
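A sketch of this IMF screening step, assuming the mean instantaneous frequency of each (roughly mono-component) IMF is estimated from the FFT-based analytic signal; EMD itself is omitted here and the list of IMFs is taken as given:

```python
import numpy as np

def mean_inst_freq(x, fs):
    """Mean instantaneous frequency of a roughly mono-component signal,
    via the FFT-based analytic signal and the phase derivative."""
    n = len(x)
    X = np.fft.fft(x)
    h = np.zeros(n)           # spectral weights for the analytic signal
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0
    else:
        h[1:(n + 1) // 2] = 2.0
    analytic = np.fft.ifft(X * h)
    phase = np.unwrap(np.angle(analytic))
    inst_f = np.diff(phase) * fs / (2 * np.pi)   # f(t) = (1/2pi) dpsi/dt
    return float(np.mean(inst_f))

def keep_eeg_imfs(imfs, fs, band=(1.0, 32.0)):
    """Keep only IMF components whose mean instantaneous frequency lies
    inside the EEG band; the rest are treated as artifacts."""
    return [imf for imf in imfs
            if band[0] <= mean_inst_freq(imf, fs) <= band[1]]
```

A 10 Hz component passes the 1–32 Hz screen while a 50 Hz component is rejected, so only in-band IMFs contribute to the reconstructed EEG.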

    Significant jitters caused by blinking and eye rotation were then processed according to the standard deviation of the signal to eliminate large excursions caused by ocular artifacts. The processing rule is as follows:

    where P_i is the sequence of extreme values of the i-th order IMF component, and m_i is the mean value of the i-th order IMF component.

    Then, the filtered IMF components were linearly reconstructed to obtain the new EEG signal h(t) = Σ_i x_i(t).

    The reconstructed EEG signal waveform is shown in Fig. 7. The red dashed line is the raw EEG signal, in which two physiological artifacts caused by blinking and eye rotation can be observed. The blue waveform is the signal obtained after EMD decomposition and component filtering.

    Figure 7: EEG signal waveform after filtering ocular electrical artifacts

    2.2.2 Feature Extraction of EEG Signals

    Physiological studies have shown that the rhythms of the prefrontal EEG are closely related to the degree of brain arousal [30]. The δ rhythm is the primary frequency component of deep sleep: when the brain enters deep sleep, breathing gradually deepens, the heart rate slows, blood pressure and body temperature drop, and the EEG signal energy in the 1–4 Hz band increases significantly. The θ rhythm dominates during drowsiness, the α rhythm during relaxed wakefulness, and the β rhythm during concentration, anxiety, and stress. However, due to the randomness and non-stationarity of EEG signals, the absolute band energy cannot accurately represent the brain state or cope with individual differences. This paper used the Fourier transform [31] to convert the signal from the time domain to the frequency domain (Eq. (10)); discrete EEG acquired by the acquisition device is processed with the discrete Fourier transform.

    Then, EEG features based on band energy ratios were adopted: the energy of each rhythm band is divided by the total energy of the 1–32 Hz band, i.e., R_band = E_band / (E_δ + E_θ + E_α + E_β).
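A sketch of the band-energy-ratio features computed from the discrete Fourier spectrum; the band boundaries (δ 1–4 Hz, θ 4–8 Hz, α 8–13 Hz, β 13–32 Hz) are assumed typical values, not taken from the paper:

```python
import numpy as np

BANDS = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 13), "beta": (13, 32)}

def band_energy_ratios(x, fs, bands=BANDS):
    """Energy of each EEG rhythm as a fraction of the total 1-32 Hz
    energy, computed from the discrete Fourier spectrum."""
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    power = np.abs(np.fft.rfft(x)) ** 2
    total = power[(freqs >= 1) & (freqs < 32)].sum()
    return {name: power[(freqs >= lo) & (freqs < hi)].sum() / total
            for name, (lo, hi) in bands.items()}
```

Because the four bands partition 1–32 Hz, the four ratios sum to one; a pure 6 Hz tone, for instance, lands almost entirely in the θ band.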

    2.2.3 Reflection Oxygen Saturation Calculation

    Blood oxygen saturation (SpO2) is the percentage of the actual oxygen content dissolved in human blood relative to the maximum oxygen content the blood can carry [32]. In terms of hemoglobin concentrations it is defined as:

    SpO2 = C_HbO2 / (C_Hb + C_HbO2) × 100%

    where C_Hb is the concentration of deoxygenated hemoglobin and C_HbO2 is the concentration of oxygenated hemoglobin.

    Studies have shown that the fluctuation and distribution ranges of drivers' blood oxygen saturation during fatigued driving are wider than during normal driving [33]. Distribution characteristics can therefore be obtained from statistics of blood oxygen saturation during normal and fatigued driving, represented by the mean value (Eq. (15)) and standard deviation (Eq. (16)).
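The SpO2 definition and the two distribution features can be sketched as follows; the function names are illustrative, and the population standard deviation is assumed:

```python
import math

def spo2(c_hbo2, c_hb):
    """Blood oxygen saturation from hemoglobin concentrations:
    SpO2 = C_HbO2 / (C_Hb + C_HbO2) * 100%."""
    return 100.0 * c_hbo2 / (c_hb + c_hbo2)

def distribution_features(samples):
    """Mean and (population) standard deviation of a window of SpO2
    samples: the two features used to characterize fatigue."""
    n = len(samples)
    mean = sum(samples) / n
    std = math.sqrt(sum((s - mean) ** 2 for s in samples) / n)
    return mean, std
```

A wider spread of SpO2 samples during fatigued driving shows up directly as a larger standard deviation feature.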

    3 Research on Fatigue Analysis Method Based on Information Fusion

    Among the applied fatigue detection approaches, the method based on vehicle driving characteristics has low accuracy, requires prominent lane markings, and performs poorly at night, so it is rarely used. Besides, in actual driving, image information captured in the driving environment is often affected by objective factors such as light and occlusion (glasses, hair), and the accuracy of detection based on eye features decreases. Therefore, to improve the accuracy and reliability of fatigue recognition, it is necessary to fuse image features with biological signal features, which show significant changes and feature differences in the early fatigue stage.

    Information fusion [34] is a comprehensive process of preprocessing, data registration, predictive estimation, and arbitration decision-making over information from similar or heterogeneous sources, to obtain more accurate, stable, and reliable target information than any single source. Information fusion is mainly carried out at the following levels: data-layer fusion, feature-layer fusion, and decision-layer fusion.

    3.1 Fatigue Detection Based on Feature Layer Fusion

    In the driver fatigue detection scenario, various information sources reflect the driver's fatigue from different angles. Drivers' facial information, biological signals, operating behavior, and other indicators reflect fatigue from different perspectives and to different degrees. Among them, drivers' facial and physiological information are heterogeneous and complementary, and this multi-source heterogeneous information can be fused at the feature and decision levels. According to the principle of information fusion, combined with the information features and fusion purpose of this paper, feature fusion was carried out on the multi-source information of facial features and biometric features.

    Feature-layer fusion [35] refers to preprocessing and extracting features from the raw data of each information source and then fusing the features of each source to obtain a prediction of the target information. Commonly used feature-level fusion algorithms include neural networks, Multiple Kernel Learning (MKL), and Canonical Correlation Analysis (CCA). MKL trains a different kernel function for each feature and then linearly weights the kernels to obtain a combined kernel with multi-source processing capability, which suits multi-source heterogeneous information fusion. CCA analyzes the intrinsic relationship between features by projecting them in the direction of maximum correlation to obtain new fused features, removing redundant information and improving the ability of the new features to represent the target information.

    3.1.1 Feature Fusion Based on Multi-Kernel Learning

    The joint kernel is a linear combination of basis kernels, k_c(x, x') = Σ_t α_t k_t(x, x'), where α_t is the weight coefficient of the basis kernel k_t and φ_c(x) is the nonlinear mapping function of the joint kernel. Current research on basis-kernel combination methods is mainly based on linear combinations; although nonlinear combinations can give better results, their computational complexity is too high and the results are often difficult to interpret.

    For M features of the target information, each feature corresponds to at least one basis kernel used to construct the basis kernel model k_c, as shown in Fig. 8. Solving the structural parameters of the model (the basis kernel weights α = {α1, α2, ..., αM}) is an active research topic. MKL's fusion ability was applied to fuse fatigue features from images and physiological information. With M kinds of fatigue features (denoted x = {x1, x2, ..., xM}), a joint kernel model was constructed from radial basis function (RBF) kernels, the most widely used kernel, given the uncertain distribution characteristics of each feature [37]. The RBF kernel is shown in Eq. (19):

    k(x, x') = exp(−‖x − x'‖² / (2σ²))   (19)

    After determining the kernel function, an algorithm was used to train the weight coefficient of each basis kernel and the inherent parameter of each basis kernel (the RBF kernel parameter σ, σ > 0) to obtain the joint kernel k_c, which was then applied in kernel methods to train classification, clustering, or regression models.
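A sketch of the joint RBF kernel k_c = Σ_t α_t k_t; in practice the weights α_t and parameters σ_t are learned by an MKL solver, but here they are fixed for illustration:

```python
import numpy as np

def rbf_kernel(X, Y, sigma):
    """RBF base kernel: k(x, x') = exp(-||x - x'||^2 / (2*sigma^2))."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def joint_kernel(features, sigmas, alphas):
    """Linear combination of one RBF base kernel per feature group:
    k_c = sum_t alpha_t * k_t.  `features` is a list of (n, d_t) arrays,
    one per fatigue-feature source (e.g., image, EEG, SpO2)."""
    n = features[0].shape[0]
    Kc = np.zeros((n, n))
    for X, sigma, alpha in zip(features, sigmas, alphas):
        Kc += alpha * rbf_kernel(X, X, sigma)
    return Kc
```

The resulting joint Gram matrix stays symmetric, and with weights summing to one its diagonal stays at one, so it can be passed to any kernel method such as an SVM.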

    Figure 8: Schematic diagram of multi-kernel learning

    3.1.2 Feature Fusion Based on Multi-Set Canonical Correlation Analysis (MCCA)

    For a feature set x = [x1, x2, ..., xm] containing m features, where feature xi has data dimension n, a linear transformation zi = xi wi is applied to each set of data, where wi is known as the projection direction. After the projected feature set z = [z1, z2, ..., zm] is obtained with projection directions w = [w1, w2, ..., wm], the covariance matrices of feature sets x and z are Cx = xᵀx and Cz = zᵀz, respectively. The core task of MCCA is to find an appropriate projection for each feature set that maximizes the correlation between the new features in the projected set. The SUMCOR criterion [38] characterizes correlation as the sum of all elements of Cz, so the problem becomes maximizing wᵀCxw under the constraints, which can be solved with Lagrange multipliers [39] to obtain the projection transformation matrix w = [w1, w2, ..., wm].

    When m = 2, MCCA reduces to ordinary CCA. When m > 2, MCCA analyzes the comprehensive correlation among multiple feature sets, and the fused feature set is obtained from the projections of all the sets.
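For the m = 2 case, the canonical projections can be sketched via whitening and an SVD of the cross-covariance; this is a standard CCA construction, not necessarily the paper's exact solver, and the small ridge `eps` is added for numerical stability:

```python
import numpy as np

def cca_first_pair(X, Y, eps=1e-8):
    """First pair of canonical projection directions for two feature sets
    (the m = 2 case of MCCA), via whitening + SVD of the cross-covariance."""
    X = X - X.mean(0)
    Y = Y - Y.mean(0)
    n = X.shape[0]
    Cxx = X.T @ X / n + eps * np.eye(X.shape[1])
    Cyy = Y.T @ Y / n + eps * np.eye(Y.shape[1])
    Cxy = X.T @ Y / n
    def inv_sqrt(C):
        # Whitening transform C^(-1/2) via eigendecomposition.
        w, V = np.linalg.eigh(C)
        return V @ np.diag(1.0 / np.sqrt(w)) @ V.T
    Wx, Wy = inv_sqrt(Cxx), inv_sqrt(Cyy)
    U, s, Vt = np.linalg.svd(Wx @ Cxy @ Wy)
    wx = Wx @ U[:, 0]      # projection direction for set X
    wy = Wy @ Vt[0, :]     # projection direction for set Y
    return wx, wy, s[0]    # s[0] is the first canonical correlation
```

When the two sets share a common latent direction exactly, the first canonical correlation approaches one and the projected features are almost perfectly correlated.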

    3.1.3 Feature Layer Fusion Fatigue Detection Model

    Features such as the proportion of eyelid-closure time per unit time (PERCLOS), blink frequency per unit time (BF), and mean eyelid ratio (ER) were extracted from the driver's face, and features such as the ratio of each rhythm's energy to the total energy were extracted from the driver's EEG signals; together with the mean and standard deviation of blood oxygen saturation, they characterize fatigue. The physical meaning and dimension of the eye-movement and biosignal features vary, with many feature parameters differing by orders of magnitude, so features must be normalized before training. Normalization was performed using Eq. (21):

    α'_i = (α_i − α_min) / (α_max − α_min)   (21)

    where α_i is the value of feature α for the i-th sample, and α_max, α_min are the maximum and minimum values of feature α among the N samples.
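Eq. (21)'s min-max normalization of one feature across the N samples can be sketched as:

```python
def min_max_normalize(values):
    """Min-max normalization of one feature across N samples:
    a_i' = (a_i - a_min) / (a_max - a_min)."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values]
```

After normalization, every feature lies in [0, 1], so features that originally differed by orders of magnitude contribute on the same scale during training.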

    For MCCA-based feature-layer fusion, after feature normalization, MCCA was used to fuse the features into new joint features. MKL-based feature-layer fusion assigned a different kernel function to each feature and obtained the parameters and weights of each kernel during model training. Both fusion methods used SVM with the RBF kernel for model training. The Python machine learning library sklearn provides implementations of SVM algorithms and interfaces for model training and testing, and an implementation of the multi-kernel SVM algorithm is available in the machine learning library SHOGUN; the driving fatigue detection model was trained with these two tools. For SVM with an RBF kernel, the model parameters are the penalty coefficient C and the RBF structural parameter σ. For training the small-sample driving fatigue detection model, grid search was used to tune the parameters: the variation range and step size of each parameter were limited, and all parameter combinations were exhausted to obtain the optimum.

    4 Experiment and Result Analysis

    In this paper, a simulated driving experiment was carried out on a driving simulation platform built in the laboratory to reproduce actual driving situations, and the driver's data was collected during simulated driving.

    4.1 Experimental Platform

    The experimental platform included a simulated driving platform and a data acquisition platform. The simulated driving platform was used to reproduce the driving situation, and the data acquisition platform collected multi-source information synchronously. The experimental scenario is shown in Fig. 9a.

    4.1.1 Simulation Driving Experiment Platform

    The driving simulation platform included driving simulation software, a view display, a steering wheel, a brake pedal, an accelerator pedal, and a host computer. The driving simulation software, "City Car Driving," provides a realistic reproduction of natural road scenes. The software ran on a PC and displayed traffic conditions through a monitor. Figs. 9b and 9c show the simulator used to deploy the driving simulation. When the driver applies torque to the steering wheel, the vehicle steers to a corresponding degree.

    Figure 9: Driving simulation platform

    4.1.2 Data Acquisition Platform

    A fatigue-driving monitoring device was designed for biosignal data acquisition. A multifunctional, multi-node physiological parameter acquisition device obtained the EEG and blood oxygen saturation signals. Data was transmitted to a host computer over Wi-Fi, which processed and analyzed the data to obtain the subject's fatigue status. The overall framework is shown in Fig. 10; a CC3200 was the main control chip controlling EEG and blood oxygen collection.

    Figure 10: General framework of fatigue driving monitoring device

    EEG signals from different locations on the brain have different characteristics and are distinguished according to the international 10/20 standard system. The ADC for acquiring EEG signals was multi-channel and high-precision, with a sampling rate of 60 Hz. A MAX30105 sensor was used to measure blood oxygen saturation. Device nodes were placed on the head and earlobes when acquiring biosignals. The experimental equipment and specific connection modes are shown in Fig. 11.

    Figure 11: Experimental equipment and electrode lead mode

    4.2 Paradigm of Experiment

    The experimental environment was required to be a quiet and stable indoor space with sufficient light and no strong spatial electromagnetic interference, ensuring the acquisition of high-quality images and physiological signals. The ten participants were required to be healthy, to have no history of brain disease, and not to consume stimulating beverages such as coffee or tea the day before the experiment. Subjects were also familiarized with the experimental procedure and equipment before the experiment. After preparation, subjects were connected to the EEG and blood oxygen acquisition device, and the camera was turned on to verify that the data acquisition platform could collect data properly. Then, the simulated driving software was started. After that, experiments were conducted according to Experiments 1 and 2, respectively, and the data were recorded.

    (1)Simulated driving experiment(The collected data was labeled as Dataset 1)

    The driving simulation experiment consisted of fatigue and non-fatigue driving experiments. The fatigue experiment required subjects to have slept no more than seven hours and was conducted after lunch at 13:00–14:00 (when humans are prone to sleepiness) and at night at 22:00–23:00. For the non-fatigue experiments, subjects ensured sufficient rest to maintain a good physical and mental state, and the experiment time was 9:00–11:00. Subjects sat in the driver's seat and closed their eyes to calm down for 1 min. Afterward, the driving simulation formally began, and the acquisition of the driver's facial images and physiological information started simultaneously. Before the first experiment, participants were given three minutes to familiarize themselves with the driving simulator. The simulated driving time was 10 min.

    (2)Reaction test experiment(The collected data was labeled as Dataset 2)

    During the simulated driving process, the drivers' biological signals were collected, and the drivers' reaction times were recorded using synchronized stimulation with periodic fixed audio signals. After hearing the sound signal, drivers responded to the audio stimulus by pressing a button on the steering wheel. Reaction time can mark the degree of fatigue more accurately. A reaction time detection software was developed and run in the background to measure the drivers' reaction times. It has the following functions: (1) Send out audio periodically and record the time of the audio. (2) Detect whether the buttons on the steering wheel of the driving simulator were pressed and record the key-press times. (3) Calculate the difference between the audio sending time and the key-press time, record the differences, and save them as a CSV file. The interface diagram of the reaction time detection tool and the program flow chart are shown in Figs. 12 and 13. The purpose of this reaction test was to label the collected driving data automatically according to reaction time. Data from the fatigue driving experiments were labeled based on the synchronously collected reaction times. Reaction times were normalized to lie between 0 and 1, where 0 is non-fatigue and 1 is extreme fatigue.

    Figure 12: Reaction time detection interface

    Figure 13: Flow chart of reaction time detection
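The reaction-time labeling step can be sketched as a simple min-max normalization; the reaction times below are illustrative values, not measured data:

```python
# Illustrative sketch: map recorded reaction times (seconds) to a [0, 1]
# fatigue label, 0 = non-fatigue (fastest response), 1 = extreme fatigue
# (slowest response), as described for labeling Dataset 2.
reaction_times = [0.42, 0.55, 0.61, 0.88, 1.30, 0.47]

lo, hi = min(reaction_times), max(reaction_times)
fatigue_labels = [(t - lo) / (hi - lo) for t in reaction_times]

print(fatigue_labels)  # fastest response maps to 0.0, slowest to 1.0
```

In practice the normalization bounds would be fixed per subject or per session so that labels remain comparable across experiments.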

    4.3 Analysis of Experimental Data

    After data collection, heterogeneous data from multiple information sources must be aligned on the time series. In this experiment, the sampling frequency of the biological signals was 60 Hz, and the camera's frame rate was 30 FPS. When data were aligned on the time series, each second contained 60 points of biological information and 30 frames of image information. When collating the datasets, the collected original data were segmented along the time series with a segmentation window of length T, as shown in Fig. 14. Each segment was used as a sample, and the samples were preprocessed to extract features.

    Figure 14: Schematic diagram of data alignment and segmentation
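As a minimal sketch of this alignment, assuming the two streams start at the same instant, a T-second window holds 60·T biosignal points and 30·T video frames; the streams below are synthetic placeholders:

```python
# Illustrative sketch of time-series alignment and segmentation: cut two
# synchronized streams (60 Hz biosignal, 30 FPS video) into non-overlapping
# windows of T seconds, each window becoming one sample.
BIO_HZ, CAM_FPS, T = 60, 30, 7

bio = list(range(BIO_HZ * 70))      # 70 s of biosignal samples (synthetic)
frames = list(range(CAM_FPS * 70))  # 70 s of video frames (synthetic)

def segment(stream, rate, window_s):
    """Split a stream sampled at `rate` into windows of `window_s` seconds."""
    step = rate * window_s
    return [stream[i:i + step] for i in range(0, len(stream) - step + 1, step)]

bio_windows = segment(bio, BIO_HZ, T)
img_windows = segment(frames, CAM_FPS, T)
print(len(bio_windows), len(bio_windows[0]), len(img_windows[0]))  # 10 420 210
```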

    The size of the window T significantly influenced the PERCLOS and BF features in the image information, as the blink interval of human eyes is about 3–4 s. Moreover, the EEG signal artifact removal algorithm produces edge effects when processing short signals, so the time window T should not be too small. In this paper, the original data were divided according to windows of 3, 5, 7, 10, and 15 s, and the fatigue detection accuracy under the different time windows was compared.

    4.3.1 Effect Analysis of Fatigue Detection Based on Eye Movement Features

    Two hundred eye movement feature samples (100 each for fatigue and non-fatigue driving) were taken from Dataset 1. Each feature sample was extracted from a continuous 7 s segment of video data; the 100 fatigue feature samples were obtained by segmenting a continuous 700 s fatigue driving video and extracting features piecewise.

    (1) The distribution of the PERCLOS feature is shown in Fig. 15. The blue dotted line and red solid line show the PERCLOS distributions of the 100 fatigue and 100 non-fatigue samples. When the driver was tired, the distribution range of PERCLOS was significantly lower than that of the non-fatigue state.

    Figure 15: Schematic diagram of PERCLOS feature distribution in fatigued/non-fatigued state

    (2) Blink frequency distributions are shown in Fig. 16. The blue dashed and red solid lines indicate the blink frequency distributions of the 100 fatigued and 100 non-fatigued samples. It can be observed that when the driver was tired, the blink frequency oscillated violently between 0 and higher values. This is because when drivers realized they were tired, they suddenly blinked quickly and tried to stay awake.

    Figure 16: Schematic diagram of eyeblink frequency distribution in fatigue/non-fatigue state

    (3) In Fig. 17, the solid red line is the ER feature extracted from a 700 s continuous fatigued driving video, and the dashed blue line is the feature extracted from a 700 s continuous non-fatigued driving video. The mean eyelid opening was significantly higher in the sleepy state than in the non-fatigued one. This indicates that the ER characteristic can represent the fatigue state.

    Figure 17: Average eyelid height/width ratio under fatigue/non-fatigue condition

    By comparing the above features, it is possible to analyze how well each feature characterizes the fatigue state. The three features were combined into a 3-dimensional feature vector as the sample feature from image information. The fatigue driving detection model was trained based on SVM, and the model parameters were adjusted by grid search. Detection accuracies based on time windows of 3, 5, 7, 10, and 15 s were 74.6%, 82.8%, 86.1%, 85.6%, and 87.3%, respectively. The size of the time window affected the recognition accuracy. The main reason is that all three eye movement features are statistics over the time series, and their change patterns each have their own period (for example, the blink interval of normal people is generally 3–4 s). The extracted features are greatly affected when the time window falls below this period. Therefore, the time window should not be too small when extracting eye movement features from image information.
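A minimal sketch of how PERCLOS and blink frequency could be computed from a per-frame eye-closed flag sequence; the flags themselves would come from the paper's eye-state detector, and the sequence here is synthetic:

```python
# Illustrative sketch: PERCLOS (fraction of closed-eye frames) and blink
# frequency (0->1 transitions per second) from a 30 FPS eye-closed flag
# sequence. A 7 s window at 30 FPS holds 210 frames.
FPS = 30
closed = [0] * 40 + [1] * 5 + [0] * 40 + [1] * 6 + [0] * 40 + [1] * 79

perclos = sum(closed) / len(closed)  # fraction of frames with eyes closed

# Count a blink each time the flag rises from open (0) to closed (1).
blinks = sum(1 for a, b in zip(closed, closed[1:]) if a == 0 and b == 1)
blink_freq = blinks / (len(closed) / FPS)  # blinks per second

print(round(perclos, 3), blinks, round(blink_freq, 3))
```

A real implementation would also threshold the duration of each closure to separate blinks from sustained eye closure, which is what makes PERCLOS and blink frequency complementary.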

    4.3.2 Effect Analysis of Fatigue Detection Based on EEG Features

    Two hundred sets of EEG samples were taken from Dataset 1, including 100 fatigue samples and 100 non-fatigue samples. Each sample contained power-spectrum-based features extracted from a continuous 7 s period of EEG signal; the feature vector is four-dimensional, with one component based on the power spectrum of each rhythm. To further analyze the relationship between the energy ratio of each rhythm band of the driver's EEG signals and fatigue, EEG signals from FP1 were collected. The energy ratio of each rhythm over time is shown in Fig. 18. It can be observed that as the driver gradually entered drowsiness, unconsciousness, or even sleep, the energy ratios of the δ and θ rhythms rose rapidly, while the energy of the α and β rhythms quickly decreased. This is consistent with medical research results: δ and θ rhythms indicate the degree of inhibition of neurons in the cerebral cortex, while α and β rhythms indicate the degree of excitation of neurons in the cerebral cortex.

    Figure 18: Change of energy ratio of EEG signal rhythm
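The per-rhythm energy ratio can be sketched with a Welch power spectrum; the 60 Hz EEG segment below is synthetic, and the band edges follow common EEG conventions rather than values stated in the paper:

```python
# Illustrative sketch: four-dimensional rhythm energy ratios (delta, theta,
# alpha, beta) from a Welch power spectral density of a 7 s, 60 Hz EEG
# segment. The segment is a synthetic theta + alpha mixture.
import numpy as np
from scipy.signal import welch

FS = 60
t = np.arange(0, 7, 1 / FS)
eeg = np.sin(2 * np.pi * 6 * t) + 0.5 * np.sin(2 * np.pi * 10 * t)

f, psd = welch(eeg, fs=FS, nperseg=FS * 2)  # 0.5 Hz frequency resolution

bands = {"delta": (0.5, 4), "theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}
total = psd.sum()
ratios = {name: psd[(f >= lo) & (f < hi)].sum() / total
          for name, (lo, hi) in bands.items()}
print({k: round(v, 3) for k, v in ratios.items()})  # theta dominates here
```

Since the bins are uniformly spaced, summing PSD bins gives the same band ratios as integrating; the four ratios form the EEG feature vector described above.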

    The power spectrum features of the EEG signals were combined to form a 4-dimensional feature vector, and the EEG feature samples were organized. SVM was used to train the EEG-based driving fatigue detection model, with grid search used to adjust the model parameters. Classification accuracies based on time windows of 3, 5, 7, 10, and 15 s were 74.2%, 72.4%, 81.8%, 71.6%, and 76.4%, respectively. The EEG features were not sensitive to the time window size because they are all frequency-domain power spectral features and, unlike the image features extracted from time series, are not directly affected by the window length.

    4.3.3 Driving Fatigue Evaluation Method Based on Blood Oxygen

    Changes in blood oxygen saturation under normal driving and fatigue driving conditions are shown in Fig. 19. It can be observed that the fluctuation range and distribution range of blood oxygen saturation in fatigued driving were wider than those in normal driving.

    Figure 19: Blood oxygen saturation in different states

    According to research on driving fatigue evaluation methods based on blood oxygen [40], the change in the standard deviation of blood oxygen saturation between the two states can determine fatigue. The distributions of the standard deviation of blood oxygen during normal and fatigued driving are shown in Fig. 20.

    Figure 20: Distribution of standard deviation of blood oxygen under different driving states

    The standard deviation of blood oxygen in fatigued driving was higher than that in normal driving, so an increase in the blood oxygen standard deviation can be used to judge driving fatigue.
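The standard-deviation criterion can be sketched in a few lines; the saturation values below are illustrative, not measured data:

```python
# Illustrative sketch: compare SpO2 standard deviation across driving states.
# Fatigued driving shows wider swings, hence a larger standard deviation.
import statistics

normal = [98, 98, 97, 98, 98, 97, 98, 98]    # stable in normal driving
fatigued = [98, 95, 99, 94, 97, 93, 98, 96]  # wider swings when fatigued

sd_normal = statistics.stdev(normal)
sd_fatigued = statistics.stdev(fatigued)
print(round(sd_normal, 2), round(sd_fatigued, 2), sd_fatigued > sd_normal)
```

In a running system this statistic would be computed over the same T-second windows used for the other features, then fed to the classifier alongside them.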

    4.3.4 Comparative Analysis of Experimental Data

    The single-mode driving fatigue detection methods and the information-fusion-based fatigue driving detection methods were evaluated on the experimental dataset. The detection effectiveness of the various techniques is shown in Table 2. The datasets in Table 2 were segmented with a time window of T = 7 s.

    Table 2: Comprehensive comparative analysis of various methods

    In Table 2, MKL and MCCA denote the driving fatigue detection methods based on MKL and MCCA feature-layer fusion, respectively. By comparing the experimental results, we can conclude that:

    (1) The accuracies of the experiments conducted on Dataset 1 were generally much higher than those on Dataset 2, suggesting that the simulated driving data collected through Experiment 1 and its fatigued/non-fatigued labeling method were more reasonable. Experiment 1 ensured that the experimenter entered a sleepy state, accurately collected fatigue driving data, and established a fatigued/non-fatigued dataset. In contrast, reaction times were noisy, and the reaction-time-based experimental method itself interfered with the simulated driving.

    (2) Comparing the IMG single-mode driving fatigue detection method with the physiological signal detection methods in Table 2, the method based on the drivers' eye movement characteristics achieved significantly higher accuracy than those based on physiological signal characteristics on Dataset 1. Drivers showed apparent drowsiness, and their eye features differed substantially from those in non-fatigue states. Under ideal conditions, driving fatigue detection based on image information can obtain better results. Because physiological signal acquisition is difficult and physiological artifacts cannot be wholly filtered out, the accuracies of the physiological signals were lower than that of the image information. However, in real driving scenarios, fatigue is a slowly changing signal; drivers gradually develop prominent fatigue characteristics only when driving fatigue accumulates to a certain degree and they enter a state of profound fatigue. Driving fatigue is reflected in physiological information first and only then gradually shows in eye fatigue characteristics. By the time drivers show apparent eye fatigue features, they are already over-fatigued, which is very dangerous. Therefore, combining image and physiological information is essential, and action should be taken as soon as the driver begins to feel tired.

    (3) Comparing the driving fatigue detection results, the information fusion algorithms did improve fatigue driving detection accuracy to some extent, but at the expense of some efficiency: compared with the image-only algorithm, efficiency was slightly reduced. Setting aside time efficiency, the detection results on both Datasets 1 and 2 were improved by information fusion. Among the fusion methods, the MCCA-based feature-layer fusion performed better. MCCA fuses multiple features by analyzing the correlations between them, exploiting two key properties of information fusion: information complementarity and information redundancy.

    5 Conclusion

    Driving is a complex, multifaceted, and potentially risky activity that requires the full mobilization of physiological and cognitive abilities. Driver sleepiness, fatigue, and inattention are major causes of road traffic accidents, leading to sudden deaths, injuries, high mortality rates, and economic losses. Sleepiness, often caused by stress, fatigue, and illness, reduces cognitive abilities, which degrades drivers' skills and leads to many accidents. Road traffic accidents related to sleepiness are associated with mental trauma, physical injury, and death and are often accompanied by economic losses. Sleep-related crashes are most common among young people and night shift workers. Accurate driver sleepiness detection is necessary to reduce the accident rate caused by driver sleepiness. Many researchers have tried to detect sleepiness using different characteristics related to the vehicle, driver behavior, and physiological indicators. Among these, methods based on vehicle driving characteristics have low accuracy and perform poorly for night driving, so they are rarely used. Besides, in actual driving, the image information captured in the driving environment is often affected by objective factors such as light and occlusion (glasses, hair), and the accuracy of detection based on eye features decreases. Furthermore, in real driving scenarios, fatigue is a slowly changing signal, and drivers gradually show prominent fatigue characteristics only when driving fatigue accumulates to a certain degree, thus entering a state of profound fatigue. Driving fatigue is first reflected in physiological information and only then gradually shows the characteristics of eye fatigue. Consequently, detection methods for driver fatigue have gradually diversified from the original methods based on the driver's facial image information to those based on physiological signals such as EEG, respiratory pulse, and blood oxygen. However, the fatigue state of drivers is easily affected by the driver's own physical condition and by interference from the external environment, which undermines the reliability of fatigue driving detection. Physiological signals provide information about the body's internal functioning and thus provide accurate, reliable, and robust information about the state of drivers.

    This paper studied driving fatigue detection methods based on single-mode and multi-source information (the fusion of image and physiological information) by analyzing domestic and foreign research on fatigue detection technology. Simulated driving experiments were designed to evaluate and compare various driving fatigue detection methods, improving the accuracy, stability, and environmental adaptability of driving fatigue detection. A fatigue feature extraction method based on the driver's facial information (eye-movement features) was explored, and the eye-movement feature parameters representing the driver's fatigue level under fatigue and non-fatigue conditions, such as the percentage of eye closure time and the blinking frequency, were investigated. Following that, a fatigue driving detection model was trained based on the eye movement features to classify fatigue and non-fatigue states, with an accuracy of 86%. In addition, based on the designed portable fatigue driving monitoring device, a multi-sensor acquisition network was used to detect multiple physiological parameters (EEG and blood oxygen) synchronously, and data-adaptive operation was carried out at the detection end to assess the validity of the measured signals. The energy ratio of each rhythm band, representing the degree of fatigue, was extracted from the EEG signals, along with physiological signal characteristics such as the standard deviation of blood oxygen. The driver fatigue detection model trained on these features classified the fatigue and non-fatigue states with an accuracy of 82%. Various experiments were conducted on the simulated driving experiment platform. The facial information and biosignals of drivers were collected simultaneously during driving. The data were then segmented and labeled to obtain a driving database, including fatigue and non-fatigue driving data, which was used to evaluate detection effectiveness. Afterward, feature layer information fusion methods based on MKL and
MCCA were used to fuse the various eye movement and physiological features. The driving fatigue detection model was trained based on the fused features, and the accuracies reached 92% and 94%, respectively. In summary, the wider the feature coverage, the more accurate and reliable the driver fatigue detection results.

    Although fatigue driving was studied from image and physiological information sources, more information sources (such as ECG signals, EMG signals, driving behavior information, and vehicle information) could be added to further improve the accuracy and stability of driving fatigue detection. The experiments in this paper were based on the existing laboratory setup and have many limitations. In the future, a better and more scientific experimental platform can be built, and more professional drivers can be recruited to conduct experiments and collect large amounts of driving data to support ongoing research on fatigue detection methods. In addition, building on the research in this paper, the development and implementation of intelligent accident prevention systems is worth looking forward to. In particular, the tremendous growth of wearable sensors, especially flexible sensors for biochemical signal measurement, provides technical support for the long-term dynamic measurement of the weak multimodal physiological signals associated with fatigue driving. This is an essential direction for wearable driver fatigue recognition. In the future, research on driver fatigue state recognition based on biosignal characteristics can be carried out using low-invasion, widely adopted smartwatches.

    Acknowledgement:Not applicable.

    Funding Statement:This work was sponsored by the Fundamental Research Funds for the Central Universities(Grant No.IR2021222)received by J.S and the Future Science and Technology Innovation Team Project of HIT(216506)received by Q.W.

    Author Contributions:The authors confirm their contribution to the paper as follows:study conception and design:Meiyan Zhang,Boqi Zhao;data collection:Jipu Li;analysis and interpretation of results:Qisong Wang;draft manuscript preparation:Dan Liu,Jingxiao Liao.All authors reviewed the results and approved the final version of the manuscript.

    Availability of Data and Materials:SHOGUN toolkit is linked to http://www.shogun-toolbox.org/.

    Conflicts of Interest:The authors declare that they have no conflicts of interest to report regarding the present study.
