
    Gaussian Process for a Single-channel EEG Decoder with Inconspicuous Stimuli and Eyeblinks

Computers, Materials & Continua, October 2022

Nur Syazreen Ahmad*, Jia Hui Teo and Patrick Goh

School of Electrical & Electronic Engineering, Universiti Sains Malaysia, Nibong Tebal, Penang, 14300, Malaysia

Abstract: A single-channel electroencephalography (EEG) device, despite being widely accepted for its convenience, ease of deployment and suitability for use in complex environments, typically poses a great challenge for reactive brain-computer interface (BCI) applications, particularly when a continuous command from the user is needed to run a motorized actuator with different speed profiles. In this study, a combination of an inconspicuous visual stimulus and voluntary eyeblinks, together with a machine learning-based decoder, is considered as a new reactive BCI paradigm to increase the degree of freedom and minimize mismatches between the intended dynamic command and the transmitted control signal. The proposed decoder is constructed from a Gaussian Process model (GPM), a nonparametric Bayesian approach with the advantages of operating on small datasets and providing measures of uncertainty on its predictions. To evaluate the effectiveness of the proposed method, the GPM is compared against other competitive techniques, namely k-Nearest Neighbors, linear discriminant analysis, support vector machine, ensemble learning and neural network. Results demonstrate that a significant improvement can be achieved via the GPM approach, with average accuracy exceeding 96% and mean absolute error no greater than 0.8 cm/s. In addition, the analysis reveals that while the performance of the other methods deteriorates with a certain type of stimulus due to signal drifts resulting from the voluntary eyeblinks, the proposed GPM performs consistently across all stimuli considered, demonstrating its generalization capability and making it a more suitable option for dynamic commands with a single-channel EEG-controlled actuator.

Keywords: Brain-computer interface; dynamic command; electroencephalography; Gaussian process model; visual stimulus; voluntary eyeblinks

    1 Introduction

Electroencephalography (EEG) is the standard approach for measuring oscillations caused by brain activity in most brain-computer interface (BCI) technologies. The measurement was traditionally recorded using multiple wet electrodes (usually more than 32) attached to the scalp with high-sensitivity electronics in an attempt to boost the signal-to-noise ratio [1]. Participants involved in data collection with such a device are typically constrained to laboratory settings and require extensive training in order to produce clean and reliable EEG data [2]. Nonetheless, the past decade has seen the rapid development of wearable EEG-based BCIs such as the NeuroSky MindWave, Emotiv EPOC(+) and Mindo series, which offer competitive performance with dry-sensor technology and a smaller number of electrodes, overcoming many of the aforesaid barriers. Apart from ease of deployment and suitability for use in complex environments, they are also available at considerably lower prices than laboratory-restricted EEG devices, thus accelerating their adoption by the general public [3-6].

EEG signals measured on the scalp by the BCI device give rise to so-called event-related potentials (ERPs), which refer to the small potential or voltage changes in the signals immediately after the user's attention is invoked by a stimulus. Human inhibitory control using ERPs is relatively easy to carry out as it only requires the user's attention for a short duration to transmit a command to an external device; examples include switching lights on/off and stopping an ongoing motor action [7]. However, in reactive BCI applications which require users to consciously generate brain signals for continuous command transmission to an external device such as a motorized actuator, the approach via visual evoked potentials (VEPs), which are natural responses when the user's brain is invoked only by a visual stimulus, tends to be relatively more prevalent [8].

To improve the quality of EEG data recording and decoding in the aforementioned BCI paradigms, most wearable BCI devices have been equipped with machine learning (ML) algorithms that allow them to safely extract relevant features from the EEG signals and classify them into several states of mind such as relaxation and attention [9]. Linear discriminant analysis (LDA), for instance, has been preferred in many EEG classifications due to its reduced computational cost, which can minimize the transmission delay between the brain and the target system [10]. However, for complex nonlinear EEG data, the support vector machine (SVM) can provide more desirable results as it uses a kernel-based transformation to project data into a higher-dimensional space where the relationships between variables become linear [11]. k-Nearest Neighbors (kNN), which identifies a testing sample's class according to the majority class of the k nearest training samples, has demonstrated comparable performance in a recent study on EEG-based cognitive tasks [12]. Another popular EEG classification approach is ensemble learning (EL), which generates multiple ML models and then combines them to attain improved performance [13,14]. A more robust EEG classification can be obtained using a deep convolutional neural network (CNN) with a large number of electrodes and temporal and spatial filters to eliminate redundant information. For BCI applications with a single-channel EEG device, the sequential CNN approach is available but is often employed for passive BCI applications without the purpose of voluntary control, such as cognitive monitoring and sleep stage scoring [15,16]. Thus, for reactive BCI applications, applying CNN methods can be computationally taxing.

To treat ocular artifacts, independent component analysis (ICA) is often employed, which utilizes blind source separation to detect and reject contaminated EEG signals [17]. Nonetheless, similar to many CNN approaches, ICA usually requires EEG data recorded from many channels owing to its intrinsic characteristics, which makes it extremely challenging to accurately eliminate independent components, including artifacts, when only a few EEG channels are available [18]. Alternatives to ICA include multiscale principal component analysis [19], signal decomposition methods [20,21], and general filtering methods such as the wavelet transform and adaptive and Wiener filters, but most of these are adopted for offline analysis due to their high computational cost. To minimize delays in real-time BCI applications with an external actuator, infinite impulse response (IIR), Kalman and Boxcar filters have been proposed as they offer better solutions with less demanding computational requirements [22].

Despite promising results in classifying and denoising EEG signals, most of the proposed techniques are either only suitable for passive BCI applications or only applicable to multi-channel EEG devices for optimal performance. Although a single-channel wearable EEG device is widely accepted due to its low cost, convenience and ease of application, especially for controlling robotic devices in unconstrained environments [9,23], the accuracy and reliability of the transmitted signals remain inconclusive and under debate, as reported in several recent studies [24,25]. Moreover, such a device poses a great challenge when both eyeblink detection and clean continuous EEG signals are required to control an external actuator in reactive BCI applications [26].

In this work, the focus is on improving the BCI decoding strategy with a single-channel wearable EEG device for reactive BCI applications, where a continuous command from the user is transmitted to actuate and drive a motorized actuator. To increase the degree of freedom of the BCI system, voluntary eyeblinks with prespecified durations are leveraged to change the state of the recorded EEG data, thus generating dynamic commands that can modify the speed of the motor while it is running. The proposed decoding strategy is constructed based on the Gaussian Process model (GPM) approach, which to date remains underexplored for such a BCI paradigm. Unlike other ML approaches, a notable advantage of GPMs lies in their ability to operate on small datasets and provide measurements of uncertainty on predictions. The effectiveness of the proposed approach is demonstrated via a comparative study against other competitive classifiers which have previously been evaluated with a single-channel EEG device in recent works, namely the multilayer perceptron NN [22], EL [27], LDA [28], kNN and SVM [29]. In the light of [22], which proposes an alternative to motor imagery BCI that typically entails flickering stimuli and extensive training [30], inconspicuous stationary visual stimuli are introduced in the BCI paradigm to elevate the user's attentiveness while controlling the actuator. The use of such a paradigm is also in line with a recent review [31] that highlights the significance of selecting suitable stimuli to induce the user's attention. Results demonstrate that a significant improvement can be achieved via the GPM approach, with average accuracy exceeding 96% and mean absolute error no greater than 0.8 cm/s. In addition, the analysis reveals that while the performance of other existing methods deteriorates with a certain type of stimulus due to signal drifts resulting from the voluntary eyeblinks, the proposed GPM performs consistently across all stimuli considered, thereby demonstrating its generalization capability and making it a more suitable option for such applications. The findings of this study will not only increase the degree of freedom (DoF) of a single-channel EEG-controlled actuator, but will also benefit new BCI users and BCI illiterates who are unable to sufficiently modulate their neuronal signals when controlling an external device.

    2 Methodology

    2.1 Data Acquisition

The NeuroSky MindWave Mobile 2 headset was chosen in this study as it has gained widespread acceptance due to its capability of providing steady EEG recordings over long periods of time. The device consists of a single dry EEG channel placed on Fp1, as depicted in Fig. 1, according to the 10-20 system, an internationally recognized system that establishes the relationship between the underlying region of the cerebral cortex and the location of the electrodes. Another dry electrode is placed at the A1 position using an ear clip to act as the ground reference.

Figure 1: The EEG channel is placed at Fp1 on the user's head. Another dry electrode in the form of an ear clip is placed at A1 to serve as the ground reference

Another significant characteristic of this device is its portability and light weight, which allow the user to move around freely without restriction. The MindWave Mobile 2 is equipped with an eSense attention meter, which produces values on a scale of 1 to 100. If the reading falls below 40, the subject is predicted to be in a neutral state. The range (40, 60] implies slightly elevated attention, while a reading above 60 implies a normal to high attentiveness level.
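For illustration, the meter reading can be bucketed into these three ranges with a simple threshold rule; the helper below is a sketch, and its name and boundary handling are ours rather than part of any NeuroSky SDK:

```python
def attention_state(esense: int) -> str:
    """Bucket an eSense attention reading (1-100) into the ranges above:
    below 40 neutral, (40, 60] slightly elevated, above 60 normal to high."""
    if esense <= 40:
        return "neutral"
    if esense <= 60:
        return "slightly elevated"
    return "normal to high"
```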

    2.2 Visual Stimuli and Dynamic Command to Actuator

Inspired by the work in [22], which adopts brain training game-based stimuli to keep attentiveness high when transmitting signals via a BCI device, this work extends the capability of such a paradigm by introducing voluntary eyeblinks to allow for multiple command changes to the actuator. The proposed paradigm is depicted in Fig. 2, where the subject needs to transmit a continuous dynamic speed command to the actuator (right subplot) while his/her attention is being elevated by the stimulus (left subplot). In the light of [22], two stimuli are employed as shown in Fig. 3: the first involves multiple hidden targets, requiring the subject to spot differences between two adjacent figures, while the second involves one hidden target that needs to be localized in a cluttered scene. For performance evaluation purposes, the speed command was designed as a mixture of a step function to indicate a constant velocity and an increasing ramp function to represent acceleration, with prespecified durations, as follows:
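A piecewise form consistent with the timeline in Fig. 2 and the responses in Fig. 9, where the 20 cm/s plateau follows the results in Section 3 and the ramp slope α is an assumed constant for illustration, is:

$$\nu_d(t)=\begin{cases}0, & 0\le t<t_1\\[2pt] 20, & t_1\le t<t_3\\[2pt] 20+\alpha\,(t-t_3), & t_3\le t<t_5\\[2pt] 0, & t\ge t_5\end{cases}$$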

For consistency during the data acquisition, voluntary blinking will only take place at t = t2 and t = t4, which serve as signals for state and speed changes (further details on this strategy are presented in Section 2.3.3) that will take effect at t = t3 and t = t5.

The left subplot of Fig. 2 depicts three major phases in the proposed paradigm, i.e., the initial resting state (IRS), attentive state (AS), and final resting state (FRS). During the IRS, the subjects are requested to rest and clear their minds before the experiment begins, and a timer is displayed on the PC screen as a guide. When the timer hits 10 s (i.e., at t = t1), they must instantly focus on the stimulus to actuate the motor. At t = t2, they are required to blink twice at a rate of approximately 1 blink/second to accelerate the motor, and then continue focusing on the stimulus until t = t4, where they have to blink thrice at a similar rate to stop the motor. The FRS phase begins at t = t5, during which they need to clear their minds to ensure the EEG signal is brought back to the normal state.

Figure 2: The visual stimulus (left) is used to enhance the subject's capability in controlling the EEG signal to follow the targeted speed command (right), which is a mixture of ramp and step functions. IRS, AS and FRS refer to the initial resting, attentive and final resting stages respectively. The middle subfigure depicts the timeline of the desired state transitions along with voluntary eyeblinks at t = t2 (for b1) and t = t4 (for b2)

Figure 3: Two types of visual stimuli employed in this study [22]; Stimulus 1 (left) involves multiple hidden targets, i.e., the subject needs to spot the differences between the two figures; Stimulus 2 (right) involves one hidden target in a cluttered scene, i.e., the subject needs to find a character named Wally hidden in the crowd

While the proposed paradigm is realistically attainable, it can be a significant challenge to distinguish elevated attention from the normal range during voluntary blinking events due to drifts and prominent deflections in the recorded EEG data. Such a scenario is illustrated by four recorded trials in Fig. 4, where the blinking starts at t = 33 s after the stimulus is displayed at t = 20 s. From the figure, a sudden drop in the EEG data, denoted as ν, and a duration of 2 to 5 s to drive the meter reading back to the attention range are clearly seen within the blue strip. Thus, although the event is instrumental for state or command changes, it can cause undesired delays and increase the chance of misclassification, thereby lowering the BCI's predictive capabilities.

To this end, this work proposes a robust decoding strategy based on a GPM, a nonparametric Bayesian approach with the advantage of providing measurements of prediction uncertainty, together with a voluntary eyeblink detection that can be embedded into the motor's control system as illustrated in Fig. 5. To ensure resilience against disturbances within the motor system, the system is assumed to feature a pre-stabilized speed control loop that expects a reference speed command rather than a pulse-width modulation signal [32]. Hence, rather than visually assessing the movement of the motor system (e.g., a wheeled chair, robotic arm or mobile robot), which may be influenced by friction with the ground or disturbances within the hardware itself, we focus on the precision of the command received by the system's embedded controller, which also serves as the motion controller in this work. The main decoding strategy is further detailed in the following section.

Figure 4: Illustration of signal deflections during voluntary blinking when the subject's attention level is within the elevated range (i.e., ν > 40)

    Figure 5:Illustration of the overall flow of the proposed paradigm.The embedded system which consists of the decoder and a motorized actuator is simulated in the PC via MATLAB software.Bluetooth was used for the wireless data transmission from the BCI headset

    2.3 Decoding Strategy

Unlike neural network-based predictions, which assume that the data distribution can be modeled in terms of a finite set of parameters, the GPM works on nonparametric Bayesian statistics and predicts the target function value in the form of a posterior distribution, computed by combining the noise (likelihood) model and a prior distribution on the target function. The trained GPM can be embedded into the motor's motion control system in practice using GPML [33], PyGPs [34], GPflow [35] or GPyTorch [36]. Applying the GPM alone, however, may not be adequate if one is to change the speed of the motor while it is running. To treat this issue, voluntary eyeblink detection is introduced, since the EEG electrode placed at Fp1 produces prominent signal deflections during blinking events. In order to construct a stronger prediction model, a Hanning-based filtering stage is also integrated into the system. An overview of the proposed decoder structure is presented in Fig. 6, where the green areas illustrate the filtering stage and the voluntary eyeblink detection while the blue area represents the GPM with the dynamic speed command decoder. Details of each stage are discussed in the subsequent subsections.

Figure 6: Overview of the proposed decoding strategy, which consists of a GPM in cascade with a Hanning filter, and a voluntary eyeblink detection via ev. Both y and ev are required to decode the signal into the desired speed command, vd

    2.3.1 Hanning Filter

Hanning filters, which are a type of finite impulse response filter with a Hanning window, are frequently employed with random data as they typically have a moderate impact on the frequency resolution. In this work, as computation speed is equally important to avoid delay in the wireless communication between the subject and the external device, we propose the Hanning filter shown in the top left of Fig. 6 with gain values of a0 = 0.25, a1 = 0.5 and a2 = 0.25, which result in a second-order polynomial as follows:

$$H(z) = a_0 + a_1 z^{-1} + a_2 z^{-2} = 0.25 + 0.5\,z^{-1} + 0.25\,z^{-2}$$

or, equivalently, in the time domain,

$$y[n] = 0.25\,x[n] + 0.5\,x[n-1] + 0.25\,x[n-2]$$

This filter has a total gain of unity to preserve the amplitude of the targeted command, and the output that is later fed to the GPM is simply a scaled average of three sequential inputs, with the center point weighted twice as heavily as its two adjacent neighbours. The performance of this filter will also be compared against the recursive Boxcar filter, which has shown superiority over IIR and Kalman filters with a single-channel EEG device in [22].
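A minimal NumPy sketch of this three-point filter follows; the function name and the pass-through treatment of the first two samples are our own choices:

```python
import numpy as np

def hanning_filter(x: np.ndarray) -> np.ndarray:
    """Apply y[n] = 0.25*x[n] + 0.5*x[n-1] + 0.25*x[n-2] to a 1-D signal.

    The gains sum to unity, preserving the amplitude of a slowly varying
    command; the first two samples are returned unfiltered since they have
    no history to average over.
    """
    a0, a1, a2 = 0.25, 0.5, 0.25
    y = x.astype(float).copy()
    y[2:] = a0 * x[2:] + a1 * x[1:-1] + a2 * x[:-2]
    return y
```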

    2.3.2 Gaussian Process Model

A Gaussian Process (GP) has an advantage over other ML algorithms in approximating a target function, denoted as f(x), since it can express complicated input-output relationships without predefining a set of basis functions, and it can forecast a target output with uncertainty quantification. For regression, a GP is used as a prior to describe the distribution over the target function. As a GP is a stochastic process, the function values f(xi), i = 1, ..., n, are treated as random variables. A GP describes the distribution over an unknown function by its mean function m(x) = E[f(x)] and a kernel function k(x, x′) which approximates the covariance E[(f(x) − m(x))(f(x′) − m(x′))]. The covariance function acts as a geometrical distance measure under the assumption that more closely located inputs are more correlated in terms of their function values. That is, the prior on the function values is represented as:

$$\mathbf{f} = \left[f(x_1), \ldots, f(x_n)\right]^{\top} \sim \mathcal{N}\left(\mathbf{m}, K\right), \qquad \mathbf{m}_i = m(x_i), \quad K_{ij} = k(x_i, x_j)$$
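Although the study cites GPML, PyGPs, GPflow and GPyTorch as deployment options, a compact stand-in using scikit-learn illustrates the same mechanics; the training pairs below are toy values, not data from the study:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel, WhiteKernel

# Toy stand-ins for (filtered attention value, target speed) training pairs.
X_train = np.array([[32.0], [45.0], [58.0], [70.0], [85.0]])
y_train = np.array([0.0, 0.0, 20.0, 20.0, 20.0])  # speed command in cm/s

# ConstantKernel * RBF plays the role of k(x, x'); WhiteKernel models the
# observation noise, i.e., the likelihood part of the posterior.
kernel = ConstantKernel(1.0) * RBF(length_scale=10.0) + WhiteKernel(noise_level=1.0)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X_train, y_train)

# The posterior yields both a mean prediction and its standard deviation:
# the uncertainty measure that sets GPMs apart from parametric models.
mean, std = gp.predict(np.array([[50.0]]), return_std=True)
print(f"predicted speed: {mean[0]:.1f} cm/s (std: {std[0]:.1f})")
```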

    2.3.3 Voluntary Eyeblinks

EEG data from the BCI device typically exhibit minor fluctuations at all states, including normal eyeblink events. In order to distinguish voluntary blinks from other events, a preliminary test with ten trials was conducted in which the BCI user had to perform voluntary blinking once, twice and thrice at a rate of approximately 1 Hz when the attention level fell within the elevated range. During the test, the value of ev, which refers to the first derivative of ν as depicted at the bottom left of Fig. 6, was computed at each time instant. The magnitudes of ev when ev < 0 from voluntary blinks and from normal blinks/fluctuations were recorded and visualized in Fig. 7.

Figure 7: Comparisons of the signal deflection magnitude, |ev|, for normal blinks/fluctuations and voluntary blinks. The "1×", "2×" and "3×" notations refer to single, double and triple blinks at a rate of approximately 1 Hz. The left plot shows the histogram while the right plot shows the corresponding box plot

From the left plot of Fig. 7, it can be observed that |ev| is nearly normally distributed within the (0, 23) range during normal blinks or fluctuations. A similar trend is seen for voluntary blinks performed once (1×), where the distribution spans between 19 and 23. On the other hand, the distributions of |ev| when voluntary blinks are performed twice (2×) and thrice (3×) are left-skewed, with the highest frequencies at |ev| = 23 and |ev| = 29 respectively. Interestingly, if the 1× voluntary blink is removed, the remaining distributions do not heavily overlap with each other, as can be seen in the corresponding box plots on the right side of Fig. 7. Thus, from this observation, two degrees of freedom can be designed with voluntary blinks to change the EEG state when it is elevated: Voluntary Blink 2×, which is detected when |ev| ∈ [23, 28], and Voluntary Blink 3×, which is detected when |ev| ≥ 29. For brevity, Voluntary Blinks 2× and 3× will henceforth be referred to as b1 and b2 respectively.
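In code, this detection rule reduces to two threshold tests on the deflection magnitude; the helper below is a sketch, and its name, boundary handling and "none" label are ours:

```python
def classify_voluntary_blink(ev: float) -> str:
    """Classify a deflection ev (first derivative of the attention signal)
    using the |ev| bands observed in Fig. 7.

    Returns 'b1' for a double blink (23 <= |ev| <= 28), 'b2' for a triple
    blink (|ev| >= 29), and 'none' otherwise.
    """
    if ev >= 0:
        return "none"      # only drops in the signal are blink candidates
    mag = abs(ev)
    if mag >= 29:
        return "b2"        # triple blink: stop the motor
    if mag >= 23:
        return "b1"        # double blink: accelerate
    return "none"          # normal blink or minor fluctuation
```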

    2.3.4 Generation of Prediction Models and Performance Metrics

Twenty healthy subjects (ten of each gender) aged between 24 and 29 years participated in the EEG experiments conducted in this study. The subjects had no brain training session or any prior BCI experience before the actual experiment was carried out. During the experiments, the EEG data from the BCI device were captured and recorded in MATLAB. To obtain consistent and accurate results, descriptions of the experimental protocols and the recommended method for fitting the headset were provided and demonstrated to each participant before the paradigm was carried out.

In order to provide an unbiased evaluation of the prediction model, the data were partitioned into training and test sets, where only the performance on the latter would be evaluated. The flowchart of the prediction model generation is illustrated in Fig. 8 (left), where Set MTR and Set FTR, which consist of 80% of the data from the male and female subjects respectively, are used as training data to construct the prior distribution on the target function and the likelihood model. To further observe whether gender-based training can enhance the generalization capability of the model, the training is also conducted separately by gender, as depicted in the first section of the flowchart. This process generates three types of models, namely Model G (Cg), which is trained on both male and female data; Model M (Cm), which is trained on male-only data (i.e., Set MTR); and Model F (Cf), which is trained on female-only data (i.e., Set FTR).
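A sketch of this partitioning with scikit-learn is given below; the variable names are ours, and the actual study performs the equivalent steps in MATLAB:

```python
import numpy as np
from sklearn.model_selection import train_test_split

def build_training_sets(X_m, y_m, X_f, y_f, seed=0):
    """Split each gender's data 80/20 and pool the training halves.

    Returns the training sets for Model M (male only), Model F (female only)
    and Model G (both genders), mirroring the flowchart in Fig. 8.
    """
    Xm_tr, Xm_ts, ym_tr, ym_ts = train_test_split(
        X_m, y_m, test_size=0.2, random_state=seed)   # Set MTR / Set MTS
    Xf_tr, Xf_ts, yf_tr, yf_ts = train_test_split(
        X_f, y_f, test_size=0.2, random_state=seed)   # Set FTR / Set FTS
    Xg_tr = np.vstack([Xm_tr, Xf_tr])                  # pooled data for Model G
    yg_tr = np.concatenate([ym_tr, yf_tr])
    return (Xm_tr, ym_tr), (Xf_tr, yf_tr), (Xg_tr, yg_tr)
```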

Figure 8: Flowchart of the prediction model generation (left) and the proposed algorithm for the EEG-to-dynamic-speed decoder (right)

Algorithm 1, which is detailed on the right side of Fig. 8, presents the proposed EEG-to-dynamic-speed decoding strategy with b1 and b2 detections and a heuristic method to reject and reconstruct the EEG data to the desired values during the b1 and b2 events. The actual performance is then tested on new datasets, i.e., Set MTS and Set FTS as defined in Fig. 8, which come from the remaining 20% of the recorded EEG data. Similar to the training process, gender-based evaluations are also conducted to analyse the generalization capability of the gender-based models.

In this study, accuracy, which is a measure of correctly classified data, is considered as the performance metric for the classification of the states (i.e., A, B and C) depicted in the middle plot of Fig. 2. This metric can be computed as

$$\text{Accuracy} = \frac{TP + TN}{TP + TN + FP + FN}$$

where TP, TN, FP and FN represent true positives, true negatives, false positives and false negatives respectively. The ultimate goal, however, is to ensure the actual dynamic speed command, ν̂d, is driven as close as possible to the target command, νd. Thus, to penalize the mismatch between the two, the mean absolute error (MAE) is computed as follows:

$$\text{MAE} = \frac{1}{n}\sum_{i=1}^{n}\left|\nu_d(i) - \hat{\nu}_d(i)\right|$$

where n is the total number of sampled data points for each test. This metric is a more accurate representation of the overall performance since it takes into account the effectiveness of the voluntary blink detection that affects the state transitions.
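Both metrics reduce to a few lines of code; the sketch below is illustrative and the helper names are ours:

```python
import numpy as np

def accuracy(tp: int, tn: int, fp: int, fn: int) -> float:
    """Fraction of correctly classified samples, per the definition above."""
    return (tp + tn) / (tp + tn + fp + fn)

def mean_absolute_error(v_target: np.ndarray, v_actual: np.ndarray) -> float:
    """MAE between the target and transmitted speed commands (in cm/s)."""
    return float(np.mean(np.abs(v_target - v_actual)))
```

Results from these performance evaluations are presented in the next section.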

    3 Results

To demonstrate the effectiveness of the proposed GPM in decoding the single-channel EEG data into the desired dynamic commands, its performance is compared against other competitive classifiers using 5-fold cross-validation, as well as the conventional method (Conv), which relies solely on the eSense meter and the proposed voluntary blink detection for state transitions. The classifiers evaluated in this study are LDA, SVM, kNN, EL and NN, which have previously been employed for classification with a single-channel EEG device in recent studies [22,29]. In addition, the results with and without the filters are also recorded for further analysis.

Tab. 1 compares the overall performance of the proposed GPM and the other methods from the test conducted on Set FTS+MTS with Stimulus 1 using Model G. "No F", "B" and "H" denote "No Filter", "Boxcar filter" and "Hanning filter" respectively. In general, the GPM considerably outperforms the rest in terms of accuracy and MAE, both with and without filters. The highest accuracy and lowest MAE obtained are 96.5% and 0.7 cm/s respectively, with the Hanning filter.

    Table 1:Overall performance evaluations of the proposed GPM and other ML classifiers from the test conducted on Set FTS+MTS with Stimulus 1 using Model G

Tabs. 2 and 3 illustrate the difference in the performance of Model G and Model F/M when evaluated by gender. Via the proposed GPM, no performance difference can be seen between the generic and gender-based models, and both result in the best performance, with 96% accuracy and 0.8 cm/s MAE with the Hanning filter. With regard to the male-based evaluations presented in Tab. 3, a similar trend is seen for the LDA, EL and NN classifiers, except for kNN and SVM, where the generic models outperform their male-based counterparts with an MAE of 1.4 cm/s.

Table 2: Performance evaluations of the proposed GPM and other ML classifiers from the test conducted on Set FTS with Stimulus 1 using Model G and Model F

Table 3: Performance evaluations of the proposed GPM and other ML classifiers from the test conducted on Set MTS with Stimulus 1 using Model G and Model M

The same evaluations for Stimulus 2 are presented in Tab. 4 for the overall performance, and in Tabs. 5 and 6 for the gender-based performances. In contrast to Stimulus 1, the best performance with Stimulus 2 is achieved via the proposed GPM without a filter, which results in 92.5% accuracy and 1.5 cm/s MAE. What stands out in Tab. 4 is the big gap in performance between the proposed model and the other classifiers, whose highest accuracy is only 69.5% (via LDA and EL), considerably lower than that of the GPM. Moreover, the gender-based models from the LDA, kNN, SVM, EL and NN classifications do not improve the predictive ability, as can be observed in Tabs. 5 and 6, where the differences from their generic counterparts are negligibly small. On the contrary, a slight difference in performance is seen between Model G and Model F/M with the GPM; i.e., for the female-based evaluations, Model F results in a better performance with 91% accuracy and 1.8 cm/s MAE, while for the male-based evaluations, Model G with the Hanning filter beats Model M with 98% accuracy and 0.4 cm/s MAE.

Table 4: Overall performance evaluations of the proposed GPM and other ML classifiers from the test conducted on Set FTS+MTS with Stimulus 2 using Model G

Table 5: Performance evaluations of the proposed GPM and other ML classifiers from the test conducted on Set FTS with Stimulus 2 using Model G and Model F

Table 6: Performance evaluations of the proposed GPM and other ML classifiers from the test conducted on Set MTS with Stimulus 2 using Model G and Model M


For clarity and brevity, the performance of the proposed GPMs against the other best-performing models by gender and stimulus is summarized in Tab. 7. From the table, it can be concluded that while the other methods perform substantially worse with Stimulus 2, the GPM approach demonstrates consistent performance across both stimuli, with accuracy above 91% and a maximum MAE of 1.80 cm/s. Nonetheless, for such a BCI application, Stimulus 1 with the GPM is likely to form a better prediction model, since the resulting accuracy reaches 96% with an MAE of no greater than 0.8 cm/s, which is significantly lower than that resulting from Stimulus 2.

    Table 7:Summary of the GPM methods against other best performing models based on gender and stimulus.The corresponding dynamic speed commands are illustrated in Fig.9 for Stimulus 1 and Fig.10 for Stimulus 2

The corresponding dynamic speed commands are visualized in the upper plots of Figs. 9 and 10, along with the derivatives of the EEG data, ev, in the bottom plots. The target speed command, νd, is represented by the dashed black lines, while the voluntary blink events which serve as signals for state transitions are denoted by the vertical lines b1 and b2. Comparing Figs. 9 and 10, it can be clearly seen that Stimulus 2 resulted in a relatively longer delay during the transition from νd = 0 to νd = 20 cm/s, which accounts for its poorer performance relative to Stimulus 1 in Tab. 7.

Figure 9: Illustration of dynamic speed commands based on the GPM approach against the conventional method and other best-performing classifiers as recorded in Tab. 7 for Stimulus 1

Figure 10: Illustration of dynamic speed commands based on the GPM approach against the conventional method and other best-performing classifiers as recorded in Tab. 7 for Stimulus 2

Referring to the responses of ev in the bottom plots of Figs. 9 and 10, the b2 (b1) events result in the largest (second largest) magnitude when ev < 0 in each test. With the proposed voluntary blink detection, the transmitted speed commands are successfully driven to the desired values for both paradigms, as can be seen from the responses of the GPM and the other classifiers, which are represented by the orange and blue lines respectively. On the contrary, the conventional method performs the worst due to the nature of the eSense meter reading, which has a greater tendency of misclassification during the voluntary eyeblink events, as conjectured in Section 2.2. Another striking observation is that SVM and EL result in significant delays and mismatches between νd and ν̂d, particularly during the state transitions at t = 10 s and t = 40 s, compared to the GPM approach, which only causes small delays during the transition at t = 10 s. In practice, such a scenario is undesirable since it will lead to performance deterioration of the closed-loop motor system and eventually instability. By contrast, the GPM approach, particularly with Stimulus 1, demonstrates considerably smaller errors between νd and ν̂d, which only occur when the motor is initially actuated. This is inherently due to the representational flexibility of the trained models, which also provide uncertainty measures over their predictions.

    4 Conclusion and Future Work

Conclusion: In this work, a new BCI decoding strategy via the GPM approach for dynamic speed commands with a single-channel EEG-controlled actuator has been proposed. The experimental outcome has demonstrated the superiority of the GPM approach over other existing classifiers in the literature, which include LDA, SVM, kNN, EL and NN. Additionally, further analysis reveals that while the error performance of the other methods deteriorates with Stimulus 2 due to signal drifts resulting from voluntary eyeblinks, the proposed GPM exhibits consistent performance.

Implications of the study: The current study has proposed an improved BCI decoding strategy based on the GPM that can be readily embedded in many affordable off-the-shelf microcomputers. Moreover, the combination of an inconspicuous visual stimulus and voluntary eyeblinks has not only increased the DoF of a single-channel EEG-controlled actuator, but also eliminated the need for the extensive training that is typically required in most motor imagery-based BCIs. Such an approach will greatly benefit new BCI users as well as BCI illiterates who are unable to sufficiently modulate their neuronal signals when controlling an external device.

Limitations and future work: Despite the significant improvements, the proposed method has only been evaluated with a BCI paradigm lasting no longer than 50 s. A greater focus on modifying the stimuli to prolong the attention span could produce interesting findings that carry over to higher-DoF EEG-controlled actuators, particularly those used in mobile robots. Thus, future work will encompass the aforementioned research direction as well as deployment to robotic platforms, which may necessitate some modifications to address unanticipated issues during real-time implementation. For instance, when the actuator is subject to external disturbances and diverts away from the targeted position, a new function to detect such a scenario needs to be embedded in the decoder's algorithm to avoid user distraction that could consequently affect the accuracy of the transmitted EEG signal. In addition, datasets of different sizes may be required to evaluate and further enhance the generalization capability of the GPM-based decoder.

Acknowledgement: The authors would like to thank all volunteers who participated in this experimental study and the Human Research Ethics Committee for approving the protocol, which was conducted in accordance with the ethical principles outlined in the Declaration of Helsinki.

Funding Statement: This work was supported by the Ministry of Higher Education Malaysia through the Fundamental Research Grant Scheme (Project Code: FRGS/1/2021/TK0/USM/02/18).

    Conflicts of Interest:The authors declare that they have no conflicts of interest to report regarding the present study.
