
    An Attention Based Neural Architecture for Arrhythmia Detection and Classification from ECG Signals

    2021-12-15
    Computers, Materials & Continua, 2021, No. 11

    Nimmala Mangathayaru, Padmaja Rani, Vinjamuri Janaki, Kalyanapu Srinivas, B. Mathura Bai, G. Sai Mohan and B. Lalith Bharadwaj

    1Department of IT, VNR Vignana Jyothi Institute of Engineering & Technology, Hyderabad, 500090, India

    2Department of CSE, JNTUH, Hyderabad, 500085, India

    3Department of CSE, Vaagdevi College of Engineering, Warangal, 506005, India

    4Kakatiya Institute of Technology and Science, Warangal, 506015, India

    Abstract: Arrhythmia is ubiquitous worldwide and cardiologists tend to provide solutions drawing on recent advancements in medicine. Detecting arrhythmia from ECG signals is considered a standard approach, and automating this process would aid diagnosis by providing fast, cost-efficient, and accurate solutions at scale. This is executed by extracting the definite properties from the individual patterns collected from Electrocardiography (ECG) signals causing arrhythmia. In this era of applied intelligence, automated detection and diagnostic solutions are widely used for their spontaneous and robust solutions. In this research, our contributions are two-fold. Firstly, the Dual-Tree Complex Wavelet Transform (DT-CWT) method is applied to overcome shift-variance and aid signal reconstruction so that significant features can be extracted. Next, a neural attention mechanism is applied to capture temporal patterns from the extracted features of the ECG signal to discriminate distinct classes of arrhythmia, and the model is trained end-to-end with the finest parameters. To ensure the model's generalizability, a set of five train-test variants is used. The proposed model attains the highest accuracy of 98.5% for classifying 8 variants of arrhythmia on the MIT-BIH dataset. To test the resilience of the model, the unseen (test) samples are increased by 5x, and the deviations in accuracy score and MSE were 0.12% and 0.1% respectively. Further, to assess the diagnostic model performance, AUC-ROC curves are plotted. At every test level, the proposed model is capable of generalizing to new samples, which leverages its advantage for developing a real-world application. As a note, this research is the first attempt to apply neural attention to arrhythmia classification using MIT-BIH ECG signal data with state-of-the-art performance.

    Keywords: Arrhythmia classification; arrhythmia detection; MIT-BIH dataset; dual-tree complex wavelet transform; ECG classification; neural attention; neural networks; deep learning

    1 Introduction

    Arrhythmia is ubiquitous worldwide and a major share of the population remains at risk. ECG signals are highly efficient and used as the gold standard to detect the presence of arrhythmia and critical conditions such as cardiac arrest, so detecting arrhythmia from ECG signals is a challenging problem for researchers. In this era of automation, deep learning has made strides in various fields such as computer vision, language processing, and signal processing, producing state-of-the-art models on large-scale databases. Further, deep learning has advanced in bio-medical imaging and bio-medical signal processing. Hence, the aim is to develop a diagnostic model by extracting features from ECG using DT-CWT and processing them with the help of the proposed neural architecture.

    It is observed that the signals recorded by ECG are a combination of PQRS waves, and these waves reveal heart functionality through various characteristics. Certain features are extracted from the signal by pre-processing it with various transformation techniques, of which the continuous wavelet transform and the discrete wavelet transform are the most frequently used. These extracted features are then processed using various learning algorithms such as k-nearest neighbors (k-NN), support vector machines (SVM), multi-layer perceptrons (MLP), singular value decomposition (SVD), etc. The pre-processing techniques widely used for extracting features from ECG signals rely on the Discrete Wavelet Transform (DWT), which is sensitive to shifts and degrades the quality of the signal while reconstructing the decomposed signal. In the subsequent processing step, most methods employ generic machine learning algorithms (classification or clustering algorithms) or an MLP, which do not capture temporal invariances and eventually harm performance by reducing generalization. Hence, the two key steps in providing a diagnostic model are (a) appropriate pre-processing of the signal (DT-CWT) and (b) a processing step to prognosticate the disease (neural attention). To overcome the drawbacks of the existing research, the proposed diagnostic framework employs DT-CWT and a neural attention mechanism to provide a significant solution.

    2 Previous Works

    The impact of the MIT-BIH data was shown by George et al. [1], which acted as a catalyst and gave rise to numerous works providing insights on automated devices to diagnose arrhythmia. Markos et al. [2] used time-domain analysis to extract features and then arranged them in distinct combinations, which were utilized as input for neural networks. Sixty-three different types of neural networks were formed, and the output of these networks was fed to a decision tree to diagnose arrhythmia. Karimifard [3] worked on the modelling of signals, using a Hermitian basis function to obtain a feature vector that was sent to a k-nearest neighbor classifier to classify seven types of arrhythmia, obtaining a sensitivity and specificity of 99.0% and 99.84% respectively [1-4]. He also concluded that the size of the feature vector affected the training time of the model. Mohammadzadeh et al. [4] took features from the signal by linear and nonlinear methods, reduced them by Gaussian discriminant analysis (GDA), and used an SVM to recognize six classes of arrhythmia with a sensitivity of 95.7% and specificity of 99.40%. Chi et al. [5] focused on quick prediction of the disease by pulling out PQRST features from the ECG signal, later using linear discriminant analysis to group five different classes of arrhythmia, and achieved an accuracy of 96.23%.

    A unique use of the kernel Adatron algorithm was combined with SVM by Majid et al. [6], who explained the drawbacks of a multi-layer perceptron (MLP) and compared the training and testing times of the two methods. Hamid et al. [7] performed complex wavelet transform (CWT), discrete wavelet transform (DWT), and discrete cosine transform (DCT) feature extraction separately on the signal, formed four different structures using MLPs and another four using SVMs, and deduced the efficiency of each feature extraction method from the training time of the model [5-10]. Oscar [8] preprocessed the signals by the QRS extraction method and used fuzzy k-NN, an MLP with backpropagation, and an MLP with scaled conjugate gradient backpropagation (GBP) to get the output matrices. These three matrices were then combined and sent to a fuzzy inference system to get the result, achieving an accuracy of 98%. Roland et al. [9] presented an artificial neural network that took signals preprocessed by the Fast Fourier Transform (FFT) as input and then categorized five classes of arrhythmia.

    The importance of PQRST wave properties was also discussed here. Yeh et al. [10] proposed a novel preprocessing method along with cluster analysis (CA) to classify 5 distinct classes and attained a total classification accuracy (TCA) of 94.30%. Stefan [11] developed an Android application for real-time detection of arrhythmia using decision trees (DT); this model clocked a sensitivity of 89.5% and specificity of 80.6%. Elgendi [12] introduced the application of the moving-averages method on ECG signals for detection of P and T waves by addressing four sources of noise which altered the quality of the signal, obtaining a sensitivity of 98.05% and specificity of 98.86%. Manu et al. [13] extracted features using DT-CWT and merged another four features (AC power, kurtosis, skewness, and timing information); this feature set was passed into an MLP and achieved an accuracy of 94.64% and a sensitivity of 94.6%. Ahmet et al. [14] compared the performance of bagged decision trees against a single decision tree with an input of nine features taken from the ECG signal by applying a low-pass filter, a high-pass filter, form factor (FF) computation, the FF ratio to the previous one (FFR), the RR ratio to the previous RR ratio (RRR), the RR difference from the mean RR value (RRM), skewness, and linear predictive coding (LPC); the cumulated ensemble method outperformed a single decision tree with an accuracy of 99.15%. Mehrdad [15] improved the signal quality by the undecimated wavelet transform (UWT) and then proposed a method that combined Negatively Correlated Learning (NCL) and Mixture of Experts (ME), which is known to provide an excellent recognition rate. The model was used to group premature ventricular contraction (PVC) arrhythmia and normal heartbeat classes and achieved accuracy, sensitivity, and specificity of 96.02%, 92.27%, and 93.72% respectively.

    Ping [16] proposed an adaptive feature extraction method based on wavelet transformation and a modified voting mechanism consisting of K-means clustering and one-against-one SVM to enhance the recognition rate, and got an accuracy of 89.2%. Joachim [17] worked on the categorization of poor and good signals; an alarm is set off when the parameters are not within a given scale, and the QRS extraction method along with SVM was used to reach this objective. Patricia [18] presented a Learning Vector Quantization (LVQ) algorithm with SVM to classify arrhythmia; however, the comparisons were made with simulated data. A total of 15 classes were grouped with three different architectures and the best architecture got an accuracy of 99.16%. Ali et al. [19] performed a diagnosis of arrhythmia with the help of AlexNet after prior QRS detection; the signal was converted to a 256×256 image and then passed into the network, with a recognition rate of 98.5% and accuracy of 92%. Joy [20] implemented the DCT transformation of waves and used a Probabilistic Neural Network (PNN) for efficient detection of the disease [11]. Vasileios [21] showed effective false-beat detection by detecting QRS peaks and then filtering false beats using SVM, concluding that QRS peaks are very important for detection. Rashid et al. [22] proposed a new method that showed promising results by using Gaussian mixture modelling (GMM) with expectation maximization (EM), combined with statistical and morphological features; the accuracy for the class-oriented scheme is 99.6% and for the subject-oriented scheme 96.15%. Serkan et al. [23] also made an Android application using 1-D convolution to classify supraventricular ectopic beats (SVEB) and ventricular ectopic beats (VEB); FFT with DCT was used in feature extraction, obtaining accuracies of 99.0% and 97.2% for the two classes.

    From the previous research works, it is observed that many pre-processing methods do not capture the temporal relationships in the data. Those that do capture temporal dependencies do not persist with long-term dependencies; and where long-term dependencies are provided, there are no sequential attention patterns that let the network determine the importance of each step. These gaps are overcome with the use of an attention-embedded neural architecture that captures long-term temporal dependencies. Further, some gaps in the pre-processing stage are overcome by utilising DT-CWT, as mentioned in the successive section.

    3 Contributions

    In this research, the contributions to the body of knowledge are as follows:

    · The Dual-Tree Complex Wavelet Transform (DT-CWT) method is applied to overcome shift-variance and aid signal reconstruction so that significant features can be extracted. Further, a small set of features is extracted using the Pan-Tompkins algorithm and adjoined to the features extracted from DT-CWT.

    · A neural attention mechanism is applied to capture temporal patterns from the extracted features of the ECG signal and to discriminate distinct classes of arrhythmia. The proposed attention model is trained end-to-end by carefully optimizing the hyperparameters.

    4 Methodology

    As mentioned, two important steps are involved in completing the proposed automated system. This section aims to give a clear understanding of the mathematics related to these two steps and explains their unique capabilities.

    4.1 Dataset Description

    In this paper, ECG recordings acquired by the arrhythmia laboratory of Boston's Beth Israel Hospital are used; this database is known as the MIT-BIH arrhythmia database. The ECG recordings were collected using Del Mar Avionics model 445 two-channel reel-to-reel Holter recorders. The signals were filtered using bandpass filters with frequencies in the range of 0.1-100 Hz and digitized by a Del Mar Avionics 660 playback unit at a sampling rate of 360 samples per second. The database consists of forty-eight half-hour excerpts of two-channel, twenty-four-hour ECG recordings from 47 subjects (records 201 and 202 came from the same subject). The first twenty-three records were drawn from a collection of four thousand Holter tapes, and the remaining records include uncommon heartbeat irregularities of great clinical significance. The subjects include twenty-five men and twenty-two women aged between twenty-three and eighty-nine.

    The most frequently used ECG leads in this database are modified limb lead II (MLII) for channel one and V1 for the other channel; V2, V4, and V5 are also used occasionally, depending on the subject. Fusion ventricular (FV), VEB, right bundle branch block (RBB), paced beat (PB), normal (N), ventricular contraction (VC), left bundle branch block (LBB), and atrial premature beat (APB) are the different classes of arrhythmia used in the classification task to evaluate the model.

    4.2 Feature Extraction (Pre-Processing)

    As an insight, the pre-processing step is important to capture appropriate features which, in turn, help prognosticate arrhythmia. After extensive research on best practices for obtaining features based on prior knowledge, it is deduced that wavelet transformation is the best first step for the MIT-BIH arrhythmia data. The QRS complex signal is extracted with 256 samples, of which 128 samples are taken from the left side of the R peak and 128 samples from the right. Later, wavelet transformation is applied to obtain the set of required features. As a note, the database reflects certain noise in the signal, the causes of which are described below:

    · The frequency from the power supply usually corrupts the signal; this is known as powerline interference.

    · Muscles often contract and expand; this activity regularly gets superimposed on the cardiac signal and ends up producing a noisy signal.

    · Signal quality often depends on the contact between the lead and the skin; at times, movement by the patient corrupts the signal, and this is described as motion artefact.

    The above-discussed problems are solved by implementing the Pan-Tompkins algorithm and including its output as one of the important features for signal pre-processing. These findings are addressed in discovering the QRS complex by Pan-Tompkins [24]. A signal undergoes four steps in this algorithm. Initially, to attenuate noise, the signal is passed through a bandpass filter, which reduces motion artefact and makes the signal more stable. A differentiator is then used to obtain the slope of the signal and solve the baseline drift problem. This is followed by a squaring function, which gives the absolute value and limits the false positives generated by T waves. Finally, moving-window integration is used to smooth the curve and obtain information about the slope of the signal. The steps are implemented for a signal (taken from the database) and are visually illustrated in Fig. 1. Wavelet transformation is then applied to the extracted QRS complex signal, where a wavelet acts as a window function. All wavelet transformations are compressed or shifted forms of the mother wavelet, and the different versions of the mother wavelet are described in Eqs. (1)-(3). In Eq. (1) [25], S is the inverse of the frequency of the signal, which can be used to get low- and high-frequency signals and to make the wave thinner or broader, and T is used to translate the wavelet across the signal. Wavelet transformation helps in analysing different frequencies at different locations; this is known as multi-resolution analysis. By changing the value of S, the wavelet can be obtained in expanded or compressed form, which is known as scaling. For non-stationary waves, the CWT is used; however, the upper and lower limits of the CWT tend to infinity, which means that a huge number of coefficients would have to be calculated at every possible position (Eq. (2)) [26].
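    As an illustrative sketch, the four Pan-Tompkins stages described above can be chained with standard SciPy filtering. The 5-15 Hz passband and the 150 ms integration window are common choices from the QRS-detection literature, not values taken from this paper:

```python
import numpy as np
from scipy.signal import butter, filtfilt

def pan_tompkins_stages(ecg, fs=360):
    """Return the ECG signal after each Pan-Tompkins stage:
    bandpass filtering, differentiation, squaring, and
    moving-window integration (fs matches the MIT-BIH rate)."""
    # 1) Bandpass filter to attenuate noise and motion artefact
    b, a = butter(2, [5 / (fs / 2), 15 / (fs / 2)], btype="band")
    filtered = filtfilt(b, a, ecg)
    # 2) Differentiate to expose the slope of the QRS complex
    derivative = np.diff(filtered, prepend=filtered[0])
    # 3) Square point-wise: all values positive, T-wave peaks suppressed
    squared = derivative ** 2
    # 4) Moving-window integration (~150 ms) to smooth the curve
    window = int(0.150 * fs)
    integrated = np.convolve(squared, np.ones(window) / window, mode="same")
    return filtered, derivative, squared, integrated
```

    Thresholding the integrated waveform then yields candidate R-peak locations.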

    where S and T denote the scale and translation described above; the scaled wavelet of Eq. (1) and the continuous transform of Eq. (2) take the standard forms ψ_{S,T}(t) = (1/√S) ψ((t − T)/S) and CWT(S, T) = ∫ x(t) ψ*_{S,T}(t) dt, with ψ* the complex conjugate of the mother wavelet ψ.

    To reduce the number of coefficients, the DWT is used instead of the CWT (Eq. (3)) [27]. This is achieved by choosing a, b in powers of two, so the DWT is computed by multilevel decomposition. The signal is passed into a low-pass and a high-pass filter, the two filters being orthonormal by construction. The signal is passed through the low-pass filter to get the approximation coefficients and through the high-pass filter to get the detail coefficients, each downsampled by 2. The approximation coefficients are iteratively processed in the same way to obtain further low-pass and high-pass portions. Fig. 2 gives a visual overview of the complete DWT decomposition process.
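    The iterative split into downsampled low-pass (approximation) and high-pass (detail) branches can be sketched with the orthonormal Haar filter pair; Haar is chosen here purely for brevity and is an assumption, not this paper's filter bank:

```python
import numpy as np

def haar_dwt_multilevel(signal, levels):
    """Multilevel DWT with the orthonormal Haar filter pair.

    At each level the signal passes through a low-pass and a high-pass
    filter, both outputs are downsampled by 2, and the approximation
    branch is decomposed again."""
    s = 1.0 / np.sqrt(2.0)
    approx, details = np.asarray(signal, dtype=float), []
    for _ in range(levels):
        low = (approx[0::2] + approx[1::2]) * s   # approximation coefficients
        high = (approx[0::2] - approx[1::2]) * s  # detail coefficients
        details.append(high)
        approx = low
    return approx, details

# Decompose a 256-sample QRS-centred segment (synthetic ramp here).
approx, details = haar_dwt_multilevel(np.arange(256, dtype=float), levels=4)
```

    Because the filter pair is orthonormal, the signal energy is preserved exactly across the approximation and detail coefficients.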

    Figure 1: Steps in the Pan-Tompkins algorithm

    Yet after decomposition, the DWT still lacks perfect reconstruction (PR) and does not provide shift-invariance. To overcome this, the Dual-Tree Complex Wavelet Transform (DT-CWT) is used [28]. The DT-CWT employs a complex-valued scaling function and wavelet. Eqs. (4) and (5) show the functions used in DT-CWT, where Φr(t) is the real part of the complex-valued function and Φi(t) is the imaginary part of the wavelet function. The main difference is that Eqs. (4) and (5) describe two distinct tree structures, and multilevel decomposition is performed twice on the same signal, as shown in Fig. 3.

    In Fig. 4, Tree (A) is used to acquire the real-part coefficients and Tree (B) is utilized to get the imaginary-part coefficients. The low-pass filter is delayed by one-fourth of a sample for non-symmetry, which helps in achieving the PR of the signal. The filters used in the two trees are orthonormal to each other, and reversing the decomposition provides the synthesis of the signal. Next, the fourth- and fifth-level detail coefficients are taken from both trees. Then a 1-D FFT is applied to the features obtained from these levels, and another four features are appended to this feature set: AC power, kurtosis, skewness, and timing information. The cumulative twenty-eight features extracted by this process are ready to be fed into the classifier.
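    A hedged sketch of how such a feature vector might be assembled. The coefficient lengths, their ordering, and the use of an RR interval as the timing feature are illustrative assumptions; only the four named statistics follow the text:

```python
import numpy as np
from scipy.stats import kurtosis, skew

def assemble_feature_vector(d4, d5, rr_interval):
    """Concatenate FFT magnitudes of the level-4/5 detail
    coefficients with AC power, kurtosis, skewness, and timing."""
    coeffs = np.concatenate([d4, d5])
    spectral = np.abs(np.fft.fft(coeffs))  # 1-D FFT of the coefficients
    extra = np.array([
        np.mean(coeffs ** 2),  # AC power
        kurtosis(coeffs),      # kurtosis
        skew(coeffs),          # skewness
        rr_interval,           # timing information
    ])
    return np.concatenate([spectral, extra])

# Stand-in detail coefficients (12 per level) and a toy RR interval.
rng = np.random.default_rng(0)
fv = assemble_feature_vector(rng.normal(size=12), rng.normal(size=12), 0.8)
```

    With 12 stand-in coefficients per level, this happens to yield twenty-eight features, matching the count mentioned above.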

    Figure 2: Transformation of the signal after every individual step of the Pan-Tompkins algorithm

    4.3 Classification (Processing)

    In the past few years, feed-forward neural networks (FNNs) have dominated the automation field. Despite their efficient performance over the years, they still fall short of remembering long-term dependencies and do not work well with time-series data. Recurrent neural networks (RNNs) are used to overcome this drawback. The central theme of the architecture proposed in this section is based on recurrent neural networks; before explaining the proposed architecture, a detailed summary of recurrent neural networks and their variants is given below.

    Figure 3: DWT architecture and transformation of a signal at every level of decomposition

    Figure 4: Architecture of DT-CWT

    4.3.1 Recurrent Neural Networks (RNN)

    Unlike FNNs, RNNs can take inputs of different lengths and provide outputs of different lengths. This property has increased the scope of deep-learning applications, such as image captioning and language translation. RNNs have a loop in their unit which helps store information. These networks rely on a recursive function that generates a new state at time (t) from the information of the old state at time (t-1). Eq. (6) [29] shows this recursive function, a tanh function containing the weights and linear operations, of the form h_t = tanh(W_h h_{t-1} + W_x x_t + b).

    These networks use backpropagation through time to calculate the gradient. As the number of units in the network increases, the gradient value comes close to zero; consequently, the weight updates no longer add information to the network. This problem is known as the vanishing gradient.
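    A scalar toy example of the effect: backpropagation through time multiplies a tanh derivative (at most 1) and the recurrent weight at every step, so the contribution of early inputs shrinks geometrically. The weight value and sequence length are arbitrary:

```python
import numpy as np

W = 0.5                        # a small recurrent weight (scalar for clarity)
h, grad = 0.0, 1.0
grads = []
for t in range(50):
    h = np.tanh(W * h + 0.1)   # recursive state update, cf. Eq. (6)
    grad *= W * (1 - h ** 2)   # chain rule through tanh at this step
    grads.append(abs(grad))
# grads[-1] is vanishingly small: early steps no longer influence learning
```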

    4.3.2 Long Short-Term Memory (LSTM)

    Long short-term memory (LSTM) is a different type of cell used in recurrent neural networks, introduced by Sepp et al. [30]. In an LSTM cell there are various gates, each with its own purpose. The lines connected to the gates carry a vector on which a linear operation is performed to provide the required output. These gates have full control over which information is retained or removed. The gates in an LSTM use sigmoid and pointwise operations, and information is carried through the LSTM network by the cell state; the various gate operations read from and write to this cell state. The working of an LSTM cell is divided into four steps.

    Step 1: Decide what information should be forgotten from the previous state. This is done by the 'forget gate', also known as the first layer of the LSTM. It takes h_{t-1} (the previous hidden state at time t-1) and x_t (the input at time t) and gives a value between 0 and 1, where '0' means forget everything and '1' means retain the complete information; the output lies in this range because the layer uses a sigmoid function, f_t = σ(W_f · [h_{t-1}, x_t] + b_f). Eq. (9) is used in this layer.

    Step 2: Take in the required information; this is done by the input gate, which decides what to write to the cell state. Two functions act in this gate.

    Step 3: First is the sigmoid layer, known as the input gate layer i_t = σ(W_i · [h_{t-1}, x_t] + b_i), which is used to select the required information.

    Step 4: A tanh function is used which produces a candidate set C̃_t = tanh(W_C · [h_{t-1}, x_t] + b_C) that is added to the state.

    Eqs. (12) and (13) are used to update the cell state: first, the forget gate output of Eq. (10) is multiplied by C_{t-1} to forget information from the previous cell state, and then i_t * C̃_t is added to incorporate the new information, giving C_t = f_t * C_{t-1} + i_t * C̃_t.

    After updating the cell state, the output of the LSTM cell is calculated: the input passes through a sigmoid layer (the output gate o_t) and is multiplied by the tanh of C_t, giving h_t = o_t * tanh(C_t). From Fig. 5, we can say that the output of an LSTM cell depends on the previous state of the cell [31].
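    The four steps can be condensed into a single NumPy LSTM step. The stacked weight layout and random initialization are illustrative, not trained parameters:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x_t, h_prev, c_prev, W, b):
    """One LSTM step: forget gate, input gate, candidate, output gate.
    W maps the concatenated [h_{t-1}, x_t] to four stacked gate blocks."""
    z = W @ np.concatenate([h_prev, x_t]) + b
    H = len(h_prev)
    f_t = sigmoid(z[0:H])              # forget gate (0 = drop, 1 = keep)
    i_t = sigmoid(z[H:2 * H])          # input gate
    c_hat = np.tanh(z[2 * H:3 * H])    # candidate values
    o_t = sigmoid(z[3 * H:4 * H])      # output gate
    c_t = f_t * c_prev + i_t * c_hat   # cell-state update
    h_t = o_t * np.tanh(c_t)           # hidden state / cell output
    return h_t, c_t

rng = np.random.default_rng(0)
H, D = 4, 3                            # hidden size, input size (toy values)
W = rng.normal(size=(4 * H, H + D)) * 0.1
h, c = lstm_step(rng.normal(size=D), np.zeros(H), np.zeros(H), W, np.zeros(4 * H))
```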

    Figure 5: An LSTM unit

    4.3.3 Gated Recurrent Units (GRU)

    The gated recurrent unit (GRU) [32] is another variant of the recurrent neural network and is similar to the LSTM with some changes. Owing to these changes, the GRU tends to work faster than the LSTM network, which gives it an advantage; its working can likewise be explained in four steps:

    Update Gate: The functionality of the update gate is to decide what information is to be taken from the previous cell. It takes h_{t-1} (the previous hidden state at time t-1) and x_t (the input at time t), then uses the sigmoid function to give a value between 0 and 1.

    Reset Gate: The reset gate works the same way as the forget gate in the LSTM.

    Current information: Using the reset gate, the candidate state h̃_t is calculated by performing the Hadamard product (pointwise operation) of the reset gate with h_{t-1} and then adding the input (multiplied by its weight). Performing this gives the information that is taken from the previous hidden state via the reset gate.

    Output: The update gate is employed to get the final output h_t of the GRU cell; Hadamard pointwise operations and a sum are used to combine the previous state and the candidate state [33].
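    A minimal NumPy sketch of one GRU step under one common gating convention (h_t = (1 − z_t) ⊙ h_{t-1} + z_t ⊙ h̃_t); biases are omitted and the weights are random, illustrative values:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(x_t, h_prev, Wz, Wr, Wh):
    """One GRU step: update gate, reset gate, candidate, output."""
    v = np.concatenate([h_prev, x_t])
    z_t = sigmoid(Wz @ v)   # update gate: how much new information to take
    r_t = sigmoid(Wr @ v)   # reset gate: how much of h_{t-1} to expose
    h_hat = np.tanh(Wh @ np.concatenate([r_t * h_prev, x_t]))  # candidate
    return (1 - z_t) * h_prev + z_t * h_hat  # Hadamard products + sum

rng = np.random.default_rng(1)
H, D = 4, 3                              # hidden size, input size (toy values)
Wz, Wr, Wh = (rng.normal(size=(H, H + D)) * 0.1 for _ in range(3))
h = gru_step(rng.normal(size=D), np.zeros(H), Wz, Wr, Wh)
```

    With three weight matrices instead of the LSTM's four gate blocks and no separate cell state, the step is cheaper, which is the speed advantage noted above.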

    4.3.4 Bi-Directional Units (Bi-LSTM and Bi-GRU)

    It is seen that bi-directional RNNs leverage better performance than unidirectional RNNs on speech data. The state neurons are divided into two different time directions, considered as forward and reverse states; the output from the reverse state is not connected to the input of the forward state, and vice versa. With the assistance of these two sequential time directions, the input data assess both future and past dependencies, which helps in understanding long-term dependencies. While training bi-directional RNNs, the weights are updated not only via the forward pass but also through the backward pass [34]. Additionally, it is observed that bi-directional LSTM units outperform in phoneme classification and recognition tasks with fewer computations, i.e., epochs [35]. Similarly, bi-directional GRUs can draw outcomes similar to those of bi-directional LSTMs [36]. Hence, the aim is to connect sequential bi-directional LSTM and GRU units cautiously to improve the classification of ECG signals by acquiring their temporal patterns, as shown in Fig. 6.
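    A toy sketch of the bidirectional idea using a plain tanh RNN. Sharing one weight set across both directions is a simplification for brevity; real Bi-LSTM/Bi-GRU layers learn separate forward and reverse parameters:

```python
import numpy as np

def simple_rnn(seq, W, U):
    """Plain tanh RNN over a sequence; returns all hidden states."""
    h, out = np.zeros(W.shape[0]), []
    for x_t in seq:
        h = np.tanh(W @ h + U @ x_t)
        out.append(h)
    return np.array(out)

def bidirectional(seq, W, U):
    """Run a forward and a reverse pass and concatenate the states,
    so each time step sees both past and future context."""
    fwd = simple_rnn(seq, W, U)
    bwd = simple_rnn(seq[::-1], W, U)[::-1]  # reverse pass, re-aligned in time
    return np.concatenate([fwd, bwd], axis=1)

rng = np.random.default_rng(2)
T, D, H = 6, 3, 4                          # time steps, input size, hidden size
seq = rng.normal(size=(T, D))
out = bidirectional(seq, rng.normal(size=(H, H)) * 0.1, rng.normal(size=(H, D)) * 0.1)
```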

    Figure 6: A single gated recurrent unit (GRU)

    4.3.5 Proposed Neural Architecture

    The attention mechanism in neural networks was first applied by Dzmitry et al. [37] to memorize long sequences in decoder architectures. Here, a neural architecture with an embedded attention mechanism is proposed for the classification of 8 distinct kinds of arrhythmia from ECG signals. The pre-processed signal from DT-CWT is fed into the proposed neural architecture, whose input branches into two sequential stacked patterns. In the first pattern, bi-directional LSTM units are sequentially arranged with respective layer normalization and dropout layers [38,39]. In the second pattern, bi-directional GRU units are stacked similarly to the first. Then, the output sequence from pattern (i) is multiplied with that of pattern (ii) to apply attention.

    For a definite time step 't', the bi-directional LSTM and GRU sequence outputs perform attention through a scalar product, as mentioned in Eq. (19). This attention mechanism is proposed as global attention for extracting invariant temporal patterns [40].
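    The per-time-step product of the two pattern outputs can be sketched as an element-wise multiplication of two (time steps × units) sequences; the shapes and random values below are illustrative stand-ins for the Bi-LSTM and Bi-GRU outputs:

```python
import numpy as np

rng = np.random.default_rng(3)
T, H = 8, 16                          # time steps, units (illustrative)
bilstm_out = rng.normal(size=(T, H))  # stand-in for pattern (i) output
bigru_out = rng.normal(size=(T, H))   # stand-in for pattern (ii) output

# Attention at every time step t: one sequence modulates the other's
# activations through a point-wise product (cf. Eq. (19)).
attended = bilstm_out * bigru_out
```

    In the proposed model this product sequence then feeds the subsequent GRU layer and the fully connected head.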

    The resultant multiplied output sequence then proceeds as input to a GRU layer and next is fed into a fully connected feed-forward network. The complete model architecture and its related parameters are depicted in Fig. 7. The fully connected network consists of 128 units in the first layer with ReLU activation, and the final layer consists of 8 neurons activated with softmax. The complete model consumes 106K trainable weights, plus a negligible number of non-trainable weights due to the layer normalization layers. The model was trained on 5 variant test patterns.

    Figure 7: Proposed neural architecture with the embedded attention mechanism

    5 Results and Discussion

    As mentioned above, to gauge model performance, the training and testing samples are split into multiple variants with test fractions ranging from 10% to 50%. The complete analysis is carried out with various standard classification metrics; to study the proposed model's behaviour, the accuracy score is chosen as the gold standard. Similarly, to study class-wise performance, precision, recall, and the F1 score are utilized. MSE is used to assess the predictability of the model, depicting the error attained due to imperfect predictions. Finally, AUC-ROC curves are generated to assess the diagnostic performance of the proposed model [41].

    AUC-ROC curves are sample-invariant in nature, as they are insensitive to alterations in the class distributions. These curves are plotted class-wise to interpret the performance of the model at each class level. As a note, AUC-ROC visualizations are useful because they decouple the performance of the classifier from skewness in classes and error costs; they are presented in Fig. 8. The proposed model is trained with two variant batch sizes of 32 and 64 respectively, and all the above-mentioned classification metrics are evaluated for all the test variants with the two batch settings. Larger batches are used during training to minimize the generalization gap [42]; hence batch sizes of 32 and 64 are considered (illustrated in Tab. 1). In the feature extraction step, the Pan-Tompkins method is employed to extract the QRS points of the signal, which play an important role in determining the R-peak and thereby detecting heartbeats. A large set of features is drawn out using DT-CWT; this transformation is shift-invariant and provides PR. Most signal transformation methods lack these properties, which can cause imperfect prediction and increase the chance of misclassification.
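    Class-wise ROC analysis of this kind can be reproduced with scikit-learn (assumed available); in the multi-class setting each class is scored one-vs-rest. The labels and scores below are toy values:

```python
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

# Toy one-vs-rest scores for a single class: a perfectly separable case.
y_true = np.array([0, 0, 0, 1, 1, 1])
y_score = np.array([0.10, 0.40, 0.35, 0.80, 0.65, 0.90])

fpr, tpr, thresholds = roc_curve(y_true, y_score)  # points of the ROC curve
auc = roc_auc_score(y_true, y_score)               # area under that curve
```

    Repeating this per class and averaging over all (class, sample) decisions gives the micro-averaged AUC reported later.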

    The proposed network used Adam [43] as the optimizer with a learning rate of 10^-3. The network uses categorical cross-entropy as the objective function for stochastic optimization with backpropagation.

    Figure 8: Visualizations of AUC-ROC curves for batch size 64 with test sizes of 10%, 20%, 30%, 40%, and 50%

    Table 1: Proposed model performance on the variant test splits with batch sizes 32 and 64

    Table 2: Previous literature work on the MIT-BIH dataset

    Generally, RNNs understand temporal dependencies but lack an understanding of long-term dependencies; the LSTM overcomes this problem of the RNN by understanding long-term temporal relationships in the data. But LSTMs are computationally expensive and still suffer vanishing gradients when the number of units increases to a great extent, whereas the GRU contains fewer gates than the LSTM network and overcomes these problems; GRUs are computationally faster than LSTMs. Most research focuses on designing neural architectures utilizing these units while varying the number of time steps, units, and stacking patterns. To leverage predictive capability, neural attention is applied by understanding the temporal pattern extracted from the signal. By providing neural attention, the minute redundancies and noise captured during feature extraction can be regulated to a great extent, which gives greater performance compared to the remaining neural networks. Various previous works were studied and curated; our method outperforms the existing literature, and the results are tabulated (illustrated in Tab. 2).

    As a note, some research has studied arrhythmia without using signal patterns, i.e., by classifying variant attributes involved in predicting cardiovascular diseases [44], and also by employing PPG signals [45]. Looking at the future perspective of the proposed work, it can be observed that current deep learning applications have traditionally adopted existing distance functions from the research literature for similarity computations but have not tried to fit new functions for similarity computations [46-50]. There is a possibility of devising threshold and similarity functions to suit deep learning applications [51-55]. For instance, recent research contributions propose various similarity and threshold functions for temporal pattern mining which can be redesigned to suit deep learning applications [56-60].

    6 Conclusion

    In this research, a novel attention-based neural architecture is built to vanquish the shortcomings of existing methods for classifying ECG signals. It can be stated that the proposed model is sample-invariant, as it shows only minute error variation when the test samples are increased five-fold. AUC-ROC plots are illustrated to provide a vivid understanding of the performance of the proposed diagnostic model; in the worst-case scenario, the model provides a micro-averaged AUC of 0.9904. Even with its numerous advantages, it is seen that the proposed model can consume high memory when embedded into a real-world application. The training procedure adopted was tested on two batch sizes, whereas dynamic sampling would be preferred to improve performance. In future, the aim is to provide a salient model by acquiring humongous data with less computational capability and higher performance.

    Acknowledgement:The authors acknowledge JNTUH/TEQIP-III for providing the research fund (Ref. No. JNTUH/TEQIP-III/CRS/2019/CSE/08).

    Funding Statement:This research was partially supported by JNTU Hyderabad,India under Grant proceeding number:JNTUH/TEQIP-III/CRS/2019/CSE/08.The authors are grateful for the support provided by the TEQIP-III team.

    Conflicts of Interest:The authors declare that they have no conflicts of interest to report regarding the present study.
