
    Optimized Predictive Framework for Healthcare Through Deep Learning

Computers, Materials & Continua, 2021, Issue 5

Yasir Shahzad, Huma Javed, Haleem Farman, Jamil Ahmad, Bilal Jan and Abdelmohsen A. Nassani

1 Department of Computer Science, University of Peshawar, Peshawar, 25120, Pakistan

2 Department of Computer Science, Islamia College Peshawar, Peshawar, 25120, Pakistan

3 Department of Computer Science, FATA University, Kohat, 26100, Pakistan

4 Department of Management, College of Business Administration, King Saud University, Riyadh, Saudi Arabia

Abstract: Smart healthcare integrates an advanced wave of information technology, using smart devices to collect health-related medical data. Such data usually exist in unstructured, noisy, incomplete, and heterogeneous forms, and addressing these limitations remains an open challenge when classifying health conditions with deep learning. In this paper, a long short-term memory (LSTM) based health condition prediction framework is proposed to rectify imbalanced and noisy data and transform it into a form suitable for predicting health conditions accurately. The imbalanced and scarce data is normalized through coding to gain consistency, using the synthetic minority oversampling technique. The proposed model is optimized and fine-tuned in an end-to-end manner, selecting ideal parameters with a tree Parzen estimator to build a probabilistic model. The patient's medication is pigeonholed to plot the diabetic condition's risk factor through an algorithm that classifies blood glucose metrics using the modern surveillance error grid method. The proposed model efficiently trains, validates, and tests noisy data, obtaining consistent results around 90% against state-of-the-art machine and deep learning techniques, and overcomes insufficient training data through transfer learning. The overall results of the proposed model are further tested with secondary datasets to verify model sustainability.

Keywords: Recurrent neural network; long short-term memory; deep learning; Bayesian optimization; surveillance error grid; hyperparameter

    1 Introduction

Advanced healthcare is an open technological facet providing multidisciplinary telemedicine treatments, from basic sprains to complex chronic diseases, at an industry scale. The vast range of eHealth innovations and applications, such as interactive websites, web portals, telehealth, e-mail, voice recognition, online health groups, and gaming, is swiftly challenging the old conventional approaches [1]. The healthcare industry has tailored large-scale interventions to provide affordable medical care through fast, reliable, robust, and recursive diagnostic techniques. Smart healthcare therefore draws on vertical industry areas such as very large-scale integration, embedded systems, big data, cloud computing, and machine learning (ML) [2].

Despite this vast scope, little attention is paid in the literature to data variance, noise, and broad yet scarce data. Deep learning (DL) is rarely used together with transfer learning (TL) for health-related conditions. Current research proposals lean on conventional ML approaches, simple datasets, long delays, and open-loop problems, particularly with regard to control algorithms [3]. Data normalization is typically done manually or through conventional approaches, and nonlinear regression models are applied through linear and dynamic approximations. Nevertheless, neural networks are well suited to modelling blood glucose levels using multilayer perceptrons and generative-architecture DL techniques [4]. The discriminant architecture of DL embedded into TL is currently rare to find, especially for diabetes prognosis. This research focuses on predicting health-related issues and conditions such as diabetes and its associated diseases.

Diabetes Mellitus (DM), a dreadful disease, has affected one out of every eleven adults globally, and up till now it is not treated seriously. The World Health Organization (WHO) reports that the alarming figure of 463 million may rise to 578 million by 2030 and 700 million by 2045. The WHO cautions that such huge figures bring a direct global financial impact, from 760 billion USD in 2019 to 823 billion by 2030 and 845 billion by 2045. Moreover, 232 million people live with this disease in undiagnosed form, and it has so far caused 4.2 million global deaths, making it the fourth major epidemic cause of mortality. Diabetes has diverse effects on children's and adults' health, such as renal dysfunction, cardiovascular and micro/macrovascular diseases, neuropathy, retinopathy, and gum, dental, sexual, bladder, and vessel complications [5]. There are three known diabetes BG states: a normal state between 80 and 130 mg/dL, hypoglycemia below 80 mg/dL, and hyperglycemia above 180 mg/dL [6]. Common symptoms of both hyperglycemia and hypoglycemia are shown in Fig. 1.

In this research, we present a complete framework by proposing a recurrent neural network (RNN) based long short-term memory (LSTM) model to predict the patient's past, present, and future sickness status. The proposed model has been tested on a challenging and noisy dataset (dataset-I), which is normalized to find suitable parameters for high accuracy using advanced optimization and hyperparameter (HPM) tuning. The results obtained are assessed by a surveillance error grid (SEG) to chart the blood glucose (BG) metric. Two secondary datasets have also been trained and tested to validate the results by comparison with state-of-the-art ML and DL techniques. Our contributions are to:

• Improve the accuracy rate of the predictive model and make it suitable for datasets of arbitrary nature.

• Use pre-processing procedures to find fine-tuned and optimal weights for accurate results.

• Propose an easy method to train challenging data by feature engineering to increase prediction accuracy.

• Design a model that efficiently predicts illness status from past, present, and future data, covering medication categorization, normalization, hyperparameter tuning, and accurate prediction.

The rest of the paper is organized in the following manner. Section 2 gives a recent literature review with its limitations. Section 3 presents the complete methodology. Section 4 gives results and discussion. Finally, Section 5 concludes with future directions and limitations of this research.

Figure 1: Common glycemic symptoms

    2 Literature Review

Despite many efforts and achievements in the health industry, clean digital medical data remains important and difficult to obtain. Predicting accurate and productive risk factors has always been an important subject attracting many researchers' interest. The relevant recent work is discussed below.

The authors in [7] proposed an artificial neural network (ANN) using backpropagation and the apriori algorithm to detect diabetes. An online manual system exploiting the chain rule for mining frequent itemsets is proposed to detect the diabetic condition. However, besides requiring manual inputs, only a simple dataset is tested to negate the involvement of doctors. No data normalization, BG metric, or TL is observed. A more advanced approach appears in [8], which proposes a DL restricted Boltzmann machine based framework to detect diabetes. The proposed work can differentiate between Type-I and Type-II diabetes by classification and recognition using the decision tree ML method. Their model is manual and independent between layers, using a likelihood approach. However, data is input through top-down feedback with low performance and high time complexity. Three hundred data samples are selected with no clear picture of the source. Additionally, data scarcity and imbalance are not addressed, and no BG metric or TL is observed.

The authors in [9] proposed a DL approach, namely GRU-D, to recover missing data for successful imputation and improved prediction. The gated recurrent unit GRU-D is built using an RNN to exploit masking and time intervals in two representations of missing patterns and incorporates a deep model architecture. Missing values are estimated to achieve prediction results. However, the MIMIC-III dataset is selected for the research, with no diabetes taxonomy and cataloging, TL, or grid classification. Collaborative filtering-enhanced DL (CFDL) is proposed by [10] to build a reference system that predicts future behavior patterns. The model handles incomplete input to predict patient readmission. However, data incompleteness and noise are addressed by ML using a traditional normalization approach, and again no BG metric or TL is observed.

The authors in [11] proposed a model that utilizes distributed and parallel computation on multiple GPUs through DL. A large Type-2 DM dataset is used to observe hardware and software computational complexity. However, no data optimization, categorization, TL, or error grid is seen. Solid research is observed in [12], where the authors offered an RNN based LSTM with embedded TL. A Gaussian kernel is exploited using six months of continuous glucose monitoring (CGM) data from 26 participants. However, the dataset is small, the error grid approach is outdated, and diabetes categorization is not observed. Another notable work is [13], where the authors used an RNN based LSTM to estimate BG level using root mean square error (RMSE) and a univariate Gaussian distribution over the output. Future forecasting is done using the history of BG levels. However, that research uses a single dataset with no trace of TL or diabetes medication categorization.

    3 Methodology

Machine learning entails high-quality training that depends directly on the quality of input data, which is not easy to obtain. The accuracy of such data can be regulated by deriving a training dataset (to fit the predictive model), a test dataset (to determine the future performance of the predictive model), and a validation dataset (to measure the predictive model's adherence to a given quality standard). Healthcare data is essential for the growth of many beneficial opportunities, computing fields, and strength and confidence in the health sector's outcomes. Nevertheless, a dataset format should retain its structure and individual values in syntactic integrity so that a health practitioner may perform automated analysis.

    3.1 Dataset

Over twenty datasets covering diabetes and its associated diseases were downloaded, discussed, evaluated, and analyzed. Three datasets from the UCI ML repository are selected: a primary dataset (dataset-I) [14] and two secondary datasets (dataset-II, dataset-III) [15]. The challenging and noisy dataset-I is grouped into medicinal, personal information, and diagnosis attributes with a total of 101,766 instances and 50 attributes, covering 47,055 males, 54,708 females, and 3 unknown genders. The attributes of the secondary datasets are grouped into personal information, medical tests, and keystone habits.

The primary dataset requires a detailed transformation to invoke DL abilities, because insufficient information leads to inefficient models. The dataset-I features are grouped into three categories: patient demographic information (race, gender, age, weight, etc.), hospital metrics (number of procedures, insulin levels, number of lab procedures, HbA1c test, etc.), and diabetes medication (metformin, repaglinide, glimepiride, chlorpropamide, etc.). The extracted information and the correlation between attributes are given in Fig. 2. The primary dataset contains errors, missing values, and anomalies, so preprocessing is required to gain consistency in the data and prevent overfitting. Sparse properties are eliminated by auto-normalization, as most such fields hold no usable values and little predictive information, as shown in Fig. 3.

Figure 2: Pearson's correlation coefficient (r) heatmap shows a high relationship between encounter_id and patient_nbr, time_hospital with num_medi and lab_procedure, and num_procedure with num_medi

Figure 3: Weight, payer_code, and medical_specialty have high missing values

    3.2 Imbalanced and Missing Values

Imbalanced data usually arises when a dataset is dominated by a majority class while a minority class is underrepresented. Hence, the dataset-I classifier's performance on the minority class is insufficient compared to the majority. To deal with such imbalanced data, one can either use under-sampling to balance the classes by eliminating part of the majority class, or use synthetic minority methods to increase the number of minority class instances. The synthetic minority oversampling technique (SMOTE) is preferred here because it creates new instances rather than replicating existing ones [16]. In the selected dataset, attributes having more than 50% missing values are dropped, since filling those values offers no significance, as shown in Fig. 3.
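SMOTE's core idea can be sketched in a few lines: each synthetic sample is an interpolation between a minority-class point and one of its k nearest neighbours. The snippet below is a minimal NumPy illustration of that idea (the imbalanced-learn library's SMOTE class is the usual production route); the toy data and parameter values are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def smote_oversample(X_min, n_new, k=5, seed=0):
    """Minimal SMOTE sketch: synthesize n_new minority samples by
    interpolating between each sample and one of its k nearest neighbours."""
    rng = np.random.default_rng(seed)
    n = len(X_min)
    out = []
    for _ in range(n_new):
        i = rng.integers(n)
        # k nearest neighbours of X_min[i] (index 0 of argsort is the point itself)
        d = np.linalg.norm(X_min - X_min[i], axis=1)
        nn = np.argsort(d)[1:k + 1]
        j = rng.choice(nn)
        gap = rng.random()  # interpolation factor in [0, 1)
        out.append(X_min[i] + gap * (X_min[j] - X_min[i]))
    return np.array(out)

# toy minority class: 10 points in 2-D; synthesize 40 more to balance a 50/10 split
X_min = np.random.default_rng(0).normal(size=(10, 2))
X_syn = smote_oversample(X_min, n_new=40, k=3)
print(X_syn.shape)  # (40, 2)
```

Because every synthetic point is a convex combination of two minority points, the new samples stay inside the minority class's region rather than duplicating existing rows.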

    3.3 Value Mapping

Data transformation is performed to find and map useful information f : R^n → R into the form [0, 1], as shown in Tab. 1. The age variable is defined by the mean age of each interval and translated to a numeric value.
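As an illustration of this mapping step: age brackets such as "[60-70)" in the raw UCI diabetes data can be replaced by the interval mean, and categorical drug statuses (No/Down/Steady/Up in the raw data) by numeric codes. The exact codes of Tab. 1 do not survive in this text, so the numeric values below are assumptions:

```python
# Hedged sketch of the value-mapping step. The DRUG_STATUS codes are
# illustrative assumptions; only the age-bracket format follows the dataset.

def age_to_mean(bracket: str) -> float:
    """Translate an age bracket like '[60-70)' to its interval mean."""
    lo, hi = bracket.strip("[)").split("-")
    return (int(lo) + int(hi)) / 2

DRUG_STATUS = {"No": 0.0, "Down": 0.25, "Steady": 0.5, "Up": 1.0}  # assumed codes

row = {"age": "[60-70)", "metformin": "Steady", "insulin": "Up"}
mapped = {"age": age_to_mean(row["age"]),
          "metformin": DRUG_STATUS[row["metformin"]],
          "insulin": DRUG_STATUS[row["insulin"]]}
print(mapped)  # {'age': 65.0, 'metformin': 0.5, 'insulin': 1.0}
```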

Table 1: Value mapping in dataset-I

Figure 4: Diabetes medications in their standard classes

    3.4 Proposed Algorithm

The primary dataset includes twenty-three different medicines, which are ordered into medicinal classes, as shown in Fig. 4. The categorization and classification were carried out in consultation with medical specialists and endocrinologists, reflecting diabetic culture, current treatment trends, and diabetes medicinal groups.

Training the model requires a complete classification of the medicine administered at home and during hospital visits. The drug intake is added into the model, as illustrated in Algorithm 1.

Algorithm 1: Categorization of a patient through their medication
Start: Load dataset and normalize
1. Check patient_nbr, gender, age
2. Check admission_rate (number_outpatient, number_inpatient, number_emergency)
3. Evaluate admission_type_id and discharge_disposition_id; count time_in_hospital and discharge_rate; validate medical_speciality; record num_lab_procedure, num_procedure, num_medication
4. If admission_rate > 5 then diabetic else not-diabetic
5. Return
6. Evaluate number_diagnose
7. If max_glu_serum > 200 and A1Cresult > 8 then diabetic
8. If metformin <= 0.5 and glipizide <= 0.5 then not-diabetic
9. If metformin <= 0.5 and chlorpropamide > 0.5 and nateglinide <= 0.5 then diabetic
10. If glimepiride <= 0.5 and repaglinide > 0.5 and pioglitazone >= 0.5 then diabetic
11. If glimepiride = 0.5 and rosiglitazone <= 0.5 then categorize the patient as healthy
12. If acetohexamide <= 0.5 and glipizide > 0.5 and glyburide <= 0.5 then not-diabetic
13. If rosiglitazone <= 0.5 and tolbutamide > 0.5 and acarbose <= 0.5 then diabetic
14. If miglitol > 0.5 and tolazamide > 0.5 and citoglipton > 0.5 then not-diabetic
15. If insulin > 0.5 then diabetic
16. If insulin <= 0.5 and metformin (all remaining ranges) >= 0.5 then diabetic
17. Return
18. If diabetesMed is yes then diabetic else not-diabetic
19. End
Output: Weights for prediction via ML, DL, LSTM and LSTM-TL
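A representative subset of Algorithm 1's rule chain could be expressed as a plain classifier function. This is a hedged sketch, not the authors' code: the field names follow the algorithm listing, the thresholds are those stated, and the reading of the flattened control flow as "first matching rule wins" is an assumption.

```python
def classify_patient(p: dict) -> str:
    """Sketch of a few Algorithm 1 rules over encoded (0/1-scaled) features.
    Rules are applied in order; the first match decides the label."""
    if p.get("admission_rate", 0) > 5:                                 # step 4
        return "diabetic"
    if p.get("max_glu_serum", 0) > 200 and p.get("A1Cresult", 0) > 8:  # step 7
        return "diabetic"
    if p.get("metformin", 0) <= 0.5 and p.get("glipizide", 0) <= 0.5:  # step 8
        return "not-diabetic"
    if p.get("insulin", 0) > 0.5:                                      # step 15
        return "diabetic"
    # step 18: fall back on the diabetesMed flag
    return "diabetic" if p.get("diabetesMed", False) else "not-diabetic"

print(classify_patient({"max_glu_serum": 250, "A1Cresult": 9}))  # diabetic
print(classify_patient({"metformin": 0.2, "glipizide": 0.1}))    # not-diabetic
```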

    3.5 Data Normalization

The dataset's value ranges fluctuate widely, so the learning algorithm's output can be dominated by features with higher values; values are therefore kept within a predefined limit (the Optimal Range) to retain inherent details. The Optimal Range (OR) is a set of patterns used to predict the next sequence. Min-max normalization is used to permit a configurable range for scaling values in the datasets, as depicted in Eqs. (1) and (2).

where X_Nor is the initial feature value of interest, X_Min is the minimum value, X_Max is the maximum value, and R denotes the optimal scaled feature set [-1, 1]. To generalize the whole procedure, consider OR = 4, say p1, p2, p3, p4, to predict the value r of p5 with units Q1, Q2, Q3. The first module Q1 takes the input vector p1, p2, p3, ..., pOR, the second module Q2 takes the input vector p1, p2, p3, ..., pOR together with the output of the first module, and so on. The final module (Q3) predicts the value r_(OR+3), as given in Eq. (3).
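The min-max scaling into the [-1, 1] range described above can be sketched directly (Eqs. (1) and (2) do not survive extraction, so this follows the standard min-max formula, which the surrounding definitions imply):

```python
import numpy as np

def min_max_scale(x, lo=-1.0, hi=1.0):
    """Min-max normalization into the optimal range [lo, hi] of Section 3.5:
    first rescale to [0, 1], then stretch to the target interval."""
    x = np.asarray(x, dtype=float)
    x01 = (x - x.min()) / (x.max() - x.min())  # classic min-max to [0, 1]
    return lo + x01 * (hi - lo)

glucose = np.array([80.0, 130.0, 180.0, 230.0])  # toy BG readings in mg/dL
print(min_max_scale(glucose))  # approximately [-1, -0.333, 0.333, 1]
```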

Here N is the total number of samples for time instance i, and ŷ_i and y_i are the predicted and actual values. To offer some perception of how the RNN works: the network typically takes an independent variable (or variables) X and a dependent variable y, followed by mapping and training between X and y. The process sequence of values is (X1, X2, X3, ..., X_(t+1), X_t, X_(t-1)). Thus X_t holds a sequence of data at time t with parameter state Θ; the state x_t depends on the input parameter X_(t-1), the previous timestamp of the model, as shown in the process model in Fig. 5.

Figure 5: Overall proposed RNN-LSTM model with TL

    3.6 Model Classification and Tuning

An RNN is a linear memory architecture that maintains all previous information in its internal state vector. Gradient approaches may fail when time dependencies become too long, due to exponential increase or decrease in values [17]: RNNs suffer from vanishing and exploding gradients, challenging training, and difficulty treating long sequences. LSTM resolves the RNN long-term dependency problem by maintaining relevant information for extended periods and forgetting irrelevant information. LSTM also overcomes back-flow error problems and processes large datasets by keeping each cell's information through structural gates. The proposed model uses three distinct gates, the forget gate, input gate, and output gate, as shown in Eqs. (4)-(6). The forget gate decides which input information from previous memory may be ignored. The input gate feeds selected information into the cell. The output gate generates and updates the hidden vector. Additionally, LSTM allows a fourth gate, the input modulation gate, a subpart of the input gate, to reduce the learning time as a zero-mean for faster convergence, as shown in Eq. (7). The input gate exploits feedback weights from other memory cells in order to store or access data in its memory cell.

where σ is a logistic sigmoid function deciding among [0, 1] values what to let through, k is the recurrent weight, m is the input weight, and b is the bias value. The output of an LSTM layer at time t is fed back to the input of the same network layer at time t + 1. The complete process is shown in a block diagram in Fig. 6, where information is regulated using the control gates k_g, k_f, and k_o through the internal state c.
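Eqs. (4)-(7) do not survive extraction, but the gate structure the text describes is the standard LSTM cell. The sketch below writes one forward step in NumPy, using the text's naming (k = recurrent weight, m = input weight, b = bias); the toy sizes and random weights are illustrative assumptions.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_t, h_prev, c_prev, W):
    """One LSTM step with the gates of Section 3.6: forget (f), input (i),
    output (o), and the input-modulation gate (g)."""
    f = sigmoid(W["kf"] @ h_prev + W["mf"] @ x_t + W["bf"])  # forget gate
    i = sigmoid(W["ki"] @ h_prev + W["mi"] @ x_t + W["bi"])  # input gate
    o = sigmoid(W["ko"] @ h_prev + W["mo"] @ x_t + W["bo"])  # output gate
    g = np.tanh(W["kg"] @ h_prev + W["mg"] @ x_t + W["bg"])  # input modulation
    c = f * c_prev + i * g   # internal state keeps or forgets information
    h = o * np.tanh(c)       # hidden vector passed to the next timestep
    return h, c

rng = np.random.default_rng(0)
n, m = 4, 3  # hidden units, input features (toy sizes)
W = {k: rng.normal(scale=0.1, size=(n, n) if k[0] == "k" else (n, m))
     for k in ("kf", "ki", "ko", "kg", "mf", "mi", "mo", "mg")}
W.update({b: np.zeros(n) for b in ("bf", "bi", "bo", "bg")})
h, c = lstm_step(rng.normal(size=m), np.zeros(n), np.zeros(n), W)
print(h.shape, c.shape)  # (4,) (4,)
```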

Figure 6: LSTM vector 'x' (the past) as a single input to predict an output 'y' (the future)

    3.7 Hyperparameter Optimization

Selecting ideal parameters is the key difference between average and state-of-the-art performance in a neural network, because an algorithm depends heavily on its HPM. Various aspects, such as memory utilization and computational complexity, depend on HPM tuning, which requires more training time for significant results. HPM optimization is defined as in Eq. (8).

Here x* is the value generating the minimum score, x ∈ X shows that x can assume any value in the domain X, and f(x) indicates the target score to reduce, the validation set error. The overall goal is to find the model HPM producing a high score on the validation set metric. Tuning ML algorithms is subject to trial and error to find the optimal values, done either manually or automatically. For this purpose, an automated method such as Bayesian optimization is designated to systematize finding HPM in less time, using an informed search technique that assesses values based on past trials. Bayesian optimization is a famous, simple, and collective approach in DL for sequentially optimizing an unlabeled objective function. Bayesian optimization is preferred over random search because the latter spends long run times assessing doubtful areas of the search space [18]. Bayesian optimization needs fewer iterations to achieve excellent performance by tuning the HPM across building, training, and validating versions. Bayesian optimization is defined as in Eq. (9).

Our proposed approach uses three HPM: dropout, the number of LSTM neurons, and the number of network layer neurons. The parameter domain is defined using the hyperopt distribution function [19]. Hyperopt supports serial and parallel optimization over awkward search spaces, including real-valued, discrete, and conditional dimensions. To achieve consistent hyperopt results, we feed input parameters forward to the objective function based on the surrogate model engineering function p(b|a). The surrogate acts as an approximator of the objective function, proposing parameters using the tree Parzen estimator (TPE), a Gaussian process (GPyOpt), or random forest regression through sequential model-based algorithm configuration (SMAC). In this research, TPE is preferred over the others: it builds a probabilistic model of the function at each step and chooses the most likely parameters. The complete framework of the operational model, including the Bayesian optimization process, is shown in Fig. 7.
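In practice this search is driven through hyperopt's fmin with tpe.suggest. To keep the illustration self-contained (no hyperopt dependency), the sketch below implements a deliberately crude TPE-flavoured loop: past trials are split into "good" and "bad" sets, Parzen (kernel) densities are fitted to each, and the next candidate maximizes the good/bad density ratio. The toy objective and every constant here are illustrative assumptions, not the paper's setup or the hyperopt library.

```python
import numpy as np

def objective(x):
    return (x - 2.0) ** 2  # toy stand-in for the validation loss; minimum at x = 2

def tpe_like_search(n_trials=60, bounds=(-5.0, 5.0), gamma=0.25, seed=0):
    """Crude illustration of the tree Parzen estimator idea of Section 3.7:
    model P(x | good) and P(x | bad) from past trials and propose the
    candidate with the best good/bad density ratio."""
    rng = np.random.default_rng(seed)

    def density(pts, cand):  # Parzen (Gaussian kernel) density estimate
        return np.mean(np.exp(-0.5 * ((cand[:, None] - pts[None, :]) / 0.5) ** 2),
                       axis=1)

    xs, ys = [], []
    for t in range(n_trials):
        if t < 10:  # random warm-up trials before the surrogate kicks in
            x = rng.uniform(*bounds)
        else:
            order = np.argsort(ys)
            n_good = max(1, int(gamma * len(ys)))   # best gamma-fraction of trials
            good = np.array(xs)[order[:n_good]]
            bad = np.array(xs)[order[n_good:]]
            cand = rng.uniform(*bounds, size=64)
            x = cand[np.argmax(density(good, cand) / (density(bad, cand) + 1e-12))]
        xs.append(x)
        ys.append(objective(x))
    best = int(np.argmin(ys))
    return xs[best], ys[best]

x_best, y_best = tpe_like_search()
print(round(x_best, 3), round(y_best, 4))
```

The informed proposals concentrate trials near the promising region instead of sampling the whole space uniformly, which is the advantage over random search noted above.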

Figure 7: Bayesian optimization works on top of the predictive model to fine-tune HPM and validate accuracy. The weights are chosen from the HPM constraint block based on the values collected from accuracy validation

The algorithm creates a random starting point x0 and calculates F(x0). It uses the trial history to build a conditional probability model P(F|x), selects an xi given P(F|x) that promises a better F(xi), calculates the real value of F(xi), and repeats these steps until one of the stop criteria is satisfied, for instance i > max_eval. TPE proposes HPM by applying surrogate and selection functions, which combine to evaluate the parameters it believes will yield high accuracy on the objective function. The selection function cum TPE surrogate is illustrated in Eq. (10) [20]. Results of LGBMRegressor, grid search, random search, and Hyperopt optimization are given in Tab. 2, and their comparison in Fig. 8.

Table 2: Comparison of search methods

Figure 8: Optimization bar chart. The test score 1 = 10,000

    3.8 LSTM Output

LSTM can bridge long time intervals and approximate noisy data, generalizing over problem domains, distributed representations, and continuous values. It overcomes error back-flow problems, ensuring gradients neither explode nor vanish, by using specialized units' internal state to reduce the input/output weight conflict. Each LSTM layer is accompanied by a dropout layer to prevent overfitting and a selected number of neurons for optimal training. In this research, the LSTM network has 512 and 256 hidden neurons with imposed dropout rates of 0.2 and 0.3 to refrain the model from over-fitting. The optimized input HPM is evaluated for 43 iterations until no improvements are seen. The output performance metrics considered are accuracy, precision, and recall to track the LSTM predictive response, as defined in Eqs. (11)-(13).
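Layer sizes like the 512 and 256 units above can be sanity-checked against a parameter budget with the standard LSTM parameter-count formula (four gates, each with input weights, recurrent weights, and a bias). The exact architecture behind Fig. 10's total of 367,585 trainable parameters is not fully specified here, so the sketch only verifies the formula on toy sizes:

```python
def lstm_params(input_dim: int, units: int) -> int:
    """Trainable parameters of a standard LSTM layer: four gates, each with
    input weights (input_dim x units), recurrent weights (units x units),
    and a bias vector (units)."""
    return 4 * ((input_dim + units) * units + units)

# Toy check: a 2-unit LSTM over 3 input features has 4*((3+2)*2 + 2) = 48
# weights, which matches what e.g. Keras reports in model.summary().
print(lstm_params(3, 2))      # 48
print(lstm_params(256, 512))  # 1574912: a 512-unit layer on a 256-wide input
```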

    4 Results and Discussion

This research aims to build an optimized and user-friendly system that can process challenging datasets with dependent and independent variables. The optimization, in autonomous mode, digs deeper and draws out optimal HPM for higher accuracy. More than twenty diabetes datasets with a variety of information were downloaded from the UCI repository, among which three datasets associated with each other were selected. The primary dataset is very challenging due to its variation and noisy, incomplete data. To assess data performance and accuracy, we test state-of-the-art ML methods, as shown in Figs. 11-13. We also define two DL testing and validation methods based on straight LSTM and embedded LSTM transfer learning (LSTM-TL).

The RNN based LSTM is preferred because it offers more controllability and better results. It uses extended, long, multiple, parallel sequences to produce accurate results on the dataset by learning and remembering input directly from raw time series data. Additionally, 43 random seeds are trained for each HPM between 512 and 256 LSTM units; however, we went with 256 units to lower the complexity. The reason is that LSTM is a process-hungry and time-consuming technique that requires a trial and error approach to find the best inputs. The proposed model was initially validated with 512 LSTM units, which consume extra operating cost and time, and was later limited to 256 units for lower complexity and processing. Moreover, the nature of the datasets plays a significant role in model performance: digging out valid inputs matters for dataset-I, whereas dataset-II and dataset-III are comparatively straightforward, with low complexity, processing time, and operating cost.

    4.1 Surveillance Error Grid

An evaluation standard is required to define the role of the proposed model for medical practitioners. To quantify clinical accuracy for patients, error grid analyses are used to hedge the threat of improper future forecasts of the monitored BG levels. Previously, the Clarke error grid (CEG) was the standard, famous, long-standing technique to evaluate BG levels in clinical practice. However, it has the limitations of (i) no distinction between Type-1 and Type-2 diabetes, (ii) a discontinuous transition between Zone B and Zone E, and (iii) having been introduced by a small number of diabetes experts. A successor, the Parkes error grid (PEG), was introduced to chart risk zones and declare thresholds for both types of diabetes. However, PEG has no integration with smart technology and badly needed a review of its dated approaches with respect to Type-1 insulin pump therapy, Type-2 insulin injections, and CGM-driven insulin injections. This leads us to the surveillance error grid (SEG) [21]. The bilinear interpolation criterion of SEG is given below [13].

Table 3: Risk grade of evaluated samples in light of SEG

The risk factor in the above table is the difference between BGM and REF, expressed as a percentage of REF when REF > 100 mg/dL and in mg/dL when REF <= 100 mg/dL. The distribution of dataset samples lying in the various zones of the SEG plot is shown in Fig. 9.
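That piecewise definition of the risk difference is small enough to state as code (a direct reading of Table 3's note; the function name and sample readings are illustrative):

```python
def bg_risk_difference(bgm: float, ref: float) -> float:
    """Risk difference per the SEG table note: (BGM - REF) as a percentage
    of REF when REF > 100 mg/dL, and in plain mg/dL when REF <= 100 mg/dL."""
    if ref > 100.0:
        return (bgm - ref) / ref * 100.0
    return bgm - ref

print(bg_risk_difference(180.0, 150.0))  # percent of REF (REF > 100)
print(bg_risk_difference(70.0, 90.0))    # mg/dL (REF <= 100)
```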

Figure 9: SEG plot showing sample risk factor

    4.2 Performance Evaluation

DL is a data-hungry technique for understanding hidden data patterns, resulting in high data dependency. The scale and size of a data model are in a linear relationship, which shows that the model's expressive space is large enough to discover the hidden patterns in the data [22]. The proposed model performed well by a margin of significance with three input dimensions: samples, time steps, and features. An acceptable error tolerance threshold makes it clear when to stop iterating the model toward convergence, once no further improvement is seen. A 40-epoch stoppage policy is assigned to stop the validation process if no improvement is observed. Complete details of the LSTM model are given in Fig. 10.

Figure 10: Parameters (total trainable: 367,585, non-trainable: 0)

The batch size of the LSTM output (batch size, timespan, input) plays a vital role together with the learning rate. The batch size was gradually increased from 128 to 768 for accurate results. The dataset distribution is set to 60% for final training, 20% for testing, and 20% for validation. A maximum of 5000 epochs and a 500-epoch enforced stoppage are endorsed if no improvement is observed. In order to reuse the pre-trained data on imbalanced datasets, TL is used to fit a previously unseen dataset.
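The 60/20/20 partition above can be sketched as a simple shuffled index split (the seed and helper name are illustrative; only the ratios and dataset-I's instance count come from the text):

```python
import numpy as np

def split_60_20_20(n_samples: int, seed: int = 0):
    """Shuffle sample indices and cut them into the 60% train / 20% test /
    20% validation partition used in Section 4.2."""
    idx = np.random.default_rng(seed).permutation(n_samples)
    n_train = int(0.6 * n_samples)
    n_test = int(0.2 * n_samples)
    return idx[:n_train], idx[n_train:n_train + n_test], idx[n_train + n_test:]

train, test, val = split_60_20_20(101_766)  # dataset-I instance count
print(len(train), len(test), len(val))
```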

Transfer learning takes a model's trained data and utilizes it on a second, related task, saving time and giving better performance. TL recovers an imbalanced dataset to eliminate classification problems where the observations per class are not equally distributed. TL takes the previously trained layers and freezes them, adds additional training layers on top of the frozen layers, and trains the final model. In other words, it allows the freedom not to train the model in a target domain from scratch, which significantly lessens the demand for training data and training time [22]. However, gaps in the data lead to missing predictions, and when filling the missing values it is not clear how much bias this would introduce. So we propose to create (a, b) pairs with a given history a and regression target b for a given prediction horizon. The proposed approach helps to train and predict utilizing the maximum data. Performance evaluation for this research is based on accuracy, area under the curve (AUC), and recall metrics, with precision as a supporting metric, as illustrated in Figs. 11-13, respectively.
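The freeze-then-fine-tune pattern described here is what Keras expresses with layer.trainable = False. To keep the illustration dependency-free, the sketch below uses a fixed random feature map as a stand-in for the frozen pre-trained layers and trains only a small head on top; everything in it (sizes, learning rate, synthetic data) is an illustrative assumption, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Frozen "pre-trained" base: a fixed feature map standing in for the
# pre-trained LSTM layers. Its weights are never updated below.
W_base = rng.normal(size=(8, 4))

# Trainable head stacked on top of the frozen base, as in the TL setup above.
w_head = np.zeros(8)

X = rng.normal(size=(200, 4))
y = X @ rng.normal(size=4)        # synthetic regression target

feats = np.tanh(X @ W_base.T)     # base features (base stays frozen)
for _ in range(500):              # train only the head by gradient descent
    grad = feats.T @ (feats @ w_head - y) / len(y)
    w_head -= 0.1 * grad

mse = np.mean((feats @ w_head - y) ** 2)
print(round(float(mse), 3))
```

Only the head's gradients are ever computed, which is exactly why transfer learning cuts both the training-data requirement and the training time.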

Figure 11: Performance comparison of the classifiers of DM dataset-I

Figure 12: Performance comparison of the classifiers of renal disease dataset-II

    4.3 Discussion

In this study, a DL framework is proposed to assess noisy, challenging, and incomplete data through high-end optimization and the evaluation of diabetes patients. To meet our study's objective, the data is cleaned and transformed autonomously by converting strings into numerics, categorizing fields per their relevancy, removing data scarcity, filling empty cells, and molding medication dosage. The autonomous method is an innovative dataset experiment to improve the HPM of the proposed DL model with minimal effort. An expert-level scenario was also constructed in consultation with medical specialists and endocrinologists to achieve the research objective. The dose/level of the administered medicine is classified to distinguish between normal and diabetic patients. An algorithm is designed to leverage diabetic patient data to indicate their condition and predict their future status. The objective is to develop an RNN based LSTM model to train, validate, and test data with an in-depth exploration of dynamic changes to predict BG levels. We normalized the scarce dataset and equated the readings through SEG, a modern metric for clinical risk assessment. The BG errors give the data a unique risk score compared with a reference value, which helps evaluate the risk faced by diabetic patients by successfully categorizing the samples into the specified risk zones. We hope that the obtained results will improve clinical accuracy and enable further experiments, such as data points falling into custom-defined risk zones.

Figure 13: Performance comparison of the classifiers of cardiovascular disease dataset-III

Previous studies concentrate on predicting BG levels, diabetic/non-diabetic status, artificial pancreas readings, glycemic control, and physiological prototypes. With this in view, the results of the proposed model are first tested against state-of-the-art ML baseline models, including logistic regression, random forest, decision tree, k-nearest neighbors, naïve Bayes, and a linear support vector classifier. All the baseline models reveal a good and constant accuracy around 89% except for Bernoulli naïve Bayes (85%), which is still an encouraging result. The overall objective is to develop a method that can guide diabetic patients in choosing a healthy lifestyle according to their diabetic condition. Nutritious food and physical activity are essential elements of a healthy lifestyle; it is optimistic to expect drastic lifestyle changes, but changes do come steadily.

    4.4 Transfer Learning

The numerical results show that integrating TL provides better predictive accuracy, particularly when the available dataset is noisy and incomplete. It reveals significant findings in sparse data with complex trends and missing or imputed values. Before training, the data is normalized and optimized to create the TL dataset for pre-training a global LSTM model. The regulated data is trained on the RNN-LSTM DL model and tested on 20% of a dataset, which is a reasonable percentage balancing model accuracy against overfitting. Here, our LSTM layer(s) did all the work of transforming the input to predict the desired output.

    5 Conclusion

To conclude, DL systems need data to provide difficult interpretations, effectively diagnosing health conditions to reduce clinical decision-making uncertainty. In this paper, the RNN-LSTM model is proposed to test and forecast diabetic patients' disease status from a demanding real-world dataset with scarcity, missing/imbalanced values, and incomplete and noisy data. The data is normalized autonomously via standard procedures and value mapping. The normalized data is fine-tuned by Bayesian optimization to chart interstitial HPM values. The data normalization, HPM tuning, and medicinal categorizing are customized according to the proposed model, which provides a high and consistent level of accuracy through state-of-the-art ML and DL methods.

    An algorithm specifies the primary dataset with twenty-three medicinal attributes for dose-based patient prediction. The LSTM configuration is optimized for accurate input, LSTM units, and output synchronization on a trial-and-error basis. Finally, TL is integrated into the LSTM to repair imbalances and feed the training data forward for stable prediction and higher accuracy. To validate the model's performance, two secondary datasets are tested, confirming the model's consistency, reliability, and accuracy.

    5.1 Future Work

    The initial plan was to obtain a local dataset from health regulatory bodies, hospitals, and laboratories for this research. However, the COVID-19 emergency led us to use a ready-made dataset to test the proposed model. A pilot investigation indicates that this work has significant implications for future research using a local dataset. In the future, we will collect data from local sources and prepare a pilot project that embeds sensor technology, such as body area networks, to generate a dataset. A complete framework will collect health data through customized software. The collected data will be normalized for accurate forecasting, and aggregated readings will be shared with medical professionals for on-site expert advice.

    5.2 Research Limitations

    1. Increasing LSTM units leads to greater complexity, processing time, and operating cost.

    2. The datasets are old; moreover, dataset-I has missing Age and Weight fields.

    3. Test precision is low.

    Funding Statement:This work is supported by Researchers Supporting Project number (RSP-2020/87),King Saud University,Riyadh,Saudi Arabia.

    Conflicts of Interest:The authors declare that they have no conflicts of interest to report regarding the present study.
