
    Prognostic Kalman Filter Based Bayesian Learning Model for Data Accuracy Prediction

    2022-08-24 12:58:32
    Computers Materials & Continua, 2022, Issue 7

    S. Karthik, Robin Singh Bhadoria, Jeong Gon Lee, Arun Kumar Sivaraman, Sovan Samanta, A. Balasundaram, Brijesh Kumar Chaurasia and S. Ashokkumar

    1Department of ECE, College of Engineering and Technology, SRM Institute of Science and Technology, Vadapalani,Chennai, 600026, India

    2Department of CSE, Birla Institute of Applied Sciences (BIAS), Bhimtal, Uttarakhand, 263136, India

    3Division of Applied Mathematics, Wonkwang University, 460, Iksan-Daero, Iksan-Si, Jeonbuk, 54538, Korea

    4School of Computer Science and Engineering, Vellore Institute of Technology (VIT), Chennai, 600127, India

    5Department of Mathematics, Tamralipta Mahavidyalaya, West Bengal, 721636, India

    6School of Computer Science and Engineering, Center for Cyber Physical Systems, Vellore Institute of Technology (VIT),Chennai, 600127, India

    7Indian Institute of Information Technology (IIIT), Lucknow, Uttar Pradesh, 226002, India

    8Department of Computer Science and Engineering, Saveetha School of Engineering, SIMATS, Chennai, 602105, India

    Abstract: Data accuracy is a crucial concern during prediction and computation in the digital revolution. This paper provides an efficient learning mechanism for accurate predictability while reducing redundant data communication. It also discusses Bayesian analysis, which finds the conditional probability of at least two parametric predictions for the data. The paper presents a method for improving the performance of Bayesian classification using a combination of the Kalman filter and k-means. The method is applied to a small dataset to establish that the proposed algorithm can reduce the time for computing the clusters from data. The proposed Bayesian learning probabilistic model is used to check statistical noise and other inaccuracies using unknown variables. This scenario is implemented using an efficient machine learning algorithm to perpetuate the Bayesian probabilistic approach. The paper also demonstrates the generative function for the Kalman-filter based prediction model and its observations. The algorithm is implemented on the open source Python platform, and all modules are efficiently integrated into one piece of code via Common Platform Enumeration (CPE) for Python.

    Keywords: Bayesian learning model; Kalman filter; machine learning; data accuracy prediction

    1 Introduction

    In today’s world of automation, machine learning techniques are applied everywhere to make more out of collected data. Machine learning models are developed and tuned for better performance. The efficiency of these models depends on specific parameters that are difficult for a human to set by hand, but without which the algorithms cannot work [1]. We use Bayesian optimization here, which performs better than other available optimization algorithms developed for the same job. Bayesian optimization assumes the unknown function is drawn from one or more Gaussian processes and uses a prior to maintain a posterior distribution of this unknown function at each iteration. For our demonstration, we use k-means clustering, one of the most popular clustering algorithms. Our work also involves fine-tuning of input values, for which we use another algorithm, the Kalman filter [2]. We use values provided by the Kalman filter to initialize the k-means clustering mechanism. These work hand-in-hand to improve the clustering process and hence the classification.

    Clustering, an important field in data science and a contributor to machine learning, finds applications in several other fields, such as image processing, web cluster search engines, voice analysis, pattern recognition, and bioinformatics [3,4]. A cluster is a group of similar objects, and clustering is the process of making similar sets out of raw data, which helps in segregating unknown data easily. The parameters involved should be used cautiously: incompatible use of clustering parameters, such as the number of clusters (k-means) and the density limit, may lead to an improper density shape of clusters, ambiguity in finding the centroid, and noise [5-7]. Clustering algorithms are mainly divided by:

    a) Belonging to a Cluster

    ■Hard Clustering

    ■Soft Clustering

    b) Distance from nearest Cluster

    ■Distance Clustering

    ■Conceptual Clustering

    c) Grouping of Clusters

    ■Exclusive Clustering

    ■Overlapping Clustering

    ■Hierarchical Clustering

    ■Probabilistic Clustering

    An improved semi-supervised k-means clustering that uses greedy iteration to find the k-means clusters is presented in [8]. That work modifies the iterative objective function of semi-supervised k-means clustering to deal with multi-objective optimization problems with insufficient data. Extended Kalman filter approaches for VANETs [9] and the robotics field [10] achieve low computational requirements, fast convergence, and reliability. However, such approaches in the existing literature lack accurate prediction. The proposed work not only improves the performance of Bayesian classification using the combination of the Kalman filter and k-means, but also reduces the time for computing the clusters from data.

    A Bayesian classifier supports predicting the class of values for the attributes of that class. Bayesian analysis can serve as a backbone of intelligent systems such as robotics. Previous data can be divided into smaller datasets that can be analyzed in collaboration with each other and at the same time.

    The proposed model enables conventional prediction algorithms to adapt to dynamic conditions through continuous monitoring of their performance. To evaluate the effectiveness of the proposed learning-to-prediction model, we developed a machine learning module to improve the prediction accuracy of the k-means technique and the Kalman filter algorithm.

    The open source model has many advantages, such as improved reproducibility of experimental results, quicker detection of errors, and faster adoption of machine learning algorithms.

    The major contributions of this paper are as follows:

    ■Presents a method for improving the performance of the Bayesian classification model using the combination of Kalman filter and k-means techniques.

    ■Implements the algorithm using the open source Python platform and efficiently integrates the different modules into one piece of code via Common Platform Enumeration (CPE).

    ■Proposes a Bayesian learning probabilistic model for checking statistical noise and other inaccuracies using unknown variables.

    ■Shows that the Kalman filter is capable of producing potential results, with an efficiency of 97.6% along with the k-means technique.

    The remaining part of the paper is organized into five sections. Section 2 presents the background and related work on the available literature for k-means and the Bayesian hierarchical model. Section 3 discusses the open source software and platforms used for implementation in this paper. Section 4 discusses the Bayesian probabilistic model and the Kalman-filter based prediction technique. Section 5 presents the results and observations for the comparison of data clustering using simple k-means and Kalman-filter analysis. The last section concludes the paper.

    2 Related Works

    This section reports the background and related works in the field of Bayesian analysis and frameworks for accurate prediction using machine learning algorithms. Tab. 1 also provides comments on the utilization of and improvements in clustering algorithms.

    Reference [11] noted that for machine learning algorithms to give fruitful results, the parameters and hyperparameters need to be fine-tuned on a regular basis. The tuning is also governed by rules of thumb, which require expertise; otherwise it boils down to brute-force searching for correct hyperparameters. It proposes a better way, Bayesian optimization, which can be automated and further coupled with a Gaussian process applied to the hyperparameters to boost the performance of the model. This proposed algorithm improves on previous fine-tuning approaches and provides better optimization.

    The work proposed in [12] was based on a k-means clustering algorithm with mixed numeric and categorical features. It also points out that the traditional k-means algorithm works best only for numeric computations. Taking distance measures into account, an improved cost function with a modified cluster center is proposed to overcome the numeric-only limitation of k-means and to characterize the clusters. It is thoroughly tested with real-world datasets and compared with other clustering algorithms.

    Work using the Bayesian classifier and Kalman filter has drawn a lot of attention for building predictive models with better outcomes and expected data [13,14]. Some gaps and challenges remain:

    ■No holistic approach for common and open source framework platforms for data analysis.

    ■Increasing demand for accuracy using classification and clustering algorithms.

    ■No support for Bayesian learning optimization algorithms to reduce the change of noise to a minimum.

    ■No modules integrated into one piece of code via Common Platform Enumeration (CPE).

    The work done in [15] argues against the use of null hypothesis significance testing (NHST) and promotes Bayesian analysis in its place, together with statistical comparison via Bayesian hierarchical modelling. It employs three Bayesian tests: the Bayesian correlated t-test, the Bayesian signed-rank test, and the Bayesian hierarchical model. It finds errors in NHST and compares the results with those obtained using Bayesian analysis. It also states that the p-value is not a reasonable proxy for the probability of the null hypothesis, yet statistical tests using NHST are employed more often in machine learning.

    The work in [16] focuses on clustering methods that efficiently analyze and generate the required output from the clusters using k-means clustering. It further explains the working of the k-means algorithm in detail, noting its inaccuracies. As the number of clusters must be specified by the user rather than the computer itself, anomalies can occur in the clusters formed, since some data can remain un-clustered. Their proposed algorithm addresses this issue to find a better way to start the clustering process, which leads to less computation time and better accuracy in assigning the un-clustered datasets to similar clusters. Their method changes the initial conditions of the k-means algorithm.

    Reference [17] presents a comparative study of the Bayesian hierarchical model, which improves on the null hypothesis significance test (NHST). It also cross-validated and compared the accuracy of two classifiers using both methods, and listed the shortcomings of NHST. The shortcomings can be reduced by employing Bayesian hypothesis testing. In addition, the hierarchical model, by jointly analyzing the results obtained on all data sets, reduces the estimation error significantly.

    Table 1: Utilization and improvements in clustering algorithms


    3 Bayesian Open Source Frameworks

    This section discusses the available set of open source software that has revolutionized the usage of technology in the latest machine-intelligence based learning models. It also extends development practices to reap the benefits of common sharing and exchange between different media [20,21]. As the development of this software is done by volunteers across the globe, rapid growth is expected, which directly benefits users overall. Banjo (Bayesian Network Inference with Java Objects) is specifically used for static and dynamic Bayesian networks. Bayesian Network Tools in Java (BNJ) is extensively used for research and development with graphical probability models; it is implemented in 100% pure Java. BUGS (Bayesian Inference Using Gibbs Sampling) is flexible software used to implement Bayesian analysis, especially for complex statistical models, justified using Markov chain Monte Carlo methods. The Dlib C++ Library is a general-purpose, cross-platform software library written in C++ following contract and component-based software engineering; it is also extensively used to support Bayesian networks. FBN (Free Bayesian Network) is used for constraint-based learning of Bayesian networks. JavaBayes is a tool for the manipulation of Bayesian networks, designed to estimate probabilities. MSBNx is a component-based Windows application for creating, assessing, and evaluating Bayesian networks. SMILE (Structural Modeling, Inference, and Learning Engine) is a fully portable library of C++ classes implementing graphical decision-theoretic methods, such as Bayesian networks and influence diagrams. UnBBayes provides a framework and GUI for Bayesian networks with different probabilistic models.

    Bayesian analysis derives simply from Bayes’ theorem: the outcome depends on the conditional probability of at least two parameters. This property makes it a basic building block for much larger probabilistic frameworks that can be used in complex machine learning models [22]. Graphical models have long been a prevalent paradigm for representing complex compositional probabilistic models, most commonly directed graphs (Bayesian networks and belief nets), undirected graphs (Markov networks), and mixed graphs (both directed and undirected edges), as these representations allow much easier and richer generalization of graphical models. This makes probabilistic models, in the context of larger models, much easier to understand than a model comprising a nonlinear dynamical system. Probabilistic models, using prior data, are able to generate posterior data through suitable generative models, which provides a vision of the model and helps interpret the course of the data, the changes in values, and the learning attained by the model at any phase. Probabilistic modeling can be used for machine learning and artificial intelligence (AI) systems, as it is superior to many other existing theoretical and practical prediction models [23-27].

    Bayesian probabilistic analysis can serve as the backbone of many AI systems, as it relies on apprehending previously stored information to gather new data for analysis [29]. This behavior is analogous to the human thinking process. Hence, Bayesian analysis finds many applications in the current scenario of AI and machine learning, as discussed in [30]. The previous data can comprise a large number of smaller datasets that can be analyzed in collaboration with each other at the same time. Thus, a huge amount of data can be used to build prediction models and to recognize parameters having a significant role. The prediction can be personalized by adding custom parameters, which can work in hierarchical mode, each having its own implication factor. This personalization is normally implemented using hierarchical Bayes models, for instance hierarchical processes and Bayesian multi-task learning. Tab. 2 reports the domains of application of Bayesian learning optimization in machine learning.

    Table 2: Expanding the domain of application of Bayesian learning optimization in machine learning


    The world today is witnessing an increasing trend toward open source software. It has been developed for a plethora of applications, be it a simple word processor like OpenOffice or a powerful application for machine learning tools such as Apache Spark [31]. This work supports the open source cause and uses different open source applications.

    Python is a very good language with a large number of programmers. It is versatile enough to be used in most machine learning applications, is user-friendly, has a very easy syntax, and is supported by various powerful manipulation libraries. We used Python IDLE for running and testing the programs. Scikit-learn is free machine learning software for the Python programming language, originally developed by David Cournapeau. It contains various classification, clustering, and regression algorithms, including random forests, support vector machines, and k-means. NumPy is a Python library used for the manipulation of large, multi-dimensional arrays and matrices; it also contains a collection of mathematical functions to operate on them and is versatile enough for various other applications. Matplotlib is the most popular plotting library for Python. It provides functionality to embed plots in a variety of GUI-based applications such as Tkinter and wxPython, and can be seen as a free and open-source MATLAB-like interface. Pandas is another open source Python library providing data structures and data analysis tools, with functionality for manipulating numerical tables and time series; it comes in handy for computations involving large amounts of data. Common Platform Enumeration (CPE) for Python is a standardized method for describing and identifying classes of applications, operating systems, and hardware devices present among an enterprise’s computing assets. Its features include rich comparison and cross-verification, parsing, and evaluation.
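    As a hedged illustration of these libraries working together, the sketch below clusters synthetic data with scikit-learn and NumPy; the dataset size, cluster count, and random seeds are assumptions for demonstration, not the paper's setup.

    ```python
    # Illustrative sketch (not the paper's code): clustering synthetic 2-D points
    # with the libraries discussed above. All numeric choices here are made up.
    import numpy as np
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(0)
    data = rng.uniform(0, 100, size=(200, 2))   # 200 synthetic 2-D points

    # Fit k-means with a fixed seed so the run is reproducible.
    model = KMeans(n_clusters=10, n_init=10, random_state=0).fit(data)
    print(model.cluster_centers_.shape)          # one centroid per cluster
    ```

    Each row of `cluster_centers_` is the mean of the points assigned to that cluster, which is exactly the quantity the paper's `centroid()` routine computes by hand.
    
    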

    Modelling using a probabilistic approach makes use of probability theory, which takes into consideration all types of variation and uncertainty in the prediction, as demonstrated in [32]. It also finds application in constructing and using models, simply called the probabilistic approach to modeling. The primary role of linear quadratic estimation (LQE) is to predict and update the data; it also deals with estimation of error, estimation of time, and the Kalman gain.

    All the observed data, combined with prior, structural, and noise parameters, are molded to represent and predict unobserved trends in quantities. Generative models are applied to the predicted trends to zero in on the unobserved quantities. The learning process also uses this approach to increase its accuracy in finding the correct parameters for a given dataset. Priors are of great importance, as they are the ground for transforming prior probability into posterior distributions. This type of learning is termed Bayesian learning, as it makes use of priors and generative models for prediction [33]. Several applications directly demand the Bayesian learning probabilistic model, such as:

    ■Data Prediction on CoVID-19 disease and bacterial formations.

    ■Pharmaceutical product development.

    ■Data Science in Learning & Analysis

    ■Autonomous vehicles.

    4 Proposed Bayesian Learning Probabilistic Model

    The role of data processing is very important, as it fetches the data from local as well as global repositories through various techniques. Later, this data is trained and modelled using various machine learning algorithms, which overall helps in better and more accurate prediction. The concept of accuracy prediction with machine learning algorithms using the Bayesian learning probabilistic model is depicted in Fig. 1.

    Figure 1: Systematic overview for machine learning based accuracy prediction using Bayesian probabilistic model

    Bayesian learning involves several important preliminary steps. The first step is to establish a descriptive mathematical generative model of the provided data: the probability (or likelihood) function that supplies the probability of the observed data for each value of the parameters [34]. This step requires finding the right and most accurate generative model. The next step is to find the credibility of each parameter value against the data; the goal is to find the most accurate parameter(s) and their values with regard to the observed prior data. The third step uses Bayes’ rule to combine the likelihood function and the prior to generate the posterior distribution over the parameters. This posterior distribution is the predicted data.

    Bayesian probability can also be understood by means of Bayes’ rule and Bayesian inference, which is the interpretation of the probability concept. Bayesian inference can be deduced as:

    P(n|m) = P(m|n) P(n) / P(m)                                              (1)

    where n and m are two events such that n is understood on the basis of m. In the formal definition, one event is treated as a hypothesis and the other as evidence supporting the occurrence of that hypothesis [35]. Each event accounts for a different possibility of occurrence. The terms of Eq. (1) can be understood as:

    ■P(n|m) is the posterior probability of n given m, i.e., after observing that the particular event happened.

    ■P(n) is the prior probability, an estimate of the probability of the hypothesis for a particular event occurring.

    ■P(m|n) is the likelihood of observing the evidence m given the occurrence of event n.

    ■P(m) is the probability of the observations made for the particular evidence.

    For different values of n, the parameters that affect P(n|m) are P(n) and P(m|n), as the posterior probability is directly proportional to the prior probability that justifies the hypothesis for the given evidence (i.e., for specific events). So, Eq. (1) can also be deduced as:

    P(n|m) ∝ P(m|n) P(n)                                                     (2)
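    As a small numerical illustration of Bayes' rule in Eq. (1), the following sketch uses invented probabilities rather than values from the paper's dataset:

    ```python
    # Hedged numerical illustration of Bayes' rule; all probabilities here are
    # invented for demonstration, not taken from the paper.
    def bayes(p_m_given_n, p_n, p_m):
        """Posterior P(n|m) = P(m|n) * P(n) / P(m)."""
        return p_m_given_n * p_n / p_m

    # Hypothesis n with prior 0.3, evidence m with likelihood 0.8 and
    # marginal probability 0.5.
    posterior = bayes(0.8, 0.3, 0.5)
    print(posterior)  # 0.48
    ```

    The posterior (0.48) is larger than the prior (0.3) because the evidence is more likely under the hypothesis than on average, which is exactly the proportionality stated in Eq. (2).
    
    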

    For partitioning using a clustering-based approach to group the data into a predetermined number k of clusters, the cost function ψ is minimized:

    ψ = Σ_{j=1}^{k} Σ_{d_j ∈ C_j} ‖ d_j − C_j ‖^q                            (3)

    where C_j is the center of the jth cluster and d_j is a data object assigned to it. k is the number of clusters for the given data set; it is a distinct integer specified together with the particular distance function q. The cluster center represents the mean of each attribute of the data set, calculated over all objects d_j belonging to that cluster. The algorithm designed for k-means clustering not only addresses performance on the data sets, but also considers the ordering between the values of the specified data set, as shown in [36].
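    The cost function above can be sketched in NumPy as follows; the toy data, centers, and cluster assignment are illustrative assumptions, not the paper's dataset:

    ```python
    import numpy as np

    # Hedged sketch of the k-means cost in Eq. (3) for a fixed assignment.
    def kmeans_cost(data, centers, labels, q=2):
        """Sum over clusters of ||d_j - C_j||^q for the objects in each cluster."""
        cost = 0.0
        for j, c in enumerate(centers):
            members = data[labels == j]          # objects assigned to cluster j
            if len(members):
                cost += np.sum(np.linalg.norm(members - c, axis=1) ** q)
        return cost

    # Toy example: two points at distance 0.5 from center 0, one point on center 1.
    data = np.array([[0.0, 0.0], [1.0, 0.0], [10.0, 10.0]])
    centers = np.array([[0.5, 0.0], [10.0, 10.0]])
    labels = np.array([0, 0, 1])
    print(kmeans_cost(data, centers, labels))    # 0.25 + 0.25 + 0 = 0.5
    ```

    Varying the exponent `q` here corresponds to the different distance-function settings (q = 3, 4, 5) explored in Section 5.
    
    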

    4.1 Kalman-Filter Based Prediction Proposed Technique

    The Kalman-filter based prediction technique can be successfully applied to data sets in which the estimate for a neighboring node can be expressed with a linear stochastic difference equation [37-41]. This may include the estimation of discrete as well as continuous data sets; the model based on the linear stochastic difference equation can be written as:

    Y(k) = A(k) Y(k−1) + B(k) U(k) + W(k)                                    (4)

    where Y(k) denotes the estimated data at period k, A(k) denotes the state transition model applied to the data set of the previous period (k−1), and B(k) denotes the control-input model applied to the control vector U(k). The control vector specifies the direction in which the next prediction is made to find possibly similar data [42]. W(k) denotes the noise/error in the prediction period, usually treated as a zero-mean multivariate normal distribution with covariance Q(k) [43-45].

    Let Z(k) represent the observed data in the sequence after denoising at period k:

    Z(k) = H(k) Y(k) + V(k)                                                  (5)

    Here, H(k) is the observation model used to relate Y(k) to the measurements, and V(k) is the observation noise/error, modeled as zero-mean Gaussian white noise with its own covariance.
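    One noise-free step of this state-space model can be sketched as follows; the matrices and numeric values are assumptions chosen purely for illustration:

    ```python
    import numpy as np

    # Hedged sketch of one prediction/observation step of Eqs. (4)-(5),
    # with W(k) = V(k) = 0. All matrices and values are illustrative.
    A = np.array([[1.0]])     # state transition model A(k)
    B = np.array([[0.5]])     # control-input model B(k)
    H = np.array([[1.0]])     # observation model H(k)

    y_prev = np.array([2.0])  # previous state Y(k-1)
    u = np.array([1.0])       # control vector U(k)

    y_pred = A @ y_prev + B @ u   # Eq. (4): Y(k) with zero process noise
    z = H @ y_pred                # Eq. (5): Z(k) with zero observation noise
    print(y_pred, z)              # [2.5] [2.5]
    ```

    In a real filter, W(k) and V(k) would be random draws with covariances Q(k) and R(k); dropping them just makes the algebra of the two equations visible.
    
    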

    4.2 Kalman Filter Based Bayesian Analysis

    The Kalman filter has two distinct phases, named prediction and update. The prediction phase estimates the data from the previous period and generates an approximation of new data for the current period. In the update phase, more accurate data is calculated based on corrections to the previous data, i.e., the prediction for the current period is refined.

    Linear quadratic estimation, another name for the Kalman filter, can be used to filter out the noise in data. This is further explained through the plots on the datasets we used. The initial dataset, i.e., the dataset originally fed to the Bayesian analysis function, forms the backbone of prediction and has a large share in the accuracy of estimation. If this initial data is sorted first and then fed into the Bayesian function, we can achieve better results. An even better approach is to filter out noise and sharp variations in the sorted data itself.
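    The predict/update cycle described above can be sketched as a minimal one-dimensional filter; the measurements and noise variances below are illustrative assumptions, not the paper's dataset:

    ```python
    # Hedged sketch of a scalar Kalman update loop (same recurrences as
    # Algorithm 2). Measurement values and error variances are invented.
    def kalman_1d(measurements, err_meas=4.0, err_est=2.0, est=0.0):
        smoothed = []
        for meas in measurements:
            kg = err_est / (err_est + err_meas)   # Kalman gain
            est = est + kg * (meas - est)         # update: blend prediction and measurement
            err_est = (1 - kg) * err_est          # estimate error shrinks each step
            smoothed.append(est)
        return smoothed

    out = kalman_1d([5.0, 6.0, 4.5, 5.5])
    print(out)  # estimates climb monotonically from the prior toward the measurements
    ```

    Because the gain shrinks as `err_est` falls, each new measurement moves the estimate less than the last, which is what damps the sharp variations mentioned above.
    
    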

    Algorithm 1: Finding the centroid of given data using the initial seeds in k-means

    def centroid(data):
        global counterd
        counterd = counterd + 1
        print("Counter:", counterd)
        # Use counterd to decide whether to return the initial Kalman-derived seed
        if counterd == 1:
            result = [[seedval1, seedval2]]
        else:
            result = np.mean(data, 0)
        return result

    Algorithm 2: Calculating the Kalman Gain

    # Calculating Kalman gain
    KG = err_est / float(err_est + err_meas)
    # Calculating new estimate
    est = est + KG * (meas - est)
    # Calculating new error in estimate
    err_est = (1 - KG) * err_est

    These code snippets were implemented in the Python language, where seedval1 and seedval2 denote the seed values calculated using the Kalman filter approach. This requirement can be fulfilled by incorporating Common Platform Enumeration (CPE). Afterward, the values are fed to the function as the first set of values used to calculate the centroid of the cluster, as shown in Algorithm 1. This paper implements the algorithm using the open source Python platform and efficiently integrates the different modules into one piece of code via CPE.

    The code in Algorithm 2 is used to calculate the basic Kalman gain and the new estimated values. It also generates the initial seed values fed to the k-means function, where KG stands for the Kalman gain, est for the estimated value, meas for the input measurement, and err_est for the error in the estimate. This is the simple Kalman filter algorithm without modification.
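    A hedged sketch of how Algorithms 1 and 2 might be wired together is shown below; the measurements, noise values, and the choice to reuse one filtered value for both seeds are assumptions for illustration, not the paper's exact code:

    ```python
    import numpy as np

    # Hypothetical wiring of Algorithms 1 and 2 (illustrative values throughout):
    # two Kalman updates produce the seed used by centroid() on its first call.
    err_est, err_meas, est = 2.0, 4.0, 48.0
    for meas in (50.0, 49.0):
        KG = err_est / float(err_est + err_meas)   # Kalman gain (Algorithm 2)
        est = est + KG * (meas - est)              # refined estimate
        err_est = (1 - KG) * err_est               # reduced estimate error
    seedval1 = seedval2 = est                      # filtered value reused as seeds

    counterd = 0
    def centroid(data):                            # Algorithm 1
        global counterd
        counterd += 1
        if counterd == 1:                          # first call: Kalman-derived seed
            return [[seedval1, seedval2]]
        return np.mean(data, 0)                    # later calls: ordinary k-means mean

    first = centroid(np.zeros((3, 2)))
    print(first)                                   # the seeded first centroid
    ```

    Every call after the first falls through to `np.mean`, so the Kalman filter only influences where the clustering starts, not how it iterates.
    
    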

    A good validation approach begins with a basic hypothesis strategy and then allows the classifier to tune the model parameters as required. These parameters can include the distance function, as demonstrated in Section 5. In predicting the plot values, the initial data value is taken as the seed index and is supplied with a new number of clusters each time. This helps the overall model achieve higher accuracy than other classifiers such as simple k-means.

    5 Results and Analysis

    In this section, simulation setup and results are presented.

    5.1 Simulation Setup

    Simulation is conducted to verify the efficiency of the proposed Kalman-filter based prediction model. The open source Python platform is used, and the different modules are efficiently integrated into one piece of code via CPE for Python.

    CPE is a structured naming scheme for information technology systems (ITS), software, and packages.CPE includes a formal name verification system based upon the generic syntax for uniform resource identifiers (URI).

    The efficacy is calculated with the help of CPE running on an Intel Core i5 with 8 GB DDR4 RAM and Windows XP. These are the parameters used for the simulation. The dataset was created with GNU Octave, which is available at octave.org/doc/v4.2.1/. The cluster size depends on the Kalman filter, as it finds the optimal seed value for the centroid, which was found to be [[48.4481141211, 50.8394268109]] for k-means cluster formation.

    5.2 Results and Discussion

    This approach further reveals the amount of noise present in all the datasets, which we can remove or improve upon. One such filtering algorithm we used is the Kalman filter, which has proven able to check statistical noise and other inaccuracies using unknown variables. Cumulatively, we used k-means as a sorting (clustering) algorithm and the Kalman filter as a filtering algorithm. Though we used a small dataset, we demonstrated that the Kalman filter gives good results in stabilizing the data along with k-means, which can then be used as a prior input for Bayesian analysis. The Kalman filter is applied directly to the data to reduce noise, and the result is fed to the k-means function, improving the prediction mechanism by a significant amount. This paper is thereby able to reduce the number of computations using Kalman filtering.
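    The pipeline described above can be sketched end to end as follows; the two-mode synthetic data, noise settings, and the added process-noise term are assumptions for illustration, not the paper's dataset or exact filter:

    ```python
    import numpy as np
    from sklearn.cluster import KMeans

    # Hedged sketch of the pipeline: smooth noisy 1-D data with a simple Kalman
    # filter, then cluster the filtered values with k-means.
    rng = np.random.default_rng(1)
    raw = np.concatenate([rng.normal(10, 3, 50), rng.normal(50, 3, 50)])

    est, err_est, err_meas = raw[0], 2.0, 9.0
    filtered = []
    for meas in raw:
        kg = err_est / (err_est + err_meas)
        est = est + kg * (meas - est)
        err_est = (1 - kg) * err_est + 0.5   # process noise keeps the gain responsive
        filtered.append(est)

    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(
        np.array(filtered).reshape(-1, 1))
    print(len(set(labels)))   # 2
    ```

    Feeding the filtered series rather than `raw` into k-means is the same ordering the paper advocates: denoise first, cluster second.
    
    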

    The dataset is divided into ten clusters [46-51] using the k-means clustering algorithm, without amalgamation with any other algorithm, as depicted in Fig. 2. Recognizing better cluster heads using the Kalman-filter model of Eq. (4) makes this scenario much faster.

    Figure 2: Initial number of clusters with Cj

    This experimental setup is established using three initial seed values plugged into the k-means function at the first iteration. These initial seeds are calculated by running the simple Kalman filter algorithm on the dataset. The values required to compute the Kalman gain are determined, and the output is used as the seed value for k-means [52-56]. Both seedval1 and seedval2 are calculated using this approach. Fig. 3 shows the plot with distance function q = 3. In each plot, the initial seed is provided with a new, variable number of clusters each time. This shows an efficiency of 97.6% compared to simple k-means. It is also observed that the simple k-means approach varies at clusters 8 and 10, whereas the proposed approach increases exponentially.

    Figure 3: Comparison between number of calls and cluster with distance function, q=3

    As specified in Eq. (3), a different value of the distance function, q = 4, is experimented with, and the plot shows a steady number of calls over the mentioned clusters, as depicted in Fig. 4.

    Figure 4: Comparison between number of calls and cluster with distance function, q =4

    For large cluster sizes, it is observed that the total number of calls increases with time and then dips periodically at cluster sizes 6, 8, and 10, whereas the proposed approach increases continuously except at cluster size 5.

    This shows that simple k-means clustering for data prediction matches the Kalman filter approach up to approximately 600 calls, but cluster formation using simple k-means zig-zags compared to the Kalman-filter approach. Similarly, a different value of the distance function, q = 5, is experimented with, and the plot shows a steady number of calls over the mentioned clusters, as depicted in Fig. 5. This further justifies the significance of the Kalman filter and its computation methodology for better and more efficient data prediction.

    Figure 5: Comparison between number of calls and cluster with distance function, q = 5

    6 Conclusion

    The world today is witnessing an increasing trend toward open source software, developed for a plethora of applications, be it a simple word processor like OpenOffice or a powerful application. This paper presents a method for improving the performance of data prediction in such open source software using Bayesian classification combined with the Kalman filter. The method is applied to a small dataset to establish that the proposed algorithm can reduce the time for computing the clusters from data. Notable changes were observed across all three plots for different numbers of calls made to calculate the centroid of a cluster. The combination of the Kalman filter for cluster head selection has drastically reduced the change of noise to a minimum, which was not reported in earlier work in this field. The plots also demonstrate the number of calls the k-means function made to calculate the centroid of a cluster using the initial seed given to it. Such a prediction algorithm is highly desirable for predicting data in the digital revolution and helps in designing new paradigms for learning algorithms. With the aid of open source software, this paper implements the cluster head selection algorithm efficiently in notable time. More advanced work can be done using neural networks, in which the training model can be varied based on noise conditions using the Kalman filter. The amount of noise can be changed using the filtering algorithm to achieve accuracy and corrections in the chosen parameters. A major improvement can be made on how to choose the right and efficient algorithm for reducing noise in the input data set. The Kalman filter is found to be capable of producing potential results, showing an efficiency of 97.6% along with the k-means technique.

    Acknowledgement:The authors wish to express their thanks to one and all who supported them during this work.

    Funding Statement:This paper was supported by Wonkwang University in 2021.

    Conflicts of Interest:The authors declare that they have no conflicts of interest to report regarding the present study.
