
    Ensemble Learning Based Collaborative Filtering with Instance Selection and Enhanced Clustering

    Computers, Materials & Continua, 2022, Issue 5

    G. Parthasarathy and S. Sathiya Devi

    1 Anna University, Chennai, 600025, India

    2 University College of Engineering, BIT Campus, Anna University, Tiruchirappalli, 620024, India

    Abstract: A recommender system is a tool that suggests items to users based on the extensive history of user feedback. Although it is an emerging research area in both academia and industry, it suffers from sparsity, scalability, and cold start problems. This paper addresses the sparsity and scalability problems of a model-based collaborative recommender system using an ensemble learning approach and an enhanced clustering algorithm for movie recommendation. An effective movie recommendation system is proposed using the Classification and Regression Tree (CART) algorithm, an enhanced Balanced Iterative Reducing and Clustering using Hierarchies (BIRCH) algorithm, and a truncation method. A new hyperparameter tuning step is added to the BIRCH algorithm to enhance the cluster formation process; the resulting algorithm is named enhanced BIRCH. The proposed model yields quality movie recommendations for new users using gradient boost classification with broad coverage. The proposed model is tested on the Movielens dataset, and its performance is evaluated by means of Mean Absolute Error (MAE), precision, recall, and f-measure. The experimental results show the superiority of the proposed model in movie recommendation compared to existing models. The proposed model obtained MAE values of 0.52 and 0.57 on the Movielens 100k and 1M datasets. Further, the proposed model obtained a precision of 0.83, a recall of 0.86, and an f-measure of 0.86 on the Movielens 100k dataset, which are effective compared to the existing models in movie recommendation.

    Keywords: Clustering; ensemble learning; feature selection; gradient boost tree; instance selection; truncation parameter

    1 Introduction

    The exponential increase of data in the digital universe has encouraged efficient information filtering and personalization technology. The Recommender System (RS) is a popular technique that performs both information filtering and personalization for the end user from the huge information space. Nowadays, RS is an integral part of every e-commerce application, such as Amazon, Twitter, Netflix, and LinkedIn, to provide more relevant and personalized suggestions. Recommender Systems (RSs) are therefore systems that provide recommendations based on a user's past behavior. Tapestry is the oldest recommendation system; it filters the mail in which the user is interested [1]. The RS collects information explicitly or implicitly to make recommendations. The rating information (i.e., like/dislike, a discrete rating) given by users on products is called explicit, and the information collected from users' behavior (i.e., feedback, browsing behavior) is called implicit [2]. RS is broadly divided into three types, namely (i) Collaborative Filtering (CF), (ii) Content-Based Filtering (CBF), and (iii) hybrid approaches. CF recommendation systems find relevant items by finding users having similar interests [3]. Content-Based Filtering (CBF) suggests items that are similar in features to those the user has already chosen in the past [4]. In the hybrid approach, two or more methods are combined to gain better results. Vekariya et al. [5] mentioned the hybrid system types given by Robert Burke: weighted, switching, mixed technique, feature combination, cascade, feature augmentation, and meta-level. Among these three approaches in RS, CF has been successful in both research and practice.

    There are two types of CF approaches, namely (i) the memory-based and (ii) the model-based approach. The memory-based approach uses every instance in the database, which leads to scalability problems. The model-based approach tries to reduce a massive dataset into a model and then performs the recommendation task. Model-Based CF (MBCF) reacts to the user's request instantly with reduced computation. There are five primary approaches in MBCF: classification, clustering, latent models, the Markov Decision Process (MDP), and Matrix Factorization (MF) [6]. Generally, MBCF is preferred over the memory-based approach because it requires less computational cost and performs recommendations without domain knowledge. Although model-based CF is highly preferable, it suffers from three main issues: (i) scalability, (ii) sparsity, and (iii) cold start. Sparsity means that most users do not have enough ratings to find similar users. Cold start refers to the challenge of producing specific recommendations for new or cold users, who have rated an inadequate number of items. Scalability reduces the efficiency of handling a vast amount of data when recommending to the user. Apart from these issues, building a user profile for new users through CF is difficult [7]. The main objective of this research paper is to develop a new CF approach that combines user- and item-related features to provide a solution to the scalability and sparsity issues. The proposed system performs a mixture of clustering and ensemble-based classification using feature combination for recommendation. The main contributions of this paper are summarized as follows:

    · Proposed new feature and instance selection methods and a hierarchical enhanced BIRCH-based clustering algorithm to overcome data sparsity.

    · Incorporated CART-based feature selection and a truncation parameter for normal-distribution-based instance selection.

    · Developed an ensemble-based Gradient Boosting Tree (GBT) recommendation model which improves recommendation accuracy and also addresses the scalability issue.

    The rest of the paper is organized as follows: Section 2 presents the related work review. A detailed description of the proposed approach is given in Section 3. Section 4 provides the experimental results on the benchmark datasets. Finally, the conclusion of the work is presented in Section 5.

    2 Literature Review

    This section reviews the existing collaborative recommendation approaches in movie recommendation. CF is a technique that automatically predicts the unknown ratings of a product, or a user's interest, by analyzing the known ratings or by compiling the preferences of similar users. CF is used to develop personalized recommendations in many e-commerce applications on the web. The main process of CF is to identify similar users to guide the active user. In memory-based CF, instance-based methods are employed to determine similar users, but this suffers from poor scalability for a vast database. On the other hand, the model-based CF approach is commonly used on offline datasets for prediction and recommendation. The model-based CF approach is compact, so it occupies less memory and works faster. Identifying a group of similar users is a challenging task in both memory- and model-based approaches. Generally, a group of similar users is generated using clustering algorithms.

    Ju et al. [8] developed a collaborative model based on k-means clustering and the artificial bee colony algorithm. The developed algorithm was used to address the local optima problem of k-means clustering, and a similarity measure to perform clustering was presented by Rongfei et al. [9]. There, an adjusted DBSCAN algorithm was utilized to build clusters that improve the accuracy of movie recommendation for users who have many available ratings. Das et al. [10] presented a K-d tree and quad tree based hierarchical clustering method for movie recommendation. The developed method addresses the scalability issue and maintains acceptable recommendation accuracy. Experimental results proved that the computation time was effectively reduced on the Movielens-100K, Movielens-1M, Book-Crossing, and TripAdvisor datasets.

    Mohammadpour et al. [11] introduced a CF method based on hybrid meta-heuristic clustering for movie recommendation. The developed method merges a genetic algorithm and a gravitational emulation bounded clustering search. Here, the clustering was efficient but suffered from a higher computational cost. In addition, a Modified Cuckoo Search (MCS) algorithm and a Modified Fuzzy C Means (MFCM) approach were developed by Selvi et al. [12]. In this method, the number of iterations and the error rate were reduced by MFCM. The recommendation accuracy and the efficiency of clustering were improved by the MCS algorithm.

    Generally, the cluster's discrimination ability and performance depend on dimensionality reduction, which is performed in two ways: (i) feature selection and (ii) instance selection. Cataltepe et al. [13] developed a new feature selection method for a Turkish movie recommendation system. The developed method uses user behavior, various kinds of content features, and other users' messages to predict movie ratings. The developed method improves recommendation accuracy, notably for users who have viewed only a small number of movies. K-means clustering has been combined with a backward feature selection method to improve movie recommendation by Ramezani et al. [14]. The feature selection process eliminates irrelevant features and captures the real similarity between users. Further, an information-theoretic approach was developed by Yu et al. [15] for movie recommendation. The developed approach used description rationality and the power to measure an instance's pertinence regarding a target notion. The empirical evaluation results showed that the developed method significantly reduces the neighborhood size and increases the speed of the CF process.

    Yu et al. [16] developed a system for feature and instance selection based on mutual information and Bayes' theorem. This work showed that feature weighting and instance selection based on pertinence analysis improve collaborative filtering in terms of accuracy. The integration of texture and visual features used by Pahuja et al. [17] was effective in movie recommendation. The feature sets have different levels of significance in different scenarios and are identified based on the business requirement. Further, a class-based collaborative filtering algorithm was described by Zeng et al. [18], which adapts a user frequency threshold methodology for instance selection. The threshold selection improves the speed of computation and recommendation accuracy, and alleviates the cold start problem.

    The CF's accuracy depends on the classification model, and a recommendation system based on the Extreme Gradient Boosting (XGBoost) algorithm was described by Xu et al. [19]. Shao et al. [20] introduced a Heterogeneous Information Boosting (HIBoosting) model based on the Gradient Boosting Decision Tree (GBDT) algorithm. The developed model blends independent data in information networks to provide users with more helpful recommendation assistance.

    From the above-mentioned literature, it is recognized that the model-based CF approach better addresses the sparsity and scalability issues with feature reduction, clustering, and machine learning-based approaches. Still, the prediction accuracy and the incremental addition of new data remain questionable. This research paper proposes a gradient boosting decision tree based CF approach with instance selection and enhanced clustering for effective movie recommendation. Hence, the proposed model overcomes the sparsity and scalability issues and improves the accuracy of prediction and movie recommendation.

    3 Proposed Methodology

    The proposed collaborative movie recommendation approach with combined features and probabilistic instance selection is described in this section. Generally, RS suffers from three main issues, namely sparsity, scalability, and cold start, irrespective of the implementation approach. These issues affect the performance of RS. Hence this paper proposes an approach for model-based collaborative RS to solve the sparsity and scalability issues. Sparsity occurs due to the sparseness of the user-item matrix. The proposed approach considers both the ratings and the content-based features of the dataset and uses feature selection to overcome the sparsity problem. The latter issue is addressed by enhanced clustering and instance selection. This approach addresses the scalability issue and improves the recommendation's accuracy when combined with an ensemble method at a limited computational cost. The proposed collaborative RS approach is shown in Fig. 1. The proposed approach consists of seven stages: (i) preprocessing, (ii) feature selection, (iii) instance selection, (iv) clustering, (v) model creation, (vi) prediction, and (vii) recommendation. Each stage is described in detail in the following subsections.

    3.1 Preprocessing

    Preprocessing is a technique that cleans, integrates, and fills the missing values in the collected dataset to avoid inconsistencies in the result. The proposed approach considers both user ratings and content-based features for recommendation. Since these features are of different data types, inconsistencies may arise while integrating them, which affects prediction performance. Hence, the proposed approach applies label encoding while combining them to bring them to the same (or a similar) data type [21]. The missing values are filled with the mean value of the corresponding features in the input dataset.
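
    A minimal preprocessing sketch is given below. It assumes a pandas DataFrame with illustrative column names (not the paper's exact schema): string-typed content features are label encoded and missing values are filled with the per-column mean, as described above.

    import pandas as pd
    from sklearn.preprocessing import LabelEncoder

    def preprocess(df: pd.DataFrame) -> pd.DataFrame:
        """Label-encode string columns and mean-impute missing numeric values."""
        df = df.copy()
        for col in df.columns:
            if df[col].dtype == object:
                # Convert categorical/string features to integer labels.
                df[col] = LabelEncoder().fit_transform(df[col].astype(str))
        # Fill the remaining missing values with the column mean.
        return df.fillna(df.mean(numeric_only=True))

    # Toy usage with hypothetical user/item columns.
    frame = pd.DataFrame({
        "user_id": [1, 1, 2],
        "genre": ["Action", "Drama", "Action"],
        "age": [24, None, 31],
        "rating": [4, 3, 5],
    })
    print(preprocess(frame))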

    3.2 Feature Selection

    Feature selection selects the most influential features from the available dataset to avoid computational complexity during training and testing, which improves the recommendation model's generalization. In the proposed approach, feature selection is utilized to reduce the sparsity of the integrated feature dataset. The proposed method uses a correlation-based mutual information measure to identify the significant features of the entire feature set. It considers feature importance and chooses features based on their relative rank in a tree. In the proposed approach, feature importance is computed with the Classification and Regression Tree (CART) algorithm. Since the target variable in the proposed approach is categorical, CART uses the Gini index as the impurity measure to find the splits in the tree. The Gini index is a measure of inequality applied to irregular patterns in data. The Gini index always results in a quantity between 0 and 1, where 0 corresponds to perfect equality and 1 to perfect inequality. The minimum value 0 occurs when all the data at a feature (node) belong to one target category. The Gini index at a feature (node) t is defined in Eq. (1).

    Figure 1: Proposed collaborative recommendation approach

    where i and j are the categories of the target value, and p indicates probability. Eq. (1) can be rewritten as represented in Eq. (2).

    where p(j|t) indicates the proportion of target category j present in feature (node) t. The Gini criterion for the split s at a feature t is defined in Eq. (3).

    where p_L and p_R are the proportions of instances at t sent to the left child and right child features (nodes) respectively, and s ∈ S refers to a particular split among the set of all possible splits S. The steps involved in the CART algorithm are given below:

    Step 1: Starting from the root node t = 1, search for a split s* among all candidate splits s that gives the highest decrease in impurity. Then split node 1 (t = 1) into two nodes t = 2 and t = 3 using split s*.

    Step 2: Repeat the method in each of t = 2 and t = 3, then continue the tree growing process until at least one of the tree growing rules is met. From the constructed tree, the feature importance is calculated using Eq. (4).

    where fi_j is the importance of feature j, GI_j is the Gini impurity value of node j, n_j is the number of instances that fall in node j, GI_L is the Gini impurity value of the left child node, n_L is the number of instances that fall in the left child node, GI_R is the Gini impurity of the right child node, and n_R is the number of instances that fall in the right child node. The normalized feature importance is calculated by dividing the feature importance by the sum of the feature importance of all features, and it is represented as a percentage. The main advantages of feature selection are reduced overfitting, improved accuracy, and reduced training time. The features are selected using the feature importance scores, and 19 relevant features are selected from the 31 features.
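
    The sketch below illustrates the CART-based feature importance step on synthetic data (the paper's 31 integrated features and the selection of 19 of them are reproduced only by shape, not by content). scikit-learn's DecisionTreeClassifier uses the Gini criterion by default, and feature_importances_ returns the normalized importance scores.

    import numpy as np
    from sklearn.tree import DecisionTreeClassifier

    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 31))      # 31 integrated features (synthetic)
    y = rng.integers(1, 6, size=500)    # rating classes 1..5

    tree = DecisionTreeClassifier(criterion="gini", random_state=0).fit(X, y)
    importance = tree.feature_importances_   # normalized, sums to 1

    # Keep the top-k features by importance score (k = 19 in the paper).
    k = 19
    selected = np.argsort(importance)[::-1][:k]
    X_selected = X[:, selected]
    print("selected feature indices:", sorted(selected.tolist()))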

    3.3 Instance Selection

    In most collaborative RSs, predictions are based on the preferences of users similar to the active user. Though the search for similar users is significant in collaborative RS, a full scan of the dataset leads to scalability issues and poor prediction performance when more users and items are added to the dataset. Hence the proposed approach adopts an instance selection strategy to filter the relevant users rather than searching the entire dataset. The proposed method performs instance selection using the Probability Density Function (PDF) of a normal distribution, shown in Eq. (5). It achieves the instance selection by computing the truncation parameter (α) from the selected features.

    Most real-world datasets follow the normal distribution density function. The empirical rule of this distribution states that almost all samples fall within three standard deviations of the mean: 68% of the samples fall within the first standard deviation of the mean, 95% fall within two standard deviations, and 99.7% fall within three standard deviations [22]. The mean of the target value is calculated using Eq. (6).

    Next, the standard deviation is calculated using the mean value obtained from Eq. (6). It is the average deviation of a sample from the mean value, as given in Eq. (7).

    The normal distribution curve is plotted, and the truncation parameter is found from the likely, very likely, and almost certain values. The selected instances increase the mean value of the distribution, which means that the most reviewed items are selected. The instances are selected by the truncation algorithm (95% of the relevant instances), and the output is given to the enhanced BIRCH algorithm.
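
    A minimal sketch of the truncation step, assuming the target ratings follow a normal distribution: instances whose target value lies within mean ± 2σ (about 95% of the samples) are kept. The variable names and data are illustrative, not from the paper.

    import numpy as np

    def truncate_instances(X, target, alpha=2.0):
        """Keep instances whose target value lies within mean ± alpha * std."""
        mu = target.mean()        # mean of the target, cf. Eq. (6)
        sigma = target.std()      # standard deviation, cf. Eq. (7)
        mask = (target >= mu - alpha * sigma) & (target <= mu + alpha * sigma)
        return X[mask], target[mask]

    rng = np.random.default_rng(1)
    X = rng.normal(size=(1000, 19))                       # 19 selected features
    ratings = rng.normal(loc=3.5, scale=1.0, size=1000)   # synthetic ratings
    X_sel, r_sel = truncate_instances(X, ratings)
    print(f"kept {len(r_sel)} of {len(ratings)} instances")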

    3.4 Enhanced BIRCH

    The relevant users identified in the previous subsection are partitioned into small groups using a clustering algorithm. The clustering process in RS solves the scalability issue and increases recommendation accuracy at a limited computational cost. In this scenario, clustering is performed with the BIRCH algorithm. It is one of the best hierarchical clustering algorithms for high-dimensional data, but it suffers from the issue of initial assignment and choice of the number of clusters. Therefore, hyperparameter tuning is added to enhance the BIRCH algorithm for an efficient cluster formation process. In the clustering approach, the number of clusters must be given as input. The optimal number of groups (K) can be decided using different methods; the Elbow method is one of the standard ones. The K value is chosen using the inertia score, which is the sum of the samples' squared distances to their closest cluster center. The average internal sum of squares (Wk) is the average distance between points inside a cluster, and it is mathematically expressed in Eq. (8).

    where k is the number of clusters, n_r is the number of points in cluster r, D_r is the sum of distances between the points in cluster r as expressed in Eq. (9), and d indicates distance.
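
    The elbow scan can be sketched as below: the inertia score is computed for K = 2 to 9 and the K after which the curve starts decreasing linearly is chosen. KMeans is used here only as a convenient way to obtain the inertia score; this is an assumption about how the score is computed, not a detail stated in the paper.

    import numpy as np
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(2)
    X = rng.normal(size=(500, 19))   # synthetic selected-feature matrix

    inertia_scores = {}
    for k in range(2, 10):
        km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X)
        inertia_scores[k] = km.inertia_   # sum of squared distances to centres
    print(inertia_scores)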

    Hyperparameters are needed to determine the number of clusters and to make the clustering computation faster. We performed hyperparameter tuning of the branching factor, compute-labels flag, number of clusters, and threshold value to enhance the algorithm. The algorithm fully utilizes the available memory to infer the best possible sub-clusters and limit computational cost [23]. The cluster centroid is the mean of all the points in the cluster, as expressed in Eq. (10), where x is a point in the dataset and n is the number of points.

    The root node is formed using the number of points, the Linear Sum of the points (LS), and the Squared Sum of the points (SS). The radius R is calculated using Eq. (11) to create the leaf node.

    The radius is compared with the threshold value (T), which is set initially. Based on this comparison, the next point is placed in the current leaf node or in a new node. The number of entries in a leaf node is restricted by the value L. At the end of the first phase, the CF tree is built using the above steps. The second phase of the BIRCH architecture applies the agglomerative hierarchical clustering technique discussed earlier in this section to the CF tree created above. The next section discusses the model creation of the proposed model.
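
    A minimal sketch of the clustering step with scikit-learn's Birch, exposing the hyperparameters named above (branching factor, threshold, number of clusters, compute labels). The concrete values are illustrative; the tuned values used in the paper are those listed in Tab. 4.

    import numpy as np
    from sklearn.cluster import Birch

    rng = np.random.default_rng(3)
    X = rng.normal(size=(500, 19))   # synthetic selected instances

    birch = Birch(
        n_clusters=3,          # K from the elbow method (clusters C1, C2, C3)
        threshold=0.5,         # radius threshold T for absorbing a point
        branching_factor=50,   # maximum CF sub-clusters per internal node
        compute_labels=True,
    )
    labels = birch.fit_predict(X)
    print(np.bincount(labels))   # cluster sizes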

    3.5 Model Creation

    Ensemble methods play a significant part in machine learning, and the Gradient Boost Tree (GBT) algorithm is one of them. A series of weak learners (decision trees) is ensembled using a boosting technique. GBT produces additive models by sequentially fitting a base learner to the current residuals by least squares at each stage. The performance of the GBT classification model is increased by tuning the hyperparameters: maximum depth, minimum sample split, learning rate, loss, number of estimators, and maximum features. Pseudo-residuals are the gradient of the loss function being minimized, with respect to the model estimates at all training data points evaluated at the current step [24-26].
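
    A brief sketch of the GBT classification model with the hyperparameters named above (the values shown are illustrative, not the tuned values from Tab. 5):

    import numpy as np
    from sklearn.ensemble import GradientBoostingClassifier

    rng = np.random.default_rng(4)
    X = rng.normal(size=(400, 19))
    y = rng.integers(1, 6, size=400)   # rating classes 1..5

    gbt = GradientBoostingClassifier(
        n_estimators=100,        # number of estimators
        learning_rate=0.1,       # learning rate
        max_depth=3,             # maximum depth
        min_samples_split=2,     # minimum sample split
        max_features="sqrt",     # maximum features per split
    )                            # the loss function is also tunable
    gbt.fit(X, y)
    print("training accuracy:", gbt.score(X, y))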

    3.6 Prediction and Recommendation

    The significance of an RS relies mostly on an accurate prediction algorithm whose purpose is to approximate the value of unseen data. According to this value, the system makes recommendations to the user. The proposed approach utilizes an ensemble regression algorithm for effective prediction. The ensemble methodology combines a set of models, each of which performs a similar job, to obtain a more reliable and accurate composite global model. The proposed approach uses the Gradient Boost regression model for efficient model creation and prediction. This model adopts balanced and conditional recommendations. In gradient boost regression, a series of weak learners (decision trees) is constructed, boosting the prediction performance by combining the respective learners. Gradient boosting constructs additive models by sequentially applying a simple parameterized function (base learner) to the current pseudo-residuals by least squares at every iteration. Hence, the performance of gradient boosting regression highly depends on parameter tuning. The proposed approach uses the Grid Search method to tune the hyperparameters of the model. A grid search builds and evaluates a model for each combination of algorithm parameters defined in a grid. The parameters to be tuned are: (i) maximum depth, (ii) minimum sample split, (iii) learning rate, (iv) loss, (v) number of estimators, and (vi) maximum features. The Grid Search performs candidate sampling with k-fold cross-validation to tune the hyperparameters. The pseudo-residuals are the gradient of the loss function being reduced, with respect to the model estimates at all training data points evaluated at the current step. The performance of the model is discussed in Section 4.
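
    The grid search over the six parameters listed above can be sketched as follows; the grid values are illustrative, not the paper's.

    import numpy as np
    from sklearn.ensemble import GradientBoostingRegressor
    from sklearn.model_selection import GridSearchCV

    rng = np.random.default_rng(5)
    X = rng.normal(size=(300, 19))
    y = rng.uniform(1, 5, size=300)    # ratings to be predicted

    param_grid = {
        "max_depth": [3, 5],
        "min_samples_split": [2, 4],
        "learning_rate": [0.05, 0.1],
        "loss": ["squared_error", "absolute_error"],
        "n_estimators": [100, 200],
        "max_features": ["sqrt", None],
    }
    search = GridSearchCV(GradientBoostingRegressor(random_state=0),
                          param_grid, cv=5,
                          scoring="neg_mean_absolute_error")
    search.fit(X, y)
    print(search.best_params_)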

    3.7 Performance Measure

    The performance of the proposed approach is evaluated against known measures for prediction and recommendation, given below. For prediction, MAE is used; it is based on the difference between the predicted rating p_{u,i} of user u on item i and the actual rating r_{u,i} of user u on item i, and is represented in Eq. (12). For recommendation, precision, recall, and f-measure are used.

    Precision is defined as the percentage of recommended items that are relevant to the user, expressed in Eq. (13). Recall is the ratio of correct recommendations relevant to the query to the total number of appropriate recommendations, shown in Eq. (14). In addition, f-measure is the harmonic mean of precision and recall, indicated in Eq. (15).
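
    The measures can be computed with scikit-learn as sketched below; the rating values and the binary relevant/recommended labels are illustrative.

    import numpy as np
    from sklearn.metrics import (mean_absolute_error, precision_score,
                                 recall_score, f1_score)

    # Prediction error: average of |p_ui - r_ui| over the test ratings.
    actual = np.array([4, 3, 5, 2, 4])
    predicted = np.array([3.5, 3.2, 4.6, 2.4, 4.1])
    print("MAE:", mean_absolute_error(actual, predicted))

    # Recommendation quality on binary relevance labels.
    relevant = np.array([1, 0, 1, 1, 0, 1])      # ground-truth relevance
    recommended = np.array([1, 0, 1, 0, 0, 1])   # items in the top-N list
    print("precision:", precision_score(relevant, recommended))
    print("recall:", recall_score(relevant, recommended))
    print("f-measure:", f1_score(relevant, recommended))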

    4 Experiment and Results

    In this section, the experiments on the proposed model are carried out on the Movielens 100k and 1M datasets [27,28]. Tabs. 1 and 2 list the two standard movie recommendation datasets, which are used as the benchmark. The number of users varies from 943 to 6,040, and the number of items varies from 1,682 to 3,952. The number of ratings ranges from 100,000 to 1,000,209, and the density of ratings ranges from 4.19% to 6.30%. In the Movielens 100k and 1M datasets, the rating levels are whole-star ratings from 1 to 5, and each user has at least 20 movie ratings. In particular, Tab. 2 lists the proportion of each rating level, the mean (μ), and the standard deviation (σ) among the various statistical measures.

    Dataset link: https://grouplens.org/datasets/movielens/

    Table 1: Datasets description

    4.1 Experiment

    The experiment is carried out on the Windows platform using the Python programming language. All the item and user features are combined with the user's preference for a movie. These features are a combination of different formats, such as numbers and strings. A Label Encoder is applied to these features to convert them to a single data type. In total, 31 features are integrated using the preprocessing technique. Among these 31 features, 19 are chosen using feature selection on the Movielens 100k and 1M datasets.

    Table 2: Basic statistical data

    In the first step, the truncation algorithm calculates the mean and standard deviation of the selected features using Eqs. (6) and (7) in Section 3.3. Next, the probability density function of a normal distribution is drawn using these values. The ranges of the likely, very likely, and most likely values are found using the mean and standard deviation. The truncation parameter range 2σ is fixed based on the number of samples and the increase in the density function's peak value. With the selected parameter value, 95% of the instances are selected, and the peak value is also increased by introducing the truncation parameter. The truncation algorithm fitted to the dataset for determining the instances is shown in Figs. 2a and 2b. The curve in Fig. 2a shows a mean value of 3.52 and a density peak value of 0.36, and Fig. 2b shows that the peak value increases to 0.52. This indicates that most of the data samples fall within our truncation parameter range, identifying the more similar and less deviated samples.

    Figure 2: (a) Samples before applying the truncation algorithm (b) Samples after applying the truncation algorithm

    The dataset selected by the truncation algorithm is divided into a training set for model preparation and a test set for the experiment in the ratio of 80:20, using a ten-fold cross-validation technique. Before using the clustering technique, the number of clusters is decided using the elbow method. In this technique, the number of clusters from 2 to 10 is assigned. The curve is plotted between the number of clusters and the inertia score, which is the sum of the samples' squared distances to their closest cluster center. The number of clusters is chosen at the point after which the inertia starts decreasing linearly. Tab. 3 presents the inertia score for K values varying from 2 to 9. The parameters used for the clustering technique are shown in Tab. 4. The enhanced BIRCH algorithm described in Section 3.4 gives three different clusters C1, C2, and C3.

    Table 3: Elbow curve method using inertia score

    Table 4: Hyperparameters list for the BIRCH clustering algorithm

    The Grid search obtains the best parameters by methodically building and evaluating a model for each combination of algorithm parameters specified in a grid. Hyperparameters are tuned using the Grid search method for the gradient boost classification algorithm and are listed in Tab. 5. The gradient boost classification tree models are created based on the clusters obtained from the enhanced BIRCH algorithm and are named M1, M2, and M3.

    Table 5: Hyperparameters list for the GBT algorithm

    4.2 Results

    The enhanced BIRCH clustering algorithm is used to assign the test samples to the corresponding clusters C1, C2, and C3. The test samples are then given to the related models M1, M2, and M3, and the prediction errors are measured using MAE. These values are recorded for the proposed models and tabulated in Tab. 6, which covers both input datasets.
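
    A minimal end-to-end sketch of this routing step on synthetic data: a BIRCH-style clustering assigns each test sample to a cluster, and the matching per-cluster gradient boosting model predicts its rating. The data, cluster count, and model settings are illustrative and do not reproduce the paper's tuned pipeline.

    import numpy as np
    from sklearn.cluster import Birch
    from sklearn.ensemble import GradientBoostingRegressor
    from sklearn.metrics import mean_absolute_error

    rng = np.random.default_rng(6)
    X_train = rng.normal(size=(300, 19)); y_train = rng.uniform(1, 5, size=300)
    X_test = rng.normal(size=(50, 19));   y_test = rng.uniform(1, 5, size=50)

    # Cluster the training data (C1, C2, C3) and fit one model per cluster.
    birch = Birch(n_clusters=3).fit(X_train)
    train_labels = birch.predict(X_train)
    models = {c: GradientBoostingRegressor(random_state=0).fit(
                  X_train[train_labels == c], y_train[train_labels == c])
              for c in np.unique(train_labels)}

    # Route each test sample to its cluster's model (M1, M2, M3).
    test_labels = birch.predict(X_test)
    preds = np.array([models[c].predict(x.reshape(1, -1))[0]
                      for c, x in zip(test_labels, X_test)])
    print("MAE:", mean_absolute_error(y_test, preds))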

    Table 6: MAE values of the proposed model

    Among these three models, model M3 shows better results and yields an average MAE of 0.52. The experiment is also performed without applying the proposed model, and the MAE values are tabulated in Tab. 7, which shows that the proposed model reduces the error value.

    Table 7: MAE values after feature and instance selection

    After finding the active user's cluster, recommendations are made by removing the already-watched movies from the list using a top-N recommendation algorithm, as mentioned in Sections 3.5 and 3.6. The model is validated through the recommendation measures precision, recall, and f-measure, which are explained in Section 3.7. The recommendation measures for each model are calculated and tabulated. Tab. 8 shows the recommendation measures of the proposed model on the Movielens 100k and 1M datasets.

    Table 8: Recommendation measures of the proposed model

    4.3 Discussion

    In this section, the MAE value of the proposed model is compared with existing recommendation algorithms. Tab. 9 shows that Mohammadpour et al. [11] achieved minimum MAE values of 0.6610 and 0.8220 on the Movielens 100k and 1M datasets. In comparison, the proposed model delivers MAE values of 0.52 and 0.5718 on the Movielens 100k and 1M datasets. The simulation results show that the proposed model reduces the average MAE by 68% compared to before the truncation algorithm, which indicates that the proposed model produces less error and higher accuracy compared to the existing algorithms, as represented in Fig. 3. Fig. 3 shows the comparison of MAE values with the existing models. It is determined that the proposed model gives better results than the existing models on both the Movielens 100k and 1M datasets, and the results are consistent. The proposed model gives minimum error values due to feature selection and instance selection by the truncation algorithm.

    Table 9: MAE value compared with an existing model

    Figure 3: Graphical comparison of proposed and existing models in terms of MAE

    In Tab. 10, a performance comparison is carried out between the proposed model and the existing recommendation system developed by Selvi et al. [12]. It is determined that the proposed model delivers better results than the existing model in terms of precision, recall, and f-measure on the Movielens 100k dataset. The proposed model obtained a precision of 0.8350, a recall of 0.8640, and an f-measure of 0.8672 on the Movielens 100k dataset, which are better than the existing model, as graphically represented in Fig. 4. The data sparsity is reduced using enhanced BIRCH clustering together with the deployed feature selection and instance selection algorithms. The scalability issue is addressed by implementing the truncation algorithm based on feature importance. The low MAE value and the high precision, recall, and f-measure values show that the proposed GBT recommendation model performs well in movie recommendation.

    Table 10: Recommendation measures comparison with the existing model in terms of precision, recall, and f-measure

    Figure 4: Graphical comparison of proposed and existing models in terms of precision, recall, and f-measure

    In Tab. 11, the proposed model is compared with two existing recommendation models, developed by Fu et al. [29] and Zhang et al. [30]. The existing models obtained RMSE values of 0.8300 and 0.9460 on the Movielens 100k and 1M datasets. Compared to the existing models, the proposed model obtained better RMSE values of 0.4392 and 0.4500 on the Movielens 100k and 1M datasets.

    Table 11: RMSE value compared with the existing models

    5 Conclusion

    An ensemble collaborative recommendation model with a truncation algorithm is proposed for movie recommendation in this research. The proposed model is validated on two real-world datasets, the Movielens 100k and 1M datasets. In the proposed model, feature selection based on feature importance plays an important role, the truncation algorithm consistently influences the ensemble model's performance, and ensemble learning in collaborative filtering produces better results than the existing models in terms of recall, precision, and f-measure. The prediction and recommendation performance measures show that the proposed model outperforms the existing methods in movie recommendation. The personalized recommendation performance measure shows that the proposed model provides top recommendations to the active users. In future work, we plan to design a recommendation model for a big data environment, which is a complicated, engaging, and challenging task that involves recent tools and techniques to handle a massive amount of data.

    Funding Statement: The authors received no specific funding for this study.

    Conflicts of Interest: The authors declare that they have no conflicts of interest to report regarding the present study.
