
    Ensemble Learning Based Collaborative Filtering with Instance Selection and Enhanced Clustering

    2022-08-24 03:26:50  Parthasarathy and Sathiya Devi
    Computers, Materials & Continua, 2022, Issue 5

    G. Parthasarathy and S. Sathiya Devi

    1 Anna University, Chennai, 600025, India

    2 University College of Engineering, BIT Campus, Anna University, Tiruchirappalli, 620024, India

    Abstract: A recommender system is a tool that suggests items to users based on the extensive history of the users' feedback. Although it is an emerging research area in both academia and industry, it suffers from sparsity, scalability, and cold start problems. This paper addresses the sparsity and scalability problems of a model-based collaborative recommender system using an ensemble learning approach and an enhanced clustering algorithm for movie recommendation. An effective movie recommendation system is proposed based on the Classification and Regression Tree (CART) algorithm, an enhanced Balanced Iterative Reducing and Clustering using Hierarchies (BIRCH) algorithm, and a truncation method. New hyperparameter tuning is added to the BIRCH algorithm to enhance the cluster formation process; the resulting algorithm is named enhanced BIRCH. The proposed model yields quality movie recommendations for new users using gradient boost classification with broad coverage. The model is tested on the Movielens dataset, and its performance is evaluated by means of Mean Absolute Error (MAE), precision, recall, and f-measure. The experimental results show the superiority of the proposed model over existing models in movie recommendation: it obtains MAE values of 0.52 and 0.57 on the Movielens 100k and 1M datasets, and precision of 0.83, recall of 0.86, and f-measure of 0.86 on the Movielens 100k dataset, which are effective compared to the existing models.

    Keywords: Clustering; ensemble learning; feature selection; gradient boost tree; instance selection; truncation parameter

    1 Introduction

    The exponential increase of data in the digital universe has encouraged efficient information filtering and personalization technology. A Recommender System (RS) is a popular technique that performs both information filtering and personalization for the end-user within a huge information space. Nowadays, RS is an integral part of every e-commerce application, such as Amazon, Twitter, Netflix, LinkedIn, etc., providing more relevant and personalized suggestions. Recommender Systems (RSs) are thus systems that provide recommendations based on a user's past behavior. Tapestry is the oldest recommendation system; it filters mail of interest to the user [1]. An RS collects information explicitly or implicitly to make recommendations. Rating information (i.e., like/dislike, a discrete rating) given by users on products is called explicit, and information collected from users' behavior (i.e., feedback, browsing behavior) is called implicit [2]. RS is broadly divided into three types, namely (i) Collaborative Filtering (CF), (ii) Content-Based Filtering (CBF), and (iii) hybrid recommendation systems. CF finds relevant items by finding users having similar interests [3]. Content-Based Filtering (CBF) suggests items that are similar in features to those the user has already chosen in the past [4]. In the hybrid approach, two or more methods are combined to obtain better results. Vekariya et al. [5] mentioned the hybrid system types given by Robert Burke: weighted, switching, mixed technique, feature combination, cascade, feature augmentation, and meta-level. Among these three approaches, CF is the most successful in research and practice.

    There are two types of CF approaches, namely (i) memory-based and (ii) model-based. The memory-based approach uses every instance of the database, which limits scalability. The model-based approach reduces a massive dataset into a model and performs the recommendation task; Model-Based CF (MBCF) reacts to the user's request instantly with reduced computation. There are five primary approaches in MBCF: classification, clustering, latent models, Markov Decision Process (MDP), and Matrix Factorization (MF) [6]. Generally, MBCF is preferred over memory-based CF because it requires less computational cost and performs recommendations without domain knowledge. Although model-based CF is highly preferable, it suffers from three main issues: (i) scalability, (ii) sparsity, and (iii) cold start. Sparsity means that most users do not have enough ratings to find similar users. Cold start refers to the challenge of producing specific recommendations for new or cold users, who have rated an inadequate number of items. Scalability reduces the efficiency of handling a vast amount of data when recommending to the user. Apart from these issues, building a user profile for new users with CF is difficult [7]. The main objective of this research paper is to develop a new CF approach that combines user- and item-related features to provide a solution to the scalability and sparsity issues. The proposed system performs a mixture of clustering and ensemble-based classification using feature combination for recommendation. The main contributions of this paper are summarized as follows:

    · Proposed a new feature and instance selection method with a hierarchical enhanced BIRCH-based clustering algorithm to overcome data sparsity.

    · Incorporated CART-based feature selection and truncation parameters for normal-distribution-based instance selection.

    · Developed an ensemble-based Gradient Boosting Tree (GBT) recommendation model that improves recommendation accuracy and also addresses the scalability issue.

    The rest of the paper is organized as follows: Section 2 presents the related work review. A detailed description of the proposed approach is given in Section 3. Section 4 provides the experimental results on the benchmark datasets. Finally, the conclusion of the work is presented in Section 5.

    2 Literature Review

    This section reviews the existing collaborative recommendation approaches for movie recommendation. CF is a technique that automatically predicts the unknown ratings of a product (or a user's interest) by analyzing its known ratings or by compiling the preferences of similar users. CF is used to develop personalized recommendations in many e-commerce applications on the web. The main process of CF is to identify similar users to guide the active user. In memory-based CF, instance-based methods are employed to determine similar users, but they suffer from poor scalability on a vast database. On the other hand, the model-based CF approach is commonly used on offline datasets for prediction and recommendation. A model-based CF model is small, occupies less memory, and works faster. Identifying a group of similar users is a challenging task in both memory- and model-based approaches. Generally, a group of similar users is generated using clustering algorithms.

    Ju et al. [8] developed a collaborative model based on k-means clustering and the artificial bee colony algorithm; the developed algorithm was used to address the local optima problem of k-means clustering. A similarity measure to perform clustering was presented by Rongfei et al. [9], where an adjusted DBSCAN algorithm was utilized to develop clusters that improve the accuracy of movie recommendation for users who have many available ratings. Das et al. [10] presented a k-d tree and quadtree based hierarchical clustering method for movie recommendation. The developed method addresses the scalability issue while maintaining acceptable recommendation accuracy. Experimental results proved that the computation time was effectively reduced on the Movielens-100K, Movielens-1M, Book-Crossing, and TripAdvisor datasets.

    Mohammadpour et al. [11] introduced a CF method based on hybrid meta-heuristic clustering for movie recommendation. The developed method merges a genetic algorithm and a gravitational emulation bounded clustering search. Here, the clustering was efficient but suffered from higher computational cost. In addition, a Modified Cuckoo Search (MCS) algorithm and a Modified Fuzzy C-Means (MFCM) approach were developed by Selvi et al. [12]. In this method, the number of iterations and the error rates were reduced by MFCM, while the recommendation accuracy and the efficiency of clustering were improved by the MCS algorithm.

    Generally, a cluster's discrimination ability and performance depend on dimensionality reduction, which is performed in two ways: (i) feature selection and (ii) instance selection. Cataltepe et al. [13] developed a new feature selection method for a Turkish movie recommendation system. The developed method uses user behavior, various kinds of content features, and other users' messages to predict the movie ratings. The method improves the recommendation's accuracy, notably for users who have viewed a small number of movies. K-means clustering was combined with a backward feature selection method to improve movie recommendation by Ramezani et al. [14]. The feature selection process eliminates irrelevant features and captures the real similarity between users. Further, an information-theoretic approach was developed by Yu et al. [15] for movie recommendation. The developed approach used description rationality and the power to measure an instance's pertinence regarding a target notion. The empirical evaluation showed that the method significantly reduces the neighborhood size and increases the speed of the CF process.

    Yu et al. [16] developed a system for feature and instance selection based on mutual information and Bayes' theorem. This literature showed that feature weighting and instance selection based on pertinence analysis improve collaborative filtering in terms of accuracy. The integration of texture and visual features used by Pahuja et al. [17] was effective in movie recommendation. The feature sets have different levels of significance in different scenarios and are identified based on the business requirements. Further, a class-based collaborative filtering algorithm was described by Zeng et al. [18], which adapts a user frequency threshold methodology for instance selection. The threshold selection improves the speed of computation and the recommendation accuracy, and alleviates the cold start problem.

    The accuracy of CF depends on the classification model, and an Extreme Gradient Boosting (XGBoost) algorithm-based recommendation system was described by Xu et al. [19]. Shao et al. [20] introduced a Heterogeneous Information Boosting (HIBoosting) model based on the Gradient Boosting Decision Tree (GBDT) algorithm. The developed model blends independent data in information networks to provide users with more helpful recommendation assistance.

    From the above-mentioned literature, it is recognized that the model-based CF approach addresses the sparsity and scalability issues better when combined with feature reduction, clustering, and machine learning-based approaches. Still, the prediction accuracy and the incremental addition of new data remain questionable. This research paper proposes a gradient boosting decision tree based CF approach with instance selection and enhanced clustering for effective movie recommendation. Hence, the proposed model overcomes the sparsity and scalability issues and improves the accuracy of prediction and movie recommendation.

    3 Proposed Methodology

    The proposed collaborative movie recommendation approach with combined features and probabilistic instance selection is described in this section. Generally, the RS suffers from three main issues, namely sparsity, scalability, and cold start, irrespective of the implementation approach, and these issues affect the performance of the RS. Hence this paper proposes an approach for a model-based collaborative RS to solve the sparsity and scalability issues. Sparsity occurs due to the sparseness of the user-item matrix. The proposed approach considers both the ratings and the content-based features of the dataset and uses feature selection to overcome the sparsity problem. The latter issue is addressed by enhanced clustering and instance selection. This approach addresses the scalability issue and improves the recommendation's accuracy when combined with an ensemble method at a limited computational cost. The proposed collaborative RS approach is shown in Fig. 1. The proposed approach consists of seven stages: (i) Preprocessing, (ii) Feature Selection, (iii) Instance Selection, (iv) Clustering, (v) Model Creation, (vi) Prediction, and (vii) Recommendation. Each stage is described in detail in the forthcoming subsections.

    3.1 Preprocessing

    Preprocessing is a technique that cleans, integrates, and fills in the missing values of the collected dataset to avoid inconsistencies in the results. The proposed approach considers both user ratings and content-based features for recommendation. Since these features are of different data types, inconsistencies may arise while integrating them, which affects the prediction's performance. Hence, the proposed approach applies label encoding while combining them to convert them to the same (or a similar) data type [21]. The missing values are filled with the mean value of the corresponding features in the input dataset.
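    As an illustrative sketch of this preprocessing step (the column names and toy values below are hypothetical, not taken from the Movielens schema), label encoding and mean imputation could be written as:

```python
import numpy as np
import pandas as pd

def preprocess(df, categorical_cols, numeric_cols):
    """Label-encode categorical columns and mean-impute numeric columns."""
    out = df.copy()
    for col in categorical_cols:
        # Map each distinct category to an integer code (label encoding)
        out[col] = out[col].astype("category").cat.codes
    for col in numeric_cols:
        # Fill missing values with the column mean, as described above
        out[col] = out[col].fillna(out[col].mean())
    return out

# Hypothetical integrated user/item frame with a missing rating
ratings = pd.DataFrame({
    "genre": ["Action", "Drama", "Action", "Comedy"],
    "rating": [4.0, np.nan, 5.0, 3.0],
})
clean = preprocess(ratings, ["genre"], ["rating"])
```

    After this step every column is numeric, so the combined feature matrix can be fed to the feature selection stage.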

    3.2 Feature Selection

    Feature selection selects the most influential features from the available dataset to avoid computational complexity during training and testing, which improves the recommendation model's generalization. In the proposed approach, feature selection is utilized to reduce the sparsity of the integrated feature dataset. The proposed method uses a correlation-based mutual information measure to identify the significant features of the entire feature set. It considers feature importance and is used to choose features based on their relative rank in a tree. In the proposed approach, feature importance is implemented based on the Classification and Regression Tree (CART) algorithm. Since the target variable in the proposed approach is categorical, CART uses the Gini index as an impurity measure to find the splits in the tree. The Gini index is a measure of inequality in irregular patterns of data. It always yields a quantity between 0 and 1, where 0 corresponds to perfect equality and 1 corresponds to perfect inequality. The minimum value 0 occurs when all the data at a feature (node) belong to one target category. The Gini index at a feature (node) t is defined in Eq. (1).

    Figure 1: Proposed collaborative recommendation approach

    Gini(t) = Σ_{i≠j} p(i|t) · p(j|t)    (1)

    where i and j are the categories of the target value, and p denotes probability. Eq. (1) can be rewritten as represented in Eq. (2).

    Gini(t) = 1 − Σ_j p(j|t)²    (2)

    where p(j|t) indicates the proportion of target category j present in feature (node) t. The Gini criterion for the split s at a feature t is defined in Eq. (3).

    ΔGini(s, t) = Gini(t) − p_L · Gini(t_L) − p_R · Gini(t_R)    (3)

    where p_L and p_R are the proportions of instances in t sent to the left child and right child features (nodes) respectively, and s ∈ S refers to a particular generic split among all possible splits S. The steps involved in the CART algorithm are given below:

    Step 1: Starting from the root node t = 1, search for a split s* among all potential candidate splits s that gives the highest decrease in impurity. Then split node 1 (t = 1) into two nodes t = 2 and t = 3 using split s*.

    Step 2: Repeat the method on each of t = 2 and t = 3, then extend the tree-growing process until at least one of the tree-growing stopping rules is met. From the constructed tree, the feature importance is calculated using Eq. (4).

    f_j = n_j · GI_j − n_l · GI_l − n_r · GI_r    (4)

    where f_j is the importance of feature j, GI_j is the Gini impurity value of node j, n_j is the number of instances that fall in the node, GI_l is the Gini impurity value of the left child node, n_l is the number of instances that fall in the left node, GI_r is the Gini impurity of the right child node, and n_r is the number of instances that fall in the right node. The normalized feature importance is calculated by dividing each feature importance by the sum of the feature importances of all features, and it is represented as a percentage. The main advantages of feature selection are reduced overfitting, improved accuracy, and reduced training time. The features are selected using the feature importance scores, where 19 relevant features are selected from the 31 features.
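    The CART-based selection above can be sketched with scikit-learn's Gini-based decision tree, whose `feature_importances_` attribute gives the normalized Gini importance; the synthetic data and the mean-importance cutoff are illustrative assumptions, not the paper's 31-feature setup:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
# Only features 0 and 1 influence the (hypothetical) target category
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)

# CART with the Gini impurity criterion, as in Section 3.2
cart = DecisionTreeClassifier(criterion="gini", random_state=0).fit(X, y)
importances = cart.feature_importances_  # normalized importances, sum to 1

# Illustrative cutoff: keep features scoring above the mean importance
selected = np.flatnonzero(importances > importances.mean())
```

    The same pattern, applied to the 31 integrated Movielens features, would retain the 19 most important ones.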

    3.3 Instance Selection

    In most collaborative RSs, the predictions are based on the preferences of users similar to the active user. Though the search for similar users is significant in collaborative RS, a full scan of the dataset leads to scalability issues and poor prediction performance as more users and items are added to the dataset. Hence the proposed approach adopts an instance selection strategy to filter the relevant users rather than searching the entire dataset. The proposed method performs instance selection using the Probability Density Function (PDF) of a normal distribution, shown in Eq. (5):

    f(x) = (1 / (σ√(2π))) · e^(−(x − μ)² / (2σ²))    (5)

    It achieves the instance selection by computing the truncation parameter (α) from the selected features.

    Most real-world datasets follow the normal distribution density function. This distribution's empirical rule states that almost all samples fall within three standard deviations of the mean: 68% of the samples fall within the first standard deviation from the mean, 95% fall within two standard deviations, and 99.7% fall within three standard deviations [22]. The mean of the target value is calculated using Eq. (6):

    μ = (1/n) Σ_{i=1}^{n} x_i    (6)

    Next, the standard deviation is calculated using the mean value obtained from Eq. (6). It is the root of the average squared difference between each sample and the mean value, as given in Eq. (7):

    σ = √((1/n) Σ_{i=1}^{n} (x_i − μ)²)    (7)

    The normal distribution curve is plotted, and the truncation parameter is found from the likely, very likely, and almost certain values. The selected instances increase the mean value of the distribution, which means that the most-reviewed items are selected. The instances are selected with the truncation algorithm (95% of the relevant instances), and the output is given to the enhanced BIRCH algorithm.
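    A minimal sketch of the truncation idea, assuming the truncation parameter corresponds to α = 2 standard deviations (so roughly 95% of normally distributed instances are kept); the synthetic ratings are illustrative, not Movielens data:

```python
import numpy as np

def truncate_instances(values, alpha=2.0):
    """Return a mask keeping instances within alpha standard deviations
    of the mean (alpha=2 retains about 95% under a normal distribution)."""
    mu = values.mean()
    sigma = values.std()
    return np.abs(values - mu) <= alpha * sigma

rng = np.random.default_rng(1)
ratings = rng.normal(loc=3.5, scale=1.0, size=10_000)  # hypothetical ratings
mask = truncate_instances(ratings, alpha=2.0)
kept = ratings[mask]  # the less-deviated, more similar instances
```

    Discarding the tails concentrates the density around the mean, which is the peak-sharpening effect reported in Section 4.1.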

    3.4 Enhanced BIRCH

    The relevant users identified in the previous subsection are partitioned into small groups using a clustering algorithm. The clustering process in RS solves the scalability issue and increases recommendation accuracy with limited computational cost. In this scenario, clustering is performed with the BIRCH algorithm. It is one of the best hierarchical clustering algorithms for high-dimensional data, but it suffers from the problem of initial cluster assignment and choosing the number of clusters. So, hyperparameter tuning is added to enhance the BIRCH algorithm for an efficient cluster formation process. In the clustering approach, the number of clusters must be given as input. The optimal number of groups (K) is decided using different methods; the elbow method is one of the standard methods for choosing it. The K value is determined using the inertia score, which is the sum of the samples' squared distances to their closest cluster center. The average internal sum of squares (W_k) is the average distance between points inside a cluster, and it is mathematically expressed in Eq. (8):

    W_k = Σ_{r=1}^{k} (1 / (2 · n_r)) · D_r    (8)

    where k is the number of clusters, n_r is the number of points in cluster r, D_r is the sum of distances between the points in cluster r, expressed in Eq. (9), and d indicates distance:

    D_r = Σ_{i,i′ ∈ r} d(x_i, x_{i′})    (9)

    The hyperparameters are needed to determine the number of clusters and to make the clustering computation faster. We have tuned the branching factor, the compute-labels flag, the number of clusters, and the threshold value to enhance the algorithm. The algorithm fully utilizes the available memory to infer the best conceivable sub-clusters and limit computational cost [23]. The cluster centroid is the mean of all the points, and it is expressed in Eq. (10), where x is a point in the dataset and n is the number of points:

    c = (1/n) Σ x    (10)

    The root node is formed using the number of points n, the Linear Sum of the points (LS), and the Sum of the Squares of the points (SS). The radius R, used to create a leaf node, is calculated using Eq. (11):

    R = √(SS/n − (LS/n)²)    (11)
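    Eq. (11) can be checked numerically from the three CF-entry statistics (n, LS, SS); this small sketch is illustrative, not the paper's implementation:

```python
import numpy as np

def cluster_radius(points):
    """BIRCH leaf statistics: radius from n, linear sum LS, squared sum SS."""
    n = len(points)
    ls = points.sum(axis=0)        # linear sum of the points (a vector)
    ss = (points ** 2).sum()       # sum of squared coordinates (a scalar)
    centroid = ls / n
    # R = sqrt(SS/n - ||LS/n||^2): RMS distance of the points from the centroid
    return np.sqrt(ss / n - (centroid ** 2).sum())

# Two points at distance 1 on either side of the centroid (1, 0) give R = 1
r = cluster_radius(np.array([[0.0, 0.0], [2.0, 0.0]]))
```

    Because n, LS, and SS are additive, a CF entry can absorb a new point in O(1) time, which is what makes the tree construction fast.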

    The radius is compared with the threshold value (T), which is set initially. Based on the comparison, the next point is placed in a new leaf node or in the existing node. The number of leaf nodes is restricted by the value L. At the end of the first phase, the CF tree is built using the above steps. The second phase of the BIRCH architecture applies the agglomerative hierarchical clustering technique discussed earlier in this section to the CF tree created above. The next section discusses the model creation of the proposed model.
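    Under the assumption that the enhanced BIRCH stage is realized with scikit-learn's `Birch` estimator (the hyperparameter values below are illustrative, not the tuned values from Tab. 4), the clustering step could look like:

```python
import numpy as np
from sklearn.cluster import Birch

rng = np.random.default_rng(2)
# Three synthetic user groups standing in for the selected instances
X = np.vstack([
    rng.normal(loc=c, scale=0.3, size=(100, 2)) for c in (0.0, 3.0, 6.0)
])

# The four tuned hyperparameters named in Section 3.4:
# threshold, branching_factor, n_clusters, compute_labels
birch = Birch(threshold=0.5, branching_factor=50, n_clusters=3,
              compute_labels=True)
labels = birch.fit_predict(X)  # cluster assignment per instance
```

    `n_clusters=3` makes the second (agglomerative) phase merge the CF-tree sub-clusters down to the K chosen by the elbow method, yielding the clusters C1, C2, and C3.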

    3.5 Model Creation

    Ensemble methods play a significant part in machine learning, and the Gradient Boost Tree (GBT) algorithm is one of them. A series of weak learners (decision trees) is ensembled using a boosting technique. GBT produces additive models by sequentially fitting a base learner to the current residuals by least squares at each stage. The GBT classification model's performance is increased by tuning the hyperparameters: maximum depth, minimum sample split, learning rate, loss, number of estimators, and maximum features. Pseudo-residuals are the gradient of the loss function being minimized with respect to the model estimates at all training data points, evaluated at the current step [24–26].

    3.6 Prediction and Recommendation

    The significance of an RS mostly relies on an accurate prediction algorithm whose purpose is to approximate the value of unseen data; according to this value, the system makes recommendations to the user. The proposed approach utilizes an ensemble regression algorithm for effective prediction. The ensemble methodology combines a set of models, each of which performs a similar job, to obtain a more reliable and accurate composite global model. The proposed approach uses the gradient boost regression model for efficient model creation and prediction; this model adopts balanced and conditional recommendations. In gradient boost regression, a series of weak learners (decision trees) is constructed, boosting performance by combining the respective learners. Gradient boosting constructs additive models by sequentially applying a simple parameterized function (base learner) to the current pseudo-residuals by least squares at every iteration. Hence, the performance of gradient boosting regression highly depends on parameter tuning. The proposed approach uses the grid search method to tune the hyperparameters of the model; a grid search builds and evaluates a model for each combination of algorithm parameters defined in a grid. The parameters to be tuned are: (i) maximum depth, (ii) minimum sample split, (iii) learning rate, (iv) loss, (v) number of estimators, and (vi) maximum features. Grid search performs candidate sampling with k-fold cross-validation to tune the hyperparameters. The pseudo-residuals are the gradient of the loss function being reduced with respect to the model estimates at all training data points, evaluated at the prevailing step. The performance of the model is discussed in Section 4.
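    A hedged sketch of the grid search tuning over a gradient boosting model (here scikit-learn's classifier on synthetic data; the grid values are illustrative, not the tuned values reported in Tab. 5):

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import GridSearchCV

rng = np.random.default_rng(3)
X = rng.normal(size=(300, 4))
y = (X[:, 0] - X[:, 2] > 0).astype(int)  # hypothetical like/dislike target

# A small grid over three of the six parameters named in Section 3.6
param_grid = {
    "max_depth": [2, 3],
    "learning_rate": [0.05, 0.1],
    "n_estimators": [50, 100],
}
# k-fold cross-validation (cv=3) over every parameter combination
search = GridSearchCV(GradientBoostingClassifier(random_state=0),
                      param_grid, cv=3)
search.fit(X, y)
best = search.best_estimator_  # the model refit with the best combination
```

    The same pattern extends to the full grid (minimum sample split, loss, maximum features) and, with `GradientBoostingRegressor`, to the rating-prediction model.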

    3.7 Performance Measure

    The performance of the proposed approach is evaluated with known measures for prediction and recommendation, given below. For prediction, MAE is used; it is the average absolute difference between the predicted rating p_{u,i} of user u on item i and the actual rating r_{u,i}, as represented in Eq. (12):

    MAE = (1/N) Σ_{u,i} |p_{u,i} − r_{u,i}|    (12)

    For recommendation, precision, recall, and f-measure are used.

    Precision is defined as the fraction of recommended items that are relevant to the user, expressed in Eq. (13). Recall is the ratio of correct relevant recommendations to the total number of relevant items, shown in Eq. (14). In addition, f-measure is the harmonic mean of precision and recall, indicated in Eq. (15):

    Precision = |relevant ∩ recommended| / |recommended|    (13)

    Recall = |relevant ∩ recommended| / |relevant|    (14)

    F-measure = 2 · Precision · Recall / (Precision + Recall)    (15)
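    The measures of Eqs. (12)-(15) can be computed directly; this sketch uses illustrative rating and recommendation lists, not the paper's data:

```python
def mae(predicted, actual):
    """Mean Absolute Error between predicted and actual ratings (Eq. 12)."""
    return sum(abs(p - r) for p, r in zip(predicted, actual)) / len(actual)

def precision_recall_f1(recommended, relevant):
    """Precision, recall and f-measure over recommended vs. relevant items
    (Eqs. 13-15)."""
    hits = len(set(recommended) & set(relevant))
    precision = hits / len(recommended) if recommended else 0.0
    recall = hits / len(relevant) if relevant else 0.0
    f1 = (2 * precision * recall / (precision + recall)) if hits else 0.0
    return precision, recall, f1

# Hypothetical example: 4 recommended items, 3 truly relevant, 2 hits
p, r, f = precision_recall_f1([1, 2, 3, 4], [2, 4, 6])
```

    With two hits out of four recommendations and three relevant items, precision is 1/2 and recall is 2/3, so the f-measure is their harmonic mean, 4/7.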

    4 Experiment and Results

    In this section, experiments with the proposed model are carried out on the Movielens 100k and 1M datasets [27,28]. Tabs. 1 and 2 list the two standard movie recommendation datasets used as benchmarks. The number of users varies from 943 to 6040, and the number of items varies from 1682 to 3952. The number of ratings ranges from 100,000 to 1,000,209, and the rating density ranges from 4.19% to 6.30%. In the Movielens 100k and 1M datasets, the rating levels are whole-star ratings from 1 to 5, and each user has at least 20 movie ratings. In particular, Tab. 2 lists the proportion of each rating level, the mean (μ), and the standard deviation (σ) among the various statistical measures.

    Dataset link: https://grouplens.org/datasets/movielens/

    Table 1: Datasets description

    4.1 Experiment

    The experiment is carried out on the Windows platform using the Python programming language. All the item and user features are combined with the user's preference for a movie. These features are a combination of different formats, such as numbers and strings. A label encoder is applied to these features to convert them to a single data type. In total, 31 features are integrated using the preprocessing technique. Among the 31 features, 19 are chosen using feature selection in the Movielens 100k and 1M datasets.

    Table 2: Basic statistical data

    In the first step, the truncation algorithm calculates the mean and standard deviation of the selected features using Eqs. (6) and (7) in Section 3.3. Next, the probability density function of a normal distribution is drawn using these values. The ranges of the likely, very likely, and most likely values are found using the mean and standard deviation values. The truncation parameter range 2σ is fixed based on the number of samples and the increase in the density function's peak value. Within the selected parameter range, 95% of the instances are selected, and the peak value is also increased by introducing the truncation parameter. The truncation algorithm is fitted to the dataset to determine the instances, as shown in Figs. 2a and 2b. The curve in Fig. 2a shows a mean value of 3.52 and a density peak value of 0.36, and Fig. 2b shows that the peak value increases to 0.52. This indicates that most of the data samples fall within our truncation parameter range, determining the more similar and less deviated samples.

    Figure 2: (a) Samples before applying truncation algorithm (b) Samples after applying truncation algorithm

    The dataset selected by the truncation algorithm is divided into training data for preparing the model and testing data for the experiment in the ratio 80:20, using a ten-fold cross-validation technique. Before applying the clustering technique, the number of clusters is decided using the elbow method. In this technique, the number of clusters is varied from 2 to 10. A curve is plotted between the number of clusters and the inertia score, which is the sum of the samples' squared distances to their closest cluster center. The number of clusters is chosen as the point after which the inertia starts decreasing linearly. Tab. 3 presents the inertia scores for K values varying from 2 to 9. The parameters used for the clustering technique are shown in Tab. 4. The enhanced BIRCH algorithm described in Section 3.4 gives three different clusters C1, C2, and C3.
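    The elbow procedure above can be sketched by recording the inertia score for each candidate K (synthetic data and k-means are used here for illustration; the paper feeds the chosen K into enhanced BIRCH):

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(4)
# Three well-separated synthetic groups, so the "elbow" sits at K = 3
X = np.vstack([
    rng.normal(loc=c, scale=0.3, size=(80, 2)) for c in (0.0, 3.0, 6.0)
])

inertias = {}
for k in range(2, 10):
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X)
    # inertia_: sum of squared distances to the closest cluster center
    inertias[k] = km.inertia_
```

    Plotting `inertias` against K shows a sharp drop up to the true number of groups and a nearly linear decline afterwards, which is the bend the elbow method selects.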

    Table 3: Elbow curve method using inertia score

    Table 4: Hyperparameters list for BIRCH clustering algorithm

    Grid search obtains the best parameters by methodically building and evaluating a model for each combination of algorithm parameters specified in a grid. The hyperparameters tuned using the grid search method for the gradient boost classification algorithm are listed in Tab. 5. The gradient boost classification tree models are created based on the clusters obtained from the enhanced BIRCH algorithm and are named M1, M2, and M3.

    Table 5: Hyperparameters list for GBT algorithm

    4.2 Results

    The enhanced BIRCH clustering algorithm is applied to the test samples, and the results are predicted and classified into the corresponding clusters C1, C2, and C3. The test samples are given to the related models M1, M2, and M3, and the prediction quality is measured using MAE. These values are recorded for the proposed models and tabulated in Tab. 6, which covers both input datasets.

    Table 6: MAE values of the proposed model

    Among these three models, model M3 shows the best results and yields 0.52 on average. The experiment is also performed without applying the proposed model, and the MAE values are tabulated in Tab. 7, which shows that the proposed model reduces the error value.

    Table 7: MAE values after feature and instance selection

    After finding the active user's cluster, recommendations are made by removing the watched movies from the list using a top-n recommendation algorithm, as mentioned in Sections 3.5 and 3.6. The model is validated through the recommendation measures precision, recall, and f-measure, which are explained in Section 3.7. The recommendation measures for each model are calculated and tabulated. Tab. 8 shows the recommendation measures of the proposed model on the Movielens 100k and 1M datasets.

    Table 8: Recommendation measures of the proposed model

    4.3 Discussion

    In this section, the MAE value of the proposed model is compared with existing recommendation algorithms. Tab. 9 shows that Mohammadpour et al. [11] achieved minimum MAE values of 0.6610 and 0.8220 on the Movielens 100k and 1M datasets. In contrast, the proposed model delivers MAE values of 0.52 and 0.5718 on the Movielens 100k and 1M datasets. The simulation results show that the proposed model reduces the average MAE by 68% relative to the value before the truncation algorithm, which indicates that the proposed model produces less error and higher accuracy than the existing algorithms, as represented in Fig. 3. Fig. 3 shows the comparison of MAE values with the existing models. The proposed model gives better results than the existing models on both the Movielens 100k and 1M datasets, demonstrating that it produces consistent results. The proposed model gives minimum error values due to the feature selection and the instance selection performed by the truncation algorithm.

    Table 9: MAE value compared with an existing model

    Figure 3: Graphical comparison of proposed and existing model in terms of MAE

    In Tab. 10, a performance comparison is carried out between the proposed model and the existing recommendation system developed by Selvi et al. [12]. The proposed model delivers better results than the existing model in terms of precision, recall, and f-measure on the Movielens 100k dataset: it obtains 0.8350 precision, 0.8640 recall, and 0.8672 f-measure, which are better than those of the existing model, as graphically represented in Fig. 4. The data sparsity is reduced using enhanced BIRCH clustering together with the deployed feature selection and instance selection algorithms. The scalability issue is addressed by implementing the truncation algorithm based on feature importance. The low MAE value and the high precision, recall, and f-measure values show that the proposed GBT recommendation model performs well in movie recommendation.

    Table 10: Recommendation measures comparison with existing model in terms of precision, recall, and f-measure

    Figure 4: Graphical comparison of proposed and existing model in terms of precision, recall and f-measure

    In Tab. 11, the proposed model is compared with two existing recommendation models developed by Fu et al. [29] and Zhang et al. [30]. The existing models obtained RMSE values of 0.8300 and 0.9460 on the Movielens 100k and 1M datasets. Compared to the existing models, the proposed model obtained better RMSE values of 0.4392 and 0.4500 on the Movielens 100k and 1M datasets.

    Table 11:RMSE value compared with the existing models

    5 Conclusion

    An ensemble collaborative recommendation model with a truncation algorithm is proposed for movie recommendation in this research. The proposed model is validated on two real-world datasets: the Movielens 100k and 1M datasets. In the proposed model, feature selection based on feature importance plays an important role, the truncation algorithm consistently improves the ensemble model's performance, and ensemble learning in collaborative filtering produces better results than the existing models in terms of recall, precision and f-measure. The prediction and recommendation performance measures show that the proposed model outperforms the existing methods in movie recommendation, and the personalized recommender performance measure shows that the proposed model provides top recommendations to the active users. In future work, we plan to design a recommendation model for the big data environment, which is a complicated, engaging and challenging task that involves recent tools and techniques for handling massive amounts of data.

    Funding Statement:The authors received no specific funding for this study.

    Conflicts of Interest:The authors declare that they have no conflicts of interest to report regarding the present study.
