
    Deep Learning Enabled Autoencoder Architecture for Collaborative Filtering Recommendation in IoT Environment

    Computers, Materials & Continua, 2021, Issue 7

    Thavavel Vaiyapuri

    College of Computer Engineering and Sciences,Prince Sattam bin Abdulaziz University,AlKharj,Saudi Arabia

    Abstract: The era of the Internet of Things (IoT) has marked a continued exploration of applications and services that can make people's lives more convenient than ever before. However, the exploration of IoT services also means that people face unprecedented difficulties in spontaneously selecting the most appropriate services. Thus, there is a paramount need for a recommendation system that can help improve the experience of users of IoT services and ensure the best quality of service. Most existing techniques, including collaborative filtering (CF), which is the most widely adopted approach to building recommendation systems, suffer from rating sparsity and cold-start problems, preventing them from providing high-quality recommendations. Inspired by the great success of deep learning in a wide range of fields, this work introduces a deep-learning-enabled autoencoder architecture to overcome the setbacks of CF recommendation. The proposed deep learning model is designed as a hybrid architecture with three key networks, namely an autoencoder (AE), a multilayer perceptron (MLP), and generalized matrix factorization (GMF). The model employs two AE networks to learn deep latent feature representations of users and items, respectively and in parallel. Next, the MLP and GMF networks are employed to model the non-linear and linear user-item interactions, respectively, from the extracted latent user and item features. Finally, the rating prediction is performed based on the idea of ensemble learning by fusing the outputs of the GMF and MLP networks. We conducted extensive experiments on two benchmark datasets, MovieLens100K and MovieLens1M, using four standard evaluation metrics. Ablation experiments were conducted to confirm the validity of the proposed model and the contribution of each of its components to achieving better recommendation performance. Comparative analyses were also carried out to demonstrate the potential of the proposed model in gaining better accuracy than existing CF methods, with resistance to rating sparsity and cold-start problems.

    Keywords: Neural collaborative filtering; cold-start problem; data sparsity; multilayer perceptron; generalized matrix factorization; autoencoder; deep learning; ensemble learning; top-K recommendations

    1 Introduction

    The IoT is a new communication paradigm that digitizes the real-world environment by interconnecting a vast number of smart objects and systems using Internet infrastructure. In this process, the networked smart objects can sense and generate large volumes of useful data about their surroundings, specifically about product performance or customer usage patterns [1]. The data generated in the context of the IoT brings excellent capability in supporting the development of value-added services and smart applications. The notion of the IoT is to create a smart environment that enables the global exchange and delivery of intelligent services to improve the quality of daily life [2]. From the industry perspective, the IoT is expected to help businesses optimize their productivity and achieve their goals [3].

    Gartner forecasts that by 2021 the number of connected devices in the IoT will reach 25 billion, and SAP predicts that this number will reach 50 billion by 2023. According to research by McKinsey Global, connected IoT devices are set to generate between $4 and $11 trillion in economic impact [4]. As a result, there is a general belief that the IoT will become an important source of contextual information and be ubiquitous in satisfying users' needs through a multitude of services. Nevertheless, when users are swamped with information and offered a plethora of services, they face unprecedented difficulty in making the right decision when spontaneously selecting the desired information and appropriate services. This leads to the phenomenon called the information overload problem [5–7]. In this sense, it is of paramount importance to deploy flexible information systems that can effectively and efficiently extract valuable information from massive amounts of data. To this end, in recent years, recommender systems have been considered instrumental in alleviating the problem of information overload by effectively assisting users in finding the potential products or services that meet their needs and match their interests from a large pool of possible options [8]. With such an indispensable role in online information systems, recommendation systems are widely recognized in many fields and have become an animated topic of research [9].

    The literature has discussed two kinds of recommendation algorithm: collaborative filtering (CF) and content-based (CB) [10]. CB algorithms make recommendations based on the similarities between item descriptions and user preferences. This approach ignores the relationships between items and suffers from a serendipity problem, providing recommendations with limited novelty. As a result, the effectiveness and extensibility of the system are limited [11]. Comparatively, CF algorithms are successful and are the most popular approach in many recommender systems because they make recommendations based on user interactions with items and with other users with similar preferences. CF offers many advantages, such as being content-independent and efficient in providing serendipitous recommendations. Also, by considering only real quality measurements, CF is effective at providing recommendations [12].

    Among various CF techniques, matrix factorization (MF) and its variants are the most effective and most widely used in practice [13]. MF maps both users and items onto a shared latent space and uses the inner product of the latent vectors to model a user's interaction with an item. Despite the appeal of MF, it is well known that its performance is hindered by three serious problems: rating sparsity, the cold-start problem, and its linear nature. Rating sparsity occurs when very little rating data is available. A cold start occurs when making a recommendation with no prior interaction data for a user or an item. Due to its linear nature, MF fails to capture complex interactions between users and items [14].

    Recently, the application of deep learning models in recommendation systems has made breakthroughs [15,16]. Several studies in the literature have leveraged the roaring success of deep learning to change the linear structure of MF and have demonstrated the powerful ability of deep learning to gain satisfactory results [17,18]. Neural CF (NCF) was introduced in [19], exploiting the benefits of MLP to capture non-linear user-item interactions. Here, the authors combine MF and MLP networks and build a new neural-based MF (NeuMF) that can capture linear and non-linear user-item interactions. Similarly, Chen et al. [20] adopted a non-sampling strategy through mathematical reasoning to learn neural recommendation models. Here, the adopted strategy proved its efficiency in learning the model parameters with reduced time complexity. However, the developed model was not validated for its effectiveness against data sparsity and cold-start problems. Recently, Thanh et al. [21] presented a DNN model called FeaNMF to improve the predictive accuracy of NCF by incorporating user latent and auxiliary features. Wang et al. [22] constructed a hybrid architecture to improve predictive accuracy by combining two different kinds of neural network model, AE and MLP, to extract the latent feature vector from rating data and to describe the user-item interactions, respectively. Wan et al. [23] proposed a new continuous imputation denoising AE model and combined it with NCF to alleviate the data sparsity problem. Likewise, a neural-network-based CF model called a deep dual relation network (DRNet) was proposed for recommendations [24]. Unlike the previous literature, that model captures both item-item and user-item interactions to improve the quality of the recommendation performance. Despite the success of existing models, there are still opportunities to improve the quality of recommendations.

    Most of the existing NCF and NeuMF methods focus on user-item interactions and show promise in effectively learning robust latent feature representations. However, they do not aim to extract key latent features from users and items individually. Consequently, this not only affects the feature representation of users and items but could also affect the modeling of user-item interactions. Notably, the benefits of a stacked sparse autoencoder (SSAE) can be leveraged to address these setbacks of NCF. In the proposed architecture, two separate SSAE networks are employed to extract the latent features of users and items, respectively, from the implicit rating data. Then, the resulting latent features are used as input for two different NCF models, GMF and MLP, to capture the complex user-item interactions. The key contributions of this work are as follows:

    a. The proposed model integrates SSAE within the NCF framework to effectively mitigate the rating sparsity problem, reduce the possibility of overfitting, and improve the accuracy of the rating prediction.

    b. The proposed model effectively captures both linear and non-linear interactions between users and items by employing two different NCF networks, GMF and MLP, respectively.

    c. The proposed model is designed to use only implicit rating data, as it is comparatively easier for service providers to collect.

    2 Preliminaries

    2.1 Neural Collaborative Filtering (NCF)

    NCF is an extended version of the traditional CF model. Here, the power of neural networks is leveraged to learn and model intricate user-item interactions with multiple levels of abstraction. The general architecture of NCF consists of an input layer with sparse user and item feature vectors and a feature embedding layer that projects the sparse input representations onto dense feature vectors. The obtained user and item embeddings are combined and fed through a multilayer neural architecture for the CF process. In this process, the user and item embeddings are mapped to determine the prediction score. Here, the layers participating in CF can be customized to uncover intricate user-item interactions. Thus, the CF process through MLP is defined as follows:

    In the above equation, (p_u, q_i) denotes the embeddings of the user and item, and φ(·) and φ_n(·) represent the mapping function used in the embedding layer and the activation function used in the n-th layer of the neural network, respectively. Because NCF aims to predict a rating score, a sigmoid is used as the activation function of the output layer. In general, the known ratings are used as labels to train NCF and learn the neural network parameters. After training, NCF employs a non-linear process to predict unknown ratings as follows:
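    As a concrete illustration, the NCF prediction process described above can be sketched in plain NumPy. This is a hypothetical toy configuration, not the authors' implementation: the 8-dimensional embeddings, the two hidden widths (32 and 16), and the weight scale are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Assumed sizes: 8-d user/item embeddings, hidden widths 32 and 16.
p_u = rng.normal(size=8)   # user embedding p_u
q_i = rng.normal(size=8)   # item embedding q_i
W1, b1 = rng.normal(scale=0.1, size=(16, 32)), np.zeros(32)
W2, b2 = rng.normal(scale=0.1, size=(32, 16)), np.zeros(16)
w_out = rng.normal(scale=0.1, size=16)

def ncf_predict(p_u, q_i):
    """phi concatenates the embeddings; phi_1, phi_2 are ReLU layers;
    a sigmoid output layer yields the predicted rating score."""
    z = np.concatenate([p_u, q_i])      # phi: embedding concatenation
    h1 = np.maximum(z @ W1 + b1, 0.0)   # phi_1
    h2 = np.maximum(h1 @ W2 + b2, 0.0)  # phi_2
    return float(sigmoid(h2 @ w_out))   # sigmoid output layer

score = ncf_predict(p_u, q_i)
```

    Because the output layer is a sigmoid, the predicted score always falls strictly between 0 and 1, matching the implicit-rating labels used to train NCF.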

    2.2 Autoencoder (AE)

    A simple AE comprises three layers: input, hidden, and output, as shown in Fig.1. It aims to reconstruct the original input at its output layer. The learning process of an AE consists of two stages: encoding and decoding [25]. On the one hand, the encoding stage uses an activation function a, parameterized by {w, b}, to map the input vector x onto the hidden layer and capture the most relevant features z from the input vector x, as follows:

    On the other hand, the decoding stage uses an activation function a′, parameterized by {w′, b′}, to map the extracted features z onto the output layer and approximately recover the original input vector x̂ from the extracted feature vector z, as follows:

    Figure 1:Structure of autoencoder network

    During the training process, the AE learns the parameter set {w, b, w′, b′} by minimizing the reconstruction error between x̂ and x. Also, to prevent overfitting, a weight penalty, termed the regularization term, is added to the reconstruction error to obtain the cost function, which is defined as follows:
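    A minimal NumPy sketch of this cost function follows. The sigmoid activations, the squared-error reconstruction term, and the L2 weight penalty with coefficient λ = 1e-3 are illustrative assumptions; the paper does not fix these choices in this section.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def ae_cost(x, W, b, W2, b2, lam=1e-3):
    """Reconstruction error between x_hat and x plus an L2 weight penalty
    (the regularization term that guards against overfitting)."""
    z = sigmoid(x @ W + b)        # encoding:  z = a(W x + b)
    x_hat = sigmoid(z @ W2 + b2)  # decoding:  x_hat = a'(W' z + b')
    recon = 0.5 * np.sum((x_hat - x) ** 2)
    penalty = 0.5 * lam * (np.sum(W ** 2) + np.sum(W2 ** 2))
    return recon + penalty, x_hat

rng = np.random.default_rng(1)
x = rng.random(6)                                  # toy 6-d input vector
W, b = rng.normal(scale=0.1, size=(6, 3)), np.zeros(3)
W2, b2 = rng.normal(scale=0.1, size=(3, 6)), np.zeros(6)
cost, x_hat = ae_cost(x, W, b, W2, b2)
```

    Training would minimize this cost with gradient descent over {W, b, W2, b2}; only the forward cost evaluation is sketched here.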

    3 Proposed Methodology

    This section presents the hybrid deep-learning-enabled AE architecture proposed for rating prediction. As shown in Fig.2, the architecture comprises three different kinds of neural network: AE, GMF, and MLP. The proposed architecture contains two AE networks with parallel structures to learn the latent representations of users and items separately from implicit rating data. Then, the resulting latent user and item features are fed into the two instantiations of NCF, namely the GMF and MLP networks, to learn user-item interactions. Finally, the outputs of GMF and MLP are fused through ensemble learning to generate the rating prediction. The superior quality of this architecture is that it introduces an AE into the NCF framework for the first time, not only to extract the primary latent features of users and items but also to alleviate the data sparsity problem. Moreover, it effectively enables the complex interactions between users and items to be captured through the ensemble of GMF and MLP to gain better prediction performance. The subsections that follow detail the working principles of the proposed architecture.

    Figure 2:Proposed deep learning enabled autoencoder architecture for NCF

    3.1 Feature Representation

    Given M users and N items, the user ID set is denoted by U = [1, 2, ..., M] and the item ID set is denoted by I = [1, 2, ..., N]. The ratings from M users for N items are defined as a matrix R with |M × N| dimensions, where each element r_ui represents the rating from user u for item i. Ratings can be explicit or implicit. Explicit ratings are provided on a scale ranging from 1 to R_max, where R_max can be 3, 5, 10, etc. On the contrary, implicit ratings are unary, in the form of user clicks or purchases. This work considers only implicit user ratings because users do not rate all the items involved. The rating matrix is defined as follows:

    These rating scores given by users are used to represent the feature vectors of users and items. For item i, the feature vector I_i is represented as follows, using the rating scores of all users:

    Similarly, the feature vector U_u is represented as follows, using the rating scores of user u for all the involved items:

    Here, the dimensions of U_u and I_i differ because M and N are usually unequal. As in [23,24], the present work attempts to employ minimal information, using only user ratings, to achieve better rating prediction performance.
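    The feature representation above can be illustrated with a toy implicit rating matrix: rows of R give the user feature vectors U_u (length N) and columns give the item feature vectors I_i (length M). The interaction pairs below are hypothetical.

```python
import numpy as np

def implicit_matrix(interactions, M, N):
    """Build the M x N implicit rating matrix R, where r_ui = 1 if user u
    has interacted with item i and 0 otherwise (rating values ignored)."""
    R = np.zeros((M, N), dtype=np.int8)
    for u, i in interactions:
        R[u, i] = 1
    return R

# Hypothetical toy data: three (user, item) interactions, 0-indexed.
R = implicit_matrix([(0, 1), (0, 2), (1, 0)], M=3, N=4)
U_0 = R[0, :]   # feature vector of user 0, length N
I_1 = R[:, 1]   # feature vector of item 1, length M
```

    Note that U_0 and I_1 have different lengths (N = 4 versus M = 3), matching the observation that the user and item feature dimensions are usually unequal.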

    3.2 Feature Extraction Using Stacked Sparse Autoencoder (SSAE)

    Motivated by recent successes of deep learning for better feature generalization, this work employs a sparse autoencoder (SAE), a variant of AE proposed in [26].The idea behind SAE is to impose a sparsity constraint on the core idea of AE to learn the sparse features from user and item feature vectors.In essence, SAE considers a neuron in a hidden layer to be active if its activation value is close to 1; else, it considers the neuron to be inactive.Accordingly, a hidden layer becomes sparse when most neurons are inactive.Thus, the act of imposing a sparse constraint on a hidden layer, as shown in Fig.3, restricts undesired activation and encourages SAE to learn sparse features at the level of its hidden representation.

    Figure 3:Structure of sparse autoencoder (left) and stacked autoencoder (right)

    Accounting for the sparsity constraint, the cost function of AE defined in Eq.(5) is then redefined with a sparsity term for SAE, as follows:

    The sparsity constraint ρ in the above equation is optimized for the given task by minimizing the Kullback–Leibler (KL) divergence between ρ and ρ̂. Also, the parameter α introduced in the above equation controls the relative importance of the sparsity term in the cost function. To better extract latent features from the raw rating data, this work employs deep learning by stacking multiple SAEs, where the hidden layer of one SAE is fed into the next SAE as input. The stacked SAE (SSAE) adopts bottom-up greedy layer-wise training, starting with the first SAE. After the training process of the first SAE is complete, its hidden layer acts as the input for training the next SAE. This work stacks three layers of SAE together and employs two SSAEs, namely Auto-U and Auto-I, to reconstruct the feature vectors of the users and items, respectively, from the sparse rating matrix data. This is computed as given below:
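    The KL-divergence sparsity term can be sketched as follows, where ρ is the target sparsity and ρ̂ is the vector of mean activations of the hidden units. The target value ρ = 0.05 is an assumption for illustration; the paper does not fix it here.

```python
import numpy as np

def kl_sparsity(rho, rho_hat):
    """Sum over hidden units of KL(rho || rho_hat): the sparsity penalty
    added to the SAE cost function (weighted there by alpha)."""
    rho_hat = np.clip(rho_hat, 1e-8, 1 - 1e-8)   # numerical safety
    return float(np.sum(rho * np.log(rho / rho_hat)
                        + (1 - rho) * np.log((1 - rho) / (1 - rho_hat))))

# Hypothetical mean activations of three hidden units.
penalty = kl_sparsity(0.05, np.array([0.04, 0.30, 0.06]))
```

    The penalty is zero exactly when every unit's mean activation equals the target ρ, and grows as activations drift away from it, which is what pushes most hidden neurons toward inactivity.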

    3.3 Feature Fusion Using Generalized Matrix Factorization (GMF)

    GMF, which is regarded as an extension of MF under NCF, is utilized in the proposed framework to learn user-item interactions. To address the data sparsity limitation of the GMF network, in the proposed framework the latent feature vectors extracted by SSAE for the user and item undergo the CF process as follows:

    where p_u and q_i represent the user and item latent feature vectors generated by SSAE, respectively, and ⊙ denotes the element-wise product. Lastly, GMF makes a prediction at the output layer as follows:

    In the above equation, a(·) denotes the activation function and w represents the weight vector of the GMF network. Unlike earlier studies [19,23], this work applies a linear activation function to uncover the linear relationships between the latent feature vectors of users and items.
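    The GMF step with a linear output can be sketched as below. The 3-dimensional latent vectors are toy values; in the actual model p_u and q_i come from the SSAE networks.

```python
import numpy as np

def gmf_predict(p_u, q_i, w):
    """GMF: element-wise product of the latent vectors, followed by a
    linear output layer (identity activation, as used in this work)."""
    phi = p_u * q_i         # phi = p_u (element-wise product) q_i
    return float(phi @ w)   # linear activation: a(x) = x

# Toy latent vectors and output weights (illustrative only).
p_u = np.array([0.5, -0.2, 0.1])
q_i = np.array([0.4, 0.3, -0.6])
w = np.ones(3)
y_gmf = gmf_predict(p_u, q_i, w)
```

    With w set to all ones, the prediction reduces to the plain inner product of p_u and q_i, which is exactly classical MF; learning w generalizes MF by weighting each latent dimension differently.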

    3.4 Feature Fusion Using Multi-Layer Perceptron (MLP)

    The proposed framework employs MLP to deeply learn the critical interactions between the user and item latent feature vectors. In the first step, the latent feature vectors of users and items extracted by the SSAEs are concatenated and then employed in CF as follows:

    The computation process of each hidden layer can be defined as follows:

    Here, a_L, w_L, and b_L denote the activation function, weight matrix, and bias vector of the L-th hidden layer, respectively; [·] denotes the concatenation operation. Further, the final prediction of MLP is given as

    In this work, ReLU is used as the hidden layer activation function for MLP for two reasons: first, it encourages sparse activations; second, it prevents the model from overfitting. Because the objective of the MLP model employed here is to capture the critical interactions between user and item vectors, the MLP network is designed as in [19] to follow a tower pattern with three layers. Unlike other NCF models [19,23,24], the GMF and MLP employed in this work capture the linear and non-linear interactions from the latent feature vectors generated by the SSAEs for users and items, achieving better performance rather than being limited by the sparsity of user preferences.
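    The three-layer ReLU tower can be sketched as follows. The latent dimension of 32 per side and the halving widths 64 → 32 → 16 are assumptions for illustration; the paper's exact layer sizes come from the hyperparameter search in Tab.2.

```python
import numpy as np

rng = np.random.default_rng(2)

def relu(x):
    return np.maximum(x, 0.0)

# Assumed tower: 64-d concatenated input -> widths 64 -> 32 -> 16.
widths, prev, layers = [64, 32, 16], 64, []
for w in widths:
    layers.append((rng.normal(scale=0.05, size=(prev, w)), np.zeros(w)))
    prev = w

def mlp_tower(p_u, q_i):
    """ReLU tower over the concatenated latent vectors [p_u ; q_i]."""
    h = np.concatenate([p_u, q_i])   # concatenation operation [.]
    for W, b in layers:
        h = relu(h @ W + b)          # a_L(w_L h + b_L) with a_L = ReLU
    return h                         # last hidden layer, fused with GMF later

out = mlp_tower(rng.normal(size=32), rng.normal(size=32))
```

    Each layer halves the width, giving the tower shape from [19]: wider layers near the input learn broad interactions, narrower layers distill them.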

    3.5 Ensemble Learning for Rating Prediction

    The two models described above, GMF and MLP, which capture user-item interactions by employing linear and non-linear kernels respectively, are ensembled by concatenating their last hidden layers.Following earlier studies [19,23,24], the ensemble formulation of the final output is defined as follows:

    where w and a are the weight and activation function of the output layer, respectively.Here, a sigmoid is applied as an activation function.
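    The fusion step above can be sketched as below. The widths of the GMF vector (8) and the MLP last hidden layer (16) are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def fuse(phi_gmf, phi_mlp, w):
    """Concatenate the last hidden layers of GMF and MLP, then apply the
    sigmoid output layer to obtain the final rating prediction."""
    return float(sigmoid(np.concatenate([phi_gmf, phi_mlp]) @ w))

# Toy hidden vectors from the two branches and output weights.
y = fuse(rng.normal(size=8), rng.normal(size=16),
         rng.normal(scale=0.1, size=24))
```

    Because the two branches are only joined at this final layer, each can keep its own embedding size and be pre-trained independently, which is exactly what the pre-training procedure in Section 4.2 exploits.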

    4 Experimental Setup

    4.1 Datasets

    The effectiveness of the proposed model was investigated using two benchmark datasets published by the GroupLens research group, MovieLens100K and MovieLens1M. They are real-world datasets collected through a movie recommender system with different data sparsity percentages. The MovieLens100K dataset contains 943 users and 100,000 ratings on 1,682 movies, whereas the MovieLens1M dataset contains 6,040 users and 1,000,209 rating records on 3,952 movies. In these datasets, each user has rated at least 20 movies on a scale of 1 to 5, with higher ratings indicating stronger preference. Some movies in the datasets have no ratings; these movies were removed and the discontinuous movie serial numbers were renumbered consecutively, producing rating records for 1,682 and 3,706 movies with data sparsities of 93.70% and 95.75% in MovieLens100K and MovieLens1M, respectively.

    The statistics of these datasets are illustrated in Tab.1. Because the datasets contain explicit user feedback, a strategy like that in [23,24] was adopted to obtain implicit feedback, which ignores the rating values and marks each observed rating as 1 and each missing rating as 0. Also, following the same experimental setup as in [19–24], we used 90% of the observed ratings to create the training set and the remaining 10% as the testing set. Further, 10% of the training set was used to create a validation set for all these datasets.
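    The 90/10 train/test split with a further 10% validation hold-out can be realized as below. The uniform random permutation is an assumption; the paper does not state how the held-out entries were drawn.

```python
import numpy as np

def split_ratings(ratings, seed=0):
    """90/10 train/test split of the observed ratings, then 10% of the
    training portion held out as a validation set."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(ratings))
    n_test = len(ratings) // 10
    test, train = idx[:n_test], idx[n_test:]
    n_val = len(train) // 10
    val, train = train[:n_val], train[n_val:]
    return train, val, test

# Toy example: 1,000 rating records by index.
train, val, test = split_ratings(list(range(1000)))
```

    With 1,000 records this yields 810 training, 90 validation, and 100 test entries, all disjoint.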

    Table 1:Statistics of movielens datasets

    4.2 Pre-Training Process

    Generally, the presence of a non-linear activation function in a neural network induces a non-convex optimization problem and can easily trap the model in local optima during the learning process, degrading the model's performance. Intuitively, the initialization of network parameters is therefore of paramount importance in improving the model's performance. For instance, when we trained our model with random initialization, it converged slowly with poor performance. This might be attributed to the co-adaptation effect of simultaneously training the three networks, AE, GMF, and MLP, involved in feature extraction and CF. Hence, in agreement with earlier studies [19,24], we hypothesize that the pre-training process can resolve the co-adaptation effect by initializing the network parameters close to the global optimal solution. We therefore adopted pre-training to initialize the network parameters of GMF and MLP. While pre-training these networks, the robust features learnt by Auto-I and Auto-U were employed. However, for the Auto-I and Auto-U networks, greedy layer-wise pre-training was performed with tied weights and random initialization because they are not affected by the co-adaptation effect. Further, the Adam optimizer was used in the pre-training of GMF and MLP. After pre-training, the proposed recommendation model was trained with a stochastic gradient descent optimizer for two reasons: first, to achieve better generalization; second, because the Adam update rule requires previous momentum, which was unavailable in our case because the network parameters of GMF and MLP were initialized through pre-training. A batch size of 256 and a learning rate of 0.001 were adopted, considering the convergence rate and the lower memory requirements for computing the gradients of the trainable parameters.

    Table 2:Hyperparameter of proposed model

    4.3 Implementation Details

    The proposed recommendation model was implemented as described in Section 3 using the Python and Keras deep learning libraries, with TensorFlow as the backend. The hyperparameters of the proposed model were set using a grid search on the training dataset with 5-fold cross validation. The search space and the obtained optimal values for the hyperparameters of the proposed model are presented in Tab.2.

    4.4 Evaluation Metrics

    To sufficiently assess the effectiveness of the proposed model for recommendation performance, we chose four of the most commonly used evaluation metrics: mean absolute error (MAE), root mean square error (RMSE), hit ratio (HR), and normalized discounted cumulative gain (NDCG). They are formulated as follows:

    The first two metrics, MAE and RMSE, were chosen to evaluate the rating-prediction effectiveness of the proposed model because they are the most widely used metrics for CF evaluation in past literature. Further, this work uses the HR and NDCG metrics to evaluate the performance of the proposed model in determining top-K ranked recommendation lists. HR indicates ranking accuracy by measuring whether the preferred items appear in a user's top-K recommendation list. Similarly, NDCG evaluates hit positions by assigning higher scores to hits at the top of the ranking.
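    HR@K and NDCG@K for a single held-out item per user, a common leave-one-out formulation assumed here to match the paper's usage, can be computed as follows.

```python
import numpy as np

def hr_at_k(ranked_items, held_out, k):
    """HR@K: 1 if the held-out item appears in the top-K list, else 0."""
    return int(held_out in ranked_items[:k])

def ndcg_at_k(ranked_items, held_out, k):
    """NDCG@K for one held-out item: a hit at rank position pos
    (0-indexed) contributes 1 / log2(pos + 2)."""
    for pos, item in enumerate(ranked_items[:k]):
        if item == held_out:
            return 1.0 / np.log2(pos + 2)
    return 0.0

# Toy ranked list produced by some model for one user.
ranked = [7, 3, 9, 1, 5]
```

    For example, item 9 sits at rank position 2, so it is a hit (HR@5 = 1) and its NDCG@5 contribution is 1/log2(4) = 0.5; averaging these per-user values over all test users gives the reported HR@K and NDCG@K.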

    5 Experimental Results and Discussion

    This section explains the two sets of analysis devised to validate the performance of the proposed recommendation model.In particular, these two sets of analysis were conducted to answer the following questions:

    a) Is the integration of an AE network effective in improving the performance of the proposed model?

    b) Does the proposed model perform better than the existing state-of-the-art CF models?

    c) Does the proposed model perform better than the existing state-of-the-art CF models with resistance to data sparsity in terms of recommendation accuracy?

    5.1 Ablation Analysis

    First, we conducted a series of ablation analysis experiments to validate the correctness of the proposed model design and to examine the contribution of different components in the proposed model for recommendation performance.To achieve this, we developed different variants of the proposed model, as detailed below:

    (a) GMF: This variant was created by removing the MLP network and fusion layer from the proposed model. Further, the SSAE networks were replaced with embedding layers to learn the user and item embeddings for CF. Thus, this variant receives the user and item latent features from the embedding layers to perform the element-wise product of the user and item latent features.

    (b) MLP: This variant was created by removing the GMF network and fusion layer from the proposed model. Further, the SSAE networks were replaced with embedding layers to learn the user and item embeddings for CF. Thus, this variant receives the user and item latent features from the embedding layers to learn the non-linear user-item relationships.

    (c) GMF++: This variant was created by removing the MLP network and fusion layer from the proposed model. Thus, this variant utilizes the user and item latent features extracted from the SSAE networks to perform the element-wise product of the user and item latent features.

    (d) MLP++: This variant was created by removing the GMF network and fusion layer from the proposed model. Thus, this variant utilizes the user and item latent features extracted from the SSAE networks to learn the non-linear user-item relationships.

    (e) I-Autorec: This variant was created by keeping only one SSAE network and removing all other components, namely the MLP and GMF networks. This variant uses the SSAE network to reconstruct item vectors and predict the missing ratings. It has been shown in the literature that I-Autorec outperforms user-based AE [27,28].

    (f) NeuMF: This variant was created by replacing the SSAE networks in the proposed model with embedding layers to learn the user and item embeddings for CF.

    Table 3:Ablation analysis on MovieLens1M for different variants of proposed model

    To maintain a fair comparison, the same network parameters and experimental setup were used to create the above variants and to conduct the ablation experiments. The results of the ablation experiments on the MovieLens1M dataset are presented in Tab.3. To improve interpretability, the results in Tab.3 are depicted in a comparison plot in Fig.4. The ablation results shown in Tab.3 and Fig.4 clearly indicate the contribution and importance of each component of the proposed model to its performance. In particular, the variant I-Autorec delivered the worst performance, with high RMSE and MAE, indicating its failure to capture the user-item interactions essential for improving recommendation performance. Next, observing the performance improvement of GMF++ and MLP++ over GMF and MLP respectively, it is clear that the AE contributes significantly to learning the robust user and item latent features required to enhance the CF process. Also, the variant NeuMF performed better than the variants GMF and MLP, in accordance with the findings reported in [24]. This illustrates the significant role of the fusion strategy in learning both linear and non-linear user-item interactions in CF. When comparing the performance of NeuMF with that of the proposed model, a significant performance improvement can be observed for the proposed model. This highlights the role of the AE network in ensuring overall improvement in recommendation performance. In sum, it is evident from these observations that all components of the proposed model play an essential role and contribute greatly to its success in improving recommendation performance.

    Figure 4:A plot comparing the performance of different ablations of proposed model

    5.2 Comparative Analysis with Related Works

    Two sets of experiments were conducted to compare the effectiveness of the proposed model against existing state-of-the-art models. To do this, we considered three CF models based on a fusion strategy, NeuMF, I-NMF, and DRNet, and two state-of-the-art non-fusion CF models, eALS and BPR. The main reason for this is that, to our knowledge, there are few works in the literature that leverage the benefits of fusion strategies to learn user-item interactions, as we do in our work. Further, we used the results reported in those papers to perform a fair comparison. If the authors did not conduct a particular type of experiment and the results were not available, this was indicated with "NA." To demonstrate the effectiveness of the proposed model, the comparison was conducted on two benchmark datasets, MovieLens100K and MovieLens1M, from two different perspectives, as follows:

    5.3 Top-K Recommendations

    In most real-world applications, the goal of a recommendation system is to generate a personalized top-K ranked list of items for the target user based on rating predictions. We therefore conducted an experiment to compare the efficiency of the proposed model with existing state-of-the-art models for top-K recommendations. The experiment analyzes all chosen models for three different values of N, namely {5, 10, 15}. Tabs.4 and 5 present the top-K recommendation results of all methods in terms of HR@K and NDCG@K for the different N values on the two benchmark datasets, MovieLens100K and MovieLens1M, respectively. Comparison plots of the results shown in Tabs.4 and 5 are provided in Figs.5 and 6, respectively.

    Table 4:Performance comparison of proposed model against related works for Top-K recommendations on MovieLens100K

    Table 5:Performance comparison of proposed model against related works for Top-K recommendations on MovieLens1M

    Figure 5:Performance comparison plot of top-K recommendations for proposed model against related works on MovieLens100K

    The results in Tabs.4 and 5 show that the performance of all models improves with an increase in the value of N in terms of both the HR and NDCG metrics. Also, the comparison results indicate that the fusion models, namely NeuMF, DRNet, and the proposed model (with the exception of I-NMF), outperform the non-fusion models, namely eALS and BPR, in terms of both metrics on both datasets. These results indicate that the fusion models are effective at modeling user-item interactions by leveraging the benefits of neural networks. Among the fusion models, the proposed model performs best in terms of both metrics on both datasets. This demonstrates that the integration of an AE model within the NeuMF framework helps with learning more robust user and item features, enabling significant improvement in the CF process. In particular, on the MovieLens100K dataset, the proposed model achieved an average of 2.7% and 4.6% relative improvement compared to the NeuMF model in terms of HR and NDCG, respectively. Similarly, on the MovieLens1M dataset, the proposed model achieved an average of 2.6% and 2.3% relative improvement compared to the NeuMF model in terms of HR and NDCG, respectively. Overall, these observations show that the proposed model performs better than the existing models in improving the performance of top-K recommendations in terms of HR and NDCG.

    Figure 6: Performance comparison plot of top-K recommendations for the proposed model against related works on MovieLens1M

    5.4 Data Sparsity and Cold-Start Problem

    In real-world scenarios, the user-item rating matrix is exceedingly sparse, giving rise to a data sparsity problem that greatly affects the performance of recommendation systems. To investigate the efficacy of the proposed model in mitigating this problem, a comparative analysis was conducted by arbitrarily removing entries from the training set to create sparse matrices with four different densities. For example, a sparse user-item matrix with 20% data density was created by using 20% of the user-item entries as the training set and the remaining 80% as the testing set. Likewise, sparse user-item matrices with densities ranging from 20% to 80% were created. Here, the higher the data density (i.e., the training set proportion), the lower the data sparsity, and vice versa.
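The density-controlled split described above can be sketched as a random partition of the observed user-item interactions. The helper below is a hypothetical illustration of this protocol, not the paper's actual implementation:

```python
import numpy as np

def split_by_density(interactions, density, seed=0):
    """Randomly keep a `density` fraction of the observed (user, item)
    entries as the training set; the remainder becomes the test set."""
    rng = np.random.default_rng(seed)
    interactions = np.asarray(interactions)
    idx = rng.permutation(len(interactions))
    cut = int(len(interactions) * density)
    return interactions[idx[:cut]], interactions[idx[cut:]]

# Example: 10 observed (user, item) pairs at 20% training density
pairs = [(u, i) for u in range(5) for i in range(2)]
train, test = split_by_density(pairs, 0.20)
print(len(train), len(test))  # 2 8
```

Repeating this split with density values of 0.2, 0.4, 0.6, and 0.8 produces the four sparsity settings analyzed below.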

    Tab. 6 records the recommendation accuracy of the proposed model compared with other existing state-of-the-art CF models in terms of HR@10 at different data sparsity levels on the two benchmark datasets. A plot of the same data is presented in Fig. 7 for improved interpretability. A horizontal comparison of the results across the different data sparsity levels on both datasets indicates that recommendation accuracy decreases as data sparsity increases, regardless of the recommendation model used. This confirms that data sparsity impacts the accuracy of all recommendation models. A vertical comparison of the results across the different models on both datasets indicates that the fusion models, such as NeuMF, DRNet, and the proposed model, perform better than the non-fusion models, namely eALS and BPR. Another notable observation is that the proposed model shows a significant improvement over DRNet and NeuMF when the data sparsity level is higher than 60%. This might be because the AE enhances the CF process of NeuMF by learning more robust user and item features from the given training set. The comparison results at different data sparsity levels show that the CF process can achieve higher recommendation accuracy with resistance to data sparsity when an AE is used to learn robust user and item latent features.

    Table 6: Performance comparison against related works at different levels of data sparsity

    Figure 7: Performance comparison plot of the proposed model against related works for different percentages of data sparsity on MovieLens100K (left) and MovieLens1M (right)

    6 Conclusion

    This paper proposed a hybrid deep-learning-enabled AE architecture for an NCF-based recommendation model and is the first to integrate the potency of an AE into the NCF framework. The advantage gained through this integration is that the AE allows rating sparsity and cold-start problems to be overcome by separately extracting the primary latent features of users and items from the rating matrix. The extracted latent features are then employed to enhance the learning capability of the GMF and MLP networks, which effectively model the linear and non-linear user-item interactions, respectively. Finally, the proposed model combines the strengths of GMF and MLP through ensemble learning to improve recommendation accuracy. In addition, the proposed model is designed to achieve this accuracy based solely on implicit rating data. Ablation experiments confirmed that the design decision to integrate the AE within the NCF framework is beneficial for improving the recommendation performance of the proposed model. The comparative analyses conducted on two established datasets, MovieLens100K and MovieLens1M, also showed that, compared to the related NCF-based recommendation models, the proposed model is effective in improving the accuracy of top-N recommendations with resistance to rating sparsity and cold-start problems.

    The proposed architecture is limited to utilizing only implicit user feedback on items, and it is not designed to perform online recommendation. Along these lines, future work can take three different directions. First, it can investigate how different types of AEs influence the gain in recommendation accuracy. Second, it can investigate how the inclusion of auxiliary information can be utilized to further improve recommendation accuracy. Third, it can restructure the proposed model to support online recommendation, which is of paramount importance in modern e-commerce applications.

    Funding Statement: This project was supported by the Deanship of Scientific Research at Prince Sattam Bin Abdulaziz University, Alkharj, Saudi Arabia, through Research Proposal No. 2020/01/17215. The author also thanks the Deanship of the College of Computer Engineering and Sciences for the technical support provided to complete the project successfully.

    Conflicts of Interest: The author declares no conflicts of interest regarding the present study.
