Abstract
A significant challenge in recommender systems research is providing robust explanations of why a particular option is suggested. Such explanations may draw on diverse data types concerning the users and items under consideration. Accordingly, this paper introduces a novel framework for automatically building explanations in recommender systems. The proposed solution follows a hybrid approach that meaningfully integrates collaborative filtering and sentiment analysis features into classical multi-attribute ranking. A comprehensive evaluation of the proposed solution advocates the exploitation of additional and diverse information in explanation building, since this better fulfils a series of recommendation-related aims such as transparency, persuasiveness, effectiveness and satisfaction.