Diversity in recommender systems – A survey
Uploaded by: Alt-Tab
Upload date: 2019-10-17 10:41:55

Comments:

### Short summary

##### Plan
- procedure for article selection
- recommender systems overview
- the review itself: diversity measures; impact of diversification on recommendation; diversification methods
- conclusion and perspectives

##### Procedure for article selection
- search on Google Scholar with keyword selection
- elimination of duplicates
- selection of articles available without additional payment
- clustering into three groups of articles (matching the review plan)

##### RS overview (standard, aimed at people new to the field)
- dates back to Salton and McGill, 1980 (ref 1)
- usual standard techniques: word vectors, decision trees, Naïve Bayes, kNN, SVM
- applications: digital TV, web multimedia (YouTube, Shelfari (now merged into Goodreads), Facebook, Goodreads), personalized ads, online shopping
- the general recommendation process: collection of past user activity; building a user model; presenting recommendations; feedback collection (distinguishing explicit and implicit feedback)
- important challenges: data sparsity (working with "mostly empty user-item datasets"); cold start (new users or items in the dataset); overfitting (actually, rather in the sense of overspecialization)

##### Diversification
- Table 1 summarizes diversity measures:
  - Bradley-Smyth 2001: average dissimilarity between all pairs of items (see the intra-list diversity sketch after this summary)
  - Fleder-Hosanagar 2007: Gini coefficient; explore with a model how diversity evolves through recommendation cycles (see the Gini sketch below)
  - Clarke et al. 2008: combined measure (ambiguity, redundancy, novelty...)
  - Vargas et al. 2011: intra-list diversity
  - Hu-Pu 2011: perceived diversity (questionnaire)
  - Vargas et al. 2012: in the line of Clarke et al. 2008
  - Castagnos et al. 2013: in the line of Bradley-Smyth 2001; run experiments with users
  - L'Huillier et al. 2014: idem
  - Vargas et al. 2014: binomial diversity (mixing coverage and redundancy)
- Table 2 summarizes how diversity affects recommendation:
  - usual accuracy metrics: F-measure, MAE, NMAE (see the MAE/NMAE sketch below)
  - some articles show that diversification by reranking is possible without hurting accuracy too much (e.g. Adomavicius and Kwon, 51)
  - some address the trade-off between diversity and accuracy (52: Hurley-Zhang 2011; 55: Aytekin-Karakaya 2014; 56: Ekstrand et al. 2014; 57: Javari-Jalili 2014)
  - the problem is also framed as multi-objective, looking for Pareto-efficient rankings (58: Ribeiro et al. 2015; see the Pareto-front sketch below)
- Table 3 summarizes diversification algorithms:
  - many methods rerank the output of an accuracy-based ranking (59: Ziegler et al. 2005; 51 and 61: Adomavicius-Kwon 2012, 2011; 62: Premchaiswadi et al. 2013); see the greedy reranking sketch below
  - then various strategies, depending on the method, the type of data, and whether the authors consider temporal aspects
  - the underlying idea is that the original algorithm (typically collaborative filtering) already produces diverse candidates, which just need reordering

##### Conclusions
- no consensus on a diversity metric
- increasing diversity does not necessarily mean sacrificing accuracy
- various challenges: not enough live user studies; work from psychology would be useful; how to handle systems with many different types of items; how to diversify during the recommendation process (and not a posteriori)
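
The Bradley-Smyth-style measure in Table 1 is easy to make concrete. A minimal sketch, assuming items are represented as feature vectors and that 1 - cosine similarity stands in for the item-item dissimilarity (both are assumptions of this illustration, not the survey's notation):

```python
import numpy as np

def intra_list_diversity(item_vectors):
    """Average pairwise dissimilarity over one recommendation list.

    Sketch of a Bradley & Smyth (2001)-style measure; the cosine-based
    dissimilarity is an assumption, any item-item distance would do.
    """
    V = np.asarray(item_vectors, dtype=float)
    n = len(V)
    if n < 2:
        return 0.0
    # Normalise rows so that V @ V.T gives cosine similarities.
    V = V / np.linalg.norm(V, axis=1, keepdims=True)
    dissim = 1.0 - V @ V.T
    # Mean over all pairs i != j (same value as over unordered pairs).
    return float(dissim[~np.eye(n, dtype=bool)].mean())
```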
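
For the Fleder-Hosanagar line of work, diversity is read off how much recommendations concentrate on a few items. A hedged sketch of a Gini coefficient over per-item recommendation counts (feeding it counts rather than sales is my assumption):

```python
import numpy as np

def gini(recommendation_counts):
    """Gini coefficient of how often each catalogue item gets recommended.

    0 means every item is recommended equally often; values near 1 mean
    recommendations concentrate on very few items.
    """
    x = np.sort(np.asarray(recommendation_counts, dtype=float))
    n = len(x)
    ranks = np.arange(1, n + 1)
    # Standard closed form with items sorted ascending and 1-based ranks.
    return float((2 * np.sum(ranks * x)) / (n * x.sum()) - (n + 1) / n)
```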
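
The accuracy metrics listed with Table 2 (MAE, NMAE) fit in a few lines; the 1-5 rating range used for normalisation is an assumption for illustration:

```python
import numpy as np

def mae(true_ratings, predicted_ratings):
    """Mean absolute error between observed and predicted ratings."""
    t = np.asarray(true_ratings, dtype=float)
    p = np.asarray(predicted_ratings, dtype=float)
    return float(np.abs(t - p).mean())

def nmae(true_ratings, predicted_ratings, r_min=1.0, r_max=5.0):
    """MAE normalised by the rating range so scores compare across scales."""
    return mae(true_ratings, predicted_ratings) / (r_max - r_min)
```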
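
Most of the algorithms grouped in Table 3 rerank an accuracy-ordered candidate list. Below is a minimal MMR-style greedy sketch of that idea, not the exact procedure of any cited paper; `relevance`, `dissimilarity` and the trade-off parameter `lam` are names I am assuming for illustration:

```python
def greedy_rerank(candidates, relevance, dissimilarity, k, lam=0.5):
    """Greedily pick k items, trading predicted relevance against diversity.

    relevance: dict mapping item -> predicted score (e.g. from a CF model).
    dissimilarity: callable (item_a, item_b) -> float in [0, 1].
    """
    selected = []
    remaining = list(candidates)
    while remaining and len(selected) < k:
        def score(item):
            rel = relevance[item]
            if not selected:
                return rel
            # Average dissimilarity to the items already picked.
            div = sum(dissimilarity(item, s) for s in selected) / len(selected)
            return lam * rel + (1.0 - lam) * div
        best = max(remaining, key=score)
        selected.append(best)
        remaining.remove(best)
    return selected
```

With `lam = 1.0` this reduces to the original accuracy ranking, which matches the summary's point that the base recommender is treated as already diverse and only needs reordering.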
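
For the multi-objective framing (Ribeiro et al. 2015), here is a small sketch of a Pareto-efficiency filter over (accuracy, diversity) pairs, assuming both objectives are to be maximised:

```python
def pareto_front(points):
    """Keep the Pareto-efficient (accuracy, diversity) pairs.

    A pair is kept unless some other pair is at least as good on both
    objectives and strictly better on one.
    """
    front = []
    for i, (a_i, d_i) in enumerate(points):
        dominated = any(
            a_j >= a_i and d_j >= d_i and (a_j > a_i or d_j > d_i)
            for j, (a_j, d_j) in enumerate(points)
            if j != i
        )
        if not dominated:
            front.append((a_i, d_i))
    return front
```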
