Diversity – Users tend to be more satisfied with recommendations when there is higher intra-list diversity, i.e. when the list contains items from, for example, different artists.
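
One common way to quantify intra-list diversity is the average pairwise dissimilarity of the items in a single recommendation list. The Python sketch below is a minimal illustration; the one-hot genre vectors and the cosine-based distance are assumptions chosen for the example, not the only definitions used in practice.

```python
from itertools import combinations
import numpy as np

def intra_list_diversity(item_vectors):
    """Mean pairwise (1 - cosine similarity) over one recommendation list."""
    def cosine(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
    pairs = list(combinations(item_vectors, 2))
    if not pairs:  # lists of length 0 or 1 have no pairs to compare
        return 0.0
    return sum(1.0 - cosine(a, b) for a, b in pairs) / len(pairs)

# Toy one-hot genre vectors for four recommended tracks (hypothetical data):
tracks = [np.array(v, dtype=float)
          for v in ([1, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1])]
print(intra_list_diversity(tracks))  # ~0.83; higher means a more varied list
```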

Recommender Persistence – In some situations it is more effective to re-show recommendations, or to let users re-rate items, than to show new items. There are several reasons for this. For instance, users may ignore items when they are shown for the first time because they had no time to inspect the recommendations carefully.

Privacy – Recommender systems usually have to deal with privacy concerns because users must reveal sensitive information. Building user profiles using collaborative filtering can be problematic from a privacy point of view. Many European countries have a strong culture of data privacy, and every attempt to introduce any level of user profiling can result in a negative customer response. A number of privacy issues arose around the datasets offered by Netflix for the Netflix Prize competition. Although the datasets were anonymized in order to preserve customer privacy, in 2007 two researchers from the University of Texas were able to identify individual users by matching the datasets with film ratings on the Internet Movie Database. As a result, in December 2009 an anonymous Netflix user sued Netflix in Doe v. Netflix, alleging that Netflix had violated U.S. fair trade laws and the Video Privacy Protection Act by releasing the datasets. This led in part to the cancellation of a second Netflix Prize competition in 2010. Much research has been conducted on ongoing privacy issues in this space. Ramakrishnan et al. have conducted an extensive overview of the trade-offs between personalization and privacy, finding that the combination of weak ties (unexpected connections that provide serendipitous recommendations) and other data sources can be used to uncover the identities of users in an anonymized dataset.
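
To make the linkage mechanism concrete, here is a deliberately simplified sketch of the idea: match an anonymized rating record against public profiles by counting overlapping (item, rating) pairs. All names, data, and the overlap threshold are hypothetical; the actual attack (by Narayanan and Shmatikov) used a far more robust statistical score that also exploited rating dates and the rarity of items.

```python
def best_match(anon_record, public_profiles, min_overlap=3):
    """anon_record: set of (item, rating) pairs from the anonymized data.
    public_profiles: dict mapping a real name to a set of (item, rating)."""
    scores = {name: len(anon_record & profile)
              for name, profile in public_profiles.items()}
    name, score = max(scores.items(), key=lambda kv: kv[1])
    # Only claim a match if enough pairs overlap; the threshold is arbitrary.
    return name if score >= min_overlap else None

anon = {("MovieA", 5), ("MovieB", 1), ("MovieC", 4), ("MovieD", 2)}
public = {
    "alice": {("MovieA", 5), ("MovieB", 1), ("MovieC", 4)},
    "bob":   {("MovieA", 3), ("MovieE", 5)},
}
print(best_match(anon, public))  # -> 'alice'
```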

User Demographics – Beel et al. found that user demographics may influence how satisfied users are with recommendations. In their paper, they show that older users tend to be more interested in recommendations than younger users.

Robustness – When users can participate in the recommender system, the issue of fraud must be addressed: attackers may inject fake profiles (so-called shilling attacks) to push their own items or demote those of competitors.
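
As a toy illustration of such a shilling attack, the sketch below (with entirely made-up ratings) shows how a handful of injected profiles can inflate an item's average rating, and thereby mislead any naive average-based recommendation logic.

```python
ratings = {  # user -> {item: rating}; entirely made-up data
    "u1": {"target": 2, "other": 5},
    "u2": {"target": 1, "other": 4},
}

def item_mean(ratings, item):
    vals = [r[item] for r in ratings.values() if item in r]
    return sum(vals) / len(vals)

print(item_mean(ratings, "target"))   # honest mean: 1.5

# Inject ten fake profiles that all give the target item top marks.
for i in range(10):
    ratings[f"shill{i}"] = {"target": 5}

print(item_mean(ratings, "target"))   # inflated mean: ~4.42
```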

Serendipity – Serendipity is a measure of “how surprising the recommendations are”. For instance, a recommender system that recommends milk to a customer in a grocery store might be perfectly accurate, but it is still not a good recommendation, because milk is an obvious item for the customer to buy anyway.
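
One common way to operationalize serendipity is to count only recommendations that are both relevant to the user and unexpected, where “expected” means an obvious baseline (such as a most-popular list) would have produced them anyway. The sketch below uses hypothetical sets and echoes the milk example: milk may be accurate, but it contributes nothing to serendipity.

```python
def serendipity(recommended, baseline, relevant):
    """Share of recommended items that are relevant AND not in the baseline."""
    if not recommended:
        return 0.0
    hits = [item for item in recommended
            if item not in baseline and item in relevant]
    return len(hits) / len(recommended)

recommended = ["milk", "sichuan_pepper", "oat_flour"]
baseline = {"milk", "bread", "eggs"}    # obvious items a most-popular
                                        # baseline would suggest anyway
relevant = {"milk", "sichuan_pepper"}   # items this user actually liked
print(serendipity(recommended, baseline, relevant))  # -> 0.33...
```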

Trust – A recommender system is of little value to a user if the user does not trust the system. A system can build trust by explaining how it generates its recommendations and why it recommends a particular item.
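
As a minimal illustration of such an explanation, the sketch below justifies a recommendation by naming the liked item most similar to it. The items and similarity scores are invented for the example; a real system would derive them from its underlying model.

```python
similar = {  # (recommended_item, liked_item) -> similarity score (invented)
    ("Blade Runner", "Alien"): 0.82,
    ("Blade Runner", "Amelie"): 0.11,
}

def explain(recommendation, liked_items):
    """Point at the liked item that is most similar to the recommended one."""
    best = max(liked_items,
               key=lambda liked: similar.get((recommendation, liked), 0.0))
    score = similar.get((recommendation, best), 0.0)
    return (f"We recommend '{recommendation}' because you liked "
            f"'{best}' (similarity {score:.2f}).")

print(explain("Blade Runner", ["Alien", "Amelie"]))
# -> We recommend 'Blade Runner' because you liked 'Alien' (similarity 0.82).
```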

Labelling – User satisfaction with recommendations may be influenced by how the recommendations are labelled. For instance, in one study the click-through rate (CTR) for recommendations labelled as “Sponsored” was lower (CTR = 5.93%) than the CTR for identical recommendations labelled as “Organic” (CTR = 8.86%). Interestingly, recommendations with no label at all performed best in that study (CTR = 9.87%).

From: Wikipedia
