Most recent ten days

New comment — Wednesday 22 May - 15:36
Interesting paper around the question: "Can machine learning be used to cheaply detect and exploit structure in practically relevant instances of NP-hard problems that come from the same distribution?". The problem of listing all maximum cliques in a graph is considered. A binary classifier is trained to discriminate between nodes that belong to a maximum clique and nodes that do not: a logistic regression is trained on a set of graphs, using node features such as the total number of edges, the node degree and the clustering coefficient. The classifier is then used to prune the search space of a test graph: nodes labeled "not belonging to a maximum clique" by the classifier are removed. Some speedup is observed when comparing the running time of state-of-the-art methods on the pruned graph with their running time on the original graph (or on a graph pruned with a baseline method, degree pruning).

# A more challenging baseline pruning

For the baseline pruning, a heuristic is used to find a large clique; if the found clique has size $k$, all nodes of degree $k-2$ or less are removed. It would have been better to iteratively remove all nodes of degree $k-2$ or less until no such node remains, thus keeping only the $(k-1)$-core of the graph (it seems that only nodes of degree $k-2$ or less in the original graph are removed); a minimal sketch is given at the end of this comment. That would lead to much smaller graphs and a more challenging baseline pruning. How good is this $k$-core pruning? In Table 1, the speedup obtained with degree pruning is not reported for igraph and EmMCE. Is the speedup obtained by the proposed pruning much better than the one obtained by degree pruning for these two programs?

# Setting the confidence threshold

The confidence threshold used for the logistic regression classifier is arbitrarily fixed to $q=0.55$. If $q$ is higher, the list of maximum cliques computed on the pruned graph may not be correct (i.e. it may differ from the one of the original graph, since nodes belonging to a maximum clique would be pruned). It is not clear to me why all results in Table 1 are exact: just luck? How about using $q=0.6$ or $0.7$? Parameter sensitivity is not discussed. Note that if there is a single maximum clique of size $k$ and many cliques of size $k-1$, then removing one node of that maximum clique completely changes the output (which may then be many cliques of size $k-1$ instead of a single clique of size $k$).
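As a side note, here is a minimal sketch of the iterative pruning suggested above, i.e. keeping only the $(k-1)$-core once a clique of size $k$ has been found by the heuristic. It assumes networkx (the paper does not prescribe any library); `networkx.k_core(G, k - 1)` computes the same subgraph directly.

```python
import networkx as nx

def k_minus_one_core(G, k):
    """Iteratively remove nodes of degree <= k-2, keeping only the (k-1)-core.

    Every node of a clique of size >= k has degree >= k-1 inside the clique,
    and removing other nodes never lowers those degrees, so this pruning is
    safe once a clique of size k has been found.
    """
    H = G.copy()
    while True:
        to_drop = [v for v, d in H.degree() if d <= k - 2]
        if not to_drop:
            return H
        H.remove_nodes_from(to_drop)

# hypothetical usage: a heuristic has found a clique of size k = 4
G = nx.gnm_random_graph(1000, 3000, seed=0)
pruned = k_minus_one_core(G, 4)   # equivalent to nx.k_core(G, 3)
```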
Uploaded by : Maximimi
New comment — Sunday 19 May - 15:40
Very good article from the GroupLens team about the diversity-narrowing effect and the role of the recommender system. Experimental measurements on the MovieLens platform; the content of movies is described with the tag-genome; the RS is an item-item collaborative filtering.

### Introduction

##### Two research questions:
- Do recommender systems expose users to narrower content over time?
- How does the experience of users who take recommendations differ from that of users who do not regularly take recommendations?

##### Method
- users in two categories (following / ignoring ~ control group)
- then a sort of A/B testing to measure consumption diversity and enjoyment at the individual level

##### 4 contributions:
- a method to analyze the effects of a RS on users
- quantitative evidence that users who use the RS have a better experience than the others
- show that the diversity-reduction effect is small
- show that users who use the RS tend to consume more diverse content

### Related work
- Pariser (11) on the filter bubble
- Tetlock (16) measures the bias induced by bubbles (among economists)
- Sunstein (15) thinks that personalization tends to reduce the space of shared experience among users
- Negroponte (MIT Media Lab co-founder) argues that algorithms may also open horizons
- Linden (Amazon RS contributor, 5) thinks that recommendations can generate some form of serendipity, and opposes Pariser's thesis
- Fleder et al. (2) use simulations to show that RS tend to homogenize experience / Hosanagar et al. (3) use measurements, but these studies have limitations, e.g. a simplistic model of human behavior (Fleder)

### Data & metrics

##### Dataset: MovieLens (September 2013)
- 220,000 users
- 20M ratings, 20,000 movies
- the RS (item-item collaborative filtering, similar to Amazon's) proposes "top picks" to each user (15 by default)

##### Tag-genome to describe movie content
- an information space that describes movies with tags given by users
- 9,500 movies in the tag-genome (April 2013) and 1,100 tags => 10-11M (movie, tag) pairs

##### Time period
- 21 months, from Feb 2008 to Aug 2010 (because there is less missing data)

##### Preprocessing
- the first 15 ratings are taken out (platform propositions)
- then the first 3 months are taken out (as many ratings are "catching up", and to give users time to get used to MovieLens and MovieLens time to get information about the user)

##### Recommendation-block definition
- several possibilities: per login session, per period of time, but users do not all have the same activity
- => blocks of 10 consecutive ratings; 10 is roughly the median number of ratings per 3 months

##### Users and items
- only users whose first rating falls in the period of interest and who have at least 3 rating blocks => 1,400 users with 3 to 200 rating blocks
- 173,000 ratings on 10,500 movies, all in the tag-genome (small inconsistency: there are only 9,500 movies in the tag-genome)

##### Identifying consumed recommendations in a rating block
- groups based on the number of recommendations followed by users
- criterion for a recommendation to be considered followed: the movie appears in the list of top picks between 3 hours and 3 months before the rating

##### Ignoring group vs. following group
- 2 groups of users
- problem: some users take a lot of recommendations during one period and then very few
- => rank users by the proportion of rating blocks during which they followed a recommendation (Fig. 4)
- then if > 50%: following group (286 users); if 0%: ignoring group (430 users) (seems quite ad hoc, but makes sense as the differences are not very large and following recommendations must be quite rare)

##### Measuring content diversity: tag-genome (see Figure 5)
- movie-tag matrix with relevance scores from 1 to 5
- a movie is then a vector of scores
- the distance between two movies is the Euclidean distance in this space (rather than the cosine similarity, because of the matrix density)
- orders of magnitude: min distance 5.1 (Halloween 4 - Halloween 5), max distance 44.2 (Matrix - Paris Was a Woman), mean 23.4
- the authors argue that the tag-genome is very expressive (more than cosine similarity) and also benefits from the continuous input from users (illustrated on examples)

##### Measuring the effect of recommender systems

###### Content diversity metrics (standard)
- average pairwise distance of the movies in a list
- maximum pairwise distance of the movies in a list
- computed on recommended movies (the 15 top picks)
- computed on rated movies
- more or less normally distributed (see Fig. 6)
- (both metrics are illustrated in the sketch at the end of this section)

###### User experience
- average rating per user
- more or less normally distributed

###### Analysis
- mean shift for users in both groups
- measure the difference between groups and within a group at different times
- within a group: standard t-test (as the samples have the same size)
- between groups: Welch's t-test (as the samples have different sizes)
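For concreteness, here is a minimal sketch of the two diversity metrics and of the Welch t-test used for the between-group comparison, assuming NumPy/SciPy; the tag-genome matrix and per-user values below are made up for illustration, they are not the paper's data.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import ttest_ind

def content_diversity(tag_genome_vectors):
    """Diversity of a list of movies given as tag-genome relevance vectors.

    Returns the two metrics used in the paper: the average and the maximum
    pairwise Euclidean distance between the movies of the list.
    """
    d = pdist(np.asarray(tag_genome_vectors, dtype=float), metric="euclidean")
    return d.mean(), d.max()

# hypothetical usage on random "tag-genome" vectors (1,100 tags per movie)
rng = np.random.default_rng(0)
top_picks = rng.uniform(1, 5, size=(15, 1100))   # 15 recommended movies
mean_div, max_div = content_diversity(top_picks)

# between-group comparison with Welch's t-test (unequal sample sizes),
# here on made-up per-user diversity values
following = rng.normal(23, 3, size=286)
ignoring = rng.normal(24, 3, size=430)
t_stat, p_value = ttest_ind(following, ignoring, equal_var=False)
```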
### Results

##### Research question 1: do RS expose users to narrower content over time?
- diversity of the recommended movies: see Tab. 2
- statistically significant drop for the following group
- significant drop (?) for the ignoring group
- the following group is more diverse than the ignoring group (significant), but the gap narrows over time

##### Research question 2: does the experience of users who take recommendations differ from that of users who do not regularly take recommendations?
- diversity of the rated movies using the mean distance: see Tab. 3
- beginning: no significant difference
- end: significant drop for both groups
- diversity of the rated movies using the max distance: see Tab. 4
- same trend
- enjoyment evaluated with the ratings between the first and last block for both groups: see Tab. 5
- the following group gives higher ratings, and the rating drop is smaller for the following group than for the ignoring group
- similar trend with the rating means (see Tab. 7)
- refined analysis in Tab. 6 with ratings split by whether the rated movie was recommended or not => better experience if the movie was recommended (complement)
- rank together all blocks of all groups at all times by increasing average rating
- => each block is located in this ranking by its percentile (e.g. first block, ignoring group ~ 63rd percentile)
- in Tab. 8: percentile drop for both groups => the percentile drop is large for the ignoring group (-19) but not for the following group (-1)

### Discussion

##### Summary
- diversity tends to narrow over time in any case
- the effect is subdued for the group that follows recommendations
- users following recommendations get more diverse recommendations
- ratings seem to encourage the RS to broaden recommendations

##### Prospects
- is there a natural trend towards narrower consumption diversity?
- the authors think that item-item CF slows this effect down
- the RS could also inform users about their consumption diversity
- finally, a RS can be designed to intentionally enforce diversity

##### Limitations
- restriction to the top picks
- restriction to one dataset and one system (item-item CF)
- (note: on the current platform (May 2019), there are 4 different RS: peasant/bard/warrior/wizard; the one described in the article seems to be "warrior", the default RS)
Uploaded by : Alt-Tab
New comment — Thursday 16 May - 13:25
Very well written paper proposing an interesting algorithm to solve (lossless and lossy) graph summarization problems (cf. the problem definition). The algorithm is parallelizable in a shared-memory environment and can be distributed using MapReduce. The theoretical analysis is sound and shows that the algorithm has a linear time complexity in practical scenarios. The experimental evaluation shows that the algorithm achieves a better trade-off between time and quality than existing approaches. In particular, the algorithm is faster than the greedy heuristic while leading to similar results. The proposed algorithm can be seen as a scalable implementation of the greedy heuristic: shingle hashing of the nodes makes it possible to prune the search space and consider only some relevant pairs of nodes, instead of considering all pairs of nodes (or all pairs of nodes sharing at least one neighbor); a rough sketch of the idea is given at the end of this comment.

### No theoretical approximation guarantees:

The graph summarization problem is well defined and has an optimal solution. The proposed algorithm does not seem to have any theoretical approximation guarantee. According to the experimental evaluation, the quality of the output is similar to that of the (straightforward) greedy heuristic, but we do not know how far from optimality it is. Is there any algorithm for the problem with theoretical approximation guarantees?

### Absolute performance:

While the performance relative to other algorithms is good, the absolute performance of the algorithm is somewhat disappointing. In Figure 3, we see that the size of the output graph is larger than 40% of the size of the input graph (in terms of (super)edges) for all graphs (and in many cases larger than 70%), except for the two web graphs (it is known that web graphs can be compressed efficiently, cf. the WebGraph framework) and, surprisingly, a protein interaction graph (only 10% of the edges are kept).
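To make the pruning idea concrete, here is a rough sketch of shingle (min-hash) bucketing of nodes by neighborhood, which is the general technique such pruning is based on; this is not the paper's exact procedure, and the function names and toy graph below are made up.

```python
import random

def shingle(neighbors, perm):
    """Min-hash 'shingle' of a neighborhood: the lowest-ranked neighbor under
    a random permutation of the nodes. Nodes with heavily overlapping
    neighborhoods are likely to receive the same shingle."""
    return min(perm[u] for u in neighbors) if neighbors else -1

# toy adjacency lists (made up)
adj = {0: {1, 2, 3}, 1: {0, 2, 3}, 2: {0, 1, 3}, 3: {0, 1, 2}, 4: {5}, 5: {4}}

nodes = list(adj)
order = random.sample(nodes, len(nodes))        # random permutation of nodes
perm = {v: rank for rank, v in enumerate(order)}

buckets = {}
for v in nodes:
    buckets.setdefault(shingle(adj[v], perm), []).append(v)

# only node pairs inside the same bucket are considered as merge candidates,
# instead of all O(n^2) pairs
print(buckets)
```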
Uploaded by : Stranger

...