When Hashes Met Wedges: A Distributed Algorithm for Finding High Similarity Vectors
Authors: Aneesh Sharma
Liked by: Maximimi
Domains: Graph mining
Tags: WWW2017
Uploaded by: Open Reading Group
Upload date: 2018-01-12 15:18:52

Comments:

## About scalability

> "We apply a standard cleaning procedure (for similarity) and remove high out-degrees. In other words, if some vertex v has more than 10K followers (outdegree > 10K), we remove all these edges. (We do not remove the vertex, but rather only its out-edges.) Intuitively, the fact that two vertices are followed by v is not a useful signal for similarity. In flock and friendster, such vertices are typically spammers and should be ignored. For webgraphs, a page linking to more than 10K other pages is probably not useful for similarity measurement."

- Typo: this should read "more than 10K follow**ees** (outdegree > 10K)", not "followers".
- Such a cleaning procedure makes sense. In addition, it has a valuable side effect: in practice, after the cleaning, many pairs of nodes with a very small non-zero similarity (in the original directed graph) end up with a similarity of zero (in the new graph where the out-edges of the out-hubs have been removed).
- Given that such a cleaning procedure is applied, a brute-force approach (computing the similarity of every pair of nodes sharing at least one in-neighbor) is scalable to a large extent; a minimal sketch of this brute-force pass is given at the end of this comment.
- The following C code and experiments support the previous claim: https://github.com/maxdan94/cosineSparseMatrix
- If scalability is still an issue, the in-edges of nodes with a very high in-degree could also be removed.

> "In our example, this turns out to be more than 100 trillion triples. This is infeasible even for a large industrial-strength cluster."

Listing 100 trillion triples seems doable with a medium-strength cluster.

> "If a matrix A has a 100 billion non-zeroes, it takes upwards of 1TB just to store the entries. This is more than an order of magnitude of the storage of a commodity machine in a cluster. Any approach of partitioning A into submatrices cannot scale."

Indeed, memory seems to be a problem. However, it is unclear why an approach partitioning A into submatrices cannot scale.

## Dataset

> "The dataset friendster is a social network, and is available from the Stanford Large Network Dataset Collection [28]."

It is unclear which friendster dataset is used. In SNAP, http://snap.stanford.edu/data/com-Friendster.html seems to be undirected and has 1.8B edges, not the "1.6B edges" reported in Table 1 (even after removing the edges of nodes with degree larger than 10,000).

## Typos and minors

- Inside the paper, $O(100B)$ looks a bit strange, because according to [the definition](https://en.wikipedia.org/wiki/Big_O_notation#Formal_definition) the set $O(100B)$ is equal to $O(0.00001)$.
- As noted above, it should be "more than 10K follow**ees** (outdegree > 10K)", not "followers".
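To make the brute-force claim concrete, here is a minimal C sketch (not the code from the linked repository) that applies the 10K out-degree cleaning step and then computes, for every pair of nodes sharing at least one in-neighbor, the cosine similarity $|in(u) \cap in(w)| / \sqrt{|in(u)| \cdot |in(w)|}$. The input format (an edge list of `src dst` pairs on stdin, node ids in $0..n-1$, distinct edges) and the in-memory layout are assumptions made for the sketch, not the paper's setup.

```c
/* Minimal sketch (not the code from github.com/maxdan94/cosineSparseMatrix):
 * 1. read a directed edge list "src dst" from stdin (node ids 0..n-1, distinct edges),
 * 2. cleaning: drop every out-edge of a node whose out-degree exceeds 10K,
 * 3. brute force: for every pair (u, w) sharing at least one in-neighbor,
 *    print the cosine similarity |in(u) ∩ in(w)| / sqrt(|in(u)| * |in(w)|). */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <math.h>

#define MAXOUT 10000                          /* cleaning threshold on the out-degree */

typedef struct { unsigned s, t; } edge;

int main(void) {
    unsigned n = 0, m = 0, cap = 1u << 20, s, t;
    edge *e = malloc(cap * sizeof(edge));

    /* read the edge list */
    while (scanf("%u %u", &s, &t) == 2) {
        if (m == cap) { cap *= 2; e = realloc(e, cap * sizeof(edge)); }
        e[m].s = s; e[m].t = t; m++;
        if (s >= n) n = s + 1;
        if (t >= n) n = t + 1;
    }

    /* cleaning: remove all out-edges of nodes with out-degree > MAXOUT */
    unsigned *dout = calloc(n, sizeof(unsigned));
    for (unsigned i = 0; i < m; i++) dout[e[i].s]++;
    unsigned kept = 0;
    for (unsigned i = 0; i < m; i++)
        if (dout[e[i].s] <= MAXOUT) e[kept++] = e[i];
    m = kept;

    /* build out- and in-adjacency lists in CSR form */
    unsigned *din = calloc(n, sizeof(unsigned));
    memset(dout, 0, n * sizeof(unsigned));
    for (unsigned i = 0; i < m; i++) { dout[e[i].s]++; din[e[i].t]++; }
    unsigned *ocd = malloc((n + 1) * sizeof(unsigned)); /* cumulative out-degrees */
    unsigned *icd = malloc((n + 1) * sizeof(unsigned)); /* cumulative in-degrees  */
    ocd[0] = icd[0] = 0;
    for (unsigned u = 0; u < n; u++) {
        ocd[u + 1] = ocd[u] + dout[u];
        icd[u + 1] = icd[u] + din[u];
    }
    unsigned *oadj = malloc(m * sizeof(unsigned));
    unsigned *iadj = malloc(m * sizeof(unsigned));
    unsigned *op = calloc(n, sizeof(unsigned)), *ip = calloc(n, sizeof(unsigned));
    for (unsigned i = 0; i < m; i++) {
        oadj[ocd[e[i].s] + op[e[i].s]++] = e[i].t;
        iadj[icd[e[i].t] + ip[e[i].t]++] = e[i].s;
    }

    /* brute force over wedges u <- v -> w: count common in-neighbors of (u, w) */
    unsigned *count = calloc(n, sizeof(unsigned));    /* sparse accumulator       */
    unsigned *touched = malloc(n * sizeof(unsigned)); /* nodes with count[w] > 0  */
    for (unsigned u = 0; u < n; u++) {
        unsigned nt = 0;
        for (unsigned i = icd[u]; i < icd[u + 1]; i++) {      /* v follows u      */
            unsigned v = iadj[i];
            for (unsigned j = ocd[v]; j < ocd[v + 1]; j++) {  /* v also follows w */
                unsigned w = oadj[j];
                if (w <= u) continue;                 /* handle each pair once    */
                if (count[w]++ == 0) touched[nt++] = w;
            }
        }
        for (unsigned k = 0; k < nt; k++) {           /* report and reset         */
            unsigned w = touched[k];
            printf("%u %u %f\n", u, w, count[w] / sqrt((double)din[u] * din[w]));
            count[w] = 0;
        }
    }
    return 0;
}
```

Compile with `gcc -O3 cosine.c -lm -o cosine` and run as `./cosine < edges.txt`; in practice one would only keep pairs above a similarity threshold. The total work of the double loop is proportional to the number of wedges (paths of length two through a common in-neighbor), which is exactly the quantity the 10K out-degree cap keeps bounded.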
