Comments:

The method uses random projections of the column vectors of the matrix $S=(\alpha_0 I+\alpha_1 A+\alpha_2 A^2+\dots+\alpha_q A^q)$. It is shown that such a random projection is a heuristic to minimize a relevant cost function (equation (2)) with some guarantees (Theorem 1). As only the products of a set of random vectors $R$ with the matrix $S$ are needed, explicitly forming the (potentially dense) matrix $S$ is avoided: only the vectors $A^p\cdot R$ are computed. The running time is $O(n\cdot d^2+ m\cdot q\cdot d)$, where $m$ is the number of edges, $n$ is the number of nodes, $d$ is the dimension of the embedding (and the number of random vectors in $R$) and $q$ is the order of the embedding (cf. the formula for $S$). The operations involved are very simple, leading to a very fast practical algorithm. The experiments show that these simple ideas lead to the most scalable embedding method, and that the method is still accurate for machine learning tasks such as classification, link prediction, and network reconstruction.

The MATLAB code is publicly available here: https://github.com/ZW-ZHANG/RandNE
An alternative open-source implementation in C is available here: https://github.com/maxdan94/RandNE
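To make the projection step concrete, here is a minimal NumPy/SciPy sketch of the idea as I understand it. The function name, the plain Gaussian choice of $R$ and the toy graph are my own (the paper additionally orthogonalizes $R$, which is skipped here), so this is an illustration, not the authors' released code.

```python
import numpy as np
import scipy.sparse as sp

def random_projection_embedding(A, weights, d, seed=0):
    """Sketch of U = sum_p alpha_p * A^p * R, without ever forming S.

    A       : scipy.sparse adjacency matrix (n x n)
    weights : [alpha_0, alpha_1, ..., alpha_q]
    d       : embedding dimension (number of random columns in R)
    """
    n = A.shape[0]
    rng = np.random.default_rng(seed)
    # Gaussian random projection matrix; the paper orthogonalizes R
    # (Gram-Schmidt), which is omitted here for simplicity.
    R = rng.standard_normal((n, d)) / np.sqrt(d)

    U = weights[0] * R          # alpha_0 * A^0 * R
    P = R
    for alpha in weights[1:]:   # alpha_p * A^p * R, computed iteratively
        P = A @ P               # one sparse multiplication: O(m * d)
        U = U + alpha * P
    return U                    # n x d embedding

# Toy usage: a 4-node path graph, q = 3, d = 2 (weights are placeholders).
A = sp.csr_matrix(np.array([[0, 1, 0, 0],
                            [1, 0, 1, 0],
                            [0, 1, 0, 1],
                            [0, 0, 1, 0]], dtype=float))
U = random_projection_embedding(A, weights=[1.0, 1.0, 0.1, 0.01], d=2)
print(U.shape)  # (4, 2)
```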
### Concerns regarding Theorem 3

"Theorem 3. The time complexity of Algorithm 3 is linear with respect to the number of changed nodes and the number of changed edges respectively."

In the proof, the running time obtained is $O(M'\cdot R(q))$, where $M'$ is the number of added or removed edges and $R(q)$ is the size of the $q$-step neighborhood of an edge. We can have $R(q)=O(N)$ for some graphs, especially real-world graphs where the average distance between two nodes is very small (say less than 4): https://research.fb.com/three-and-a-half-degrees-of-separation/. The theorem is thus not correct without further assumptions.

In addition, in the experiments, the running time of the suggested algorithm for dynamic graphs (Algorithm 3) is not compared to the static algorithm (Algorithm 1) re-run from scratch after each update. It is not clear to me whether the gain in time is significant.

### I do not understand Section V-C-1

"Network Reconstruction: The experimental setting is similar to moderate-scale networks (Section V-B2), i.e. we rank pairs of nodes according to their inner product similarities and reconstruct the network. We report the AUC scores in Table V."

There are $\approx 10^{16}$ such pairs of nodes. This number of pairs is too large: I do not understand how the pairs can be ranked and the AUC computed.

### Parameter tuning

The method needs many parameters: $d$, $q$, $a_0$, $a_1$, ..., $a_q$. While $d$ and $q$ can be arbitrarily fixed to, say, $d=128$ and $q=3$ (as done in the paper), it is not clear what default values would fit for the $a_k$'s. In the paper, the $a_k$'s are found using grid search (a toy sketch of such a search is given at the end of this comment). Note that the range of values and the step size are not specified; in addition, the optimal values are not given. This hurts reproducibility. What if grid search is not possible (e.g. no training set)? Which default values would make sense?

### Grid search and unfair comparison

In the experiments, the suggested embedding is shown to lead to better results for some tasks (such as network reconstruction, classification and link prediction) compared to other embedding methods. However, the suggested method has parameters ($a_0$, $a_1$, $a_2$ and $a_3$, since $q$ is set to $3$). These parameters are tuned using grid search, while some of the parameters of the other methods seem to be kept at their default values. This may not be a very fair comparison.

### Real billion-scale networks?

I read: "We also report the exact running time in Table VII. It shows that RandNE can learn all the node embeddings of WeChat within 7 hours with 16 normal servers, which is promising for real billion-scale networks."

I do not understand what is meant. WeChat has 250M nodes and 4.8G edges; the number of edges is thus already in the billions. Does "real billion-scale networks" mean more than 1G nodes rather than edges?

Another graph with 1G edges is available here: http://snap.stanford.edu/data/com-Friendster.html. In addition, contrary to WeChat, its nodes have labels that could be used to test the relevance of the embedding. Graphs with a billion nodes are available here: http://law.di.unimi.it/datasets.php. Unfortunately, they are directed... Another one is here: https://sparse.tamu.edu/Sybrandt/MOLIERE_2016 (undirected, but weighted). A Twitter graph with 25G edges can be obtained upon request: https://hal.inria.fr/hal-00948889/en/

### Typos and minor remarks

- "as an theoretical guarantee"
- "an one-vs-all logistic regression"
- This relevant WWW2018 paper is not cited and the associated method is not used for comparisons: https://papers-gamma.link/paper/48
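For completeness, the kind of grid search I have in mind is sketched below. The grid values and the scoring function are hypothetical placeholders, since the paper reports neither the ranges nor the step sizes actually used.

```python
import itertools

# Hypothetical search grid for (a_1, a_2, a_3), with a_0 fixed to 1;
# the actual ranges and steps used in the paper are not reported.
GRID = [0.0, 0.01, 0.1, 1.0]

def validation_score(weights):
    """Placeholder for the real criterion: build the embedding with these
    weights (e.g. with the projection sketch above) and evaluate it on a
    validation split (classification accuracy, link-prediction AUC, ...)."""
    return -sum(w * w for w in weights)  # dummy stand-in so the sketch runs

best = max(itertools.product(GRID, repeat=3), key=validation_score)
print("selected weights:", (1.0,) + best)
```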

Comments:

The idea of coarse-graining the graph by merging "similar" nodes (using edge collapsing or star collapsing) before applying a graph embedding method is interesting. Taking that idea to the next level by building a hierarchy of coarse-grained graphs is also relevant: the hierarchy allows finding a better initialization for embedding methods relying on deep learning. The two ways of collapsing (edge collapsing or star collapsing) are very simple, and more sophisticated ones, for instance based on modular decomposition or community detection, are not suggested. Maybe such more advanced approaches would lead to even better results. (A small sketch of edge collapsing is given at the end of this comment.)

The experimental section shows that coarse-graining the input graph before applying a graph embedding method leads to a "better" embedding: using such an embedding improves the performance of the classification task compared to using the embedding obtained without coarse-graining. However, neither a theoretical justification of this fact nor an intuition is provided. How about other machine learning tasks? For instance, does the suggested method also help for clustering and link prediction? The method does not make the embedding faster, even though this might have been expected since the coarse-grained graphs are smaller.

Figure 4: I do not understand in what way the various levels are relevant. To me, no level looks like the desired embedding.

The graphs used in the experiments are extremely small (fewer than 30k nodes). It is not clear how the method would perform on large and sparse graphs, say 1G edges and 100M nodes (for instance these: http://snap.stanford.edu/data/#communities).
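To make the coarse-graining step concrete, here is a minimal sketch of edge collapsing as I understand it (a greedy matching, then the two endpoints of each matched edge are merged into one super-node). The function name and tie-breaking are mine, and star collapsing is not handled; this is an illustration, not the paper's exact procedure.

```python
import networkx as nx

def edge_collapse(G):
    """One level of edge collapsing: greedily pick a matching and merge the
    two endpoints of each matched edge into a single super-node."""
    matched = set()
    mapping = {}               # original node -> super-node id
    for u, v in G.edges():
        if u not in matched and v not in matched:
            matched.update((u, v))
            mapping[u] = mapping[v] = ("super", u, v)
    for u in G.nodes():
        mapping.setdefault(u, u)   # unmatched nodes are kept as-is
    # Coarse graph: an edge between super-nodes whenever any of their
    # members were adjacent in G.
    H = nx.Graph()
    H.add_nodes_from(set(mapping.values()))
    for u, v in G.edges():
        if mapping[u] != mapping[v]:
            H.add_edge(mapping[u], mapping[v])
    return H, mapping

# Toy usage: coarse-grain a small graph; an embedding of H could then be
# projected back to G via `mapping` and refined.
G = nx.karate_club_graph()
H, mapping = edge_collapse(G)
print(G.number_of_nodes(), "->", H.number_of_nodes())
```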


Comments:

Great paper!

### Input graph in main memory

It seems that, to compute the suggested ordering (later used to compress the input graph), the graph needs to be stored in main memory (as an adjacency list). However, if the graph already fits in the main memory of the machine, then compressing it is less interesting. Some experiments are carried out on huge graphs that do not fit in the main memory of the machine if not compressed. The trick is that the web graphs are already compressed (maybe with the lexicographic URL ordering) when the suggested ordering is computed, while the considered social networks are actually not that large and fit in the main memory of a commodity machine.

Footnote 21: "It is possible in principle to avoid keeping the graph in the main memory, but the cost becomes $O(n \log n)$." How can I do that?

### Heuristic to minimize the average gap cost

For social networks, it is shown that the compression is highly correlated with the average gap cost (average log gap) when the "intervalisation" of the BV framework is turned off. The authors note that the suggested ordering is excellent at minimizing this average gap cost, even though it does not seem to minimize it directly. Can a heuristic that is explicitly designed to minimize this average gap cost lead to a better compression? (A small sketch of this cost is given after the typo list below.)

### Typos:

- missing reference: "label propagation [RAK07, ?]"
- "until it is possible to do so" -> "until it is not possible to do so"
- missing reference: "Absolute Pott Model (APM) [?]"
- "tecniques"
- "Some simple experiments not reported here shows that the same happen" -> "Some simple experiments not reported here show that the same happens"
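To fix ideas, here is a small sketch of the average log-gap cost of the successor lists under a given node ordering. This is my own simplified formulation (the first successor is charged like a gap relative to the source node, and intervalisation is ignored); the names `adj` and `order` and the toy example are mine, not the BV framework's actual encoder.

```python
import math

def avg_log_gap(adj, order):
    """Average log2 gap of the successor lists under `order`.

    adj   : dict node -> iterable of successors
    order : dict node -> position in the ordering
    Simplified: the first successor is charged |first - x|, later successors
    are charged the gap to the previous one; +1 avoids log2(0) edge cases.
    """
    total, count = 0.0, 0
    for x, succ in adj.items():
        prev = order[x]
        for y in sorted(order[s] for s in succ):
            total += math.log2(abs(y - prev) + 1)
            prev = y
            count += 1
    return total / max(count, 1)

# Toy usage: the same 4-cycle under two different orderings.
adj = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
identity = {v: v for v in adj}
shuffled = {0: 2, 1: 0, 2: 3, 3: 1}
print(avg_log_gap(adj, identity), avg_log_gap(adj, shuffled))
```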
