Comments:

Hard to understand for a newbie in Deep RL. A formal definition of what a skill is ("A skill is simply a policy.") would help.

### Typos:
- "guaranteeing that is has maximum entropy"
- "We discuss the the log p(z) term in Appendix B."
- "so it much first gather momentum"
- "While are skills are learned"
I agree with the previous comment. The article seems to aim at people who are already familiar with reinforcement learning (not necessarily deep, or based on neural networks, I guess) and its usual benchmarks. The implementations are not detailed; the authors lay stress on the general idea (which is relatively simple to get) and on its visual results, which look quite spectacular.
The problem, from my point of view, is that a scientist has close to no incentive to give additional context information. For example, an author would not add a link to a Wikipedia page, as it is considered (rightly or wrongly) below scientific quality standards. Even worse: I have observed that in most scientific communities, you are expected not to give too many details about definitions, assumptions or computations that are considered "common knowledge". In other words, adding too many details is read as a sign that you are an outsider to the community.

In any case, hyperlinks would not solve everything, as the writer still has to target a given "level of knowledge" of the reader: he would make the information understandable to a specific audience. Typically, when I read about a field that is new to me, I first go to Wikipedia. The level of detail of a Wikipedia page is widely heterogeneous: some pages are too basic for what I already know, while others are too elaborate, and, as you said, I am not willing to spend the "focusing time" necessary to understand such a page if I cannot even evaluate whether I will have a much better understanding of the paper afterwards. The author is certainly better placed to evaluate which context information is useful to understand what he writes, but you are right that it costs him time as well. So, I think that in a way the problem can be seen as an economic one, in which focusing time is the main resource.
> For example, an author would not add a link to a Wikipedia page, as it is considered (rightly or wrongly) below scientific quality standards

I would not agree: some scientists already write blog posts about their beloved subjects using a lot of hyperlinks to Wikipedia and other sites; consider for example [Igor Pak's](https://igorpak.wordpress.com/2015/05/26/the-power-of-negative-thinking-part-i-pattern-avoidance/) or [Terence Tao's](https://terrytao.wordpress.com/2009/04/26/szemeredis-regularity-lemma-via-random-partitions/) blogs. Currently, such blog posts are considered complementary to "real" scientific publications. But one day everything will change.

Comments:

Just updated the paper info; it now contains a complete list of authors.

Comments:

### General:
A very simple and very scalable algorithm, with theoretical guarantees, to compute an approximation of the closeness centrality of each node in the graph. Maybe the framework can be adapted to solve other problems. In particular, "Hoeffding's theorem" is very handy.

### Parallelism:
Note that the algorithm is embarrassingly parallel: the loop over the $k$ reference nodes can be done in parallel with almost no additional work (see the sketch below).

### Experiments:
It would be nice to have some experiments on large real-world graphs. An efficient (and parallel) implementation of the algorithm is available here: https://github.com/maxdan94/BFS

### Related work:
More recent related work by Paolo Boldi and Sebastiano Vigna:
- http://mmds-data.org/presentations/2016/s-vigna.pdf
- https://arxiv.org/abs/1011.5599
- https://papers-gamma.link/paper/37

It also gives an approximation of the centrality of each node in the graph and it also scales to huge graphs. Which algorithm is "the best"?
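To make the estimator concrete, here is a minimal Python sketch of the kind of sampling scheme described above (my own illustrative code, not the authors' implementation and not the repository linked above). It assumes an undirected, connected graph given as an adjacency-list dict and uses the common definition $c(v) = (N-1)/\sum_u d(v,u)$; all function and parameter names are mine.

```python
import random
from collections import deque

def bfs_distances(adj, src):
    """Unweighted single-source shortest-path distances via BFS."""
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for w in adj[u]:
            if w not in dist:
                dist[w] = dist[u] + 1
                q.append(w)
    return dist

def approx_closeness(adj, k, seed=0):
    """Estimate c(v) = (n-1) / sum_u d(v, u) from k random reference nodes.
    k trades accuracy for speed; Hoeffding's inequality bounds the error."""
    rng = random.Random(seed)
    nodes = list(adj)
    n = len(nodes)
    total = {v: 0 for v in nodes}              # running sum of sampled distances
    refs = [rng.choice(nodes) for _ in range(k)]
    # Embarrassingly parallel: each BFS below is independent of the others.
    for r in refs:
        for v, d in bfs_distances(adj, r).items():
            total[v] += d
    # (n / k) * total[v] estimates sum_u d(v, u); invert to get closeness.
    return {v: (k * (n - 1)) / (n * total[v]) if total[v] > 0 else 0.0
            for v in nodes}
```

The loop over the $k$ reference nodes is exactly the part that can be distributed across threads or machines with almost no extra work.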

Comments:

Just a few remarks from me.

### Definition of mountain
Why is the k-peak in the k-mountain? Why is the core number of a k-peak changed when removing the (k+1)-peak?

### More information on k-mountains
Showing a k-mountain on Figure 2 or on another toy example would have been nice. Besides, more theoretical insights on k-mountains are missing: what are the time and space complexities of the algorithm used to find the k-mountains? How many k-mountains include a given node? Less than the degeneracy of the graph, but maybe it is possible to say more.

Regarding the complexity of the algorithm used to build the k-mountains, I am not sure I understand everything, but once the k-peak decomposition is done, for each k-contour a new k-core decomposition must be run on the subgraph obtained by removing the k-contour, and it must then be compared with the k-core decomposition of the whole graph (see the peeling sketch below). All these operations take time, so the complexity of the algorithm used to build the k-mountains is, I think, higher than the complexity of the k-peak decomposition. Besides, multiple choices are made when building the k-mountains; it seems to me that this tool deserves a broader analysis. The same goes for the mountain shapes, where some insights are missing: for instance, mountains may have an obvious staircase shape, does it mean something?

### 6.2: _p_
I don't really get what _p_ is.

### 7.2
Section 7.2 is unclear, but that may be related to my ignorance of what a protein is.
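For readers not familiar with the k-core decomposition that keeps coming up here, this is a minimal sketch of the standard peeling algorithm (my own illustrative code, not the paper's Algorithm 1), assuming an adjacency-list dict without self-loops:

```python
def core_numbers(adj):
    """Standard peeling: repeatedly remove a minimum-degree node.
    The core number of v is the largest k such that v belongs to a
    subgraph in which every node has degree >= k.
    Simple O(N^2 + M) version for clarity; a bucket queue gives O(N + M)."""
    deg = {v: len(adj[v]) for v in adj}
    remaining = set(adj)
    core, k = {}, 0
    while remaining:
        v = min(remaining, key=lambda u: deg[u])  # minimum remaining degree
        k = max(k, deg[v])                        # the peeling level never decreases
        core[v] = k
        remaining.remove(v)
        for w in adj[v]:
            if w in remaining:
                deg[w] -= 1
    return core

# Toy example: a triangle with a pendant node.
adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2}}
print(core_numbers(adj))  # nodes 0, 1, 2 get core number 2; node 3 gets 1
```

The degeneracy $\delta$ of the graph is simply the maximum core number produced by this peeling.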
### Lemma 2 (bis):
Lemma 2 (bis): Algorithm 1 requires $O(\delta\cdot(N+M))$ time, where $\delta$ is the degeneracy of the input graph.

Proof: $k_1=\delta$ and the $k_i$ values must be distinct non-negative integers; there are thus at most $\delta$ distinct $k_i$ values. Algorithm 1 therefore enters the while loop at most $\delta$ times, leading to the stated running time.

We thus have a running time in $O(\min(\sqrt{N},\delta)\cdot (M+N))$. Note that $\delta \leq \sqrt{M}$ and $\delta<N$, and in practice (in large sparse real-world graphs) it seems that $\delta\lessapprox \sqrt{N}$.

### k-core decomposition definition:
The (full) definition of the k-core decomposition comes a bit late. Having an informal definition of the k-core decomposition (not just a definition of k-core) at the beginning of the introduction may help a reader not familiar with it.

### Scatter plots:
Scatter plots of "k-core value VS k-peak value" for each node in the graph are not shown. This may be interesting. Note that scatter plots of "k-core value VS degree" are shown in "Kijung, Eliassi-Rad and Faloutsos. CoreScope: Graph Mining Using k-Core Analysis. ICDM 2016", leading to interesting insights on graphs. Something similar could be done with k-peak.

### Experiments on large graphs:
As the algorithm is very scalable (nearly linear time in practice on large sparse real-world graphs, and linear memory), experiments on larger graphs, e.g. with 1G edges, could be done.

### Implementation:
Even though the algorithm is very easy to implement, a link to a publicly available implementation would make the framework easier to use and easier to improve/extend.

### Link to the densest subgraph:
The $\delta$-core (with $\delta$ the degeneracy of the graph) is a 2-approximation of the densest subgraph (here the density is defined as the average degree divided by 2), and thus the core decomposition can be seen as a (2-)approximation of the density-friendly decomposition.
- "Density-friendly graph decomposition". Tatti and Gionis. WWW 2015.
- "Large Scale Density-friendly Graph Decomposition via Convex Programming". Danisch et al. WWW 2017.

Having this in mind, the k-peak decomposition can be seen as an approximation of the following decomposition (sketched in code below):
1. Find the densest subgraph.
2. Remove it from the graph (along with all edges connected to it).
3. Go to 1 until the graph is empty.

### Faster algorithms:
Another appealing feature of the k-core decomposition is that it is used to make algorithms faster. For instance, it is used in https://arxiv.org/abs/1006.5440 and https://arxiv.org/abs/1103.0318 to list all maximal cliques efficiently in sparse real-world graphs. Can the k-peak decomposition be used in a similar way to speed up some algorithms?

### Section 6.2:
Not very clear.

### Typos/Minors:
- "one can view the the k-peak decomposition"
- "et. al", "et al"
- "network[23]" (many of these)
- "properties similar to k-core ."
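To make the decomposition in the "Link to the densest subgraph" section concrete, here is a rough Python sketch (my own illustrative code, not from the paper) that replaces the exact densest-subgraph step with the standard greedy peeling 2-approximation, using density $|E(S)|/|S|$; all names are mine.

```python
def greedy_densest_subgraph(adj):
    """Greedy peeling 2-approximation of the densest subgraph
    (density = |E(S)| / |S|): remove a minimum-degree node at each
    step and keep the intermediate subgraph of highest density."""
    deg = {v: len(adj[v]) for v in adj}
    remaining = set(adj)
    edges = sum(deg.values()) // 2
    best_density, best_size = -1.0, 0
    order = []                                   # removal order
    while remaining:
        density = edges / len(remaining)
        if density > best_density:
            best_density, best_size = density, len(remaining)
        v = min(remaining, key=lambda u: deg[u])
        order.append(v)
        remaining.remove(v)
        edges -= deg[v]
        for w in adj[v]:
            if w in remaining:
                deg[w] -= 1
    # The best subgraph consists of the last `best_size` nodes to be removed.
    return set(order[len(order) - best_size:])

def peel_densest_decomposition(adj):
    """The decomposition sketched above: repeatedly extract an
    (approximately) densest subgraph and delete it, until nothing is left."""
    adj = {v: set(ns) for v, ns in adj.items()}
    layers = []
    while adj:
        s = greedy_densest_subgraph(adj)
        layers.append(s)
        adj = {v: ns - s for v, ns in adj.items() if v not in s}
    return layers
```

Comparing the layers returned by this procedure with the k-contours of the k-peak decomposition would be one way to check how good the approximation is in practice.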