Welcome to Maximimi's library,

  Here you can find all the papers liked or uploaded by Maximimi,
  together with a brief user bio and a description of his/her academic activity.


[Link to my homepage](https://sites.google.com/view/danisch/home)

## I will read the following papers.

- [PageRank as a Function of the Damping Factor](https://papers-gamma.link/paper/106)
- [Graph Stream Algorithms: A Survey](https://papers-gamma.link/paper/102)
- [Network Sampling: From Static to Streaming Graphs](https://papers-gamma.link/paper/122)
- [The Protein-Folding Problem, 50 Years On](https://papers-gamma.link/paper/78)
- [Computational inference of gene regulatory networks: Approaches, limitations and opportunities](https://papers-gamma.link/paper/77)
- [Graph complexity analysis identifies an ETV5 tumor-specific network in human and murine low-grade glioma](https://papers-gamma.link/paper/79)
- [Gene Networks in Plant Biology: Approaches in Reconstruction and Analysis](https://papers-gamma.link/paper/76)
- [The non-convex Burer–Monteiro approach works on smooth semidefinite programs](https://papers-gamma.link/paper/80)
- [Solving SDPs for synchronization and MaxCut problems via the Grothendieck inequality](https://papers-gamma.link/paper/81)
- [Influence maximization in complex networks through optimal percolation](https://papers-gamma.link/paper/70)
- [Motifs in Temporal Networks](https://papers-gamma.link/paper/61)
- [Deep Sparse Rectifier Neural Networks](https://papers-gamma.link/paper/69)
- [Sparse Convolutional Neural Networks](https://papers-gamma.link/paper/67)
- [A fast and simple algorithm for training neural probabilistic language models](https://papers-gamma.link/paper/58)
- [Adding One Neuron Can Eliminate All Bad Local Minima](https://papers-gamma.link/paper/71)

## I read the following papers.
### 2018-2019:

- [Hierarchical Taxonomy Aware Network Embedding](https://papers-gamma.link/paper/116)
- [Billion-scale Network Embedding with Iterative Random Projection](https://papers-gamma.link/paper/110)
- [HARP: Hierarchical Representation Learning for Networks](https://papers-gamma.link/paper/109/)
- [Layered Label Propagation: A MultiResolution Coordinate-Free Ordering for Compressing Social Networks](https://papers-gamma.link/paper/105)

### 2017-2018:

- [Link Prediction in Graph Streams](https://papers-gamma.link/paper/101)
- [The Community-search Problem and How to Plan a Successful Cocktail Party](https://papers-gamma.link/paper/74)
- [A Nonlinear Programming Algorithm for Solving Semidefinite Programs via Low-rank Factorization](https://papers-gamma.link/paper/55)
- [Deep Learning](https://papers-gamma.link/paper/68)
- [Reducing the Dimensionality of Data with Neural Networks](https://papers-gamma.link/paper/65)
- [Representation Learning on Graphs: Methods and Applications](https://papers-gamma.link/paper/60)
- [Improved Approximation Algorithms for MAX k-CUT and MAX BISECTION](https://papers-gamma.link/paper/56)
- [Cauchy Graph Embedding](https://papers-gamma.link/paper/53)
- [Phase Transitions in Semidefinite Relaxations](https://papers-gamma.link/paper/57)
- [Graph Embedding Techniques, Applications, and Performance: A Survey](https://papers-gamma.link/paper/52)
- [VERSE: Versatile Graph Embeddings from Similarity Measures](https://papers-gamma.link/paper/48)
- [Hierarchical Clustering Beyond the Worst-Case](https://papers-gamma.link/paper/45)
- [Scalable Motif-aware Graph Clustering](https://papers-gamma.link/paper/18)
- [Practical Algorithms for Linear Boolean-width](https://papers-gamma.link/paper/40)
- [New Perspectives and Methods in Link Prediction](https://papers-gamma.link/paper/28/New%20Perspectives%20and%20Methods%20in%20Link%20Prediction)
- [In-Core Computation of Geometric Centralities with HyperBall: A Hundred Billion Nodes and Beyond](https://papers-gamma.link/paper/37)
- [Diversity is All You Need: Learning Skills without a Reward Function](https://papers-gamma.link/paper/36)
- [When Hashes Met Wedges: A Distributed Algorithm for Finding High Similarity Vectors](https://papers-gamma.link/paper/23)
- [Fast Approximation of Centrality](https://papers-gamma.link/paper/35/Fast%20Approximation%20of%20Centrality)
- [Indexing Public-Private Graphs](https://papers-gamma.link/paper/19/Indexing%20Public-Private%20Graphs)
- [On the uniform generation of random graphs with prescribed degree sequences](https://papers-gamma.link/paper/26/On%20the%20uniform%20generation%20of%20random%20graphs%20with%20prescribed%20d%20egree%20sequences)
- [Linear Additive Markov Processes](https://papers-gamma.link/paper/21/Linear%20Additive%20Markov%20Processes)
- [ESCAPE: Efficiently Counting All 5-Vertex Subgraphs](https://papers-gamma.link/paper/17/ESCAPE:%20Efficiently%20Counting%20All%205-Vertex%20Subgraphs)
- [The k-peak Decomposition: Mapping the Global Structure of Graphs](https://papers-gamma.link/paper/16/The%20k-peak%20Decomposition:%20Mapping%20the%20Global%20Structure%20of%20Graphs)
- [A Fast and Provable Method for Estimating Clique Counts Using Turán’s Theorem](https://papers-gamma.link/paper/24)


Comments:

### Concerns:

- It seems that the method needs to store $L$ integers per node, that is, $N*L$ integers in total, while storing the graph itself requires $2*M$ integers (where $M$ is the number of edges). The method is thus useful only if $L<2*M/N$, i.e. if $L$ is smaller than the average degree. In the experiments, the average degrees of the three graphs are approximately 7, 55 and 42, while $L$ is 50 or 80. There is therefore no gain in space compared to a good implementation of a straightforward algorithm storing the full graph.
- Once the sketch of each node is obtained, it is not clear how to retrieve the relevant pairs of nodes and their similarities. It seems that every possible pair of nodes has to be considered, which is suboptimal and does not scale.
- Each graph used in the experiments easily fits in the RAM of any commodity machine. For such small graphs, an exact method storing the full graph should be faster in practice.

### Alternative approach: https://github.com/maxdan94/NeighSim
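The space argument in the first concern is easy to check numerically. A minimal sketch of the comparison (the node count below is a hypothetical placeholder, not taken from the paper's datasets; only the average degrees 7, 55, 42 and the sketch length $L$ come from the comment above):

```python
# Sketches of length L store N*L integers; an adjacency-list representation
# of the graph stores 2*M integers. Sketches save space only if N*L < 2*M,
# i.e. if L is smaller than the average degree 2*M/N.

def sketches_save_space(n_nodes, n_edges, sketch_len):
    """True iff N*L sketch integers take less space than 2*M graph integers."""
    return n_nodes * sketch_len < 2 * n_edges

n = 1_000_000  # hypothetical node count, for illustration only
for avg_deg in (7, 55, 42):       # approximate average degrees from the comment
    m = n * avg_deg // 2          # number of edges giving that average degree
    print(avg_deg, sketches_save_space(n, m, 80))   # L = 80, as in the paper
# prints False for all three graphs: L = 80 exceeds every average degree
```

With $L=80$ the sketches are larger than the graph in all three cases, which is exactly the point of the first concern.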

Comments:

arXiv link: https://arxiv.org/abs/1807.06874

This is a nice paper and I think you managed to keep your article easily readable. In particular, the figures, such as Figure 3, helped me follow your explanations. I haven't finished reading the paper yet, but I have some minor remarks/typos:

- Page 9, in the definition of multigraph: should $e(v, V)$ be written $e(\{v\}, V)$ in order to be completely rigorous?
- I don't know what “maximum-entropy sampling” is, but why put it in quotes?
- I don't get the use of Reversible Decompression, apart from being an extreme case of external information. I don't think this case deserves to be explained in detail, as it does not have any impact on the understanding of the paper.
- I find the notation in Definition 10 a bit confusing. A Cartesian MultiEdge Partition is made of Cartesian products of two vertex subsets, and the notation for each Cartesian product uses subscripts on the vertex subsets: $(V_i \times V'_i)$ for the $i$th product. This led me to think that $(V_i \times V'_i)$ was a possible Cartesian product, and also that Cartesian MultiEdge Partitions are made of Cartesian products of vertex partitions: $B(V \times V) = \{(V_i \times V_j)\}\ \forall V_i \in P_1, V_j \in P_2 \wedge P_1 \in B(V), P_2 \in B(V)$. I think every Cartesian product of vertex partitions is a Cartesian MultiEdge Partition. Is there any Cartesian MultiEdge Partition that is not a Cartesian product of vertex partitions?
- In the complete case after Definition 11, I don't recognize the notation $\omega$ for the number of feasible partitions.
- I think you missed a $\lambda$ in the Lagrange function on page 21. Should it be $q_{\lambda}(VVT) = |VVT| + \lambda\,Loss(VVT)$?