Welcome to Maximimi's library,

  You can find here all the papers liked or uploaded by Maximimi,
  together with a brief user bio and a description of their academic activity.


[Link to my homepage](https://sites.google.com/view/danisch/home)

## I will read the following papers.

- [Quasi-Succinct Indices](https://papers-gamma.link/paper/130)
- [PageRank as a Function of the Damping Factor](https://papers-gamma.link/paper/106)
- [Graph Stream Algorithms: A Survey](https://papers-gamma.link/paper/102)
- [Network Sampling: From Static to Streaming Graphs](https://papers-gamma.link/paper/122)
- [The Protein-Folding Problem, 50 Years On](https://papers-gamma.link/paper/78)
- [Computational inference of gene regulatory networks: Approaches, limitations and opportunities](https://papers-gamma.link/paper/77)
- [Graph complexity analysis identifies an ETV5 tumor-specific network in human and murine low-grade glioma](https://papers-gamma.link/paper/79)
- [Gene Networks in Plant Biology: Approaches in Reconstruction and Analysis](https://papers-gamma.link/paper/76)
- [The non-convex Burer–Monteiro approach works on smooth semidefinite programs](https://papers-gamma.link/paper/80)
- [Solving SDPs for synchronization and MaxCut problems via the Grothendieck inequality](https://papers-gamma.link/paper/81)
- [Influence maximization in complex networks through optimal percolation](https://papers-gamma.link/paper/70)
- [Motifs in Temporal Networks](https://papers-gamma.link/paper/61)
- [Deep Sparse Rectifier Neural Networks](https://papers-gamma.link/paper/69)
- [Sparse Convolutional Neural Networks](https://papers-gamma.link/paper/67)
- [A fast and simple algorithm for training neural probabilistic language models](https://papers-gamma.link/paper/58)
- [Adding One Neuron Can Eliminate All Bad Local Minima](https://papers-gamma.link/paper/71)

## I read the following papers.

### 2019-2020

- [Fast and High Quality Multilevel Scheme for Partitioning Irregular Graphs](https://papers-gamma.link/paper/162)
- [Karp-Sipser based kernels for bipartite graph matching](https://papers-gamma.link/paper/160)
- [Speedup Graph Processing by Graph Ordering](https://papers-gamma.link/paper/159)
- [Tree Sampling Divergence: An Information-Theoretic Metric for Hierarchical Graph Clustering](https://papers-gamma.link/paper/154)

### 2018-2019

- [SWeG: Lossless and Lossy Summarization of Web-Scale Graphs](https://papers-gamma.link/paper/139)
- [Smoothed Analysis: An Attempt to Explain the Behavior of Algorithms in Practice](https://papers-gamma.link/paper/129)
- [Are stable instances easy?](https://papers-gamma.link/paper/128)
- [Hierarchical Taxonomy Aware Network Embedding](https://papers-gamma.link/paper/116)
- [Billion-scale Network Embedding with Iterative Random Projection](https://papers-gamma.link/paper/110)
- [HARP: Hierarchical Representation Learning for Networks](https://papers-gamma.link/paper/109/)
- [Layered Label Propagation: A MultiResolution Coordinate-Free Ordering for Compressing Social Networks](https://papers-gamma.link/paper/105)

### 2017-2018

- [Link Prediction in Graph Streams](https://papers-gamma.link/paper/101)
- [The Community-search Problem and How to Plan a Successful Cocktail Party](https://papers-gamma.link/paper/74)
- [A Nonlinear Programming Algorithm for Solving Semidefinite Programs via Low-rank Factorization](https://papers-gamma.link/paper/55)
- [Deep Learning](https://papers-gamma.link/paper/68)
- [Reducing the Dimensionality of Data with Neural Networks](https://papers-gamma.link/paper/65)
- [Representation Learning on Graphs: Methods and Applications](https://papers-gamma.link/paper/60)
- [Improved Approximation Algorithms for MAX k-CUT and MAX BISECTION](https://papers-gamma.link/paper/56)
- [Cauchy Graph Embedding](https://papers-gamma.link/paper/53)
- [Phase Transitions in Semidefinite Relaxations](https://papers-gamma.link/paper/57)
- [Graph Embedding Techniques, Applications, and Performance: A Survey](https://papers-gamma.link/paper/52)
- [VERSE: Versatile Graph Embeddings from Similarity Measures](https://papers-gamma.link/paper/48)
- [Hierarchical Clustering Beyond the Worst-Case](https://papers-gamma.link/paper/45)
- [Scalable Motif-aware Graph Clustering](https://papers-gamma.link/paper/18)
- [Practical Algorithms for Linear Boolean-width](https://papers-gamma.link/paper/40)
- [New Perspectives and Methods in Link Prediction](https://papers-gamma.link/paper/28/New%20Perspectives%20and%20Methods%20in%20Link%20Prediction)
- [In-Core Computation of Geometric Centralities with HyperBall: A Hundred Billion Nodes and Beyond](https://papers-gamma.link/paper/37)
- [Diversity is All You Need: Learning Skills without a Reward Function](https://papers-gamma.link/paper/36)
- [When Hashes Met Wedges: A Distributed Algorithm for Finding High Similarity Vectors](https://papers-gamma.link/paper/23)
- [Fast Approximation of Centrality](https://papers-gamma.link/paper/35/Fast%20Approximation%20of%20Centrality)
- [Indexing Public-Private Graphs](https://papers-gamma.link/paper/19/Indexing%20Public-Private%20Graphs)
- [On the uniform generation of random graphs with prescribed degree sequences](https://papers-gamma.link/paper/26/On%20the%20uniform%20generation%20of%20random%20graphs%20with%20prescribed%20d%20egree%20sequences)
- [Linear Additive Markov Processes](https://papers-gamma.link/paper/21/Linear%20Additive%20Markov%20Processes)
- [ESCAPE: Efficiently Counting All 5-Vertex Subgraphs](https://papers-gamma.link/paper/17/ESCAPE:%20Efficiently%20Counting%20All%205-Vertex%20Subgraphs)
- [The k-peak Decomposition: Mapping the Global Structure of Graphs](https://papers-gamma.link/paper/16/The%20k-peak%20Decomposition:%20Mapping%20the%20Global%20Structure%20of%20Graphs)
- [A Fast and Provable Method for Estimating Clique Counts Using Turán’s Theorem](https://papers-gamma.link/paper/24)

Comments:

For me, the paper is similar to Kleinberg's [masterpiece](https://www.cs.cornell.edu/home/kleinber/nips15.pdf).

Comments:

### Concerns about the time complexity given in Table 1

Section 3.3.5, Graph Factorization (GF): "To the best of our knowledge, Graph Factorization [21] was the first method to obtain a graph embedding in $O(|E|)$ time." This contradicts the $O(|E|d)$ complexity given for GF in Table 1, as well as the $O(|E|d^2)$ complexities listed for earlier methods.

### Dimensionality reduction is different from graph embedding

Some of the given references are actually about dimensionality reduction: the input is a set of vectors in dimension $D$ and the output is a set of vectors in dimension $d$ (such that $d \ll D$). The problem where the input is a graph and the output is a set of low-dimensional vectors is actually different. LLE and Laplacian Eigenmaps are methods that start from a set of vectors, form a graph out of those vectors, and then produce low-dimensional vectors from the graph. Maybe this can be emphasized. In particular, using these techniques on social networks (for instance) may not be well suited: the input graph is in most cases unweighted, and its structure might be very different from that of a graph built from vectors.

### Cauchy graph embedding

- What is the complexity of Cauchy graph embedding (not given in Table 1)?
- How is the solution obtained? From eigenvectors, as in the two previous methods (LLE and Laplacian Eigenmaps)?
- How should the parameter $\sigma$ be tuned?

After reading the paper [33], the Cauchy graph embedding method is more complex and seems less scalable.

### GF and SDP matrices

Page 4, Section 3.3.5, Graph Factorization (GF): "Note that as the adjacency matrix is often not positive semidefinite, the minimum of the loss function is greater than 0 even if the dimensionality of embedding is |V|."

- The loss function is never 0 if $\lambda > 0$.
- If $W$ is SDP (and different from the null matrix), then the associated graph needs to contain self-loops. If self-loops are forbidden, then the matrix is never SDP.
- If $W$ is SDP, then there exists a set of vectors $\{Y_i\}$ such that $\forall i,j$, $W_{ij} = \langle Y_i, Y_j \rangle$. Is it true that such vectors can always be of dimension $n$ (the size of $W$) or less? A reference or proof? (See the sketch at the end of this comment.)

### Minors

- Page 3, Section 3.1.1, Locally Linear Embedding (LLE): it should be specified that the rows of the weight matrix need to sum to one: $\sum_j W_{ij} = 1$.
- Page 3, Section 3.1.1, Locally Linear Embedding (LLE): "the sparse matrix $(I-W)^T(I-W)$" — this matrix is often not sparse (in the sense that it has many more non-zero values than the adjacency matrix $W$).
- Page 3, Section 3.1.3, Cauchy Graph Embedding: is there a solution to the optimization using eigenvectors, as in the two previous methods?
- What is "positive pointwise mutual information (PPMI)"?
- Figure 1 refers to CPE (Community Preserving Embedding), but CPE is not discussed in the survey.
- The section on GraRep refers to the section on HOPE, which comes after it; maybe swap the two sections?

### Typos

- Many text overflows.
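To make the two points above concrete, here is a minimal numpy sketch; it is not code from the survey. `gf_epoch` follows the GF loss as described in Section 3.3.5, $\frac{1}{2}\sum_{(i,j)\in E}(W_{ij}-\langle Y_i,Y_j\rangle)^2+\frac{\lambda}{2}\sum_i\|Y_i\|^2$ (the function names, hyperparameters, and per-update regularization are my assumptions), and `psd_factor` illustrates the factorization claim for SDP matrices.

```python
import numpy as np

def gf_epoch(edges, W, Y, lam=0.1, lr=0.01):
    """One SGD pass of Graph Factorization (GF) over the edge list.

    Loss (following the survey's Section 3.3.5, up to constants):
        1/2 * sum_{(i,j) in E} (W[i,j] - <Y[i], Y[j]>)**2
        + lam/2 * sum_i ||Y[i]||**2
    Each edge update touches two rows of Y and costs O(d), so one
    pass costs O(|E| d) -- the Table 1 figure, not O(|E|).
    """
    for i, j in edges:
        err = W[i, j] - Y[i] @ Y[j]
        # Gradients of the per-edge term, with the regularizer
        # applied per update (a common variant).
        gi = -err * Y[j] + lam * Y[i]
        gj = -err * Y[i] + lam * Y[j]
        Y[i] -= lr * gi
        Y[j] -= lr * gj
    return Y

def psd_factor(W):
    """For a symmetric PSD W: W = Q diag(s) Q^T with s >= 0, so the
    rows of Y = Q diag(sqrt(s)) satisfy W[i,j] = <Y[i], Y[j]> with
    vectors of dimension n (rank(W) coordinates suffice after
    dropping the zero eigenvalues)."""
    s, Q = np.linalg.eigh(W)
    s = np.clip(s, 0.0, None)  # clamp tiny negative round-off
    return Q * np.sqrt(s)      # row i is the vector Y_i

# Quick check of the factorization claim on a random PSD matrix.
rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))
W = A @ A.T                    # PSD by construction
Y = psd_factor(W)
assert np.allclose(Y @ Y.T, W)
```

The `assert` passes for any symmetric PSD input, which suggests an affirmative answer to the dimension question: $n$ coordinates always suffice, and $\operatorname{rank}(W)$ of them are enough.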

Comments:

[Slides](https://drive.google.com/file/d/0B6cGK503Ibt0blg2RU9MelViQm8/view) (easier to understand than the paper).