This is not a usual scientific paper, nor a metaphysical tractatus. It is accessible to people with different backgrounds. The author tries to convince us that "the core of the human mind is simple and its structure is comprehensible". If I understand correctly, he believes that we (humanity, or an individual?) will eventually develop a mathematical theory allowing us to understand our own understanding. Nevertheless, we must surmount a strange barrier of unclear shape and unknown size to achieve this goal. The power of our imagination could help... Who knows?

Some phrases are very remarkable, for example:

* *In reality, no matter how hard some mathematicians try to achieve the contrary, subexponential time suffices for deciphering their papers.*
* *For instance, an intelligent human and an equally intelligent gigantic squid may not agree on which of the above three blobs are abstract and which are concrete.*
* *To see what iterations of self-references do, try it with "the meaning of". The first iteration "meaning of meaning" tells you what the (distributive) meaning really is. But try it two more times, and what come out of it, "meaning of meaning of meaning of meaning" strikes you as something meaningless.*

The thought experiment from the third point suggests that our brain uses some type of hash function to deal with self-referential iterations: a special hash function with the following property: it is possible to invert a single application, though it takes some time; inverting a double application becomes more complicated; and inverting a triple application is virtually impossible. If "hash inversion" and "understanding" are linked in some way, this could lead to something interesting... The hash function in this case works like a lossy compression algorithm.
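The "lossy compression" intuition can be illustrated with a toy experiment of my own (nothing from the paper itself): take a deliberately lossy hash, here SHA-256 truncated to one byte so that it maps a small space into itself, and iterate it. Each iteration can only merge values, never separate them, so the number of distinct survivors is non-increasing, and after a few rounds much of the original information is irrecoverably gone:

```python
import hashlib

def h(s: str) -> str:
    # SHA-256 truncated to two hex characters: a deliberately lossy toy "hash"
    # that maps the 256 two-character hex strings back into themselves.
    return hashlib.sha256(s.encode()).hexdigest()[:2]

# Start from the full 8-bit space and apply the hash repeatedly.
values = {format(i, "02x") for i in range(256)}
sizes = [len(values)]
for _ in range(4):
    values = {h(v) for v in values}  # image of the previous set
    sizes.append(len(values))

# Collisions shrink the image at every step, so inverting k applications
# means resolving k compounded layers of ambiguity.
print(sizes)
```

This is only a sketch of the analogy: a single application is invertible (by search) up to a modest ambiguity, while each further iteration compounds the ambiguity, roughly as the review suggests for "meaning of meaning of meaning".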
I started reading the paper several days ago. Here, in a continuously-(self?)-updating manner, I'll note my thoughts about it and paraphrase its main results.

### Style

I like Levin's style: he writes slightly provocative phrases that look, at the same time, incredibly truthful (at least to the author himself). Consider, for example, what he says about the invention of positional numeral systems and quantum computers:

> Archimedes made a great discovery that digital representation of numbers is exponentially
> more efficient than analog ones (sand pile sizes). Many subsequent analog devices yielded
> unimpressive results. It is not clear why QCs [quantum computers] should be an exception.

There is something in common between the writing styles of Russian mathematicians; consider, for example, the (especially non-technical) works of [Leonid Levin](https://en.wikipedia.org/wiki/Leonid_Levin), [Mikhaïl Gromov](https://en.wikipedia.org/wiki/Mikhail_Leonidovich_Gromov), and [Misha Verbitsky](https://en.wikipedia.org/wiki/Misha_Verbitsky).

### One-way functions and the axiom of choice

One-way functions and the axiom of choice both deal with the practical or conceptual (im)possibility of solving [inverse problems](https://en.wikipedia.org/wiki/Inverse_problem) arising in computer science and mathematics.

### Kolmogorov complexity and one-way functions

Two primes $p$ and $q$ have almost the same [informational content](https://en.wikipedia.org/wiki/Algorithmic_information_theory) as their product $pq$. However, [$p, q \mapsto pq$](https://en.wikipedia.org/wiki/Computational_complexity_of_mathematical_operations#Arithmetic_functions) is much easier to compute than [$pq \mapsto p, q$](https://en.wikipedia.org/wiki/Integer_factorization#Prime_decomposition).
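The asymmetry between $p, q \mapsto pq$ and $pq \mapsto p, q$ can be felt directly in a toy benchmark (my own illustration, not from Levin's paper; the primes are arbitrary small examples, far below cryptographic sizes):

```python
import time

def trial_factor(n):
    # Naive trial division: recovers the smallest odd prime factor of n.
    d = 3
    while d * d <= n:
        if n % d == 0:
            return d, n // d
        d += 2
    return n, 1

# Two well-known primes (the 9999th and 10000th), chosen purely for illustration.
p, q = 104723, 104729

t0 = time.perf_counter()
n = p * q                    # the "easy" direction: one multiplication
t1 = time.perf_counter()
factors = trial_factor(n)    # the "hard" direction: tens of thousands of trial divisions
t2 = time.perf_counter()

print(factors)               # (104723, 104729)
print(t2 - t1 > t1 - t0)     # factoring takes far longer than multiplying
```

Trial division is of course the crudest possible inverter; even the best known factoring algorithms remain superpolynomially slower than multiplication, which is exactly the gap one-way functions are about.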
This is a nice paper, and I think you manage to keep your article easily readable. Especially the figures, such as figure 3, helped me follow your explanations. I haven't finished reading the paper yet, but I have some minor remarks/typos:

- Page 9, in the definition of multigraph: should $e(v, V)$ be written $e(\{v\}, V)$ in order to be completely rigorous?
- I don't know what "maximum-entropy sampling" is, but why is it in quotes?
- I don't get the use of Reversible Decompression, apart from being an extreme case of external information. I don't think this case deserves to be explained in detail, as it does not affect the understanding of the paper.
- I find the notation in definition 10 a bit confusing. A Cartesian MultiEdge Partition is made of Cartesian products of two vertex subsets, and the notation for each Cartesian product uses subscripts on the vertex subsets: $(V_i \times V'_i)$ for the $i$th product. This led me to think that $(V_i \times V'_i)$ was a possible Cartesian product. It also led me to think that Cartesian MultiEdge Partitions are made of Cartesian products of vertex partitions: $B(V \times V) = \{V_i \times V_j : V_i \in P_1,\ V_j \in P_2\}$ for some $P_1, P_2 \in B(V)$. I think every Cartesian product of vertex partitions is a Cartesian MultiEdge Partition. Is there any Cartesian MultiEdge Partition that is not a Cartesian product of vertex partitions?
- In the complete case after definition 11, I don't recognize the notation $\omega$ for the number of feasible partitions.
- I think you missed a $\lambda$ in the Lagrange function on page 21. Should it be $q_{\lambda}(VVT) = |VVT| + \lambda\, \mathrm{Loss}(VVT)$?