
Comments:

This is not a usual scientific paper, nor a metaphysical tractatus. It is accessible to people with different backgrounds. The author tries to convince us that "the core of the human mind is simple and its structure is comprehensible". If I understand correctly, he believes that we (humanity, or an individual?) will eventually develop a mathematical theory allowing us to understand our own understanding. Nevertheless, to achieve that goal we must surmount a strange barrier of unclear shape and unknown size. The power of our imagination could help... Who knows?

Some phrases are very remarkable, for example:

* *In reality, no matter how hard some mathematicians try to achieve the contrary, subexponential time suffices for deciphering their papers.*
* *For instance, an intelligent human and an equally intelligent gigantic squid may not agree on which of the above three blobs are abstract and which are concrete.*
* *To see what iterations of self-references do, try it with "the meaning of". The first iteration "meaning of meaning" tells you what the (distributive) meaning really is. But try it two more times, and what come out of it, "meaning of meaning of meaning of meaning" strikes you as something meaningless.*

The thought experiment in the third quote suggests that our brain applies some type of hash function to self-referential iterations, a special hash function with the following property: it is possible to take an inverse of such a hash, but it takes some time; inverting a double application becomes more complicated; and it's virtually impossible to invert a triple application. If "hash inversion" and "understanding" are linked in some way, this could lead to something interesting... The hash function in this case works like a lossy compression algorithm.
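The "lossy hash whose iterates get harder to invert" intuition can be made concrete with a toy experiment (my own sketch, not from the paper): iterate a non-injective function on a finite set and watch how many distinct values survive each round. As the image shrinks, each surviving value has more preimages, so inversion gets more and more ambiguous with every application.

```python
import hashlib
import itertools
import string

# Toy universe: all 3-letter lowercase strings (26**3 = 17576 points).
UNIVERSE = [''.join(t) for t in itertools.product(string.ascii_lowercase, repeat=3)]

def lossy_hash(s):
    """Map a 3-letter string to another 3-letter string via SHA-256.
    The map is not injective on the universe, so each application loses
    information -- a stand-in for the comment's 'lossy compression'."""
    n = int(hashlib.sha256(s.encode()).hexdigest(), 16)
    letters = []
    for _ in range(3):
        n, r = divmod(n, 26)
        letters.append(string.ascii_lowercase[r])
    return ''.join(letters)

def image_after(k):
    """Set of distinct values reachable after k applications of the hash."""
    values = set(UNIVERSE)
    for _ in range(k):
        values = {lossy_hash(v) for v in values}
    return values

for k in (1, 2, 3):
    img = image_after(k)
    print(f"after {k} application(s): {len(img)} distinct values, "
          f"avg preimage ambiguity ~{len(UNIVERSE) / len(img):.2f}")
```

For a (pseudo-)random function on n points the image keeps shrinking with each iterate, so the average number of preimages per surviving value grows: inverting once is a bounded search, inverting twice multiplies the candidates, and a triple inversion is already badly underdetermined.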

Comments:

The paper describes the architecture of YouTube's current recommendation system (as of 2016), based on deep learning, which replaced the previous matrix-factorization-based architecture (ref 23). The system is built with Google Brain's TensorFlow tool and performs significantly better than the former one (see fig 6 for example). The paper is a high-level description of the architecture and sometimes lacks the technical details that would allow a precise understanding. However, it provides very interesting ideas about the problems faced and the solutions contemplated by YouTube engineers, and about the actual constraints of industrial recommendation systems. It also helps one realize that there is as much craft as science in the process.

Overall, the architecture consists of two main phases:

- coarse candidate generation (creating a set of a few hundred videos per user from a corpus of several million available videos),
- precise ranking of the candidates.

Both steps use a DNN architecture. Some technical details:

- a user profile is summarized as a heterogeneous embedding of identifiers of watched videos, demographic information, search tokens, etc.
- a decisive advantage of DNNs highlighted in the paper is their ability to deal with heterogeneous signals (sources of information of various natures); another is that they (partly) circumvent feature engineering
- while development of the method relies on offline evaluation metrics (precision, recall, etc.), the final evaluation relies on live A/B testing experiments; the discussion of this point in Sec 3.4 is very interesting
- for candidate generation, YouTube uses implicit feedback (e.g. watch times) rather than explicit feedback (e.g. thumbs up) because far more of it is available
- taking into account the "freshness" of a video has an important impact on the efficacy of candidate generation (fig 4)
- taking into account the context of a watch (i.e. the watch sequence) is also important, as the co-watch probability distribution is very asymmetric; in particular, the user's previous action on similar items matters
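The two-phase shape described above can be sketched in a few lines of NumPy. This is only an illustration of the retrieve-then-rank pattern, not the paper's actual model: the embeddings here are random stand-ins for the ones YouTube's DNN learns, and the ranking stage is a placeholder for their second, feature-rich network.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy corpus: one embedding vector per video.
# In the paper these vectors come out of a trained DNN; here they are random.
N_VIDEOS, DIM = 10_000, 32
video_emb = rng.normal(size=(N_VIDEOS, DIM)).astype(np.float32)
video_emb /= np.linalg.norm(video_emb, axis=1, keepdims=True)

def user_embedding(watch_history):
    """Summarize a user as the average of their watched-video embeddings,
    echoing the paper's averaging of embedded watch/search tokens."""
    u = video_emb[watch_history].mean(axis=0)
    return u / np.linalg.norm(u)

def candidate_generation(user, k=200):
    """Phase 1: coarse retrieval of the top-k videos by dot product.
    (In production this is served via an approximate nearest-neighbor index
    rather than a brute-force scan.)"""
    scores = video_emb @ user
    return np.argpartition(-scores, k)[:k]

def rank(user, candidates):
    """Phase 2: precise ranking of the few hundred candidates.
    Stand-in scorer: the real system uses a second DNN with many features."""
    scores = video_emb[candidates] @ user
    return candidates[np.argsort(-scores)]

history = [3, 17, 256, 4096]            # ids of videos the user watched
u = user_embedding(history)
top = rank(u, candidate_generation(u, k=200))
print(top[:10])                          # ten best-ranked candidate ids
```

The split matters for scale: the cheap dot-product retrieval cuts millions of videos down to hundreds, so the expensive ranking model only ever scores a short list per user.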
There is no 👍-style explicit feedback for the comments yet, so I'll use the implicit kind: Alt-Tab's comment is very condensed and it matters; I even think it is better than the original article's abstract. Now I really want to drop everything, sit down, and read this article in detail.
