Comments:

Very nice overview of deep learning!

Why are layers important? Why not use some input neurons, some output neurons, and an arbitrary graph connecting them, for instance a clique or an Erdős-Rényi random graph?

Gradient descent: ReLU, $f(x)=\max(0,x)$, seems to be the most popular activation function, and it is a very simple/basic one. Is it possible to design an optimization method (other than backpropagation with stochastic gradient descent) dedicated to neural networks having only ReLU neurons?
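
For concreteness, here is a minimal sketch of what backpropagation with SGD does on a single hidden ReLU layer (NumPy; the data, dimensions, and learning rate are all made up for illustration, not taken from the paper). The only place the ReLU's simplicity enters is that its "derivative" collapses to the indicator that the pre-activation is positive, with a subgradient chosen arbitrarily at the kink $x=0$:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))                          # toy inputs
y = np.maximum(X @ np.array([1.0, -2.0, 0.5]), 0.0)    # toy targets

W1 = rng.normal(scale=0.5, size=(3, 16))  # hidden layer weights
b1 = np.zeros(16)
w2 = rng.normal(scale=0.5, size=16)       # output weights
lr = 0.01

for step in range(1000):
    i = rng.integers(len(X))              # "stochastic": one sample per step
    h_pre = X[i] @ W1 + b1
    h = np.maximum(h_pre, 0.0)            # ReLU: f(x) = max(0, x)
    pred = h @ w2
    err = pred - y[i]                     # gradient of 0.5 * err**2 w.r.t. pred

    # Backpropagation: the ReLU "derivative" is just (h_pre > 0).
    grad_w2 = err * h
    grad_pre = (err * w2) * (h_pre > 0)
    W1 -= lr * np.outer(X[i], grad_pre)
    b1 -= lr * grad_pre
    w2 -= lr * grad_w2
```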

Comments:

This paper won the 2018 Seoul Test of Time Award: http://www.iw3c2.org/updates/CP_TheWebConferenceSeoul-ToT-Award-VENG-2018.pdf

Concerns about the "primary pair": does a primary pair exist for every n-ary relation? The "AlbertEinstein" / "NobelPrize" example given in the paper does not work for Marie Curie, who won two Nobel prizes according to Wikipedia: https://en.wikipedia.org/wiki/Marie_Curie

"However, this method cannot deal with additional arguments to relations that were designed to be binary. The YAGO model offers a simple solution to this problem: It is based on the assumption that for each n-ary relation, a primary pair of its arguments can be identified. For example, for the above won-prize-in-year-relation, the pair of the person and the prize could be considered a primary pair. The primary pair can be represented as a binary fact with a fact identifier:

#1 : AlbertEinstein hasWonPrize NobelPrize

All other arguments can be represented as relations that hold between the primary pair and the other argument:

#2 : #1 time 1921"
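
To make the concern concrete, here is a minimal sketch of the reification scheme quoted above (plain Python in-memory store; entity names are illustrative, not actual YAGO identifiers). One possible reading of the model is that each occurrence of the pair gets its own fact identifier, which would keep Marie Curie's two prizes apart:

```python
import itertools

facts = {}                 # fact id -> (subject, predicate, object)
_ids = itertools.count(1)

def add_fact(subject, predicate, obj):
    """Store a binary fact and return its identifier, so further
    arguments can be attached to the fact itself (reification)."""
    fid = f"#{next(_ids)}"
    facts[fid] = (subject, predicate, obj)
    return fid

# Einstein: one prize, one primary pair, one time argument.
f1 = add_fact("AlbertEinstein", "hasWonPrize", "NobelPrize")
add_fact(f1, "time", "1921")

# Curie: the *pair* (MarieCurie, NobelPrize) occurs twice, but each
# occurrence is a distinct fact with its own identifier, so the two
# "time" arguments do not collide.
f2 = add_fact("MarieCurie", "hasWonPrize", "NobelPrize")
add_fact(f2, "time", "1903")
f3 = add_fact("MarieCurie", "hasWonPrize", "NobelPrize")
add_fact(f3, "time", "1911")

for fid, (s, p, o) in facts.items():
    print(fid, ":", s, p, o)
```

Under this reading, the primary pair need not be unique across facts; only the fact identifiers are.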