Comments:

Very nice overview of deep learning!

### Why are layers important?

Why not use some input neurons, some output neurons, and some arbitrary graph between/connecting them, for instance a clique or an Erdős–Rényi random graph?

### Gradient descent

ReLU, $f(x) = \max(0, x)$, seems to be the most popular activation function. It is also a very simple/basic function. Is it possible to design an optimization method (other than backpropagation using stochastic gradient descent) dedicated to neural networks having only ReLU neurons?
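To make the setting of the question concrete, here is a minimal sketch (not from the paper, toy data and all names are illustrative) of a one-hidden-layer network whose only nonlinearity is ReLU, trained with plain stochastic (sub)gradient descent via backpropagation. Since ReLU is piecewise linear, the whole network computes a piecewise-linear function of its input, which is what makes ReLU-specific optimization methods conceivable in the first place.

```python
# Minimal sketch: ReLU-only network trained by SGD + backprop (illustrative, not the paper's method).
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0.0, x)          # f(x) = max(0, x)

def relu_grad(x):
    return (x > 0).astype(x.dtype)     # subgradient: 1 where x > 0, else 0

# Toy regression target: y = |x| is exactly representable with two ReLU units.
X = rng.uniform(-1, 1, size=(256, 1))
y = np.abs(X)

# One hidden layer of 8 ReLU units.
W1 = rng.normal(scale=0.5, size=(1, 8)); b1 = np.zeros(8)
W2 = rng.normal(scale=0.5, size=(8, 1)); b2 = np.zeros(1)

lr = 0.1
for step in range(2000):
    i = rng.integers(0, len(X), size=32)   # minibatch for the "stochastic" part
    x, t = X[i], y[i]

    # Forward pass.
    z = x @ W1 + b1
    h = relu(z)
    pred = h @ W2 + b2
    err = pred - t                          # dLoss/dpred for 0.5 * MSE

    # Backward pass: chain rule through the ReLU subgradient.
    dW2 = h.T @ err / len(i); db2 = err.mean(0)
    dh = err @ W2.T
    dz = dh * relu_grad(z)
    dW1 = x.T @ dz / len(i); db1 = dz.mean(0)

    # SGD update.
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

print("final MSE:", float(((relu(X @ W1 + b1) @ W2 + b2 - y) ** 2).mean()))
```

Note that because `relu_grad` is only a subgradient at 0, "gradient descent" here is really subgradient descent on a piecewise-linear model; that structure is exactly what a dedicated ReLU-only optimizer might try to exploit.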