Deep learning
Uploaded by: Maximimi
Upload date: 2018-05-31 14:42:40


Very nice overview of deep learning!

### Why are layers important?

Why not use some input neurons, some output neurons, and an arbitrary graph connecting them, for instance a clique or an Erdos-Renyi random graph?

### Gradient descent

ReLU, "$f(x)=\max(0,x)$", seems to be the most popular activation function. It is also a very simple function. Is it possible to design an optimization method (other than backpropagation with stochastic gradient descent) dedicated to neural networks that use only ReLU neurons?
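For context on the question above, a minimal sketch of ReLU and the (sub)gradient that backpropagation uses; the function names here are illustrative, not from any particular library. The piecewise-linear form is what makes one wonder whether a dedicated optimizer could exploit it:

```python
# Sketch of the ReLU activation f(x) = max(0, x) and its (sub)gradient,
# as consumed by backpropagation with stochastic gradient descent.
def relu(x):
    # f(x) = max(0, x): identity for positive inputs, zero otherwise.
    return max(0.0, x)

def relu_grad(x):
    # The derivative is 1 for x > 0 and 0 for x < 0; at x = 0 the
    # function is not differentiable, and the subgradient is
    # conventionally taken to be 0 in most implementations.
    return 1.0 if x > 0 else 0.0
```

Since every neuron is piecewise linear, a network built only from ReLU units and linear layers computes a continuous piecewise-linear function of its input, which is one reason the question of a specialized optimization method is natural.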
Maximimi at 2018-06-01 17:28:18
Edited by Maximimi at 2018-06-07 15:20:59
