Deep learning
Uploaded by: Maximimi
Upload date: 2018-05-31 14:42:40


Very nice overview of deep learning!

### Why are layers important?

Why not use some input neurons, some output neurons, and an arbitrary graph connecting them? For instance, a clique or an Erdős–Rényi random graph?

### Gradient descent

ReLU, "$f(x)=\max(0,x)$", seems to be the most popular activation function. It is also a very simple/basic function. Is it possible to design an optimization method (other than backpropagation with stochastic gradient descent) dedicated to neural networks having only ReLU neurons?
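To illustrate the simplicity the question is pointing at: ReLU is piecewise linear, so both the function and its (sub)gradient are trivial to evaluate. A minimal sketch in plain Python (names are my own, just for illustration):

```python
def relu(x):
    # ReLU activation: f(x) = max(0, x)
    return max(0.0, x)

def relu_subgradient(x):
    # ReLU is piecewise linear: slope 1 for x > 0, slope 0 for x < 0.
    # At the kink x = 0 any value in [0, 1] is a valid subgradient;
    # 0 is used here by convention.
    return 1.0 if x > 0 else 0.0

print(relu(-2.0))           # 0.0
print(relu(1.5))            # 1.5
print(relu_subgradient(3.0))  # 1.0
```

Since a ReLU network computes a piecewise-linear function of its input, one could imagine optimization methods that exploit this structure directly rather than treating the network as a generic differentiable function.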
Maximimi at 2018-06-01 17:28:18
Edited by Maximimi at 2018-06-07 15:20:59
