
Improving the way we work with learning rate. – techburst
Most optimization algorithms (such as SGD, RMSprop, Adam) require setting the learning rate — the most important hyper-parameter for training deep neural networks. A naive method for choosing learning…
deep-learning  learning-rate  cyclical-learning-rate  fast.ai  learning-rate-annealing  from pocket
february 2018 by rishaanp
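The bookmarked article discusses learning-rate annealing and cyclical learning rates. As a minimal sketch (not code from the article itself; the function name and default values are illustrative), a cosine annealing schedule decays the learning rate smoothly from a maximum to a minimum over a fixed number of steps:

```python
import math

def cosine_annealing(step, total_steps, lr_max=0.1, lr_min=0.001):
    """Cosine annealing: decay the learning rate from lr_max (step 0)
    to lr_min (step total_steps) along a half-cosine curve."""
    progress = step / total_steps
    return lr_min + 0.5 * (lr_max - lr_min) * (1 + math.cos(math.pi * progress))

# The schedule starts at lr_max and ends at lr_min.
print(cosine_annealing(0, 100))    # lr_max
print(cosine_annealing(100, 100))  # lr_min
```

Cyclical variants (as popularized by fast.ai) restart this decay periodically rather than decaying once, which can help the optimizer escape poor local minima.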

bundles: Data-Science
