rishaanp + cyclical-learning-rate   2

Improving the way we work with learning rate. – techburst
Most optimization algorithms (such as SGD, RMSprop, Adam) require setting the learning rate, the most important hyper-parameter for training deep neural networks. A naive method for choosing learning…
deep-learning  learning-rate  cyclical-learning-rate  fast.ai  learning-rate-annealing  from pocket
february 2018 by rishaanp
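The excerpt above notes that the learning rate is the hyper-parameter that scales each gradient step. As a generic sketch (not taken from the bookmarked article), a bare-bones SGD update looks like:

```python
def sgd_step(weights, grads, lr=0.01):
    """One vanilla SGD update: move each weight against its gradient,
    scaled by the learning rate lr."""
    return [w - lr * g for w, g in zip(weights, grads)]

# A too-small lr makes training slow; a too-large lr can diverge,
# which is why choosing lr well matters so much.
new_w = sgd_step([1.0, -2.0], [0.5, -0.5], lr=0.1)
```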
The Cyclical Learning Rate technique // teleported.in
Learning rate (LR) is one of the most important hyperparameters to be tuned and holds the key to faster and more effective training of neural networks. Simply put, LR decides how much of the loss gradient is to be applied to our current weights to move them in the direction of lower loss.
Cyclical-Learning-Rate  Learning-Rate  fast.ai  SGDR  from pocket
february 2018 by rishaanp
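The technique described in the bookmark above (cyclical learning rates, from Leslie Smith's "triangular" policy) can be sketched as a small schedule function; the base_lr, max_lr, and step_size values below are illustrative defaults, not from the linked post:

```python
import math

def triangular_clr(iteration, base_lr=0.001, max_lr=0.006, step_size=2000):
    """Triangular cyclical learning rate: the LR ramps linearly from
    base_lr up to max_lr over step_size iterations, then back down,
    and the cycle repeats indefinitely."""
    cycle = math.floor(1 + iteration / (2 * step_size))
    x = abs(iteration / step_size - 2 * cycle + 1)
    return base_lr + (max_lr - base_lr) * max(0.0, 1 - x)
```

At iteration 0 the schedule returns base_lr, at iteration step_size it peaks at max_lr, and at 2 * step_size it is back at base_lr, completing one cycle.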

bundles : Data-Science
