rnn   1510


Understanding LSTM Networks -- colah's blog
Humans don’t start their thinking from scratch every second. As you read this essay, you understand each word based on your understanding of previous words. You don’t throw everything away and start thinking from scratch again. Your thoughts have persistence.

Traditional neural networks can’t do this, and it seems like a major shortcoming. For example, imagine you want to classify what kind of event is happening at every point in a movie. It’s unclear how a traditional neural network could use its reasoning about previous events in the film to inform later ones.

Recurrent neural networks address this issue. They are networks with loops in them, allowing information to persist.
ai  ml  machinelearning  rnn  LSTM 
9 days ago by euler
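The "loop" the excerpt describes is just a hidden state carried from one time step to the next. A minimal sketch of that idea (sizes, weights, and the tanh cell are illustrative assumptions, not taken from the post):

```python
import numpy as np

# Toy vanilla-RNN forward pass: the same cell runs at every time step and the
# hidden state h carries information forward -- this is the "loop".
input_size, hidden_size = 8, 16
rng = np.random.default_rng(0)
W_xh = rng.normal(scale=0.1, size=(hidden_size, input_size))
W_hh = rng.normal(scale=0.1, size=(hidden_size, hidden_size))
b_h = np.zeros(hidden_size)

def rnn_forward(xs, h):
    """Run the cell over a sequence xs, returning every hidden state."""
    states = []
    for x in xs:
        h = np.tanh(W_xh @ x + W_hh @ h + b_h)  # new state depends on input and old state
        states.append(h)
    return states

sequence = [rng.normal(size=input_size) for _ in range(5)]
hidden_states = rnn_forward(sequence, np.zeros(hidden_size))
print(len(hidden_states), hidden_states[-1].shape)  # 5 (16,)
```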
[1804.07915] A Stable and Effective Learning Strategy for Trainable Greedy Decoding
Beam search is a widely used approximate search strategy for neural network decoders, and it generally outperforms simple greedy decoding on tasks like machine translation. However, this improvement comes at substantial computational cost. In this paper, we propose a flexible new method that allows us to reap nearly the full benefits of beam search with nearly no additional computational cost. The method revolves around a small neural network actor that is trained to observe and manipulate the hidden state of a previously-trained decoder. To train this actor network, we introduce the use of a pseudo-parallel corpus built using the output of beam search on a base model, ranked by a target quality metric like BLEU. Our method is inspired by earlier work on this problem, but requires no reinforcement learning, and can be trained reliably on a range of models. Experiments on three parallel corpora and three architectures show that the method yields substantial improvements in translation quality and speed over each base system.
seq2seq  rnn  decoding  beam-search  via:chl 
15 days ago by arsyed
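As I read the abstract, the key component is a small actor network that observes the frozen decoder's hidden state and nudges it before the next-token prediction, so that plain greedy decoding approaches beam-search quality. A hedged sketch of that idea (module names, sizes, and the residual form are my assumptions, not the paper's code):

```python
import torch
import torch.nn as nn

class GreedyActor(nn.Module):
    """Small network that adjusts a pre-trained decoder's hidden state."""
    def __init__(self, hidden_size: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(hidden_size, hidden_size),
            nn.Tanh(),
            nn.Linear(hidden_size, hidden_size),
        )

    def forward(self, decoder_hidden: torch.Tensor) -> torch.Tensor:
        # The base decoder stays frozen; only this correction is trained.
        return decoder_hidden + self.net(decoder_hidden)

# Per the abstract, the training targets come from a pseudo-parallel corpus of
# the base model's beam-search outputs, ranked by a quality metric such as BLEU.
```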
GitHub - minimaxir/textgenrnn
Easily train your own text-generating neural network of any size and complexity on any text dataset.
boilerplate  generator  rnn  python  nlp  machine-learning 
18 days ago by mjlassila
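Quick-start usage along the lines of the repo's README (the file name is a placeholder, and the exact arguments may differ between versions):

```python
from textgenrnn import textgenrnn

textgen = textgenrnn()                                   # default model; see the README for options
textgen.train_from_file('my_corpus.txt', num_epochs=1)   # train on a plain-text file
textgen.generate()                                       # print a generated sample
```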
[1804.05374] Twin Regularization for online speech recognition
Online speech recognition is crucial for developing natural human-machine interfaces. This modality, however, is significantly more challenging than offline ASR, since real-time/low-latency constraints inevitably hinder the use of future information, which is known to be very helpful for making robust predictions. A popular solution to mitigate this issue consists of feeding neural acoustic models with context windows that gather some future frames. This introduces a latency that depends on the number of look-ahead features employed. This paper explores a different approach, based on estimating the future rather than waiting for it. Our technique encourages the hidden representations of a unidirectional recurrent network to embed some useful information about the future. Inspired by a recently proposed technique called Twin Networks, we add a regularization term that forces forward hidden states to be as close as possible to cotemporal backward ones, computed by a "twin" neural network running backwards in time. The experiments, conducted on a number of datasets, recurrent architectures, input features, and acoustic conditions, show the effectiveness of this approach. One important advantage is that our method does not introduce any additional computation at test time compared to standard unidirectional recurrent networks.
asr  online  rnn 
20 days ago by arsyed
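A hedged sketch of the regularization term as described in the abstract: penalize the distance between forward hidden states and the cotemporal states of a twin network run backwards in time. The L2 penalty and the weight `lam` are my assumptions; the paper may define the term differently.

```python
import torch

def twin_regularization(h_forward: torch.Tensor,
                        h_backward: torch.Tensor,
                        lam: float = 0.1) -> torch.Tensor:
    # h_forward:  (time, batch, hidden) states of the unidirectional model
    # h_backward: (time, batch, hidden) states of the backward-running twin,
    #             aligned so index t refers to the same frame in both tensors
    return lam * ((h_forward - h_backward) ** 2).mean()

# The total training objective would then look like
#   loss = asr_loss + twin_regularization(h_fwd, h_bwd_aligned)
# with the twin used only during training, so test-time cost is unchanged.
```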
The Ultimate Guide to Recurrent Neural Networks (RNN)
Read on to understand everything in Module 3 of our Deep Learning course.
rnn  neural-network  data-science  machine-learning 
25 days ago by arobinski


related tags

3d  ai  analytics  android  animation  annotation  archive  art  arxiv  asr  attention  audio  automata  awesome  bayesian  beam-search  beatbox  benchmark  benchmarks  blogs  boilerplate  books  cell  chatbot  cheatsheet  chor-rnn  choreography  classifier  cnn  code  combined  convnet  cool  coreml  creativetech  creativity  cv  data-science  davidha  decoding  deep-learning  deep  deep_learning  deeplearning  design  detector  dialogue  ditm  dl  drawing  dropout  drum  ebooks  embeddings  example  finance  financial  folk  forecast  forecasting  fpga  fsm  functional-programming  games  gan  generalization  generation  generator  generators  github  google  graph  grid  gru  hmi  howto  image-processing  image  imageprocessing  ios  keras  layers  learning  lectures  lstm  machine-learning  machine  machine_learning  machinecomprehension  machinelearning  material  math  mentalhealth  metric-learning  ml  model  music  navigation  net  network  neural-attention  neural-net  neural-network  neural-networks  neural  neuralnets  neuralnetwork  neuralnetworks  nlg  nlp  nn-architecture  nn  numpy  ocaml  online  paper  papers  paragraph  performance  pizza  programming  python  pytorch  radiology  recipe  recurrent  recurrentneuralnetworks  reinforcementlearning  relational  research  review  sarcasm  scratch  segmentation  sentimentanalysis  seq2seq  sequence-modeling  sequence-models  sequence  speech  statistics  synthesis  talks  techniques  tensorflow  text  textanalysis  theory  time-series  topic  tosteal  transformer  tune  tutorial  tutorials  tweetit  ui  university  video  white  with  wordembedding  world  xor 
