rnn   1625


Neural Kubrick | Interactive Architecture Lab
Stanley Kubrick speculated on the arrival of human-level Artificial Intelligence in his 1968 film “2001: A Space Odyssey”. Some 16 years past his predicted date, our project “Neural Kubrick” examines the state of the art in Machine Learning, using the latest “Deep Neural Network” techniques to reinterpret and redirect Kubrick’s own films. Three machine learning algorithms take respective roles in our AI film crew: Art Director, Film Editor and Director of Photography.

The project is framed as an artist-machine collaboration: the artist works around the limitations of the machine, and the algorithm works around the limitations of the artist. In the context of the project, what the machine interprets is limited to numbers, the classification of features, or the generation of abstract images. We curate this output into a coherent narrative, translating it back into human perception.

The project uses three of Stanley Kubrick’s films as input to three machine learning models: The Shining, A Clockwork Orange and 2001: A Space Odyssey. The generated videos display a machinic interpretation of the three films, produced through a collaborative effort between the artist and the algorithm.
ai  rnn  neuralnetwork  film  art  architecture  vr 
4 days ago by mildlydiverting
[1803.01271] An Empirical Evaluation of Generic Convolutional and Recurrent Networks for Sequence Modeling
For most deep learning practitioners, sequence modeling is synonymous with recurrent networks. Yet recent results indicate that convolutional architectures can outperform recurrent networks on tasks such as audio synthesis and machine translation. Given a new sequence modeling task or dataset, which architecture should one use? We conduct a systematic evaluation of generic convolutional and recurrent architectures for sequence modeling. The models are evaluated across a broad range of standard tasks that are commonly used to benchmark recurrent networks. Our results indicate that a simple convolutional architecture outperforms canonical recurrent networks such as LSTMs across a diverse range of tasks and datasets, while demonstrating longer effective memory. We conclude that the common association between sequence modeling and recurrent networks should be reconsidered, and convolutional networks should be regarded as a natural starting point for sequence modeling tasks. To assist related work, we have made code available at https://github.com/locuslab/TCN
CNN  vs  RNN  DL-theory  papers  LSTM  GRU  TCN 
27 days ago by foodbaby
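The TCN the paper evaluates is built from causal dilated 1-D convolutions, where the output at time t never sees inputs beyond t. A minimal numpy sketch of that core operation (the function name `causal_dilated_conv1d` is ours for illustration, not from the linked repo):

```python
import numpy as np

def causal_dilated_conv1d(x, w, dilation=1):
    """Causal dilated 1-D convolution: y[t] = sum_j w[j] * x[t - j*dilation],
    with zero-padding on the left so no future timestep leaks in."""
    k = len(w)
    pad = (k - 1) * dilation  # left padding preserves causality and length
    xp = np.concatenate([np.zeros(pad), np.asarray(x, dtype=float)])
    return np.array([
        sum(w[j] * xp[pad + t - j * dilation] for j in range(k))
        for t in range(len(x))
    ])
```

Stacking such layers with exponentially growing dilation (1, 2, 4, ...) is what gives the TCN its long effective memory: the receptive field doubles per layer while each layer stays cheap.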
MIT Deep Learning
Collection of MIT courses and lectures on deep learning, deep reinforcement learning, autonomous vehicles, and artificial intelligence taught by Lex Fridman
deeplearning  video  learning  AI  reinforcementlearning  cnn  rnn  neuralnetwork 
4 weeks ago by sachaa
The Unreasonable Effectiveness of Recurrent Neural Networks
An amazing article showing how trained RNNs can generate new text. Includes one of my favourite examples: machine-generated Shakespeare.
rnn  recurrentneuralnetwork  ann  artificialneuralnetwork  ai  machinelearning  deeplearning 
5 weeks ago by ids
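The char-rnn model the article demonstrates is, at its core, a vanilla (Elman) RNN cell stepped over one character at a time. A minimal numpy sketch of a single step (the name `rnn_step` and the weight-matrix names are illustrative, not Karpathy's actual code):

```python
import numpy as np

def rnn_step(x, h, Wxh, Whh, Why, bh, by):
    """One step of a vanilla RNN: fold input x into hidden state h,
    then emit logits y over the vocabulary (sampled to pick the next char)."""
    h = np.tanh(Wxh @ x + Whh @ h + bh)  # new hidden state
    y = Why @ h + by                     # unnormalised next-char scores
    return h, y
```

Generation then loops: sample a character from softmax(y), feed it back in as the next x, and repeat — the hidden state h is the only memory carried between steps.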
larspars/word-rnn: Recurrent Neural Network that predicts word-by-word
machine_learning  ml  rnn 
5 weeks ago by jkeefe
Mask R-CNN with OpenCV - PyImageSearch
In this tutorial you will learn how to use Mask R-CNN with Deep Learning, OpenCV, and Python to predict pixel-wise masks for every object in an image.
python  CV  RNN  ML  tutorial  opencv 
7 weeks ago by mootPoint
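The pixel-wise masks Mask R-CNN predicts are typically visualised by alpha-blending each instance mask onto the frame. A minimal numpy sketch of that blending step (the `apply_mask` helper is hypothetical, not the PyImageSearch code, which also covers loading the pretrained network via OpenCV's dnn module):

```python
import numpy as np

def apply_mask(image, mask, color, alpha=0.5):
    """Blend a boolean per-pixel mask onto an HxWx3 uint8 image:
    masked pixels are pushed toward `color` by factor `alpha`."""
    out = image.astype(np.float32).copy()
    for c in range(3):
        out[..., c] = np.where(mask,
                               (1 - alpha) * out[..., c] + alpha * color[c],
                               out[..., c])
    return out.astype(np.uint8)
```

In the full pipeline, one such boolean mask per detected object comes from thresholding the network's low-resolution mask output after resizing it to the object's bounding box.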


