autoencoder   265
[1812.05069] Recent Advances in Autoencoder-Based Representation Learning
Learning useful representations with little or no supervision is a key challenge in artificial intelligence. We provide an in-depth review of recent advances in representation learning with a focus on autoencoder-based models. To organize these results we make use of meta-priors believed useful for downstream tasks, such as disentanglement and hierarchical organization of features. In particular, we uncover three main mechanisms to enforce such properties, namely (i) regularizing the (approximate or aggregate) posterior distribution, (ii) factorizing the encoding and decoding distribution, or (iii) introducing a structured prior distribution. While there are some promising results, implicit or explicit supervision remains a key enabler and all current methods use strong inductive biases and modeling assumptions. Finally, we provide an analysis of autoencoder-based representation learning through the lens of rate-distortion theory and identify a clear tradeoff between the amount of prior knowledge available about the downstream tasks, and how useful the representation is for this task.
autoencoder  representation-learning  game  mmd 
january 2019 by arsyed
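(A minimal sketch of mechanism (i) from the survey above: regularizing the aggregate posterior, here with an MMD penalty against a standard normal prior as in MMD-VAE/InfoVAE-style models. The module names, layer sizes, kernel bandwidth, and weight `lam` are illustrative assumptions, not taken from the paper.)

```python
# Sketch: autoencoder with an MMD penalty pulling the aggregate posterior
# toward the N(0, I) prior. Architectures and hyperparameters are assumed.
import torch
import torch.nn as nn

def rbf_kernel(x, y, bandwidth=1.0):
    # Pairwise RBF kernel values between two sets of latent samples.
    diff = x.unsqueeze(1) - y.unsqueeze(0)              # (n, m, d)
    return torch.exp(-diff.pow(2).sum(-1) / (2 * bandwidth ** 2))

def mmd(z_q, z_p, bandwidth=1.0):
    # MMD^2 estimate between aggregate-posterior samples z_q and prior samples z_p.
    return (rbf_kernel(z_q, z_q, bandwidth).mean()
            + rbf_kernel(z_p, z_p, bandwidth).mean()
            - 2 * rbf_kernel(z_q, z_p, bandwidth).mean())

encoder = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 16))
decoder = nn.Sequential(nn.Linear(16, 256), nn.ReLU(), nn.Linear(256, 784))

def loss_fn(x, lam=10.0):
    z = encoder(x)                                      # batch of aggregate-posterior samples
    x_hat = decoder(z)
    recon = (x_hat - x).pow(2).mean()
    z_prior = torch.randn_like(z)                       # samples from the N(0, I) prior
    return recon + lam * mmd(z, z_prior)
```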
[1807.07543] Understanding and Improving Interpolation in Autoencoders via an Adversarial Regularizer
"Autoencoders provide a powerful framework for learning compressed representations by encoding all of the information needed to reconstruct a data point in a latent code. In some cases, autoencoders can "interpolate": By decoding the convex combination of the latent codes for two datapoints, the autoencoder can produce an output which semantically mixes characteristics from the datapoints. In this paper, we propose a regularization procedure which encourages interpolated outputs to appear more realistic by fooling a critic network which has been trained to recover the mixing coefficient from interpolated data. We then develop a simple benchmark task where we can quantitatively measure the extent to which various autoencoders can interpolate and show that our regularizer dramatically improves interpolation in this setting. We also demonstrate empirically that our regularizer produces latent codes which are more effective on downstream tasks, suggesting a possible link between interpolation abilities and learning useful representations."
autoencoder  regularization  gan 
july 2018 by arsyed
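(A rough sketch of the paper's adversarial interpolation regularizer: a critic tries to recover the mixing coefficient alpha from decoded interpolants, and the autoencoder is penalized when it succeeds. The encoder/decoder/critic modules, sizes, and weights `lam`/`gamma` below are assumptions for illustration, not the paper's exact architecture.)

```python
# Sketch of an ACAI-style objective under assumed fully-connected modules.
import torch
import torch.nn as nn

latent_dim = 16
encoder = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, latent_dim))
decoder = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(), nn.Linear(256, 784))
critic = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 1))

def ae_loss(x, lam=0.5):
    z = encoder(x)
    x_hat = decoder(z)
    recon = (x_hat - x).pow(2).mean()
    # Decode convex combinations of latent codes of shuffled pairs, alpha in [0, 0.5].
    alpha = torch.rand(x.size(0), 1) * 0.5
    z_mix = alpha * z + (1 - alpha) * z[torch.randperm(x.size(0))]
    x_mix = decoder(z_mix)
    # The autoencoder wants the critic to output 0 (i.e. "not interpolated").
    fool = critic(x_mix).pow(2).mean()
    return recon + lam * fool

def critic_loss(x, gamma=0.2):
    with torch.no_grad():
        z = encoder(x)
        x_hat = decoder(z)
        alpha = torch.rand(x.size(0), 1) * 0.5
        z_mix = alpha * z + (1 - alpha) * z[torch.randperm(x.size(0))]
        x_mix = decoder(z_mix)
    # The critic regresses the true alpha on interpolants and 0 on
    # (slightly blended) reconstructions.
    pred_mix = critic(x_mix)
    pred_real = critic(gamma * x + (1 - gamma) * x_hat)
    return (pred_mix - alpha).pow(2).mean() + pred_real.pow(2).mean()
```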
related tags

2017  ae  ambiguity  analysis  anomaly  art  artificial-intelligence  arxiv  attention  attributes  audio  baudelaire  bayesian  cae  captioning  celebrity  chemoinformatics  classification  clean  cleanup  cnn  code  code2pix  collaborative-filtering  collaborative  collaborative_filtering  combinatorial  concept  contactractive  content  contractive  convolution  convolutional  creativity  credit-card  dae  data_science  datascience  dataset  datasets  deep-learning  deep  deep_learning  deeplearning  denoising  detection  dev  dialogue  dilated  discrete  disentanglement  diversity  elbo  embedding  example  examples  explanation  face  fast.ai  fastai  filtering  fraud-detection  fraud  fusion  game  gan  gap  generation  generative-adversarial-networks  generative-models  gestalt  github  google  graph  gui  horse-racing  ids  image  images  inference  javascript  kaggle  keras  kolmogorov  lacuna  latent-variable  latent  layer  layers  learning  life  lilian-weng  long  lstm  machine-learning  machine  machine_learning  machinelearning  math  medium  memnn  memory  mining  ml  mmd  mnist  modernity  monitoring  movie  movies  music  musiv  netflix  network  neural-net  neural  neuralnetworks  nids  nlp  nn  normalization  nvidia  official  opensource  optimal  optimization  outlier-detection  outliers  particle  pataphysics  pca  polyvalence  postsuspicion  processing  pytorch  rankgauss  reading  recnn  recommendation  recommendations  recommender  recursive  reflexive  regularization  reinforcement-learning  renderer  rendering  representation-learning  resemblance  respect  review  rnn  sae  sdae  security  sense  sentiment  seq2seq  sequence  set  siannengai  software  stacked  structured-data  study-group  survey  suspicion  svd  swarm  sweetening  symbol  syntax  tensorflow  text  tied-weights  tied  tied_weights  traffic  transport  tree  tutorial  tutorials  ui  unthinkable  vae  variation  variational-learning  variational-methods  variational  vibe  visualisation  vq-vae  webgl  weightss  word  word2vec  writing 