nhaliday + acm + unit (77 bookmarks)

Workshop Abstract | Identifying and Understanding Deep Learning Phenomena
ICML 2019 workshop, June 15th 2019, Long Beach, CA

We solicit contributions that view the behavior of deep nets as natural phenomena, to be investigated with methods inspired by the natural sciences, such as physics, astronomy, and biology.
unit  workshop  acm  machine-learning  science  empirical  nitty-gritty  atoms  deep-learning  model-class  icml  data-science  rigor  replication  examples  ben-recht  physics 
april 2019 by nhaliday
Stat 260/CS 294: Bayesian Modeling and Inference
Topics
- Priors (conjugate, noninformative, reference)
- Hierarchical models, spatial models, longitudinal models, dynamic models, survival models
- Testing
- Model choice
- Inference (importance sampling, MCMC, sequential Monte Carlo; see the sketch after this list)
- Nonparametric models (Dirichlet processes, Gaussian processes, neutral-to-the-right processes, completely random measures)
- Decision theory and frequentist perspectives (complete class theorems, consistency, empirical Bayes)
- Experimental design
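
To make one of the listed inference methods concrete, here is a minimal random-walk Metropolis sampler. This is an illustrative sketch, not course code; the function name, target density, and step size are placeholders.

    import numpy as np

    def metropolis(log_density, x0, n_steps=10_000, step=0.5, rng=None):
        # Random-walk Metropolis: samples converge in distribution to the
        # (possibly unnormalized) target density exp(log_density).
        rng = np.random.default_rng() if rng is None else rng
        x = x0
        samples = np.empty(n_steps)
        for i in range(n_steps):
            proposal = x + step * rng.standard_normal()
            # accept with probability min(1, pi(proposal) / pi(x))
            if np.log(rng.uniform()) < log_density(proposal) - log_density(x):
                x = proposal
            samples[i] = x
        return samples

    # example: sample from a standard normal (log density -x^2/2 up to a constant)
    draws = metropolis(lambda x: -0.5 * x**2, x0=0.0)
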
unit  course  berkeley  expert  michael-jordan  machine-learning  acm  bayesian  probability  stats  lecture-notes  priors-posteriors  markov  monte-carlo  frequentist  latent-variables  decision-theory  expert-experience  confidence  sampling 
july 2017 by nhaliday
CS 731 Advanced Artificial Intelligence - Spring 2011
- statistical machine learning
- sparsity in regression
- graphical models
- exponential families
- variational methods
- MCMC
- dimensionality reduction, e.g., PCA
- Bayesian nonparametrics
- compressive sensing, matrix completion, and Johnson-Lindenstrauss
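
To make the last item concrete, here is a minimal Johnson-Lindenstrauss random projection. Again an illustrative sketch, not course code; jl_project and its defaults are made up for this example.

    import numpy as np

    def jl_project(X, k, rng=None):
        # Project the rows of X from R^d down to R^k with a random Gaussian map.
        # With k on the order of log(n) / eps^2, all pairwise distances among
        # the n rows are preserved to within a (1 +/- eps) factor w.h.p.
        rng = np.random.default_rng() if rng is None else rng
        d = X.shape[1]
        R = rng.standard_normal((d, k)) / np.sqrt(k)  # 1/sqrt(k) keeps norms unbiased
        return X @ R
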
course  lecture-notes  yoga  acm  stats  machine-learning  graphical-models  graphs  model-class  bayesian  learning-theory  sparsity  embeddings  markov  monte-carlo  norms  unit  nonparametric  compressed-sensing  matrix-factorization  features 
january 2017 by nhaliday
A Fervent Defense of Frequentist Statistics - Less Wrong
Short summary. This essay makes many points, each of which I think is worth reading, but if you are only going to understand one point I think it should be “Myth 5” below, which describes the online learning framework as a response to the claim that frequentist methods need to make strong modeling assumptions. Among other things, online learning allows me to perform the following remarkable feat: if I’m betting on horses, and I get to place bets after watching other people bet but before seeing which horse wins the race, then I can guarantee that after a relatively small number of races, I will do almost as well overall as the best other person, even if the number of other people is very large (say, 1 billion), and their performance is correlated in complicated ways.
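
The feat described here is the classic "prediction with expert advice" setup, and one standard algorithm achieving it is exponential weights (Hedge). A minimal sketch, assuming per-round losses scaled to [0, 1]; the essay gives no code, and the function name and learning-rate tuning below are textbook defaults, not from the post.

    import numpy as np

    def hedge(expert_losses, eta=None):
        # Exponential weights over N experts for T rounds.
        # expert_losses: (T, N) array with entries in [0, 1].
        # Total expected loss stays within ~sqrt(T log N) of the best expert.
        T, N = expert_losses.shape
        if eta is None:
            eta = np.sqrt(2.0 * np.log(N) / T)  # standard learning-rate tuning
        w = np.ones(N)
        per_round = np.empty(T)
        for t in range(T):
            p = w / w.sum()                       # bet in proportion to weights
            per_round[t] = p @ expert_losses[t]   # expected loss this round
            w *= np.exp(-eta * expert_losses[t])  # downweight poor performers
        return per_round
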

If you’re only going to understand two points, then also read about the frequentist version of Solomonoff induction, which is described in “Myth 6”.

...

If you are like me from, say, two years ago, you are firmly convinced that Bayesian methods are superior and that you have knockdown arguments in favor of this. If this is the case, then I hope this essay will give you an experience that I myself found life-altering: the experience of having a way of thinking that seemed unquestionably true slowly dissolve into just one of many imperfect models of reality. This experience helped me gain more explicit appreciation for the skill of viewing the world from many different angles, and of distinguishing between a very successful paradigm and reality.

If you are not like me, then you may have had the experience of bringing up one of many reasonable objections to normative Bayesian epistemology, and having it shot down by one of many “standard” arguments that seem wrong but not for easy-to-articulate reasons. I hope to lend some reprieve to those of you in this camp, by providing a collection of “standard” replies to these standard arguments.
bayesian  philosophy  stats  rhetoric  advice  debate  critique  expert  lesswrong  commentary  discussion  regularizer  essay  exposition  🤖  aphorism  spock  synthesis  clever-rats  ratty  hi-order-bits  top-n  2014  acmtariat  big-picture  acm  iidness  online-learning  lens  clarity  unit  nibble  frequentist  s:**  expert-experience  subjective-objective 
september 2016 by nhaliday
CS229T/STATS231: Statistical Learning Theory
Course by Percy Liang covering a mix of statistics, computational learning theory, and some online learning. Also surveys the state of the art in the theoretical understanding of deep learning (not much to cover, unfortunately).
yoga  stanford  course  machine-learning  stats  👳  lecture-notes  acm  kernels  learning-theory  deep-learning  frontier  init  ground-up  unit  dimensionality  vc-dimension  entropy-like  extrema  moments  online-learning  bandits  p:***  explore-exploit  advanced 
june 2016 by nhaliday

