minimax   86


Devroye, Reddad: Discrete minimax estimation with trees
"We propose a simple recursive data-based partitioning scheme which produces piecewise-constant or piecewise-linear density estimates on intervals, and show how this scheme can determine the optimal L1 minimax rate for some discrete nonparametric classes."
to:NB  density_estimation  devroye.luc  minimax  nonparametrics  statistics 
12 hours ago by cshalizi
[1908.06130] Average-Case Lower Bounds for Learning Sparse Mixtures, Robust Estimation and Semirandom Adversaries
"This paper develops several average-case reduction techniques to show new hardness results for three central high-dimensional statistics problems, implying a statistical-computational gap induced by robustness, a detection-recovery gap and a universality principle for these gaps. A main feature of our approach is to map to these problems via a common intermediate problem that we introduce, which we call Imbalanced Sparse Gaussian Mixtures. We assume the planted clique conjecture for a version of the planted clique problem where the position of the planted clique is mildly constrained, and from this obtain the following computational lower bounds: (1) a k-to-k² statistical-computational gap for robust sparse mean estimation, providing the first average-case evidence for a conjecture of Li (2017) and Balakrishnan et al. (2017); (2) a tight lower bound for semirandom planted dense subgraph, which shows that a semirandom adversary shifts the detection threshold in planted dense subgraph to the conjectured recovery threshold; and (3) a universality principle for k-to-k² gaps in a broad class of sparse mixture problems that includes many natural formulations such as the spiked covariance model.
"Our main approach is to introduce several average-case techniques to produce structured and Gaussianized versions of an input graph problem, and then to rotate these high-dimensional Gaussians by matrices carefully constructed from hyperplanes in 𝔽_r^t. For our universality result, we introduce a new method to perform an algorithmic change of measure tailored to sparse mixtures. We also provide evidence that the mild promise in our variant of planted clique does not change the complexity of the problem."
to:NB  high-dimensional_statistics  minimax  computational_complexity  statistics  theoretical_computer_science 
yesterday by cshalizi
[1901.00555] An Introductory Guide to Fano's Inequality with Applications in Statistical Estimation
"Information theory plays an indispensable role in the development of algorithm-independent impossibility results, both for communication problems and for seemingly distinct areas such as statistics and machine learning. While numerous information-theoretic tools have been proposed for this purpose, the oldest one remains arguably the most versatile and widespread: Fano's inequality. In this chapter, we provide a survey of Fano's inequality and its variants in the context of statistical estimation, adopting a versatile framework that covers a wide range of specific problems. We present a variety of key tools and techniques used for establishing impossibility results via this approach, and provide representative examples covering group testing, graphical model selection, sparse linear regression, density estimation, and convex optimization."
to:NB  information_theory  minimax  statistics  estimation  to_read 
2 days ago by cshalizi
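For reference, the form of Fano's inequality most often used in such minimax lower bounds (a standard statement, not a result specific to this survey): if V is uniform on a finite set 𝒱, Y is the observed data, and V̂ = V̂(Y) is any estimator, then

```latex
\[
  \Pr[\hat{V} \neq V] \;\ge\; 1 - \frac{I(V; Y) + \log 2}{\log |\mathcal{V}|},
\]
```

so a well-separated finite subfamily of parameters with small mutual information I(V; Y) forces a constant probability of error, which the usual estimation-to-testing reduction converts into a minimax risk lower bound.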
Kontorovich, Pinelis: Exact lower bounds for the agnostic probably-approximately-correct (PAC) machine learning model
"We provide an exact nonasymptotic lower bound on the minimax expected excess risk (EER) in the agnostic probably-approximately-correct (PAC) machine learning classification model and identify minimax learning algorithms as certain maximally symmetric and minimally randomized “voting” procedures. Based on this result, an exact asymptotic lower bound on the minimax EER is provided. This bound is of the simple form c∞/√ν as ν → ∞, where c∞ = 0.16997… is a universal constant, ν = m/d, m is the size of the training sample and d is the Vapnik–Chervonenkis dimension of the hypothesis class. It is shown that the differences between these asymptotic and nonasymptotic bounds, as well as the differences between these two bounds and the maximum EER of any learning algorithms that minimize the empirical risk, are asymptotically negligible, and all these differences are due to ties in the mentioned “voting” procedures. A few easy to compute nonasymptotic lower bounds on the minimax EER are also obtained, which are shown to be close to the exact asymptotic lower bound c∞/√ν even for rather small values of the ratio ν = m/d. As an application of these results, we substantially improve existing lower bounds on the tail probability of the excess risk. Among the tools used are Bayes estimation and apparently new identities and inequalities for binomial distributions."
to:NB  learning_theory  statistics  minimax  kontorovich.aryeh  kith_and_kin  to_read 
18 days ago by cshalizi
JoeDog/byron: A multi-game tic-tac-toe game with running score and a race against the clock. Features a M.E.N.A.C.E. engine
github  java  tic-tac-toe  minimax  montecarlo 
5 weeks ago by snahor
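M.E.N.A.C.E. (Michie's Matchbox Educable Noughts And Crosses Engine) learns by reinforcement rather than by search: each position is a matchbox holding beads, one colour per legal move, and beads are added or removed depending on the game's outcome. A minimal sketch of the idea (my simplification, not the JoeDog/byron code; real MENACE uses stage-dependent bead counts and resigns when a box empties):

```python
import random
from collections import defaultdict

class Menace:
    """Toy MENACE-style learner: each board state is a 'matchbox' holding
    beads for every legal move; a win adds beads, a loss removes them."""
    def __init__(self, init_beads=3):
        self.boxes = defaultdict(dict)  # state -> {move: bead count}
        self.init_beads = init_beads
        self.history = []               # (state, move) pairs this game

    def choose(self, state, legal_moves):
        box = self.boxes[state]
        for m in legal_moves:
            box.setdefault(m, self.init_beads)
        # draw a bead: each move is picked with probability ∝ its bead count
        beads = [m for m in legal_moves for _ in range(box[m])]
        move = random.choice(beads)
        self.history.append((state, move))
        return move

    def learn(self, reward):
        """reward: +1 win, 0 draw, -1 loss; applied to every move played."""
        for state, move in self.history:
            box = self.boxes[state]
            box[move] = max(1, box[move] + reward)  # never empty a box fully
        self.history = []
```

After many games, moves that tend to lead to wins accumulate beads and are drawn more often, which is the whole learning mechanism.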
Tic Tac Toe
cwoebker's blog - insights on technology and science
minimax  python 
may 2016 by Sidon
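The full-depth minimax that tic-tac-toe write-ups like this one implement can be sketched in a few lines (a generic sketch, not cwoebker's code; the board here is a 9-cell list of 'X', 'O', or None, with values scored from X's point of view):

```python
# Winning triples of cell indices on the 3x3 board.
LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
         (0, 3, 6), (1, 4, 7), (2, 5, 8),
         (0, 4, 8), (2, 4, 6)]

def winner(board):
    for a, b, c in LINES:
        if board[a] is not None and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player):
    """Game value with `player` to move under perfect play:
    +1 if X wins, -1 if O wins, 0 for a draw."""
    w = winner(board)
    if w == 'X':
        return 1
    if w == 'O':
        return -1
    moves = [i for i, cell in enumerate(board) if cell is None]
    if not moves:
        return 0  # board full: draw
    values = []
    for i in moves:
        board[i] = player                      # try the move
        values.append(minimax(board, 'O' if player == 'X' else 'X'))
        board[i] = None                        # undo (backtrack)
    return max(values) if player == 'X' else min(values)

def best_move(board, player):
    """Move maximizing (for X) or minimizing (for O) the game value."""
    def value_after(i):
        board[i] = player
        v = minimax(board, 'O' if player == 'X' else 'X')
        board[i] = None
        return v
    moves = [i for i, cell in enumerate(board) if cell is None]
    return (max if player == 'X' else min)(moves, key=value_after)
```

Alpha-beta pruning (also tagged here) is the standard refinement: the same recursion, but branches that cannot affect the max/min are cut off early.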


