lower-bounds   50

Applications of computational learning theory in the cognitive sciences - Psychology & Neuroscience Stack Exchange
1. Gold's theorem on the unlearnability in the limit of certain sets of languages, among them context-free ones.

2. Ronald de Wolf's master's thesis on the impossibility of PAC-learning context-free languages.

The first made quite a stir in the poverty-of-the-stimulus debate, while the second has gone unnoticed by cognitive science.
q-n-a  stackex  psychology  cog-psych  learning  learning-theory  machine-learning  PAC  lower-bounds  no-go  language  linguistics  models  fall-2015 
28 days ago by nhaliday
[1805.04625] Strong Converse using Change of Measure Arguments
The strong converse for a coding theorem shows that the optimal asymptotic rate possible with vanishing error cannot be improved by allowing a fixed error. Building on a method introduced by Gu and Effros for centralized coding problems, we develop a general and simple recipe for proving strong converse that is applicable for distributed problems as well. Heuristically, our proof of strong converse mimics the standard steps for proving a weak converse, except that we apply those steps to a modified distribution obtained by conditioning the original distribution on the event that no error occurs. A key component of our recipe is the replacement of the hard Markov constraints implied by the distributed nature of the problem with a soft information cost using a variational formula introduced by Oohama. We illustrate our method by providing a short proof of the strong converse for the Wyner-Ziv problem and new strong converse theorems for interactive function computation, common randomness and secret key agreement, and the wiretap channel.
papers  to-read  heard-the-talk  information-theory  lower-bounds 
july 2018 by mraginsky
Complexity no Bar to AI - Gwern.net
Critics of AI risk suggest that diminishing returns to computing (formalized asymptotically) mean AI will be weak; this argument relies on a large number of questionable premises, ignores additional resources, constant factors, and nonlinear returns to small intelligence advantages, and is highly unlikely. (computer science, transhumanism, AI, R)
created: 1 June 2014; modified: 01 Feb 2018; status: finished; confidence: likely; importance: 10
ratty  gwern  analysis  faq  ai  risk  speedometer  intelligence  futurism  cs  computation  complexity  tcs  linear-algebra  nonlinearity  convexity-curvature  average-case  adversarial  article  time-complexity  singularity  iteration-recursion  magnitude  multiplicative  lower-bounds  no-go  performance  hardware  humanity  psychology  cog-psych  psychometrics  iq  distribution  moments  complement-substitute  hanson  ems  enhancement  parable  detail-architecture  universalism-particularism  neuro  ai-control  environment  climate-change  threat-modeling  security  theory-practice  hacker  academia  realness  crypto  rigorous-crypto  usa  government 
april 2018 by nhaliday
Correlated Equilibria in Game Theory | Azimuth
Given this, it’s not surprising that Nash equilibria can be hard to find. Last September a paper came out making this precise, in a strong way:

• Yakov Babichenko and Aviad Rubinstein, Communication complexity of approximate Nash equilibria.

The authors show there’s no guaranteed method for players to find even an approximate Nash equilibrium unless they tell each other almost everything about their preferences. This makes finding a Nash equilibrium prohibitively difficult when there are lots of players… in general. There are particular games where it’s not difficult, and that makes these games important: for example, if you’re trying to run a government well. (A laughable notion these days, but still one can hope.)

Klarreich’s article in Quanta gives a nice readable account of this work and also a more practical alternative to the concept of Nash equilibrium. It’s called a ‘correlated equilibrium’, and it was invented by the mathematician Robert Aumann in 1974. You can see an attempt to define it here:
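A correlated equilibrium is easy to check numerically: a mediator draws an action profile from a shared distribution and privately tells each player only their own recommended action; the equilibrium condition is that obeying the recommendation is always a best response. A minimal sketch, using the standard game of Chicken with an assumed textbook payoff matrix (not taken from the post; C = swerve, D = dare):

```python
# Numeric check of a correlated equilibrium in the game of Chicken.
# Payoffs are an assumed textbook parameterization, for illustration only.
payoff = {  # row player's payoff; the game is symmetric
    ("D", "D"): 0, ("D", "C"): 7,
    ("C", "D"): 2, ("C", "C"): 6,
}

# Aumann's device: the mediator draws one of three profiles uniformly
# and tells each player only their own action.
dist = {("D", "C"): 1/3, ("C", "D"): 1/3, ("C", "C"): 1/3}

def is_correlated_equilibrium(dist, payoff, actions=("C", "D")):
    """Check the obedience constraints: conditional on any recommendation,
    no player gains by deviating from it."""
    for player in (0, 1):
        for rec in actions:
            # Profiles consistent with this player receiving `rec`.
            consistent = {p: q for p, q in dist.items() if p[player] == rec}
            total = sum(consistent.values())
            if total == 0:
                continue  # recommendation never issued

            def expected(action):
                s = 0.0
                for prof, q in consistent.items():
                    other = prof[1 - player]
                    row, col = (action, other) if player == 0 else (other, action)
                    # Symmetric game: the column player's payoff equals the
                    # row payoff of the transposed profile.
                    own = payoff[(row, col)] if player == 0 else payoff[(col, row)]
                    s += (q / total) * own
                return s

            if any(expected(dev) > expected(rec) + 1e-12 for dev in actions):
                return False
    return True

print(is_correlated_equilibrium(dist, payoff))  # → True
```

With these payoffs the mediator's distribution gives each player an expected payoff of 5, higher than the symmetric mixed Nash equilibrium of the same game, which is the standard motivation for the concept.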
baez  org:bleg  nibble  mathtariat  commentary  summary  news  org:mag  org:sci  popsci  equilibrium  GT-101  game-theory  acm  conceptual-vocab  concept  definition  thinking  signaling  coordination  tcs  complexity  communication-complexity  lower-bounds  no-go  liner-notes  big-surf  papers  research  algorithmic-econ  volo-avolo 
july 2017 by nhaliday
Predicting the outcomes of organic reactions via machine learning: are current descriptors sufficient? | Scientific Reports
As machine learning/artificial intelligence algorithms are defeating chess masters and, most recently, GO champions, there is interest – and hope – that they will prove equally useful in assisting chemists in predicting outcomes of organic reactions. This paper demonstrates, however, that the applicability of machine learning to the problems of chemical reactivity over diverse types of chemistries remains limited – in particular, with the currently available chemical descriptors, fundamental mathematical theorems impose upper bounds on the accuracy with which reaction yields and times can be predicted. Improving the performance of machine-learning methods calls for the development of fundamentally new chemical descriptors.
study  org:nat  papers  machine-learning  chemistry  measurement  volo-avolo  lower-bounds  analysis  realness  speedometer  nibble  🔬  applications  frontier  state-of-art  no-go  accuracy  interdisciplinary 
july 2017 by nhaliday
Energy of Seawater Desalination
0.66 kcal/liter is the minimum energy required to desalinate one liter of seawater, regardless of the technology applied to the process.
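The figure is a pure thermodynamic bound: in the limit of vanishing recovery, the minimum work per liter of fresh water extracted equals seawater's osmotic pressure times the extracted volume. A back-of-envelope check, with seawater's total osmolarity taken as an assumed typical value (~1.1 osmol/L):

```python
# Back-of-envelope check of the 0.66 kcal/L minimum-energy figure.
# Minimum work ~ osmotic pressure * volume (van 't Hoff, zero-recovery limit).
R = 8.314        # J/(mol*K), gas constant
T = 298.0        # K, ~25 degrees C
c_osm = 1.1      # mol/L, assumed total osmolarity of dissolved ions in seawater

c_particles = c_osm * 1000               # mol/m^3
pi_osmotic = c_particles * R * T         # Pa, osmotic pressure (~27 atm)
w_min_j_per_liter = pi_osmotic * 1e-3    # J/L, since 1 L = 1e-3 m^3
w_min_kcal_per_liter = w_min_j_per_liter / 4184

print(f"osmotic pressure ~ {pi_osmotic / 101325:.1f} atm")
print(f"minimum work ~ {w_min_kcal_per_liter:.2f} kcal/L")  # ~0.65 kcal/L
```

This lands within a few percent of the quoted 0.66 kcal/L, confirming it as a thermodynamic lower bound rather than a property of any particular desalination technology.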
infrastructure  explanation  physics  thermo  objektbuch  data  lower-bounds  chemistry  the-world-is-just-atoms  geoengineering  phys-energy  nibble  oceans  h2o  applications  estimate  🔬  energy-resources  biophysical-econ  stylized-facts  ideas  fluid  volo-avolo 
february 2017 by nhaliday
