nhaliday + time-complexity   14

Complexity no Bar to AI - Gwern.net
Critics of AI risk suggest that diminishing returns to computing (formalized asymptotically) mean AI will be weak; this argument relies on a large number of questionable premises, ignores additional resources, constant factors, and nonlinear returns to small intelligence advantages, and is highly unlikely to hold. (computer science, transhumanism, AI, R)
created: 1 June 2014; modified: 1 Feb 2018; status: finished; confidence: likely; importance: 10
ratty  gwern  analysis  faq  ai  risk  speedometer  intelligence  futurism  cs  computation  complexity  tcs  linear-algebra  nonlinearity  convexity-curvature  average-case  adversarial  article  time-complexity  singularity  iteration-recursion  magnitude  multiplicative  lower-bounds  no-go  performance  hardware  humanity  psychology  cog-psych  psychometrics  iq  distribution  moments  complement-substitute  hanson  ems  enhancement  parable  detail-architecture  universalism-particularism  neuro  ai-control  environment  climate-change  threat-modeling  security  theory-practice  hacker  academia  realness  crypto  rigorous-crypto  usa  government 
april 2018 by nhaliday
Rank aggregation basics: Local Kemeny optimisation | David R. MacIver
This turns our problem from a global search into a local one: we can start from any point in the search space and search locally by swapping adjacent pairs until we hit a minimum. This turns out to be quite easy to do. _We basically run insertion sort_: at step n we have the first n items in a locally Kemeny-optimal order. Swap the (n+1)th item backwards until the majority ranks its predecessor before it. This ensures all adjacent pairs are in the majority order, so swapping any of them would result in a greater-than-or-equal Kemeny distance. This is of course an O(n^2) algorithm. In fact, merely finding a locally Kemeny-optimal solution can be done in O(n log(n)) (for much the same reason that you can sort better than insertion sort): take the directed graph of majority votes (a tournament) and find a Hamiltonian path in it. The nice thing about the above version of the algorithm is that it gives you a lot of control over where you start your search.
techtariat  liner-notes  papers  tcs  algorithms  machine-learning  acm  optimization  approximation  local-global  orders  graphs  graph-theory  explanation  iteration-recursion  time-complexity  nibble 
september 2017 by nhaliday
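The insertion-sort pass MacIver describes can be sketched as follows. This is a minimal illustration, not his code: `majority_prefers` is a hypothetical comparator (True iff a majority of voters rank `a` ahead of `b`), and the three-voter electorate is made-up toy data.

```python
def locally_kemeny_order(items, majority_prefers):
    """One insertion-sort pass yielding a locally Kemeny-optimal order.

    majority_prefers(a, b) is an assumed comparator: True iff a majority
    of voters rank a ahead of b.
    """
    order = []
    for item in items:
        order.append(item)
        i = len(order) - 1
        # Swap the new item backwards until the majority agrees with its
        # predecessor. Afterwards every adjacent pair is in the majority
        # order, so no single adjacent swap can lower the Kemeny distance.
        while i > 0 and majority_prefers(order[i], order[i - 1]):
            order[i - 1], order[i] = order[i], order[i - 1]
            i -= 1
    return order

# Toy electorate (hypothetical data): three voters' full rankings.
votes = [[1, 2, 3], [1, 2, 3], [3, 1, 2]]

def majority_prefers(a, b):
    # a beats b if it appears earlier in a strict majority of rankings
    return sum(v.index(a) < v.index(b) for v in votes) > len(votes) / 2

print(locally_kemeny_order([3, 2, 1], majority_prefers))  # -> [1, 2, 3]
```

Note this only guarantees a *local* optimum: every adjacent pair agrees with the majority, but a non-adjacent reordering could still score better, which is consistent with exact Kemeny aggregation being hard in general.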
How to Escape Saddle Points Efficiently – Off the convex path
A core, emerging problem in nonconvex optimization involves the escape of saddle points. While recent research has shown that gradient descent (GD) generically escapes saddle points asymptotically (see Rong Ge's and Ben Recht's blog posts), the critical open problem is one of efficiency: is GD able to move past saddle points quickly, or can it be slowed down significantly? How does the rate of escape scale with the ambient dimensionality? In this post, we describe our recent work with Rong Ge, Praneeth Netrapalli and Sham Kakade, which provides the first provable positive answer to the efficiency question, showing that, rather surprisingly, GD augmented with suitable perturbations escapes saddle points efficiently; indeed, in terms of rate and dimension dependence it is almost as if the saddle points aren't there!
acmtariat  org:bleg  nibble  liner-notes  machine-learning  acm  optimization  gradient-descent  local-global  off-convex  time-complexity  random  perturbation  michael-jordan  iterative-methods  research  learning-theory  math.DS  iteration-recursion 
july 2017 by nhaliday
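The idea of "GD augmented with suitable perturbations" can be illustrated on a toy strict-saddle objective. This is a hedged sketch, not the paper's algorithm or its guarantees: the objective f(x, y) = (x² − 1)² + y² (saddle at the origin, minima at (±1, 0)), the gradient-norm trigger, and the kick radius are all hypothetical stand-ins.

```python
import numpy as np

def f(w):
    # Toy strict-saddle objective (assumed for illustration):
    # saddle at (0, 0), global minima at (+1, 0) and (-1, 0).
    x, y = w
    return (x**2 - 1)**2 + y**2

def grad(w):
    x, y = w
    return np.array([4 * x * (x**2 - 1), 2 * y])

def perturbed_gd(w0, lr=0.05, steps=500, g_tol=1e-3, radius=1e-2, seed=0):
    """Gradient descent plus a small random kick whenever the gradient
    is tiny, so the iterate cannot stall at a saddle point."""
    rng = np.random.default_rng(seed)
    w = np.array(w0, dtype=float)
    for _ in range(steps):
        g = grad(w)
        if np.linalg.norm(g) < g_tol:
            # Near a stationary point: perturb uniformly in a small box.
            w = w + rng.uniform(-radius, radius, size=2)
        else:
            w = w - lr * g
    return w

# Plain GD started exactly at the saddle never moves (the gradient is
# exactly zero there); the perturbed variant drifts off along the
# negative-curvature direction and converges to a minimum.
```

Starting from the saddle (0, 0), the perturbed run ends near one of the minima (±1, 0), while unperturbed GD stays frozen at the saddle with f = 1. The paper's actual result is far stronger (near-dimension-free rates); this sketch only shows the qualitative mechanism.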

bundles : tcs

related tags

aaronson  academia  acm  acmtariat  adversarial  ai  ai-control  algorithms  analogy  analysis  approximation  article  atoms  average-case  big-list  big-picture  big-surf  chart  climate-change  cog-psych  comparison  complement-substitute  complexity  composition-decomposition  computation  concentration-of-measure  concept  convexity-curvature  crypto  cs  curvature  data-science  detail-architecture  distribution  DP  elegance  ems  engineering  enhancement  environment  evidence  expectancy  explanation  exposition  faq  feynman  futurism  game-theory  games  government  gradient-descent  graph-theory  graphs  greedy  ground-up  gwern  hacker  hanson  hardness  hardware  hi-order-bits  hierarchy  hmm  humanity  hypothesis-testing  ideas  integral  intelligence  iq  iteration-recursion  iterative-methods  knowledge  learning-theory  limits  linear-algebra  liner-notes  list  local-global  lower-bounds  machine-learning  magnitude  markov  math.CO  math.DS  math.NT  mathtariat  mechanics  metabuch  metameta  methodology  michael-jordan  models  moments  monte-carlo  motivation  multi  multiplicative  neuro  nibble  no-go  nonlinearity  nonparametric  objektbuch  off-convex  open-problems  optimization  orders  org:bleg  overflow  papers  parable  parametric  pdf  performance  perturbation  physics  polynomials  probability  project  proofs  pseudorandomness  psychology  psychometrics  puzzles  q-n-a  qra  quantum  quantum-info  questions  quora  rand-complexity  random  ratty  realness  rec-math  reference  research  rigorous-crypto  risk  s:***  scale  sebastien-bubeck  security  simulation  singularity  skeleton  slides  space-complexity  speedometer  stats  street-fighting  structure  synthesis  talks  tcs  tcstariat  techtariat  theory-practice  threat-modeling  tidbits  time-complexity  top-n  tricki  universalism-particularism  usa  volo-avolo  wiki  yoga 
