nhaliday + complexity (123 bookmarks)

Theory of Self-Reproducing Automata - John von Neumann
Fourth Lecture: THE ROLE OF HIGH AND OF EXTREMELY HIGH COMPLICATION

Comparisons between computing machines and the nervous system. Estimates of size for computing machines, present and near future.

Estimates of size for the human central nervous system. Excursus about the “mixed” character of living organisms. Analog and digital elements. Observations about the “mixed” character of all componentry, artificial as well as natural. Interpretation of the position to be taken with respect to these.

Evaluation of the discrepancy in size between artificial and natural automata. Interpretation of this discrepancy in terms of physical factors. Nature of the materials used.

The probability of the presence of other intellectual factors. The role of complication and the theoretical penetration that it requires.

Questions of reliability and errors reconsidered. Probability of individual errors and length of procedure. Typical lengths of procedure for computing machines and for living organisms--that is, for artificial and for natural automata. Upper limits on acceptable probability of error in individual operations. Compensation by checking and self-correcting features.

Differences of principle in the way in which errors are dealt with in artificial and in natural automata. The “single error” principle in artificial automata. Crudeness of our approach in this case, due to the lack of adequate theory. More sophisticated treatment of this problem in natural automata: The role of the autonomy of parts. Connections between this autonomy and evolution.

- ~10^10 neurons in the brain vs. ~10^4 vacuum tubes in the largest computer at the time
- machines much faster per element: ~5 ms from neuron potential to neuron potential vs. ~10^-3 ms for a vacuum tube (quick arithmetic below)
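A quick sanity check of the two estimates above (my arithmetic; the only inputs are the four numbers already quoted):

```python
# Fermi-style comparison of von Neumann's mid-century estimates (illustrative).
neurons = 1e10          # neurons in the human brain
tubes = 1e4             # vacuum tubes in the largest computer of the era

neuron_cycle_s = 5e-3   # ~5 ms from neuron potential to neuron potential
tube_cycle_s = 1e-6     # ~10^-3 ms = 1 microsecond per tube switching time

print(f"size ratio (brain/machine):  {neurons / tubes:.0e}")               # ~1e+06
print(f"speed ratio (machine/brain): {neuron_cycle_s / tube_cycle_s:.0e}") # ~5e+03
# Crude element-updates/sec: the brain's parallelism offsets its slow parts.
print(f"brain:   {neurons / neuron_cycle_s:.0e} updates/s")   # ~2e+12
print(f"machine: {tubes / tube_cycle_s:.0e} updates/s")       # ~1e+10
```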

https://en.wikipedia.org/wiki/John_von_Neumann#Computing
pdf  article  papers  essay  nibble  math  cs  computation  bio  neuro  neuro-nitgrit  scale  magnitude  comparison  acm  von-neumann  giants  thermo  phys-energy  speed  performance  time  density  frequency  hardware  ems  efficiency  dirty-hands  street-fighting  fermi  estimate  retention  physics  interdisciplinary  multi  wiki  links  people  🔬  atoms  automata  duplication  iteration-recursion  turing  complexity  measure  nature  technology  complex-systems  bits  information-theory  circuits  robust  structure  composition-decomposition  evolution  mutation  axioms  analogy  thinking  input-output  hi-order-bits  coding-theory  flexibility  rigidity 
april 2018 by nhaliday
Complexity no Bar to AI - Gwern.net
Critics of AI risk suggest that diminishing returns to computing (formalized asymptotically) mean AI will be weak; this argument relies on a large number of questionable premises, ignores additional resources, constant factors, and nonlinear returns to small intelligence advantages, and is highly unlikely to hold. (computer science, transhumanism, AI, R)
created: 1 June 2014; modified: 1 Feb 2018; status: finished; confidence: likely; importance: 10
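The "diminishing returns, formalized asymptotically" point is easy to make concrete; a sketch of the standard arithmetic (illustrative, not from the essay):

```python
import math

# How much bigger a problem does 1000x more compute buy, if size n costs f(n)?
budget = 1000.0

print("O(n^2): n grows by a factor of", math.sqrt(budget))      # ~31.6x
print("O(n^3): n grows by a factor of", budget ** (1 / 3))      # 10x
print("O(2^n): n grows only additively, by", math.log2(budget)) # ~+10
# Asymptotics bound *relative* growth; they are silent on whether a constant
# head start or a nonlinear return to a small advantage is decisive.
```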
ratty  gwern  analysis  faq  ai  risk  speedometer  intelligence  futurism  cs  computation  complexity  tcs  linear-algebra  nonlinearity  convexity-curvature  average-case  adversarial  article  time-complexity  singularity  iteration-recursion  magnitude  multiplicative  lower-bounds  no-go  performance  hardware  humanity  psychology  cog-psych  psychometrics  iq  distribution  moments  complement-substitute  hanson  ems  enhancement  parable  detail-architecture  universalism-particularism  neuro  ai-control  environment  climate-change  threat-modeling  security  theory-practice  hacker  academia  realness  crypto  rigorous-crypto  usa  government 
april 2018 by nhaliday
AI-complete - Wikipedia
In the field of artificial intelligence, the most difficult problems are informally known as AI-complete or AI-hard, implying that the difficulty of these computational problems is equivalent to that of solving the central artificial intelligence problem—making computers as intelligent as people, or strong AI.[1] To call a problem AI-complete reflects an attitude that it would not be solved by a simple specific algorithm.

AI-complete problems are hypothesised to include computer vision, natural language understanding, and dealing with unexpected circumstances while solving any real world problem.[2]

Currently, AI-complete problems cannot be solved with modern computer technology alone; solving them would also require human computation. This property can be useful, for instance, to test for the presence of humans (as with CAPTCHAs) and in computer security to circumvent brute-force attacks.[3][4]

...

AI-complete problems are hypothesised to include:

Bongard problems
Computer vision (and subproblems such as object recognition)
Natural language understanding (and subproblems such as text mining, machine translation, and word sense disambiguation[8])
Dealing with unexpected circumstances while solving any real world problem, whether it's navigation or planning or even the kind of reasoning done by expert systems.

...

Current AI systems can solve very simple and/or restricted versions of AI-complete problems, but never in their full generality. When AI researchers attempt to "scale up" their systems to handle more complicated, real-world situations, the programs tend to become excessively brittle without commonsense knowledge or a rudimentary understanding of the situation: they fail as unexpected circumstances outside of their original problem context begin to appear. When human beings are dealing with new situations in the world, they are helped immensely by the fact that they know what to expect: they know what all things around them are, why they are there, what they are likely to do, and so on. They can recognize unusual situations and adjust accordingly. A machine without strong AI has no other skills to fall back on.[9]
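The "completeness" in the name is a reduction claim: an oracle for any one AI-complete problem would give you strong AI, and strong AI trivially solves each of them. A minimal sketch of that framing (every name here is hypothetical; none of these solvers exist):

```python
from typing import Callable

# Hypothetical oracle type: human-level performance on arbitrary described tasks.
StrongAI = Callable[[str, str], str]  # (task description, input) -> output

def machine_translation(oracle: StrongAI, text: str) -> str:
    # Easy direction: strong AI solves any AI-complete problem outright.
    return oracle("translate English to French", text)

def strong_ai_from_nlu(nlu: Callable[[str], str]) -> StrongAI:
    # Contentious direction: full natural language understanding would let
    # you pose any task in language, recovering a strong-AI oracle.
    def oracle(task: str, data: str) -> str:
        return nlu(f"Task: {task}\nInput: {data}")
    return oracle
```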
concept  reduction  cs  computation  complexity  wiki  reference  properties  computer-vision  ai  risk  ai-control  machine-learning  deep-learning  language  nlp  order-disorder  tactics  strategy  intelligence  humanity  speculation  crux 
march 2018 by nhaliday
Correlated Equilibria in Game Theory | Azimuth
Given this, it’s not surprising that Nash equilibria can be hard to find. Last September a paper came out making this precise, in a strong way:

• Yakov Babichenko and Aviad Rubinstein, Communication complexity of approximate Nash equilibria.

The authors show there’s no guaranteed method for players to find even an approximate Nash equilibrium unless they tell each other almost everything about their preferences. This makes the Nash equilibrium prohibitively difficult to find when there are lots of players… in general. There are particular games where it’s not difficult, and that makes these games important: for example, if you’re trying to run a government well. (A laughable notion these days, but still one can hope.)

Klarreich’s article in Quanta gives a nice readable account of this work and also a more practical alternative to the concept of Nash equilibrium. It’s called a ‘correlated equilibrium’, and it was invented by the mathematician Robert Aumann in 1974. You can see an attempt to define it here:
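Aumann's textbook illustration is Chicken with a traffic-light-style mediator; a small numerical check (the payoff numbers are the standard example, not from the post):

```python
import itertools

# Chicken, payoffs as (row, col): C = swerve ("chicken"), D = dare.
payoff = {("C", "C"): (4, 4), ("C", "D"): (1, 5),
          ("D", "C"): (5, 1), ("D", "D"): (0, 0)}

# Symmetric mixed Nash: each dares with prob 1/2 -> expected payoff 2.5.
p = {"C": 0.5, "D": 0.5}
nash = sum(p[a] * p[b] * payoff[a, b][0]
           for a, b in itertools.product("CD", repeat=2))

# Correlated equilibrium: mediator draws (C,C), (C,D), (D,C) uniformly and
# privately tells each player only their own recommended action.
mu = {("C", "C"): 1/3, ("C", "D"): 1/3, ("D", "C"): 1/3, ("D", "D"): 0.0}
corr = sum(q * payoff[s][0] for s, q in mu.items())

# Obedience check (row player; the game and mu are symmetric): conditioned
# on each recommendation, following it must be a best response.
for rec, dev in (("C", "D"), ("D", "C")):
    cond = {b: mu[rec, b] for b in "CD"}
    z = sum(cond.values())
    follow = sum(cond[b] * payoff[rec, b][0] for b in "CD") / z
    deviate = sum(cond[b] * payoff[dev, b][0] for b in "CD") / z
    assert follow >= deviate - 1e-12

print(f"mixed Nash: {nash}, correlated: {corr:.3f}")  # 2.5 vs ~3.333
```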
baez  org:bleg  nibble  mathtariat  commentary  summary  news  org:mag  org:sci  popsci  equilibrium  GT-101  game-theory  acm  conceptual-vocab  concept  definition  thinking  signaling  coordination  tcs  complexity  communication-complexity  lower-bounds  no-go  liner-notes  big-surf  papers  research  algorithmic-econ  volo-avolo 
july 2017 by nhaliday
Talks
Quantum Supremacy: Office of Science and Technology Policy QIS Forum, Eisenhower Executive Office Building, White House Complex, Washington DC, October 18, 2016. Another version at UTCS Faculty Lunch, October 26, 2016. Another version at UT Austin Physics Colloquium, Austin, TX, November 9, 2016.

Complexity-Theoretic Foundations of Quantum Supremacy Experiments: Quantum Algorithms Workshop, Aspen Center for Physics, Aspen, CO, March 25, 2016

When Exactly Do Quantum Computers Provide A Speedup?: Yale Quantum Institute Seminar, Yale University, New Haven, CT, October 10, 2014. Another version at UT Austin Physics Colloquium, Austin, TX, November 19, 2014; Applied and Interdisciplinary Mathematics Seminar, Northeastern University, Boston, MA, November 25, 2014; Hebrew University Physics Colloquium, Jerusalem, Israel, January 5, 2015; Computer Science Colloquium, Technion, Haifa, Israel, January 8, 2015; Stanford University Physics Colloquium, January 27, 2015
tcstariat  aaronson  tcs  complexity  quantum  quantum-info  talks  list  slides  accretion  algorithms  applications  physics  nibble  frontier  computation  volo-avolo  speedometer  questions 
may 2017 by nhaliday
Equivalence between counting and sampling
also: every (self-reducible) counting problem either has an FPTRAS (fully polynomial-time randomized approximation scheme) or admits no approximation to within any polynomial factor
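One direction of the equivalence, for self-reducible problems: an exact counting oracle yields a uniform sampler by fixing one bit at a time with the right bias. A minimal sketch with a toy count oracle (my own illustration, not from the notes):

```python
import random

def uniform_sample(n, count):
    # Given count(prefix) = |{x in S : x extends prefix}| for a set S of
    # n-bit strings, draw a uniform element of S via self-reducibility.
    prefix = ""
    for _ in range(n):
        c0, c1 = count(prefix + "0"), count(prefix + "1")
        # extend by '1' with probability c1 / (c0 + c1)
        prefix += "1" if random.random() * (c0 + c1) < c1 else "0"
    return prefix

# Toy demo: S = even-parity strings, where the count has a closed form.
def count_even_parity(prefix, n=4):
    free = n - len(prefix)
    if free > 0:
        return 2 ** (free - 1)  # exactly half of all completions fix parity
    return 1 if prefix.count("1") % 2 == 0 else 0

print(uniform_sample(4, count_even_parity))  # uniform over the 8 even-parity strings
```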
pdf  exposition  lecture-notes  berkeley  nibble  tcs  counting  sampling  characterization  complexity  approximation  rand-approx  proofs 
february 2017 by nhaliday
6.896: Essential Coding Theory
- probabilistic method and Chernoff bound for Shannon's coding theorem
- probabilistic method for asymptotically good codes in Hamming distance (Gilbert–Varshamov bound; see the sketch below)
- sparsity used for LDPC codes
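A tiny numerical illustration of the greedy/probabilistic argument behind the Gilbert–Varshamov bound (toy parameters; my sketch, not course material):

```python
import itertools, math

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def gilbert_greedy(n, d):
    # Greedily keep any word at distance >= d from everything chosen so far;
    # on termination, radius-(d-1) balls around codewords cover {0,1}^n.
    code = []
    for w in itertools.product((0, 1), repeat=n):
        if all(hamming(w, c) >= d for c in code):
            code.append(w)
    return code

n, d = 10, 3
code = gilbert_greedy(n, d)
ball = sum(math.comb(n, i) for i in range(d))  # |Hamming ball of radius d-1|
print(f"greedy code size: {len(code)}, GV bound 2^n/ball: {2**n / ball:.1f}")
# The covering argument guarantees len(code) >= 2^n / ball (here ~18.3).
```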
mit  course  yoga  tcs  complexity  coding-theory  math.AG  fields  polynomials  pigeonhole-markov  linear-algebra  probabilistic-method  lecture-notes  bits  sparsity  concentration-of-measure  linear-programming  linearity  expanders  hamming  pseudorandomness  crypto  rigorous-crypto  communication-complexity  no-go  madhu-sudan  shannon  unit  p:**  quixotic 
february 2017 by nhaliday
Shtetl-Optimized » Blog Archive » Logicians on safari
So what are they then? Maybe it’s helpful to think of them as “quantitative epistemology”: discoveries about the capacities of finite beings like ourselves to learn mathematical truths. On this view, the theoretical computer scientist is basically a mathematical logician on a safari to the physical world: someone who tries to understand the universe by asking what sorts of mathematical questions can and can’t be answered within it. Not whether the universe is a computer, but what kind of computer it is! Naturally, this approach to understanding the world tends to appeal most to people for whom math (and especially discrete math) is reasonably clear, whereas physics is extremely mysterious.

the sequel: http://www.scottaaronson.com/blog/?p=153
tcstariat  aaronson  tcs  computation  complexity  aphorism  examples  list  reflection  philosophy  multi  summary  synthesis  hi-order-bits  interdisciplinary  lens  big-picture  survey  nibble  org:bleg  applications  big-surf  s:*  p:whenever  ideas  elegance 
january 2017 by nhaliday
Computational Complexity: Favorite Theorems: The Yao Principle
The Yao Principle applies when we don't consider the algorithmic complexity of the players. For example, in communication complexity we have two players who each have a separate half of an input string, and they want to compute some function of the input with the minimum amount of communication between them. The Yao principle states that the best probabilistic strategies for the players will achieve exactly the same communication bounds as the best deterministic strategy over a worst-case distribution of inputs.

The Yao Principle plays a smaller role where we measure the running time of an algorithm, since applying the Principle would require solving an extremely large linear program. But since so many of our bounds are in information-based models like communication and decision-tree complexity, the Yao Principle, though not particularly complicated, plays an important role in a large number of lower-bound results in our field.
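Yao's principle is von Neumann's minimax theorem applied to the cost matrix whose rows are deterministic algorithms and whose columns are inputs. A toy check that the two sides agree, via two small LPs (illustrative matrix; the "algorithms" and "inputs" are hypothetical):

```python
import numpy as np
from scipy.optimize import linprog

# M[i, j] = cost of deterministic algorithm i on input j (toy numbers).
M = np.array([[3.0, 1.0, 4.0],
              [2.0, 5.0, 2.0],
              [4.0, 2.0, 1.0]])
m, n = M.shape

# Randomized side: min over distributions p on rows of max_j (p^T M)_j.
# Variables (p, v); minimize v subject to (p^T M)_j <= v, sum(p) = 1.
r1 = linprog(np.r_[np.zeros(m), 1.0],
             A_ub=np.c_[M.T, -np.ones(n)], b_ub=np.zeros(n),
             A_eq=[np.r_[np.ones(m), 0.0]], b_eq=[1.0],
             bounds=[(0, None)] * m + [(None, None)])

# Distributional side: max over distributions q on columns of min_i (M q)_i.
# Variables (q, w); maximize w subject to (M q)_i >= w, sum(q) = 1.
r2 = linprog(np.r_[np.zeros(n), -1.0],
             A_ub=np.c_[-M, np.ones(m)], b_ub=np.zeros(m),
             A_eq=[np.r_[np.ones(n), 0.0]], b_eq=[1.0],
             bounds=[(0, None)] * n + [(None, None)])

print(r1.fun, -r2.fun)  # equal: the value of the zero-sum game (Yao/minimax)
```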
tcstariat  tcs  complexity  adversarial  rand-approx  algorithms  game-theory  yoga  levers  communication-complexity  random  lower-bounds  average-case  nibble  org:bleg 
january 2017 by nhaliday
Shtetl-Optimized » Blog Archive » Why I Am Not An Integrated Information Theorist (or, The Unconscious Expander)
In my opinion, how to construct a theory that tells us which physical systems are conscious and which aren’t—giving answers that agree with “common sense” whenever the latter renders a verdict—is one of the deepest, most fascinating problems in all of science. Since I don’t know a standard name for the problem, I hereby call it the Pretty-Hard Problem of Consciousness. Unlike with the Hard Hard Problem, I don’t know of any philosophical reason why the Pretty-Hard Problem should be inherently unsolvable; but on the other hand, humans seem nowhere close to solving it (if we had solved it, then we could reduce the abortion, animal rights, and strong AI debates to “gentlemen, let us calculate!”).

Now, I regard IIT as a serious, honorable attempt to grapple with the Pretty-Hard Problem of Consciousness: something concrete enough to move the discussion forward. But I also regard IIT as a failed attempt on the problem. And I wish people would recognize its failure, learn from it, and move on.

In my view, IIT fails to solve the Pretty-Hard Problem because it unavoidably predicts vast amounts of consciousness in physical systems that no sane person would regard as particularly “conscious” at all: indeed, systems that do nothing but apply a low-density parity-check code, or other simple transformations of their input data. Moreover, IIT predicts not merely that these systems are “slightly” conscious (which would be fine), but that they can be unboundedly more conscious than humans are.

To justify that claim, I first need to define Φ. Strikingly, despite the large literature about Φ, I had a hard time finding a clear mathematical definition of it—one that not only listed formulas but fully defined the structures that the formulas were talking about. Complicating matters further, there are several competing definitions of Φ in the literature, including Φ_DM (discrete memoryless), Φ_E (empirical), and Φ_AR (autoregressive), which apply in different contexts (e.g., some take time evolution into account and others don’t). Nevertheless, I think I can define Φ in a way that will make sense to theoretical computer scientists. And crucially, the broad point I want to make about Φ won’t depend much on the details of its formalization anyway.

We consider a discrete system in a state x = (x_1, …, x_n) ∈ S^n, where S is a finite alphabet (the simplest case is S = {0,1}). We imagine that the system evolves via an “updating function” f: S^n → S^n. Then the question that interests us is whether the x_i’s can be partitioned into two sets A and B, of roughly comparable size, such that the updates to the variables in A don’t depend very much on the variables in B and vice versa. If such a partition exists, then we say that the computation of f does not involve “global integration of information,” which on Tononi’s theory is a defining aspect of consciousness.
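The partition test in that last paragraph is easy to operationalize for toy systems. A minimal sketch (hypothetical dependency structure; this captures only the "global integration" intuition, not any of the published Φ variants):

```python
from itertools import combinations

# Toy stand-in for f: S^n -> S^n, recorded as deps[i] = variables that the
# update of x_i reads. Hypothetical example chosen to be nearly decomposable.
deps = {0: {0, 1}, 1: {0, 1}, 2: {2, 3}, 3: {2, 3}, 4: {4, 0}}
n = len(deps)

def cross_deps(A):
    # Dependencies crossing the cut (A, B): updates in A that read B, and vice versa.
    B = set(range(n)) - A
    return (sum(len(deps[i] & B) for i in A)
            + sum(len(deps[i] & A) for i in B))

# Search (roughly) balanced bipartitions; a near-zero minimum means the system
# splits into independent halves - no global integration of information.
best = min((cross_deps(set(A)), sorted(A))
           for A in combinations(range(n), n // 2))
print("min cross-cut dependencies:", best)  # (0, [2, 3]) for this toy system
```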
aaronson  tcstariat  philosophy  dennett  interdisciplinary  critique  nibble  org:bleg  within-without  the-self  neuro  psychology  cog-psych  metrics  nitty-gritty  composition-decomposition  complex-systems  cybernetics  bits  information-theory  entropy-like  forms-instances  empirical  walls  arrows  math.DS  structure  causation  quantitative-qualitative  number  extrema  optimization  abstraction  explanation  summary  degrees-of-freedom  whole-partial-many  network-structure  systematic-ad-hoc  tcs  complexity  hardness  no-go  computation  measurement  intricacy  examples  counterexample  coding-theory  linear-algebra  fields  graphs  graph-theory  expanders  math  math.CO  properties  local-global  intuition  error  definition  coupling-cohesion 
january 2017 by nhaliday