nhaliday + automation + war + long-short-run   2

The Hanson-Yudkowsky AI-Foom Debate - Machine Intelligence Research Institute
How Deviant Recent AI Progress Lumpiness?: http://www.overcomingbias.com/2018/03/how-deviant-recent-ai-progress-lumpiness.html
I seem to disagree with most people working on artificial intelligence (AI) risk. While, like them, I expect rapid change once AI is powerful enough to replace almost all human workers, I expect this change to be spread across the world, not concentrated in one main localized AI system. The efforts of AI risk folks to design AI systems whose values won’t drift might stop global AI value drift if there is just one main AI system. But doing so in a world of many AI systems at similar ability levels requires strong global governance of AI systems, which is a tall order anytime soon. Their continued focus on preventing single-system drift suggests that they expect a single main AI system.

The main reason I understand to expect relatively local AI progress is if AI progress is unusually lumpy, i.e., arriving in fewer, larger packages rather than in the usual many smaller packages. If one AI team finds a big lump, it might jump way ahead of the other teams.

However, we have a vast literature on the lumpiness of research and innovation more generally, which clearly says that usually most of the value in innovation is found in many small innovations. We have so far seen this in computer science (CS) and AI as well, even if there have been historical examples where much value was found in particular big innovations, such as nuclear weapons or the origin of humans.

Apparently many people associated with AI risk, including the star machine learning (ML) researchers whom they often idolize, find it intuitively plausible that AI and ML progress is exceptionally lumpy. Such researchers often say, “My project is ‘huge’, and will soon do it all!” A decade ago my ex-co-blogger Eliezer Yudkowsky and I argued here on this blog about our differing estimates of AI progress lumpiness. He recently offered AlphaGo Zero as evidence of AI lumpiness:


In this post, let me give another example (beyond two big lumps in a row) of what could change my mind. I offer a clear observable indicator, for which data should be available now: deviant citation lumpiness in recent ML research. One standard measure of research impact is citations; bigger, lumpier developments gain more citations than smaller ones. And it turns out that the lumpiness of citations is remarkably constant across research fields! See this March 3 paper in Science:
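One way to make "citation lumpiness" concrete is a top-share statistic: what fraction of a field's citations go to its top 1% of papers. The sketch below is a toy illustration of that kind of measure, not the Science paper's actual method; the lognormal citation model and its parameters are assumptions for demonstration only. It shows how such a measure can come out the same for fields of very different scale, since lumpiness depends on the distribution's shape, not its size.

```python
import numpy as np

rng = np.random.default_rng(1)

def top_share(citations, frac=0.01):
    """Fraction of all citations captured by the top `frac` of papers."""
    c = np.sort(np.asarray(citations))[::-1]
    k = max(1, int(len(c) * frac))
    return c[:k].sum() / c.sum()

# Toy comparison: two 'fields' with lognormal citation counts of very
# different typical magnitude but the same shape parameter (sigma) yield
# nearly identical top-1% shares -- lumpiness tracks shape, not scale.
field_a = rng.lognormal(mean=1.0, sigma=1.2, size=100_000)
field_b = rng.lognormal(mean=3.0, sigma=1.2, size=100_000)
print(round(top_share(field_a), 3), round(top_share(field_b), 3))
```

Under these assumed parameters both shares land near 0.13, so a field with genuinely lumpier (heavier-tailed) progress would stand out as a deviation from that common value.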

I Still Don’t Get Foom: http://www.overcomingbias.com/2014/07/30855.html
All of which makes it look like I’m the one with the problem; everyone else gets it. Even so, I’m gonna try to explain my problem again, in the hope that someone can explain where I’m going wrong. Here goes.

“Intelligence” just means an ability to do mental/calculation tasks, averaged over many tasks. I’ve always found it plausible that machines will continue to do more kinds of mental tasks better, and eventually be better at pretty much all of them. But what I’ve found it hard to accept is a “local explosion.” This is where a single machine, built by a single project using only a tiny fraction of world resources, goes in a short time (e.g., weeks) from being so weak that it is usually beaten by a single human with the usual tools, to so powerful that it easily takes over the entire world. Yes, smarter machines may greatly increase overall economic growth rates, and yes such growth may be uneven. But this degree of unevenness seems implausibly extreme. Let me explain.

If we count by economic value, humans now do most of the mental tasks worth doing. Evolution has given us a brain chock-full of useful well-honed modules. And the fact that most mental tasks require the use of many modules is enough to explain why some of us are smarter than others. (There’d be a common “g” factor in task performance even with independent module variation.) Our modules aren’t that different from those of other primates, but because ours are different enough to allow lots of cultural transmission of innovation, we’ve out-competed other primates handily.
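The parenthetical claim above — that a common "g" factor can emerge even when modules vary independently — can be checked with a small simulation. This is a minimal sketch under assumed parameters (50 independent modules, each task drawing on a random subset of 20), not a model from the post itself:

```python
import numpy as np

rng = np.random.default_rng(0)
n_people, n_modules, n_tasks = 2000, 50, 20

# Each person's module qualities vary independently: no general factor
# is built into the model.
modules = rng.normal(size=(n_people, n_modules))

# Each task uses a random subset of many modules; performance is their mean.
tasks = np.stack(
    [modules[:, rng.choice(n_modules, size=20, replace=False)].mean(axis=1)
     for _ in range(n_tasks)],
    axis=1,
)

# Because any two tasks share overlapping module subsets, all pairwise
# task correlations come out positive -- exactly the pattern a common
# "g" factor is inferred from.
corr = np.corrcoef(tasks, rowvar=False)
off_diag = corr[~np.eye(n_tasks, dtype=bool)]
print(off_diag.min() > 0, round(off_diag.mean(), 2))
```

With these assumptions the average inter-task correlation is substantial and every pairwise correlation is positive, even though no underlying general ability exists.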

We’ve had computers for over seventy years, and have slowly built up libraries of software modules for them. Like brains, computers do mental tasks by combining modules. An important mental task is software innovation: improving these modules, adding new ones, and finding new ways to combine them. Ideas for new modules are sometimes inspired by the modules we see in our brains. When an innovation team finds an improvement, they usually sell access to it, which gives them resources for new projects, and lets others take advantage of their innovation.


In Bostrom’s graph above, the line for an initially small project and system has a much higher slope, meaning that in a short time it becomes vastly better at software innovation than the entire rest of the world put together. And my key question is: how could it plausibly do that? Since the rest of the world is already trying the best it can to usefully innovate, and to abstract to promote such innovation, what exactly gives one small project such a huge advantage to let it innovate so much faster?


In fact, most software innovation seems to be driven by hardware advances, instead of innovator creativity. Apparently, good ideas are available but must usually wait until hardware is cheap enough to support them.

Yes, sometimes architectural choices have wider impacts. But I was an artificial intelligence researcher for nine years, ending twenty years ago, and I never saw an architecture choice make a huge difference, relative to other reasonable architecture choices. For most big systems, overall architecture matters a lot less than getting lots of detail right. Researchers have long wandered the space of architectures, mostly rediscovering variations on what others found before.

Some hope that a small project could be much better at innovation because it specializes in that topic, and much better understands new theoretical insights into the basic nature of innovation or intelligence. But I don’t think those are actually topics where one can usefully specialize much, or where we’ll find much useful new theory. To be much better at learning, the project would instead have to be much better at hundreds of specific kinds of learning. Which is very hard to do in a small project.

What does Bostrom say? Alas, not much. He distinguishes several advantages of digital over human minds, but all software shares those advantages. Bostrom also distinguishes five paths: better software, brain emulation (i.e., ems), biological enhancement of humans, brain-computer interfaces, and better human organizations. He doesn’t think interfaces would work, and sees organizations and better biology as only playing supporting roles.


Similarly, while you might imagine someday standing in awe in front of a superintelligence that embodies all the power of a new age, superintelligence just isn’t the sort of thing that one project could invent. As “intelligence” is just the name we give to being better at many mental tasks by using many good mental modules, there’s no one place to improve it. So I can’t see a plausible way one project could increase its intelligence vastly faster than could the rest of the world.

Takeoff speeds: https://sideways-view.com/2018/02/24/takeoff-speeds/
Futurists have argued for years about whether the development of AGI will look more like a breakthrough within a small group (“fast takeoff”), or a continuous acceleration distributed across the broader economy or a large firm (“slow takeoff”).

I currently think a slow takeoff is significantly more likely. This post explains some of my reasoning and why I think it matters. Mostly the post lists arguments I often hear for a fast takeoff and explains why I don’t find them compelling.

(Note: this is not a post about whether an intelligence explosion will occur. That seems very likely to me. Quantitatively I expect it to go along these lines. So e.g. while I disagree with many of the claims and assumptions in Intelligence Explosion Microeconomics, I don’t disagree with the central thesis or with most of the arguments.)
ratty  lesswrong  subculture  miri-cfar  ai  risk  ai-control  futurism  books  debate  hanson  big-yud  prediction  contrarianism  singularity  local-global  speed  speedometer  time  frontier  distribution  smoothness  shift  pdf  economics  track-record  abstraction  analogy  links  wiki  list  evolution  mutation  selection  optimization  search  iteration-recursion  intelligence  metameta  chart  analysis  number  ems  coordination  cooperate-defect  death  values  formal-values  flux-stasis  philosophy  farmers-and-foragers  malthus  scale  studying  innovation  insight  conceptual-vocab  growth-econ  egalitarianism-hierarchy  inequality  authoritarianism  wealth  near-far  rationality  epistemic  biases  cycles  competition  arms  zero-positive-sum  deterrence  war  peace-violence  winner-take-all  technology  moloch  multi  plots  research  science  publishing  humanity  labor  marginal  urban-rural  structure  composition-decomposition  complex-systems  gregory-clark  decentralized  heavy-industry  magnitude  multiplicative  endogenous-exogenous  models  uncertainty  decision-theory  time-prefer 
april 2018 by nhaliday
Interview: Mostly Sealing Wax | West Hunter

- conformity and Google, defense and spying (China knows prob almost all our "secrets")
- in the past you could just find new things faster than people could reverse-engineer. part of the problem is that innovation is slowing down today (part of the reason for convergence by China/developing world).
- introgression from archaics of various kinds
- mutational load and IQ, wrath of khan neanderthal
- trade and antiquity (not that useful besides ideas tbh), Roman empire, disease, smallpox
- spices needed to be grown elsewhere, but besides that...
- analogy: caste system in India (why no Brahmin car repairmen?), slavery in Greco-Roman times, more water mills in medieval times (rivers better in north, but still could have done it), new elite not liking getting hands dirty, low status of engineers, rise of finance
- crookery in finance, hedge fund edge might be substantially insider trading
- long-term wisdom of moving all manufacturing to China...?
- economic myopia: British financialization before WW1 vis-a-vis Germany. North vs. South and cotton/industry, camels in Middle East vs. wagons in Europe
- Western medicine easier to convert to science than Eastern, pseudoscience and wrong theories better than bag of recipes
- Greeks definitely knew some things that were lost (eg, line in Pliny makes reference to combinatorics calculation rediscovered by German dude much later. think he's referring to Catalan numbers?), Lucio Russo book
- Indo-Europeans, Western Europe, Amerindians, India, British Isles, gender, disease, and conquest
- no farming (Dark Age), then why were people still farming on Shetland Islands north of Scotland?
- "symbolic" walls, bodies with arrows
- family stuff, children learning, talking dog, memory and aging
- Chinese/Japanese writing difficulty and children learning to read
- Hatfield-McCoy feud: the McCoy family was actually a case study in a neurological journal. they had anger management issues because of cancers of their adrenal gland (!!).
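On the Pliny/combinatorics aside in the notes above: if the rediscovered calculation really is the Catalan numbers (the note itself is unsure), they have a simple closed form worth jotting down for reference. A minimal sketch:

```python
from math import comb

def catalan(n):
    """n-th Catalan number via the closed form C(n) = binom(2n, n) / (n + 1)."""
    return comb(2 * n, n) // (n + 1)

# The sequence counts, e.g., the ways to fully parenthesize n + 1 factors:
# 1, 2, 5, 14, 42, 132, 429, 1430, ...
print([catalan(n) for n in range(1, 9)])
```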

the Chinese know...: https://macropolo.org/casting-off-real-beijings-cryptic-warnings-finance-taking-economy/
Over the last couple of years, a cryptic idiom has crept into the way China’s top leaders talk about risks in the country’s financial system: tuo shi xiang xu (脱实向虚), which loosely translates as “casting off the real for the empty.” Premier Li Keqiang warned against it at his press conference at the end of the 2016 National People’s Congress (NPC). At this year’s NPC, Li inserted this very expression into his annual work report. And in April, while on an inspection tour of Guangxi, President Xi Jinping used the term, saying that China must “unceasingly promote industrial modernization, raise the level of manufacturing, and not allow the real to be cast off for the empty.”

Such an odd turn of phrase is easy to overlook, but it betrays concerns about a significant shift in the way that China’s economy works. What Xi and Li were warning against is typically called financialization in developed economies. It’s when “real” companies—industrial firms, manufacturers, utility companies, property developers, and anyone else that produces a tangible product or service—take their money and, rather than put it back into their businesses, invest it in “empty”, or speculative, assets. It occurs when the returns on financial investments outstrip those in the real economy, leading to a disproportionate amount of money being routed into the financial system.

Bad day for Lehman Bros.
Good day for everyone else, then.
west-hunter  interview  audio  podcast  econotariat  cracker-econ  westminster  culture-war  polarization  tech  sv  google  info-dynamics  business  multi  military  security  scitariat  intel  error  government  defense  critique  rant  race  clown-world  patho-altruism  history  mostly-modern  cold-war  russia  technology  innovation  stagnation  being-right  archaics  gene-flow  sapiens  genetics  the-trenches  thinking  sequential  similarity  genomics  bioinformatics  explanation  europe  asia  china  migration  evolution  recent-selection  immune  atmosphere  latin-america  ideas  sky  developing-world  embodied  africa  MENA  genetic-load  unintended-consequences  iq  enhancement  aDNA  gedanken  mutation  QTL  missing-heritability  tradeoffs  behavioral-gen  biodet  iron-age  mediterranean  the-classics  trade  gibbon  disease  parasites-microbiome  demographics  population  urban  transportation  efficiency  cost-benefit  india  agriculture  impact  status  class  elite  vampire-squid  analogy  finance  higher-ed  trends  rot  zeitgeist  🔬  hsu  stories  aphorism  crooked  realne 
may 2017 by nhaliday
