
An adaptability limit to climate change due to heat stress
Despite the uncertainty in future climate-change impacts, it is often assumed that humans would be able to adapt to any possible warming. Here we argue that heat stress imposes a robust upper limit to such adaptation. Peak heat stress, quantified by the wet-bulb temperature TW, is surprisingly similar across diverse climates today. TW never exceeds 31 °C. Any exceedence of 35 °C for extended periods should induce hyperthermia in humans and other mammals, as dissipation of metabolic heat becomes impossible. While this never happens now, it would begin to occur with global-mean warming of about 7 °C, calling the habitability of some regions into question. With 11–12 °C warming, such regions would spread to encompass the majority of the human population as currently distributed. Eventual warmings of 12 °C are possible from fossil fuel burning. One implication is that recent estimates of the costs of unmitigated climate change are too low unless the range of possible warming can somehow be narrowed. Heat stress also may help explain trends in the mammalian fossil record.
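
To make TW concrete, here is a minimal sketch using Stull's (2011) empirical wet-bulb approximation; the formula choice is mine, not the paper's:

```python
import math

def wet_bulb_stull(T, RH):
    """Approximate wet-bulb temperature TW (degC) from air temperature T (degC)
    and relative humidity RH (percent), per Stull (2011); valid roughly for
    T in 0-50 degC and RH in 5-99%."""
    return (T * math.atan(0.151977 * math.sqrt(RH + 8.313659))
            + math.atan(T + RH) - math.atan(RH - 1.676331)
            + 0.00391838 * RH**1.5 * math.atan(0.023101 * RH)
            - 4.686035)

print(wet_bulb_stull(35, 50))  # ~26.6 degC: hot but survivable
print(wet_bulb_stull(45, 50))  # ~35.2 degC: past the hyperthermia threshold
```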

Trajectories of the Earth System in the Anthropocene: http://www.pnas.org/content/early/2018/07/31/1810141115
We explore the risk that self-reinforcing feedbacks could push the Earth System toward a planetary threshold that, if crossed, could prevent stabilization of the climate at intermediate temperature rises and cause continued warming on a “Hothouse Earth” pathway even as human emissions are reduced. Crossing the threshold would lead to a much higher global average temperature than any interglacial in the past 1.2 million years and to sea levels significantly higher than at any time in the Holocene. We examine the evidence that such a threshold might exist and where it might be.
study  org:nat  environment  climate-change  humanity  existence  risk  futurism  estimate  physics  thermo  prediction  temperature  nature  walls  civilization  flexibility  rigidity  embodied  multi  manifolds  plots  equilibrium  phase-transition  oscillation  comparison  complex-systems  earth 
august 2018 by nhaliday
Surveil things, not people – The sideways view
Technology may reach a point where free use of one person’s share of humanity’s resources is enough to easily destroy the world. I think society needs to make significant changes to cope with that scenario.

Mass surveillance is a natural response, and sometimes people think of it as the only response. I find mass surveillance pretty unappealing, but I think we can capture almost all of the value by surveilling things rather than surveilling people. This approach avoids some of the worst problems of mass surveillance; while it still has unattractive features it’s my favorite option so far.

...

The idea
We’ll choose a set of artifacts to surveil and restrict. I’ll call these heavy technology and everything else light technology. Our goal is to restrict as few things as possible, but we want to make sure that someone can’t cause unacceptable destruction with only light technology. By default something is light technology if it can be easily acquired by an individual or small group in 2017, and heavy technology otherwise (though we may need to make some exceptions, e.g. certain biological materials or equipment).

Heavy technology is subject to two rules:

1. You can’t use heavy technology in a way that is unacceptably destructive.
2. You can’t use heavy technology to undermine the machinery that enforces these two rules.

To enforce these rules, all heavy technology is under surveillance, and is situated such that it cannot be unilaterally used by any individual or small group. That is, individuals can own heavy technology, but they cannot have unmonitored physical access to that technology.

...

This proposal does give states a de facto monopoly on heavy technology, and would eventually make armed resistance totally impossible. But it’s already the case that states have a massive advantage in armed conflict, and it seems almost inevitable that progress in AI will make this advantage larger (and enable states to do much more with it). Realistically I’m not convinced this proposal makes things much worse than the default.

This proposal definitely expands regulators’ nominal authority and seems prone to abuses. But amongst candidates for handling a future with cheap and destructive dual-use technology, I feel this is the best of many bad options with respect to the potential for abuse.
ratty  acmtariat  clever-rats  risk  existence  futurism  technology  policy  alt-inst  proposal  government  intel  authoritarianism  orwellian  tricks  leviathan  security  civilization  ai  ai-control  arms  defense  cybernetics  institutions  law  unintended-consequences  civil-liberty  volo-avolo  power  constraint-satisfaction  alignment 
april 2018 by nhaliday
Complexity no Bar to AI - Gwern.net
Critics of AI risk suggest diminishing returns to computing (formalized asymptotically) means AI will be weak; this argument relies on a large number of questionable premises and ignoring additional resources, constant factors, and nonlinear returns to small intelligence advantages, and is highly unlikely. (computer science, transhumanism, AI, R)
created: 1 June 2014; modified: 01 Feb 2018; status: finished; confidence: likely; importance: 10
ratty  gwern  analysis  faq  ai  risk  speedometer  intelligence  futurism  cs  computation  complexity  tcs  linear-algebra  nonlinearity  convexity-curvature  average-case  adversarial  article  time-complexity  singularity  iteration-recursion  magnitude  multiplicative  lower-bounds  no-go  performance  hardware  humanity  psychology  cog-psych  psychometrics  iq  distribution  moments  complement-substitute  hanson  ems  enhancement  parable  detail-architecture  universalism-particularism  neuro  ai-control  environment  climate-change  threat-modeling  security  theory-practice  hacker  academia  realness  crypto  rigorous-crypto  usa  government 
april 2018 by nhaliday
Harnessing Evolution - with Bret Weinstein | Virtual Futures Salon - YouTube
- ways to get out of Malthusian conditions: expansion to new frontiers, new technology, redistribution/theft
- some discussion of existential risk
- wants to change humanity's "purpose" to one that would be safe in the long run; important thing is it has to be an ESS, i.e. evolutionarily stable (maybe he wants a singleton?)
- not too impressed by transhumanism (wouldn't identify with a brain emulation)
video  interview  thiel  expert-experience  evolution  deep-materialism  new-religion  sapiens  cultural-dynamics  anthropology  evopsych  sociality  ecology  flexibility  biodet  behavioral-gen  self-interest  interests  moloch  arms  competition  coordination  cooperate-defect  frontier  expansionism  technology  efficiency  thinking  redistribution  open-closed  zero-positive-sum  peace-violence  war  dominant-minority  hypocrisy  dignity  sanctity-degradation  futurism  environment  climate-change  time-preference  long-short-run  population  scale  earth  hidden-motives  game-theory  GT-101  free-riding  innovation  leviathan  malthus  network-structure  risk  existence  civil-liberty  authoritarianism  tribalism  us-them  identity-politics  externalities  unintended-consequences  internet  social  media  pessimism  universalism-particularism  energy-resources  biophysical-econ  politics  coalitions  incentives  attention  epistemic  biases  blowhards  teaching  education  emotion  impetus  comedy  expression-survival  economics  farmers-and-foragers  ca 
april 2018 by nhaliday
The Hanson-Yudkowsky AI-Foom Debate - Machine Intelligence Research Institute
How Deviant Recent AI Progress Lumpiness?: http://www.overcomingbias.com/2018/03/how-deviant-recent-ai-progress-lumpiness.html
I seem to disagree with most people working on artificial intelligence (AI) risk. While with them I expect rapid change once AI is powerful enough to replace most all human workers, I expect this change to be spread across the world, not concentrated in one main localized AI system. The efforts of AI risk folks to design AI systems whose values won’t drift might stop global AI value drift if there is just one main AI system. But doing so in a world of many AI systems at similar abilities levels requires strong global governance of AI systems, which is a tall order anytime soon. Their continued focus on preventing single system drift suggests that they expect a single main AI system.

The main reason that I understand to expect relatively local AI progress is if AI progress is unusually lumpy, i.e., arriving in unusually fewer larger packages rather than in the usual many smaller packages. If one AI team finds a big lump, it might jump way ahead of the other teams.

However, we have a vast literature on the lumpiness of research and innovation more generally, which clearly says that usually most of the value in innovation is found in many small innovations. We have also so far seen this in computer science (CS) and AI. Even if there have been historical examples where much value was found in particular big innovations, such as nuclear weapons or the origin of humans.

Apparently many people associated with AI risk, including the star machine learning (ML) researchers that they often idolize, find it intuitively plausible that AI and ML progress is exceptionally lumpy. Such researchers often say, “My project is ‘huge’, and will soon do it all!” A decade ago my ex-co-blogger Eliezer Yudkowsky and I argued here on this blog about our differing estimates of AI progress lumpiness. He recently offered AlphaGo Zero as evidence of AI lumpiness:

...

In this post, let me give another example (beyond two big lumps in a row) of what could change my mind. I offer a clear observable indicator, for which data should be available now: deviant citation lumpiness in recent ML research. One standard measure of research impact is citations; bigger lumpier developments gain more citations than smaller ones. And it turns out that the lumpiness of citations is remarkably constant across research fields! See this March 3 paper in Science:
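
Hanson's indicator is easy to operationalize given per-paper citation counts. A minimal sketch of two standard lumpiness statistics, run on hypothetical lognormal data:

```python
import numpy as np

def lumpiness(citations, top=0.01):
    """Two simple lumpiness measures for per-paper citation counts:
    share of all citations captured by the top `top` fraction of papers,
    and the Gini coefficient of the distribution."""
    c = np.sort(np.asarray(citations, dtype=float))  # ascending
    k = max(1, int(len(c) * top))
    top_share = c[-k:].sum() / c.sum()
    n = len(c)
    gini = 2 * np.sum(np.arange(1, n + 1) * c) / (n * c.sum()) - (n + 1) / n
    return top_share, gini

# Hypothetical heavy-tailed field (lognormal citation counts):
rng = np.random.default_rng(0)
print(lumpiness(rng.lognormal(mean=1.0, sigma=1.5, size=10_000)))
```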

I Still Don’t Get Foom: http://www.overcomingbias.com/2014/07/30855.html
All of which makes it look like I’m the one with the problem; everyone else gets it. Even so, I’m gonna try to explain my problem again, in the hope that someone can explain where I’m going wrong. Here goes.

“Intelligence” just means an ability to do mental/calculation tasks, averaged over many tasks. I’ve always found it plausible that machines will continue to do more kinds of mental tasks better, and eventually be better at pretty much all of them. But what I’ve found it hard to accept is a “local explosion.” This is where a single machine, built by a single project using only a tiny fraction of world resources, goes in a short time (e.g., weeks) from being so weak that it is usually beat by a single human with the usual tools, to so powerful that it easily takes over the entire world. Yes, smarter machines may greatly increase overall economic growth rates, and yes such growth may be uneven. But this degree of unevenness seems implausibly extreme. Let me explain.

If we count by economic value, humans now do most of the mental tasks worth doing. Evolution has given us a brain chock-full of useful well-honed modules. And the fact that most mental tasks require the use of many modules is enough to explain why some of us are smarter than others. (There’d be a common “g” factor in task performance even with independent module variation.) Our modules aren’t that different from those of other primates, but because ours are different enough to allow lots of cultural transmission of innovation, we’ve out-competed other primates handily.
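
The parenthetical claim checks out in simulation: give people independent module qualities, let tasks draw on overlapping subsets of modules, and a positive manifold with a dominant first factor appears. A toy sketch (all parameters assumed):

```python
import numpy as np

rng = np.random.default_rng(42)
n_people, n_modules, n_tasks = 2000, 50, 20

# Module qualities vary independently: no general factor is built in.
modules = rng.normal(size=(n_people, n_modules))

# Each task draws on a random ~30% subset of modules; a person's score on a
# task is the sum of their qualities on the modules that task uses.
masks = (rng.random((n_tasks, n_modules)) < 0.3).astype(float)
scores = modules @ masks.T

# Tasks overlap in which modules they use, so scores correlate positively
# and the first eigenvalue of the correlation matrix dominates -- a "g".
corr = np.corrcoef(scores, rowvar=False)
eigvals = np.linalg.eigvalsh(corr)[::-1]
print("mean inter-task correlation:", corr[np.triu_indices(n_tasks, 1)].mean())
print("variance share of first component:", eigvals[0] / n_tasks)
```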

We’ve had computers for over seventy years, and have slowly built up libraries of software modules for them. Like brains, computers do mental tasks by combining modules. An important mental task is software innovation: improving these modules, adding new ones, and finding new ways to combine them. Ideas for new modules are sometimes inspired by the modules we see in our brains. When an innovation team finds an improvement, they usually sell access to it, which gives them resources for new projects, and lets others take advantage of their innovation.

...

In Bostrom’s graph above the line for an initially small project and system has a much higher slope, which means that it becomes in a short time vastly better at software innovation. Better than the entire rest of the world put together. And my key question is: how could it plausibly do that? Since the rest of the world is already trying the best it can to usefully innovate, and to abstract to promote such innovation, what exactly gives one small project such a huge advantage to let it innovate so much faster?

...

In fact, most software innovation seems to be driven by hardware advances, instead of innovator creativity. Apparently, good ideas are available but must usually wait until hardware is cheap enough to support them.

Yes, sometimes architectural choices have wider impacts. But I was an artificial intelligence researcher for nine years, ending twenty years ago, and I never saw an architecture choice make a huge difference, relative to other reasonable architecture choices. For most big systems, overall architecture matters a lot less than getting lots of detail right. Researchers have long wandered the space of architectures, mostly rediscovering variations on what others found before.

Some hope that a small project could be much better at innovation because it specializes in that topic, and much better understands new theoretical insights into the basic nature of innovation or intelligence. But I don’t think those are actually topics where one can usefully specialize much, or where we’ll find much useful new theory. To be much better at learning, the project would instead have to be much better at hundreds of specific kinds of learning. Which is very hard to do in a small project.

What does Bostrom say? Alas, not much. He distinguishes several advantages of digital over human minds, but all software shares those advantages. Bostrom also distinguishes five paths: better software, brain emulation (i.e., ems), biological enhancement of humans, brain-computer interfaces, and better human organizations. He doesn’t think interfaces would work, and sees organizations and better biology as only playing supporting roles.

...

Similarly, while you might imagine someday standing in awe in front of a super intelligence that embodies all the power of a new age, superintelligence just isn’t the sort of thing that one project could invent. As “intelligence” is just the name we give to being better at many mental tasks by using many good mental modules, there’s no one place to improve it. So I can’t see a plausible way one project could increase its intelligence vastly faster than could the rest of the world.

Takeoff speeds: https://sideways-view.com/2018/02/24/takeoff-speeds/
Futurists have argued for years about whether the development of AGI will look more like a breakthrough within a small group (“fast takeoff”), or a continuous acceleration distributed across the broader economy or a large firm (“slow takeoff”).

I currently think a slow takeoff is significantly more likely. This post explains some of my reasoning and why I think it matters. Mostly the post lists arguments I often hear for a fast takeoff and explains why I don’t find them compelling.

(Note: this is not a post about whether an intelligence explosion will occur. That seems very likely to me. Quantitatively I expect it to go along these lines. So e.g. while I disagree with many of the claims and assumptions in Intelligence Explosion Microeconomics, I don’t disagree with the central thesis or with most of the arguments.)
ratty  lesswrong  subculture  miri-cfar  ai  risk  ai-control  futurism  books  debate  hanson  big-yud  prediction  contrarianism  singularity  local-global  speed  speedometer  time  frontier  distribution  smoothness  shift  pdf  economics  track-record  abstraction  analogy  links  wiki  list  evolution  mutation  selection  optimization  search  iteration-recursion  intelligence  metameta  chart  analysis  number  ems  coordination  cooperate-defect  death  values  formal-values  flux-stasis  philosophy  farmers-and-foragers  malthus  scale  studying  innovation  insight  conceptual-vocab  growth-econ  egalitarianism-hierarchy  inequality  authoritarianism  wealth  near-far  rationality  epistemic  biases  cycles  competition  arms  zero-positive-sum  deterrence  war  peace-violence  winner-take-all  technology  moloch  multi  plots  research  science  publishing  humanity  labor  marginal  urban-rural  structure  composition-decomposition  complex-systems  gregory-clark  decentralized  heavy-industry  magnitude  multiplicative  endogenous-exogenous  models  uncertainty  decision-theory  time-prefer 
april 2018 by nhaliday
Eternity in six hours: intergalactic spreading of intelligent life and sharpening the Fermi paradox
We do this by demonstrating that traveling between galaxies – indeed even launching a colonisation project for the entire reachable universe – is a relatively simple task for a star-spanning civilization, requiring modest amounts of energy and resources. We start by demonstrating that humanity itself could likely accomplish such a colonisation project in the foreseeable future, should we want to, and then demonstrate that there are millions of galaxies that could have reached us by now, using similar methods. This results in a considerable sharpening of the Fermi paradox.
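
For a sense of the "modest amounts of energy" claim, a back-of-envelope sketch with assumed probe mass and speed (my figures, not the paper's exact scenario):

```python
# Relativistic kinetic energy of one 1 kg probe at 0.8c, compared with
# one second of total solar output.
c = 2.998e8                               # speed of light, m/s
m, v = 1.0, 0.8 * 2.998e8                 # assumed probe mass (kg) and speed
gamma = 1.0 / (1.0 - (v / c) ** 2) ** 0.5
E_probe = (gamma - 1.0) * m * c**2        # ~6e16 J
L_sun = 3.846e26                          # solar luminosity, W
print(f"energy per probe: {E_probe:.2e} J")
print(f"launches funded by 1 s of solar output: {L_sun / E_probe:.1e}")
# ~6e9 launches per second for a full Dyson swarm -- reaching millions of
# galaxies is energetically cheap for a star-spanning civilization.
```
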
pdf  study  article  essay  anthropic  fermi  space  expansionism  bostrom  ratty  philosophy  xenobio  ideas  threat-modeling  intricacy  time  civilization  🔬  futurism  questions  paradox  risk  physics  engineering  interdisciplinary  frontier  technology  volo-avolo  dirty-hands  ai  automation  robotics  duplication  iteration-recursion  von-neumann  data  scale  magnitude  skunkworks  the-world-is-just-atoms  hard-tech  ems  bio  bits  speedometer  nature  model-organism  mechanics  phys-energy  relativity  electromag  analysis  spock  nitty-gritty  spreading  hanson  street-fighting  speed  gedanken  nibble 
march 2018 by nhaliday
The Coming Technological Singularity
Within thirty years, we will have the technological
means to create superhuman intelligence. Shortly after,
the human era will be ended.

Is such progress avoidable? If not to be avoided, can
events be guided so that we may survive? These questions
are investigated. Some possible answers (and some further
dangers) are presented.

_What is The Singularity?_

The acceleration of technological progress has been the central
feature of this century. I argue in this paper that we are on the edge
of change comparable to the rise of human life on Earth. The precise
cause of this change is the imminent creation by technology of
entities with greater than human intelligence. There are several means
by which science may achieve this breakthrough (and this is another
reason for having confidence that the event will occur):
o The development of computers that are "awake" and
superhumanly intelligent. (To date, most controversy in the
area of AI relates to whether we can create human equivalence
in a machine. But if the answer is "yes, we can", then there
is little doubt that beings more intelligent can be constructed
shortly thereafter.)
o Large computer networks (and their associated users) may "wake
up" as a superhumanly intelligent entity.
o Computer/human interfaces may become so intimate that users
may reasonably be considered superhumanly intelligent.
o Biological science may find ways to improve upon the natural
human intellect.

The first three possibilities depend in large part on
improvements in computer hardware. Progress in computer hardware has
followed an amazingly steady curve in the last few decades [16]. Based
largely on this trend, I believe that the creation of greater than
human intelligence will occur during the next thirty years. (Charles
Platt [19] has pointed out that AI enthusiasts have been making claims
like this for the last thirty years. Just so I'm not guilty of a
relative-time ambiguity, let me be more specific: I'll be surprised if
this event occurs before 2005 or after 2030.)

What are the consequences of this event? When greater-than-human
intelligence drives progress, that progress will be much more rapid.
In fact, there seems no reason why progress itself would not involve
the creation of still more intelligent entities -- on a still-shorter
time scale. The best analogy that I see is with the evolutionary past:
Animals can adapt to problems and make inventions, but often no faster
than natural selection can do its work -- the world acts as its own
simulator in the case of natural selection. We humans have the ability
to internalize the world and conduct "what if's" in our heads; we can
solve many problems thousands of times faster than natural selection.
Now, by creating the means to execute those simulations at much higher
speeds, we are entering a regime as radically different from our human
past as we humans are from the lower animals.
org:junk  humanity  accelerationism  futurism  prediction  classic  technology  frontier  speedometer  ai  risk  internet  time  essay  rhetoric  network-structure  ai-control  morality  ethics  volo-avolo  egalitarianism-hierarchy  intelligence  scale  giants  scifi-fantasy  speculation  quotes  religion  theos  singularity  flux-stasis  phase-transition  cybernetics  coordination  cooperate-defect  moloch  communication  bits  speed  efficiency  eden-heaven  ecology  benevolence  end-times  good-evil  identity  the-self  whole-partial-many  density 
march 2018 by nhaliday
AI-complete - Wikipedia
In the field of artificial intelligence, the most difficult problems are informally known as AI-complete or AI-hard, implying that the difficulty of these computational problems is equivalent to that of solving the central artificial intelligence problem—making computers as intelligent as people, or strong AI.[1] To call a problem AI-complete reflects an attitude that it would not be solved by a simple specific algorithm.

AI-complete problems are hypothesised to include computer vision, natural language understanding, and dealing with unexpected circumstances while solving any real world problem.[2]

Currently, AI-complete problems cannot be solved with modern computer technology alone, but would also require human computation. This property can be useful, for instance to test for the presence of humans as with CAPTCHAs, and for computer security to circumvent brute-force attacks.[3][4]

...

AI-complete problems are hypothesised to include:

Bongard problems
Computer vision (and subproblems such as object recognition)
Natural language understanding (and subproblems such as text mining, machine translation, and word sense disambiguation[8])
Dealing with unexpected circumstances while solving any real world problem, whether it's navigation or planning or even the kind of reasoning done by expert systems.

...

Current AI systems can solve very simple and/or restricted versions of AI-complete problems, but never in their full generality. When AI researchers attempt to "scale up" their systems to handle more complicated, real world situations, the programs tend to become excessively brittle without commonsense knowledge or a rudimentary understanding of the situation: they fail as unexpected circumstances outside of their original problem context begin to appear. When human beings are dealing with new situations in the world, they are helped immensely by the fact that they know what to expect: they know what all things around them are, why they are there, what they are likely to do and so on. They can recognize unusual situations and adjust accordingly. A machine without strong AI has no other skills to fall back on.[9]
concept  reduction  cs  computation  complexity  wiki  reference  properties  computer-vision  ai  risk  ai-control  machine-learning  deep-learning  language  nlp  order-disorder  tactics  strategy  intelligence  humanity  speculation  crux 
march 2018 by nhaliday
Existential Risks: Analyzing Human Extinction Scenarios
https://twitter.com/robinhanson/status/981291048965087232
https://archive.is/dUTD5
Would you endorse choosing policy to max the expected duration of civilization, at least as a good first approximation?
Can anyone suggest a different first approximation that would get more votes?

https://twitter.com/robinhanson/status/981335898502545408
https://archive.is/RpygO
How useful would it be to agree on a relatively-simple first-approximation observable-after-the-fact metric for what we want from the future universe, such as total life years experienced, or civilization duration?

We're Underestimating the Risk of Human Extinction: https://www.theatlantic.com/technology/archive/2012/03/were-underestimating-the-risk-of-human-extinction/253821/
An Oxford philosopher argues that we are not adequately accounting for technology's risks—but his solution to the problem is not for Luddites.

Anderson: You have argued that we underrate existential risks because of a particular kind of bias called observation selection effect. Can you explain a bit more about that?

Bostrom: The idea of an observation selection effect is maybe best explained by first considering the simpler concept of a selection effect. Let's say you're trying to estimate how large the largest fish in a given pond is, and you use a net to catch a hundred fish and the biggest fish you find is three inches long. You might be tempted to infer that the biggest fish in this pond is not much bigger than three inches, because you've caught a hundred of them and none of them are bigger than three inches. But if it turns out that your net could only catch fish up to a certain length, then the measuring instrument that you used would introduce a selection effect: it would only select from a subset of the domain you were trying to sample.

Now that's a kind of standard fact of statistics, and there are methods for trying to correct for it and you obviously have to take that into account when considering the fish distribution in your pond. An observation selection effect is a selection effect introduced not by limitations in our measurement instrument, but rather by the fact that all observations require the existence of an observer. This becomes important, for instance, in evolutionary biology. For instance, we know that intelligent life evolved on Earth. Naively, one might think that this piece of evidence suggests that life is likely to evolve on most Earth-like planets. But that would be to overlook an observation selection effect. For no matter how small the proportion of all Earth-like planets that evolve intelligent life, we will find ourselves on a planet that did. Our data point – that intelligent life arose on our planet – is predicted equally well by the hypothesis that intelligent life is very improbable even on Earth-like planets as by the hypothesis that intelligent life is highly probable on Earth-like planets. When it comes to human extinction and existential risk, there are certain controversial ways that observation selection effects might be relevant.
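
The net example is easy to simulate; a sketch with hypothetical numbers showing how badly a truncated sample underestimates the true maximum:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical pond: heavy-tailed fish lengths (inches), so big fish exist.
pond = rng.lognormal(mean=1.0, sigma=0.6, size=100_000)

# The net only holds fish up to three inches; we catch a hundred of those.
sample = rng.choice(pond[pond <= 3.0], size=100)

print(f"biggest fish in our catch: {sample.max():.1f} in")  # <= 3.0 by design
print(f"biggest fish in the pond:  {pond.max():.1f} in")    # far larger
# The instrument truncates the sample, so "no fish over three inches in a
# hundred catches" says almost nothing about the true maximum.
```
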
bostrom  ratty  miri-cfar  skunkworks  philosophy  org:junk  list  top-n  frontier  speedometer  risk  futurism  local-global  scale  death  nihil  technology  simulation  anthropic  nuclear  deterrence  environment  climate-change  arms  competition  ai  ai-control  genetics  genomics  biotech  parasites-microbiome  disease  offense-defense  physics  tails  network-structure  epidemiology  space  geoengineering  dysgenics  ems  authoritarianism  government  values  formal-values  moloch  enhancement  property-rights  coordination  cooperate-defect  flux-stasis  ideas  prediction  speculation  humanity  singularity  existence  cybernetics  study  article  letters  eden-heaven  gedanken  multi  twitter  social  discussion  backup  hanson  metrics  optimization  time  long-short-run  janus  telos-atelos  poll  forms-instances  threat-modeling  selection  interview  expert-experience  malthus  volo-avolo  intel  leviathan  drugs  pharma  data  estimate  nature  longevity  expansionism  homo-hetero  utopia-dystopia 
march 2018 by nhaliday
Unaligned optimization processes as a general problem for society
TL;DR: There are lots of systems in society which seem to fit the pattern of “the incentives for this system are a pretty good approximation of what we actually want, so the system produces good results until it gets powerful, at which point it gets terrible results.”

...

Here are some more places where this idea could come into play:

- Marketing—humans try to buy things that will make our lives better, but our process for determining this is imperfect. A more powerful optimization process produces extremely good advertising to sell us things that aren’t actually going to make our lives better.
- Politics—we get extremely effective demagogues who pit us against our essential good values.
- Lobbying—as industries get bigger, the optimization process to choose great lobbyists for industries gets larger, but the process to make regulators robust doesn’t get correspondingly stronger. So regulatory capture gets worse and worse. Rent-seeking gets more and more significant.
- Online content—in a weaker internet, sites can’t be addictive except via being good content. In the modern internet, people can feel addicted to things that they wish they weren’t addicted to. We didn’t use to have the social expertise to make clickbait nearly as well as we do it today.
- News—Hyperpartisan news sources are much more worth it if distribution is cheaper and the market is bigger. News sources get an advantage from being truthful, but as society gets bigger, this advantage gets proportionally smaller.

...

For these reasons, I think it’s quite plausible that humans are fundamentally unable to have a “good” society with a population greater than some threshold, particularly if all these people have access to modern technology. Humans don’t have the rigidity to maintain social institutions in the face of that kind of optimization process. I think it is unlikely but possible (10%?) that this threshold population is smaller than the current population of the US, and that the US will crumble due to the decay of these institutions in the next fifty years if nothing totally crazy happens.
ratty  thinking  metabuch  reflection  metameta  big-yud  clever-rats  ai-control  ai  risk  scale  quality  ability-competence  network-structure  capitalism  randy-ayndy  civil-liberty  marketing  institutions  economics  political-econ  politics  polisci  advertising  rent-seeking  government  coordination  internet  attention  polarization  media  truth  unintended-consequences  alt-inst  efficiency  altruism  society  usa  decentralized  rhetoric  prediction  population  incentives  intervention  criminal-justice  property-rights  redistribution  taxes  externalities  science  monetary-fiscal  public-goodish  zero-positive-sum  markets  cost-benefit  regulation  regularizer  order-disorder  flux-stasis  shift  smoothness  phase-transition  power  definite-planning  optimism  pessimism  homo-hetero  interests  eden-heaven  telos-atelos  threat-modeling  alignment 
february 2018 by nhaliday
Information Processing: US Needs a National AI Strategy: A Sputnik Moment?
FT podcasts on US-China competition and AI: http://infoproc.blogspot.com/2018/05/ft-podcasts-on-us-china-competition-and.html

A new recommended career path for effective altruists: China specialist: https://80000hours.org/articles/china-careers/
Our rough guess is that it would be useful for there to be at least ten people in the community with good knowledge in this area within the next few years.

By “good knowledge” we mean they’ve spent at least 3 years studying these topics and/or living in China.

We chose ten because that would be enough for several people to cover each of the major areas listed (e.g. 4 within AI, 2 within biorisk, 2 within foreign relations, 1 in another area).

AI Policy and Governance Internship: https://www.fhi.ox.ac.uk/ai-policy-governance-internship/

https://www.fhi.ox.ac.uk/deciphering-chinas-ai-dream/
https://www.fhi.ox.ac.uk/wp-content/uploads/Deciphering_Chinas_AI-Dream.pdf
Deciphering China’s AI Dream
The context, components, capabilities, and consequences of
China’s strategy to lead the world in AI

Europe’s AI delusion: https://www.politico.eu/article/opinion-europes-ai-delusion/
Brussels is failing to grasp threats and opportunities of artificial intelligence.
By BRUNO MAÇÃES

When the computer program AlphaGo beat the Chinese professional Go player Ke Jie in a three-part match, it didn’t take long for Beijing to realize the implications.

If algorithms can already surpass the abilities of a master Go player, it can’t be long before they will be similarly supreme in the activity to which the classic board game has always been compared: war.

As I’ve written before, the great conflict of our time is about who can control the next wave of technological development: the widespread application of artificial intelligence in the economic and military spheres.

...

If China’s ambitions sound plausible, that’s because the country’s achievements in deep learning are so impressive already. After Microsoft announced that its speech recognition software surpassed human-level language recognition in October 2016, Andrew Ng, then head of research at Baidu, tweeted: “We had surpassed human-level Chinese recognition in 2015; happy to see Microsoft also get there for English less than a year later.”

...

One obvious advantage China enjoys is access to almost unlimited pools of data. The machine-learning technologies boosting the current wave of AI expansion are as good as the amount of data they can use. That could be the number of people driving cars, photos labeled on the internet or voice samples for translation apps. With 700 or 800 million Chinese internet users and fewer data protection rules, China is as rich in data as the Gulf States are in oil.

How can Europe and the United States compete? They will have to be commensurately better in developing algorithms and computer power. Sadly, Europe is falling behind in these areas as well.

...

Chinese commentators have embraced the idea of a coming singularity: the moment when AI surpasses human ability. At that point a number of interesting things happen. First, future AI development will be conducted by AI itself, creating exponential feedback loops. Second, humans will become useless for waging war. At that point, the human mind will be unable to keep pace with robotized warfare. With advanced image recognition, data analytics, prediction systems, military brain science and unmanned systems, devastating wars might be waged and won in a matter of minutes.

...

The argument in the new strategy is fully defensive. It first considers how AI raises new threats and then goes on to discuss the opportunities. The EU and Chinese strategies follow opposite logics. Already on its second page, the text frets about the legal and ethical problems raised by AI and discusses the “legitimate concerns” the technology generates.

The EU’s strategy is organized around three concerns: the need to boost Europe’s AI capacity, ethical issues and social challenges. Unfortunately, even the first dimension quickly turns out to be about “European values” and the need to place “the human” at the center of AI — forgetting that the first word in AI is not “human” but “artificial.”

https://twitter.com/mr_scientism/status/983057591298351104
https://archive.is/m3Njh
US military: "LOL, China thinks it's going to be a major player in AI, but we've got all the top AI researchers. You guys will help us develop weapons, right?"

US AI researchers: "No."

US military: "But... maybe just a computer vision app."

US AI researchers: "NO."

https://www.theverge.com/2018/4/4/17196818/ai-boycot-killer-robots-kaist-university-hanwha
https://www.nytimes.com/2018/04/04/technology/google-letter-ceo-pentagon-project.html
https://twitter.com/mr_scientism/status/981685030417326080
https://archive.is/3wbHm
AI-risk was a mistake.
hsu  scitariat  commentary  video  presentation  comparison  usa  china  asia  sinosphere  frontier  technology  science  ai  speedometer  innovation  google  barons  deepgoog  stories  white-paper  strategy  migration  iran  human-capital  corporation  creative  alien-character  military  human-ml  nationalism-globalism  security  investing  government  games  deterrence  defense  nuclear  arms  competition  risk  ai-control  musk  optimism  multi  news  org:mag  europe  EU  80000-hours  effective-altruism  proposal  article  realness  offense-defense  war  biotech  altruism  language  foreign-lang  philosophy  the-great-west-whale  enhancement  foreign-policy  geopolitics  anglo  jobs  career  planning  hmm  travel  charity  tech  intel  media  teaching  tutoring  russia  india  miri-cfar  pdf  automation  class  labor  polisci  society  trust  n-factor  corruption  leviathan  ethics  authoritarianism  individualism-collectivism  revolution  economics  inequality  civic  law  regulation  data  scale  pro-rata  capital  zero-positive-sum  cooperate-defect  distribution  time-series  tre 
february 2018 by nhaliday
What Peter Thiel thinks about AI risk - Less Wrong
TL;DR: he thinks it's an issue but also feels AGI is very distant and hence less worried about it than Musk.

I recommend the rest of the lecture as well; it's a good summary of "Zero to One" and a good Q&A afterwards.

For context, in case anyone doesn't realize: Thiel has been MIRI's top donor throughout its history.

other stuff:
nice interview question: "thing you know is true that not everyone agrees on?"
"learning from failure overrated"
cleantech a huge market, hard to compete
software makes for easy monopolies (zero marginal costs, network effects, etc.)
for most of history inventors did not benefit much (continuous competition)
ethical behavior is a luxury of monopoly
ratty  lesswrong  commentary  ai  ai-control  risk  futurism  technology  speedometer  audio  presentation  musk  thiel  barons  frontier  miri-cfar  charity  people  track-record  venture  startups  entrepreneurialism  contrarianism  competition  market-power  business  google  truth  management  leadership  socs-and-mops  dark-arts  skunkworks  hard-tech  energy-resources  wire-guided  learning  software  sv  tech  network-structure  scale  marginal  cost-benefit  innovation  industrial-revolution  economics  growth-econ  capitalism  comparison  nationalism-globalism  china  asia  trade  stagnation  things  dimensionality  exploratory  world  developing-world  thinking  definite-planning  optimism  pessimism  intricacy  politics  war  career  planning  supply-demand  labor  science  engineering  dirty-hands  biophysical-econ  migration  human-capital  policy  canada  anglo  winner-take-all  polarization  amazon  business-models  allodium  civilization  the-classics  microsoft  analogy  gibbon  conquest-empire  realness  cynicism-idealism  org:edu  open-closed  ethics  incentives  m 
february 2018 by nhaliday
Fermi paradox - Wikipedia
Rare Earth hypothesis: https://en.wikipedia.org/wiki/Rare_Earth_hypothesis
Fine-tuned Universe: https://en.wikipedia.org/wiki/Fine-tuned_Universe
something to keep in mind:
Puddle theory is a term coined by Douglas Adams to satirize arguments that the universe is made for man.[54][55] As stated in Adams' book The Salmon of Doubt:[56]
Imagine a puddle waking up one morning and thinking, “This is an interesting world I find myself in, an interesting hole I find myself in, fits me rather neatly, doesn't it? In fact, it fits me staggeringly well, must have been made to have me in it!” This is such a powerful idea that as the sun rises in the sky and the air heats up and as, gradually, the puddle gets smaller and smaller, it's still frantically hanging on to the notion that everything's going to be all right, because this World was meant to have him in it, was built to have him in it; so the moment he disappears catches him rather by surprise. I think this may be something we need to be on the watch out for.
article  concept  paradox  wiki  reference  fermi  anthropic  space  xenobio  roots  speculation  ideas  risk  threat-modeling  civilization  nihil  🔬  deep-materialism  new-religion  futurism  frontier  technology  communication  simulation  intelligence  eden  war  nuclear  deterrence  identity  questions  multi  explanans  physics  theos  philosophy  religion  chemistry  bio  hmm  idk  degrees-of-freedom  lol  troll  existence 
january 2018 by nhaliday
[1709.01149] Biotechnology and the lifetime of technical civilizations
The number of people able to end Earth's technical civilization has heretofore been small. Emerging dual-use technologies, such as biotechnology, may give similar power to thousands or millions of individuals. To quantitatively investigate the ramifications of such a marked shift on the survival of both terrestrial and extraterrestrial technical civilizations, this paper presents a two-parameter model for civilizational lifespans, i.e. the quantity L in Drake's equation for the number of communicating extraterrestrial civilizations. One parameter characterizes the population lethality of a civilization's biotechnology and the other characterizes the civilization's psychosociology. L is demonstrated to be less than the inverse of the product of these two parameters. Using empiric data from PubMed to inform the biotechnology parameter, the model predicts human civilization's median survival time as decades to centuries, even with optimistic psychosociological parameter values, thereby positioning biotechnology as a proximate threat to human civilization. For an ensemble of civilizations having some median calculated survival time, the model predicts that, after 80 times that duration, only one in 10^24 civilizations will survive -- a tempo and degree of winnowing compatible with Hanson's "Great Filter." Thus, assuming that civilizations universally develop advanced biotechnology, before they become vigorous interstellar colonizers, the model provides a resolution to the Fermi paradox.
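
The headline numbers are what a constant-hazard (memoryless) survival model gives: each median lifetime halves the surviving ensemble, and 2^-80 is about one in 10^24. A sketch of just that winnowing arithmetic (the paper's actual model has two parameters; this is only the endgame calculation):

```python
import math

# Memoryless survival: if half of an ensemble of civilizations dies per
# median lifetime T_med, the surviving fraction after time t is
# S(t) = 0.5 ** (t / T_med).
print(0.5 ** 80)              # ~8.3e-25 surviving after 80 median lifetimes
print(math.log10(2 ** 80))    # ~24.1, hence "one in 10^24"
```
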
preprint  article  gedanken  threat-modeling  risk  biotech  anthropic  fermi  ratty  hanson  models  xenobio  space  civilization  frontier  hmm  speedometer  society  psychology  social-psych  anthropology  cultural-dynamics  disease  parasites-microbiome  maxim-gun  prepping  science-anxiety  technology  magnitude  scale  data  prediction  speculation  ideas  🌞  org:mat  study  offense-defense  arms  unintended-consequences  spreading  explanans  sociality  cybernetics 
october 2017 by nhaliday
Superintelligence Risk Project Update II
https://www.jefftk.com/p/superintelligence-risk-project-update

https://www.jefftk.com/p/conversation-with-michael-littman
For example, I asked him what he thought of the idea that we could get AGI with current techniques, primarily deep neural nets and reinforcement learning, without learning anything new about how intelligence works or how to implement it ("Prosaic AGI" [1]). He didn't think this was possible, and believes there are deep conceptual issues we still need to get a handle on. He's also less impressed with deep learning than he was before he started working in it: in his experience it's a much more brittle technology than he had been expecting. Specifically, when trying to replicate results, he's often found that they depend on a bunch of parameters being in just the right range, and without that the systems don't perform nearly as well.

The bottom line, to him, was that since we are still many breakthroughs away from getting to AGI, we can't productively work on reducing superintelligence risk now.

He told me that he worries that the AI risk community is not solving real problems: they're making deductions and inferences that are self-consistent but not being tested or verified in the world. Since we can't tell if that's progress, it probably isn't. I asked if he was referring to MIRI's work here, and he said their work was an example of the kind of approach he's skeptical about, though he wasn't trying to single them out. [2]

https://www.jefftk.com/p/conversation-with-an-ai-researcher
Earlier this week I had a conversation with an AI researcher [1] at one of the main industry labs as part of my project of assessing superintelligence risk. Here's what I got from them:

They see progress in ML as almost entirely constrained by hardware and data, to the point that if today's hardware and data had existed in the mid 1950s researchers would have gotten to approximately our current state within ten to twenty years. They gave the example of backprop: we saw how to train multi-layer neural nets decades before we had the computing power to actually train these nets to do useful things.

Similarly, people talk about AlphaGo as a big jump, where Go went from being "ten years away" to "done" within a couple years, but they said it wasn't like that. If Go work had stayed in academia, with academia-level budgets and resources, it probably would have taken nearly that long. What changed was a company seeing promising results, realizing what could be done, and putting way more engineers and hardware on the project than anyone had previously done. AlphaGo couldn't have happened earlier because the hardware wasn't there yet, and was only able to be brought forward by massive application of resources.

https://www.jefftk.com/p/superintelligence-risk-project-conclusion
Summary: I'm not convinced that AI risk should be highly prioritized, but I'm also not convinced that it shouldn't. Highly qualified researchers in a position to have a good sense the field have massively different views on core questions like how capable ML systems are now, how capable they will be soon, and how we can influence their development. I do think these questions are possible to get a better handle on, but I think this would require much deeper ML knowledge than I have.
ratty  core-rats  ai  risk  ai-control  prediction  expert  machine-learning  deep-learning  speedometer  links  research  research-program  frontier  multi  interview  deepgoog  games  hardware  performance  roots  impetus  chart  big-picture  state-of-art  reinforcement  futurism  🤖  🖥  expert-experience  singularity  miri-cfar  empirical  evidence-based  speculation  volo-avolo  clever-rats  acmtariat  robust  ideas  crux  atoms  detail-architecture  software  gradient-descent 
july 2017 by nhaliday
The Rise and Fall of Cognitive Control - Behavioral Scientist
The results highlight the downsides of controlled processing. Within a population, controlled processing may—rather than ensuring undeterred progress—usher in short-sighted, irrational, and detrimental behavior, ultimately leading to population collapse. This is because the innovations produced by controlled processing benefit everyone, even those who do not act with control. Thus, by making non-controlled agents better off, these innovations erode the initial advantage of controlled behavior. This results in the demise of control and the rise of lack-of-control. In turn, this eventually leads to a return to poor decision making and the breakdown of the welfare-enhancing innovations, possibly accelerated and exacerbated by the presence of the enabling technologies themselves. Our models therefore help to explain societal cycles whereby periods of rationality and forethought are followed by plunges back into irrationality and short-sightedness.

https://static1.squarespace.com/static/51ed234ae4b0867e2385d879/t/595fac998419c208a6d99796/1499442499093/Cyclical-Population-Dynamics.pdf
Psychologists, neuroscientists, and economists often conceptualize decisions as arising from processes that lie along a continuum from automatic (i.e., “hardwired” or overlearned, but relatively inflexible) to controlled (less efficient and effortful, but more flexible). Control is central to human cognition, and plays a key role in our ability to modify the world to suit our needs. Given its advantages, reliance on controlled processing may seem predestined to increase within the population over time. Here, we examine whether this is so by introducing an evolutionary game theoretic model of agents that vary in their use of automatic versus controlled processes, and in which cognitive processing modifies the environment in which the agents interact. We find that, under a wide range of parameters and model assumptions, cycles emerge in which the prevalence of each type of processing in the population oscillates between 2 extremes. Rather than inexorably increasing, the emergence of control often creates conditions that lead to its own demise by allowing automaticity to also flourish, thereby undermining the progress made by the initial emergence of controlled processing. We speculate that this observation may have relevance for understanding similar cycles across human history, and may lend insight into some of the circumstances and challenges currently faced by our species.
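
A stripped-down caricature of the feedback (my toy with assumed parameters, not the authors' model) reproduces the qualitative story: control's advantage erodes as its innovations improve the environment for everyone, producing cycles rather than a monotone march toward ever more control:

```python
# Toy dynamic: the payoff advantage of control, a - k*E, shrinks as the
# shared environment E improves, while E itself is built up by controlled
# agents and decays otherwise.
a, k, m, d = 1.0, 2.0, 0.5, 0.1
x, E, dt = 0.1, 0.0, 0.01       # controlled fraction, environment quality

trajectory = []
for step in range(10_000):
    dx = x * (1.0 - x) * (a - k * E)   # replicator dynamics on x
    dE = m * x - d * E                 # environment tracks control prevalence
    x += dx * dt
    E += dE * dt
    if step % 500 == 0:
        trajectory.append((round(x, 3), round(E, 3)))

# x rises, improves E, erodes its own advantage, falls back, and so on:
# damped cycles in this minimal version rather than inexorable growth.
print(trajectory)
```
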
econotariat  economics  political-econ  policy  decision-making  behavioral-econ  psychology  cog-psych  cycles  oscillation  unintended-consequences  anthropology  broad-econ  cultural-dynamics  tradeoffs  cost-benefit  rot  dysgenics  study  summary  multi  EGT  dynamical  volo-avolo  self-control  discipline  the-monster  pdf  error  rationality  info-dynamics  bounded-cognition  hive-mind  iq  intelligence  order-disorder  risk  microfoundations  science-anxiety  big-picture  hari-seldon  cybernetics 
july 2017 by nhaliday
Overcoming Bias : A Tangled Task Future
So we may often retain systems that inherit the structure of the human brain, and the structures of the social teams and organizations by which humans have worked together. All of which is another way to say: descendants of humans may have a long future as workers. We may have another future besides being retirees or iron-fisted peons ruling over gods. Even in a competitive future with no friendly singleton to ensure preferential treatment, something recognizably like us may continue. And even win.
ratty  hanson  speculation  automation  labor  economics  ems  futurism  prediction  complex-systems  network-structure  intricacy  thinking  engineering  management  law  compensation  psychology  cog-psych  ideas  structure  gray-econ  competition  moloch  coordination  cooperate-defect  risk  ai  ai-control  singularity  number  humanity  complement-substitute  cybernetics  detail-architecture  legacy  threat-modeling  degrees-of-freedom  composition-decomposition  order-disorder  analogy  parsimony  institutions  software 
june 2017 by nhaliday
The Bridge: 数字化 – 网络化 – 智能化: China’s Quest for an AI Revolution in Warfare
The PLA’s organizational tendencies could render it more inclined to take full advantage of the disruptive potential of artificial intelligence, without constraints due to concerns about keeping humans ‘in the loop.’ In its command culture, the PLA has tended to consolidate and centralize authorities at higher levels, remaining reluctant to delegate decision-making downward. The introduction of information technology has exacerbated the tendency of PLA commanders to micromanage subordinates through a practice known as “skip-echelon command” (越级指挥) that enables the circumvention of command bureaucracy to influence units and weapons systems at even a tactical level.[xxviii] This practice can be symptomatic of a culture of distrust and bureaucratic immaturity. The PLA has confronted and started to progress in mitigating its underlying human resource challenges, recruiting increasingly educated officers and enlisted personnel, while seeking to modernize and enhance political and ideological work aimed to ensure loyalty to the Chinese Communist Party. However, the employment of artificial intelligence could appeal to the PLA as a way to circumvent and work around those persistent issues. In the long term, the intersection of the PLA’s focus on ‘scientific’ approaches to warfare with the preference to consolidate and centralize decision-making could cause the PLA’s leadership to rely more upon artificial intelligence, rather than human judgment.
news  org:mag  org:foreign  trends  china  asia  sinosphere  war  meta:war  military  defense  strategy  current-events  ai  automation  technology  foreign-policy  realpolitik  expansionism  innovation  individualism-collectivism  values  prediction  deepgoog  games  n-factor  human-ml  alien-character  risk  ai-control 
june 2017 by nhaliday
spaceships - Can there be a space age without petroleum (crude oil)? - Worldbuilding Stack Exchange
Yes...probably

What was really important to our development of technology was not oil, but coal. Access to large deposits of high-quality coal largely fueled the industrial revolution, and it was the industrial revolution that really got us on the first rungs of the technological ladder.

Oil is a fantastic fuel for an advanced civilisation, but it's not essential. Indeed, I would argue that our ability to dig oil out of the ground is a crutch, one that we should have discarded long ago. The reason oil is so essential to us today is that all our infrastructure is based on it, but if we'd never had oil we could still have built a similar infrastructure. Solar power was first displayed to the public in 1878. Wind power has been used for centuries. Hydroelectric power is just a modification of the same technology as wind power.

Without oil, a civilisation in the industrial age would certainly be able to progress and advance to the space age. Perhaps not as quickly as we did, but probably more sustainably.

Without coal, though...that's another matter

What would the industrial age be like without oil and coal?: https://worldbuilding.stackexchange.com/questions/45919/what-would-the-industrial-age-be-like-without-oil-and-coal

Out of the ashes: https://aeon.co/essays/could-we-reboot-a-modern-civilisation-without-fossil-fuels
It took a lot of fossil fuels to forge our industrial world. Now they're almost gone. Could we do it again without them?

But charcoal-based industry didn’t die out altogether. In fact, it survived to flourish in Brazil. Because it has substantial iron deposits but few coalmines, Brazil is the largest charcoal producer in the world and the ninth biggest steel producer. We aren’t talking about a cottage industry here, and this makes Brazil a very encouraging example for our thought experiment.

The trees used in Brazil’s charcoal industry are mainly fast-growing eucalyptus, cultivated specifically for the purpose. The traditional method for creating charcoal is to pile chopped staves of air-dried timber into a great dome-shaped mound and then cover it with turf or soil to restrict airflow as the wood smoulders. The Brazilian enterprise has scaled up this traditional craft to an industrial operation. Dried timber is stacked into squat, cylindrical kilns, built of brick or masonry and arranged in long lines so that they can be easily filled and unloaded in sequence. The largest sites can sport hundreds of such kilns. Once filled, their entrances are sealed and a fire is lit from the top.
q-n-a  stackex  curiosity  gedanken  biophysical-econ  energy-resources  long-short-run  technology  civilization  industrial-revolution  heavy-industry  multi  modernity  frontier  allodium  the-world-is-just-atoms  big-picture  ideas  risk  volo-avolo  news  org:mag  org:popup  direct-indirect  retrofit  dirty-hands  threat-modeling  duplication  iteration-recursion  latin-america  track-record  trivia  cocktail  data 
june 2017 by nhaliday
On Pinkglossianism | Wandering Near Sawtry
Steven Pinker is not wrong to say that some things have got better – or even that some things are getting better. We live longer. We have more food. We have more medicine. We have more free time. We have less chance of dying at another’s hands. My main objection to his arguments is not that some things have got worse as well (family life, for example, or social trust). It is not that he emphasises proportion when scale is more significant (such as with animal suffering). It is the fragility of these peaceful, prosperous conditions.

Antibiotics have made us healthier but antibiotic resistance threatens to plunge us into epidemics. Globalisation has made us richer but is also a powder-keg of cultural unease. Industrialisation has brought material wealth but it is also damaging the environment. Nuclear weapons have averted international conflict but it would only take one error for them to wreak havoc.

At his best, Pinker reminds us of how much we have to treasure, then. At his worst, he is like a co-passenger in a car – pointing out the sunny weather and the beautiful surroundings as it hurtles towards the edge of a cliff.

http://takimag.com/article/dusting_off_the_crystal_ball_john_derbyshire/print
http://blogs.discovermagazine.com/gnxp/2011/11/the-new-york-times-on-violence-and-pinker/
albion  rhetoric  contrarianism  critique  pinker  peace-violence  domestication  crime  criminology  trends  whiggish-hegelian  optimism  pessimism  cynicism-idealism  multi  news  org:lite  gnon  isteveish  futurism  list  top-n  eric-kaufmann  dysgenics  nihil  nationalism-globalism  nuclear  robust  scale  risk  gnxp  scitariat  faq  modernity  tetlock  the-bones  paleocon  journos-pundits  org:sci 
june 2017 by nhaliday
Logic | West Hunter
All the time I hear some public figure saying that if we ban or allow X, then logically we have to ban or allow Y, even though there are obvious practical reasons for X and obvious practical reasons against Y.

No, we don’t.

http://www.amnation.com/vfr/archives/005864.html
http://www.amnation.com/vfr/archives/002053.html

compare: https://pinboard.in/u:nhaliday/b:190b299cf04a

Small Change Good, Big Change Bad?: https://www.overcomingbias.com/2018/02/small-change-good-big-change-bad.html
And on reflection it occurs to me that this is actually THE standard debate about change: some see small changes and either like them or aren’t bothered enough to advocate what it would take to reverse them, while others imagine such trends continuing long enough to result in very large and disturbing changes, and then suggest stronger responses.

For example, on increased immigration some point to the many concrete benefits immigrants now provide. Others imagine that large cumulative immigration eventually results in big changes in culture and political equilibria. On fertility, some wonder if civilization can survive in the long run with declining population, while others point out that population should rise for many decades, and few endorse the policies needed to greatly increase fertility. On genetic modification of humans, some ask why not let doctors correct obvious defects, while others imagine parents eventually editing kid genes mainly to max kid career potential. On oil some say that we should start preparing for the fact that we will eventually run out, while others say that we keep finding new reserves to replace the ones we use.

...

If we consider any parameter, such as typical degree of mind wandering, we are unlikely to see the current value as exactly optimal. So if we give people the benefit of the doubt to make local changes in their interest, we may accept that this may result in a recent net total change we don’t like. We may figure this is the price we pay to get other things we value more, and we know that it can be very expensive to limit choices severely.

But even though we don’t see the current value as optimal, we also usually see the optimal value as not terribly far from the current value. So if we can imagine current changes as part of a long term trend that eventually produces very large changes, we can become more alarmed and willing to restrict current changes. The key question is: when is that a reasonable response?

First, big concerns about big long term changes only make sense if one actually cares a lot about the long run. Given the usual high rates of return on investment, it is cheap to buy influence on the long term, compared to influence on the short term. Yet few actually devote much of their income to long term investments. This raises doubts about the sincerity of expressed long term concerns.
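(a quick compounding illustration of "cheap to buy influence on the long term" — the 5% real rate is my assumption, not Hanson's number:)

```python
rate = 0.05    # assumed long-run real rate of return
years = 100

growth = (1 + rate) ** years
print(f"$1 invested now -> ${growth:,.0f} in {years} years")   # ~$131
# i.e. buying a fixed amount of influence a century out costs roughly
# 1/131th of buying it today, if funds can actually be committed that long
```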

Second, in our simplest models of the world good local choices also produce good long term choices. So if we presume good local choices, bad long term outcomes require non-simple elements, such as coordination, commitment, or myopia problems. Of course many such problems do exist. Even so, someone who claims to see a long term problem should be expected to identify specifically which such complexities they see at play. It shouldn’t be sufficient to just point to the possibility of such problems.

...

Fourth, many more processes and factors limit big changes, compared to small changes. For example, in software small changes are often trivial, while larger changes are nearly impossible, at least without starting again from scratch. Similarly, modest changes in mind wandering can be accomplished with minor attitude and habit changes, while extreme changes may require big brain restructuring, which is much harder because brains are complex and opaque. Recent changes in market structure may reduce the number of firms in each industry, but that doesn’t make it remotely plausible that one firm will eventually take over the entire economy. Projections of small changes into large changes need to consider the possibility of many such factors limiting large changes.

Fifth, while it can be reasonably safe to identify short term changes empirically, the longer term a forecast the more one needs to rely on theory, and the more different areas of expertise one must consider when constructing a relevant model of the situation. Beware a mere empirical projection into the long run, or a theory-based projection that relies on theories in only one area.

We should very much be open to the possibility of big bad long term changes, even in areas where we are okay with short term changes, or at least reluctant to sufficiently resist them. But we should also try to hold those who argue for the existence of such problems to relatively high standards. Their analysis should be about future times that we actually care about, and can at least roughly foresee. It should be based on our best theories of relevant subjects, and it should consider the possibility of factors that limit larger changes.

And instead of suggesting big ways to counter short term changes that might lead to long term problems, it is often better to identify markers to warn of larger problems. Then instead of acting in big ways now, we can make sure to track these warning markers, and ready ourselves to act more strongly if they appear.

Growth Is Change. So Is Death.: https://www.overcomingbias.com/2018/03/growth-is-change-so-is-death.html
I see the same pattern when people consider long term futures. People can be quite philosophical about the extinction of humanity, as long as this is due to natural causes. Every species dies; why should humans be different? And few get bothered by humans making modest small-scale short-term modifications to their own lives or environment. We are mostly okay with people using umbrellas when it rains, moving to new towns to take new jobs, etc., digging a flood ditch after our yard floods, and so on. And the net social effect of many small changes is technological progress, economic growth, new fashions, and new social attitudes, all of which we tend to endorse in the short run.

Even regarding big human-caused changes, most don’t worry if changes happen far enough in the future. Few actually care much about the future past the lives of people they’ll meet in their own life. But for changes that happen within someone’s time horizon of caring, the bigger that changes get, and the longer they are expected to last, the more that people worry. And when we get to huge changes, such as taking apart the sun, a population of trillions, lifetimes of millennia, massive genetic modification of humans, robots replacing people, a complete loss of privacy, or revolutions in social attitudes, few are blasé, and most are quite wary.

This differing attitude regarding small local changes versus large global changes makes sense for parameters that tend to revert back to a mean. Extreme values then do justify extra caution, while changes within the usual range don’t merit much notice, and can be safely left to local choice. But many parameters of our world do not mostly revert back to a mean. They drift long distances over long times, in hard to predict ways that can be reasonably modeled as a basic trend plus a random walk.
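(a minimal simulation of the contrast being drawn — mean-reverting vs. random-walk parameters; the AR coefficient and noise scale are arbitrary assumptions:)

```python
import random

def simulate(steps=10_000, phi=1.0, sigma=0.01):
    """x_{t+1} = phi * x_t + noise.
    phi = 1.0 -> random walk: shocks accumulate, x drifts without bound.
    phi < 1.0 -> mean reversion: shocks decay, x stays near its long-run value."""
    x = peak = 0.0
    for _ in range(steps):
        x = phi * x + random.gauss(0, sigma)
        peak = max(peak, abs(x))
    return x, peak

random.seed(0)
for phi in (0.9, 1.0):
    final, peak = simulate(phi=phi)
    print(f"phi={phi}: final={final:+.3f}, max excursion={peak:.3f}")
# the mean-reverting series stays within a few sigma of zero forever;
# the random walk wanders ~sigma*sqrt(steps) away, so small local changes
# really do accumulate into large global ones
```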

This different attitude can also make sense for parameters that have two or more very different causes of change, one which creates frequent small changes, and another which creates rare huge changes. (Or perhaps a continuum between such extremes.) If larger sudden changes tend to cause more problems, it can make sense to be more wary of them. However, for most parameters most change results from many small changes, and even then many are quite wary of this accumulating into big change.

For people with a sharp time horizon of caring, they should be more wary of long-drifting parameters the larger the changes that would happen within their horizon time. This perspective predicts that the people who are most wary of big future changes are those with the longest time horizons, and who more expect lumpier change processes. This prediction doesn’t seem to fit well with my experience, however.

Those who most worry about big long term changes usually seem okay with small short term changes. Even when they accept that most change is small and that it accumulates into big change. This seems incoherent to me. It seems like many other near versus far incoherences, like expecting things to be simpler when you are far away from them, and more complex when you are closer. You should either become more wary of short term changes, knowing that this is how big longer term change happens, or you should be more okay with big long term change, seeing that as the legitimate result of the small short term changes you accept.

https://www.overcomingbias.com/2018/03/growth-is-change-so-is-death.html#comment-3794966996
The point here is that gradual shifts in in-group beliefs are both natural and no big deal. Humans are built to readily do this, and forget they do this. But ultimately it is not a worry or concern.

But radical shifts that are big, whether near or far, portend strife and conflict. Either between groups or within them. If the shift is big enough, our intuition tells us our in-group will be in a fight. Alarms go off.
west-hunter  scitariat  discussion  rant  thinking  rationality  metabuch  critique  systematic-ad-hoc  analytical-holistic  metameta  ideology  philosophy  info-dynamics  aphorism  darwinian  prudence  pragmatic  insight  tradition  s:*  2016  multi  gnon  right-wing  formal-values  values  slippery-slope  axioms  alt-inst  heuristic  anglosphere  optimate  flux-stasis  flexibility  paleocon  polisci  universalism-particularism  ratty  hanson  list  examples  migration  fertility  intervention  demographics  population  biotech  enhancement  energy-resources  biophysical-econ  nature  military  inequality  age-generation  time  ideas  debate  meta:rhetoric  local-global  long-short-run  gnosis-logos  gavisti  stochastic-processes  eden-heaven  politics  equilibrium  hive-mind  genetics  defense  competition  arms  peace-violence  walter-scheidel  speed  marginal  optimization  search  time-preference  patience  futurism  meta:prediction  accuracy  institutions  tetlock  theory-practice  wire-guided  priors-posteriors  distribution  moments  biases  epistemic  nea 
may 2017 by nhaliday
[1705.08807] When Will AI Exceed Human Performance? Evidence from AI Experts
Researchers predict AI will outperform humans in many activities in the next ten years, such as translating languages (by 2024), writing high-school essays (by 2026), driving a truck (by 2027), working in retail (by 2031), writing a bestselling book (by 2049), and working as a surgeon (by 2053). Researchers believe there is a 50% chance of AI outperforming humans in all tasks in 45 years and of automating all human jobs in 120 years, with Asian respondents expecting these dates much sooner than North Americans.

https://www.reddit.com/r/slatestarcodex/comments/6dy6ex/arxiv_when_will_ai_exceed_human_performance/
study  preprint  science  meta:science  technology  ai  automation  labor  ai-control  risk  futurism  poll  expert  usa  asia  trends  hmm  idk  definite-planning  frontier  ideas  prediction  innovation  china  sinosphere  multi  reddit  social  commentary  ssc  speedometer  flux-stasis  ratty  expert-experience  org:mat  singularity  optimism  pessimism 
may 2017 by nhaliday
[1705.03394] That is not dead which can eternal lie: the aestivation hypothesis for resolving Fermi's paradox
If a civilization wants to maximize computation it appears rational to aestivate until the far future in order to exploit the low temperature environment: this can produce a 10^30 multiplier of achievable computation. We hence suggest the "aestivation hypothesis": the reason we are not observing manifestations of alien civilizations is that they are currently (mostly) inactive, patiently waiting for future cosmic eras. This paper analyzes the assumptions going into the hypothesis and how physical law and observational evidence constrain the motivations of aliens compatible with the hypothesis.
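(where the 10^30 plausibly comes from: by Landauer's principle the energy cost of an irreversible bit operation scales with temperature, so computation per joule scales as 1/T. A back-of-envelope sketch — treating the relevant temperatures as today's CMB and a far-future de Sitter horizon temperature is my reading of the argument, not a quote from the paper:)

```python
import math

k_B = 1.380649e-23   # Boltzmann constant, J/K

def ops_per_joule(T):
    """Landauer bound: erasing one bit costs k_B * T * ln(2) joules,
    so irreversible operations per joule scale as 1/T."""
    return 1.0 / (k_B * T * math.log(2))

T_now    = 2.7     # K, present CMB temperature
T_future = 3e-30   # K, rough de Sitter horizon temperature (assumed)

print(f"multiplier ~ {ops_per_joule(T_future) / ops_per_joule(T_now):.0e}")
# ~9e+29, i.e. the ~10^30 figure in the abstract
```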

http://aleph.se/andart2/space/the-aestivation-hypothesis-popular-outline-and-faq/

simpler explanation (just different math for Drake equation):
Dissolving the Fermi Paradox: http://www.jodrellbank.manchester.ac.uk/media/eps/jodrell-bank-centre-for-astrophysics/news-and-events/2017/uksrn-slides/Anders-Sandberg---Dissolving-Fermi-Paradox-UKSRN.pdf
http://marginalrevolution.com/marginalrevolution/2017/07/fermi-paradox-resolved.html
Overall the argument is that point estimates should not be shoved into a Drake equation and then multiplied by each other, as that requires excess certainty and masks much of the ambiguity of our knowledge about the distributions. Instead, a Bayesian approach should be used, after which the fate of humanity looks much better. (The linked post includes one slide from the presentation.)
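(a toy version of the point — the ranges below are made up for illustration, not the ones Sandberg et al. use:)

```python
import math, random

random.seed(1)

def log_uniform(lo, hi):
    """Sample uniformly in log-space between lo and hi."""
    return 10 ** random.uniform(math.log10(lo), math.log10(hi))

# toy Drake-style product N = R* . fp . ne . fl . fi . fc . L,
# each factor deeply uncertain, modeled as log-uniform over a wide range
ranges = [(1, 100), (0.1, 1), (0.1, 5), (1e-3, 1), (1e-3, 1), (1e-2, 1), (1e2, 1e9)]

# point-estimate approach: multiply mid-range values -> one confident number
point = math.prod(math.sqrt(lo * hi) for lo, hi in ranges)

# distributional approach: push the whole distributions through the product
samples = [math.prod(log_uniform(lo, hi) for lo, hi in ranges)
           for _ in range(100_000)]
p_alone = sum(s < 1 for s in samples) / len(samples)

print(f"point estimate: N ~ {point:.0f} civilizations")
print(f"P(N < 1), i.e. effectively alone: {p_alone:.0%}")
# the single point estimate hides the large probability mass sitting near zero
```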

Life Versus Dark Energy: How An Advanced Civilization Could Resist the Accelerating Expansion of the Universe: https://arxiv.org/abs/1806.05203
The presence of dark energy in our universe is causing space to expand at an accelerating rate. As a result, over the next approximately 100 billion years, all stars residing beyond the Local Group will fall beyond the cosmic horizon and become not only unobservable, but entirely inaccessible, thus limiting how much energy could one day be extracted from them. Here, we consider the likely response of a highly advanced civilization to this situation. In particular, we argue that in order to maximize its access to usable energy, a sufficiently advanced civilization would choose to expand rapidly outward, build Dyson Spheres or similar structures around encountered stars, and use the energy that is harnessed to accelerate those stars away from the approaching horizon and toward the center of the civilization. We find that such efforts will be most effective for stars with masses in the range of M∼(0.2−1)M⊙, and could lead to the harvesting of stars within a region extending out to several tens of Mpc in radius, potentially increasing the total amount of energy that is available to a future civilization by a factor of several thousand. We also discuss the observable signatures of a civilization elsewhere in the universe that is currently in this state of stellar harvesting.
preprint  study  essay  article  bostrom  ratty  anthropic  philosophy  space  xenobio  computation  physics  interdisciplinary  ideas  hmm  cocktail  temperature  thermo  information-theory  bits  🔬  threat-modeling  time  scale  insight  multi  commentary  liner-notes  pdf  slides  error  probability  ML-MAP-E  composition-decomposition  econotariat  marginal-rev  fermi  risk  org:mat  questions  paradox  intricacy  multiplicative  calculation  street-fighting  methodology  distribution  expectancy  moments  bayesian  priors-posteriors  nibble  measurement  existence  technology  geoengineering  magnitude  spatial  density  spreading  civilization  energy-resources  phys-energy  measure  direction  speculation  structure 
may 2017 by nhaliday
One more time | West Hunter
One of our local error sources suggested that it would be impossible to rebuild technical civilization, once fallen. Now if every human were dead I’d agree, but in most other scenarios it wouldn’t be particularly difficult, assuming that the survivors were no more silly and fractious than people are today.  So assume a mild disaster, something like the effect of myxomatosis on the rabbits of Australia, or perhaps toe-to-toe nuclear combat with the Russkis – ~90%  casualties worldwide.

https://westhunt.wordpress.com/2015/05/17/one-more-time/#comment-69221
Books are everywhere. In the type of scenario I sketched out, almost no knowledge would be lost – so Neolithic tech is irrelevant. Look, if a single copy of the 1911 Britannica survived, all would be well.

You could of course harvest metals from the old cities. But even if you didn’t, the idea that there is no more copper or zinc or tin in the ground is just silly. “Recoverable ore” is mostly an economic concept.

Moreover, if we’re talking wiring and electrical uses, one can use aluminum, which makes up 8% of the Earth’s crust.

https://westhunt.wordpress.com/2015/05/17/one-more-time/#comment-69368
Some of those books tell you how to win.

Look, assume that some communities strive to relearn how to make automatic weapons and some don’t. How does that story end? Do I have to explain everything?

I guess so!

https://westhunt.wordpress.com/2015/05/17/one-more-time/#comment-69334
Well, perhaps having a zillion times more books around would make a difference. That and all the “X for Dummies” books, which I think the Romans didn’t have.

A lot of Classical civ wasn’t very useful: on the whole they didn’t invent much. On the whole, technology advanced quite a bit more rapidly in Medieval times.

https://westhunt.wordpress.com/2015/05/17/one-more-time/#comment-69225
How much coal and oil are in the ground that can still be extracted with 19th century tech? Honest question; I don’t know.
--
Lots of coal left. Not so much oil (using simple methods), but one could make it from low-grade coal, with the Fischer-Tropsch process. Sasol does this.

Then again, a recovering society wouldn’t need much at first.

https://westhunt.wordpress.com/2015/05/17/one-more-time/#comment-69223
reply to: https://westhunt.wordpress.com/2015/05/17/one-more-time/#comment-69220
That’s more like it.

#1. Consider Grand Coulee Dam. Gigawatts. Feeling of power!
#2. Of course.
#3. Might be easier to make superconducting logic circuits with MgB2, starting over.

https://westhunt.wordpress.com/2015/05/17/one-more-time/#comment-69325
Your typical biker guy is more mechanically minded than the average Joe. Welding, electrical stuff, this and that.

https://westhunt.wordpress.com/2015/05/17/one-more-time/#comment-69260
If fossil fuels were unavailable -or just uneconomical at first- we’d be back to charcoal for our Stanley Steamers and railroads. We’d still have both.

The French, and others, used wood-gasifier trucks during WWII.

https://westhunt.wordpress.com/2015/05/17/one-more-time/#comment-69407
Teslas are of course a joke.
west-hunter  scitariat  civilization  risk  nihil  gedanken  frontier  allodium  technology  energy-resources  knowledge  the-world-is-just-atoms  discussion  speculation  analysis  biophysical-econ  big-picture  🔬  ideas  multi  history  iron-age  the-classics  medieval  europe  poast  the-great-west-whale  the-trenches  optimism  volo-avolo  mostly-modern  world-war  gallic  track-record  musk  barons  transportation  driving  contrarianism  agriculture  retrofit  industrial-revolution  dirty-hands  books  competition  war  group-selection  comparison  mediterranean  conquest-empire  gibbon  speedometer  class  threat-modeling  duplication  iteration-recursion  trivia  cocktail  encyclopedic  definite-planning  embodied  gnosis-logos  kumbaya-kult 
may 2017 by nhaliday
How many times over could the world's current supply of nuclear weapons destroy the world? - Quora
A Common Story: “There are enough nuclear weapons to destroy the world many times over.” This is nothing more than poorly crafted fiction, an urban legend. This common conclusion is not based on any factual data. It is based solely on hype, hysteria, propaganda and fear mongering.

If you take every weapon in existence today, approximately 6500 megatons between 15,000 warheads with an average yield of 433 KT, and put a single bomb in its own 100 square mile grid… one bomb per grid (10 miles x 10 miles), you will contain >95% of the destructive force of each bomb on average within the grid it is in. This means the total landmass to receive a destructive force from all the world's nuclear bombs is an area of 1.5 million square miles. Not quite half of the United States and 1/38 of the world's total land mass…. that's it!
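(the arithmetic checks out, taking the quote's own numbers at face value:)

```python
warheads     = 15_000
avg_yield_kt = 433
cell_sq_mi   = 10 * 10     # one bomb per 100-square-mile cell

total_mt  = warheads * avg_yield_kt / 1000
blanketed = warheads * cell_sq_mi

us_land    = 3.8e6         # sq mi, approximate US land area
world_land = 57.5e6        # sq mi, approximate world land area

print(f"total yield: {total_mt:,.0f} MT")                      # ~6,500 MT
print(f"area covered: {blanketed:,.0f} sq mi")                 # 1,500,000
print(f"share of US land: {blanketed / us_land:.0%}")          # ~39%
print(f"share of world land: 1/{world_land / blanketed:.0f}")  # ~1/38
```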
q-n-a  qra  arms  nuclear  technology  war  meta:war  impact  deterrence  foreign-policy  usa  world  risk  nihil  scale  trivia  threat-modeling  peace-violence 
may 2017 by nhaliday
What is the likelihood we run out of fossil fuels before we can switch to renewable energy sources? - Quora
1) Can we de-carbon our primary energy production before global warming severely damages human civilization? In the short term this means switching from coal to natural gas, and in the long term replacing both coal and gas generation with carbon-neutral sources such as renewables or nuclear. The developed world cannot accomplish this alone -- it requires worldwide action, and most of the pain will be felt by large developing nations such as India and China. Ultimately this is a political and economic problem. The technology to eliminate most carbon from electricity generation exists today at fairly reasonable cost.

2) Can we develop a better transportation energy storage technology than oil, before market forces drive prices to levels that severely damage the global economy? Fossil fuels are a source of energy, but primarily we use oil in vehicles because it is an exceptional energy TRANSPORT medium. Renewables cannot meet this need because battery technology is completely uncompetitive for most fuel consumers -- prices are an order of magnitude too high and energy density is an order of magnitude too low for adoption of all-electric vehicles outside developed-world urban centers. (Heavy trucking, cargo ships, airplanes, etc will never be all-electric with chemical batteries. There are hard physical limits to the energy density of electrochemical reactions. I'm not convinced passenger vehicles will go all-electric in our lifetimes either.) There are many important technologies in existence that will gain increasing traction in the next 50 years such as natural gas automobiles and improved gas/electric hybrids, but ultimately we need a better way to store power than fossil fuels. _This is a deep technological problem that will not be solved by incremental improvements in battery chemistry or any process currently in the R&D pipeline_.
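(the order-of-magnitude claim, with ballpark public figures — all four numbers below are rough assumptions:)

```python
gasoline_wh_kg = 12_700   # specific energy of gasoline
engine_eff     = 0.25     # typical ICE tank-to-wheels efficiency
li_ion_wh_kg   = 250      # decent lithium-ion pack, including packaging
motor_eff      = 0.90     # electric drivetrain efficiency

useful_gas  = gasoline_wh_kg * engine_eff   # delivered Wh per kg of fuel
useful_batt = li_ion_wh_kg * motor_eff      # delivered Wh per kg of battery

print(f"gasoline: {useful_gas:,.0f} Wh/kg delivered")
print(f"Li-ion:   {useful_batt:,.0f} Wh/kg delivered")
print(f"ratio:    ~{useful_gas / useful_batt:.0f}x")   # >10x, as claimed
```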

Based on these two unresolved issues, _I place the odds of us avoiding fossil-fuel-related energy issues (major climate or economic damage) at less than 10%_. The impetus for the major changes required will not be sufficiently urgent until the world is seeing severe and undeniable impacts. Civilization will certainly survive -- but there will be no small amount of human suffering during the transition to whatever comes next.

- Ryan Carlyle
q-n-a  qra  expert  energy-resources  climate-change  environment  risk  civilization  nihil  prediction  threat-modeling  world  futurism  biophysical-econ  stock-flow  transportation  technology  economics  long-short-run  no-go  speedometer  modernity  expert-experience 
may 2017 by nhaliday
Say a little prior for me: more on climate change - Statistical Modeling, Causal Inference, and Social Science
http://www.fooledbyrandomness.com/climateletter.pdf
We have only one planet. This fact radically constrains the kinds of risks that are appropriate to take at a large scale. Even a risk with a very low probability becomes unacceptable when it affects all of us – there is no reversing mistakes of that magnitude.
gelman  scitariat  discussion  links  science  meta:science  epistemic  info-dynamics  climate-change  causation  models  thinking  priors-posteriors  atmosphere  environment  multi  pdf  rhetoric  uncertainty  risk  outcome-risk  moments 
april 2017 by nhaliday
Societal collapse - Wikipedia
https://twitter.com/Billare/status/900903803364536321
en.m.wikipedia.org/wiki/Ottoman_d… Despite ever increasing rigor & use of sources, this is why academic historians are useless.
Just like the Roman Empire, the Ottoman Empire never declined. That common-sense notion is too "simplistic." Instead, it was "transformed."
Nevertheless. There was a period when surrounding European powers "trembled at the name" of the vizier or the sultan or the janissary corps.
Some time later, they were eagerly carving up its territory & using it as a diplomatic plaything.
Something happened in that meantime. Something important. I would like to be able to read straightforwardly what those things were.
https://twitter.com/GarettJones/status/900910830090412032
https://archive.is/eROiG
Hah! I am right now about halfway through Bryan Ward-Perkins book The Fall of Rome and the end of civilization.
One of the best books I have ever read
One of the most important as well for shaping my worldview, my applied epistemology in particular.
history  iron-age  mediterranean  the-classics  gibbon  sociology  anthropology  risk  society  world  antiquity  age-of-discovery  civilization  leviathan  tainter  nihil  wiki  reference  list  prepping  scale  cultural-dynamics  great-powers  conquest-empire  multi  twitter  social  commentary  discussion  unaffiliated  econotariat  garett-jones  spearhead  academia  social-science  rationality  epistemic  info-foraging  MENA  rot  is-ought  kumbaya-kult  backup  truth  reason  absolute-relative  egalitarianism-hierarchy  track-record  hari-seldon 
april 2017 by nhaliday
Annotating Greg Cochran’s interview with James Miller
https://westhunt.wordpress.com/2017/04/05/interview-2/
opinion of Scott and Hanson: https://westhunt.wordpress.com/2017/04/05/interview-2/#comment-90238
Greg's methodist: https://westhunt.wordpress.com/2017/04/05/interview-2/#comment-90256
https://westhunt.wordpress.com/2017/04/05/interview-2/#comment-90299
You have to consider the relative strengths of Japan and the USA. USA was ~10x stronger, industrially, which is what mattered. Technically superior (radar, Manhattan project). Almost entirely self-sufficient in natural resources. Japan was sure to lose, and too crazy to quit, which meant that they would lose after being smashed flat.
--
There’s a fairly common way of looking at things in which the bad guys are not at fault because they’re bad guys, born that way, and thus can’t help it. Well, we can’t help it either, so the hell with them. I don’t think we had to respect Japan’s innate need to fuck everybody in China to death.

https://westhunt.wordpress.com/2017/03/25/ramble-on/
https://westhunt.wordpress.com/2017/03/24/topics/
https://soundcloud.com/user-519115521/greg-cochran-part-1
2nd part: https://pinboard.in/u:nhaliday/b:9ab84243b967

some additional things:
- political correctness, the Cathedral and the left (personnel continuity but not ideology/value) at start
- joke: KT impact = asteroid mining, every mass extinction = intelligent life destroying itself
- Alawites: not really Muslim, women liberated because "they don't have souls", ended up running shit in Syria because they were only ones that wanted to help the British during colonial era
- solution to Syria: "put the Alawites in NYC"
- Zimbabwe was OK for a while, if South Africa goes sour, just "put the Boers in NYC" (Miller: left would probably say they are "culturally incompatible", lol)
- story about Lincoln and his great-great-great-grandfather
- skepticism of free speech
- free speech, authoritarianism, and defending against the Mongols
- Scott crazy (not in a terrible way), LW crazy (genetics), ex.: polyamory
- TFP or microbio are better investments than stereotypical EA stuff
- just ban AI worldwide (bully other countries to enforce)
- bit of a back-and-forth about macroeconomics
- not sure climate change will be huge issue. world's been much warmer before and still had a lot of mammals, etc.
- he quite likes Pseudoerasmus
- shits on modern conservatism/Bret Stephens a bit

- mentions Japan having industrial base a tenth the size of the US's and no chance of winning WW2 around 11m mark
- describes himself as "fairly religious" around 20m mark
- 27m30s: Eisenhower was smart, read Carlyle, classical history, etc.

but was Nixon smarter?: https://www.gnxp.com/WordPress/2019/03/18/open-thread-03-18-2019/
The Scandals of Meritocracy. Virtue vs. competence. Would you rather have a boss who is evil but competent, or good but incompetent? The reality is you have to balance the two. Richard Nixon was probably smarter than Dwight Eisenhower in raw g, but Eisenhower was probably a better person.
org:med  west-hunter  scitariat  summary  links  podcast  audio  big-picture  westminster  politics  culture-war  academia  left-wing  ideology  biodet  error  crooked  bounded-cognition  stories  history  early-modern  africa  developing-world  death  mostly-modern  deterrence  japan  asia  war  meta:war  risk  ai  climate-change  speculation  agriculture  environment  prediction  religion  islam  iraq-syria  gender  dominant-minority  labor  econotariat  cracker-econ  coalitions  infrastructure  parasites-microbiome  medicine  low-hanging  biotech  terrorism  civil-liberty  civic  social-science  randy-ayndy  law  polisci  government  egalitarianism-hierarchy  expression-survival  disease  commentary  authoritarianism  being-right  europe  nordic  cohesion  heuristic  anglosphere  revolution  the-south  usa  thinking  info-dynamics  yvain  ssc  lesswrong  ratty  subculture  values  descriptive  epistemic  cost-disease  effective-altruism  charity  econ-productivity  technology  rhetoric  metameta  ai-control  critique  sociology  arms  paying-rent  parsimony  writing  realness  migration  eco 
april 2017 by nhaliday
Sustainability | West Hunter
There have been societies that functioned for a long time, thousands of years. They had sustainable demographic patterns. That means that they had enough children to replace themselves – not necessarily in every generation, but over the long haul. But sustainability requires more than that. Long-lived civilizations [ones with cities, literacy, governments, and all that] had a pattern of natural selection that didn’t drastically decrease intelligence – in some cases, one that favored it, at least in some subgroups. There was also ongoing selection against mutational accumulation – which meant that individuals with more genetic load than average were significantly less likely to survive and reproduce. Basically, this happened through high child mortality, and in some cases by lower fitness in lower socioeconomic classes [starvation]. There was nothing fun about it.

Modern industrialized societies are failing on all three counts. Every population that can make a decent cuckoo clock has below-replacement fertility. The demographic pattern also selects against intelligence, something like one IQ point a generation. And, even if people at every level of intelligence had the same number of children, so that there was no selection against IQ, we would still be getting more and more messed up, because there’s not enough selection going on to counter ongoing mutations.

It is possible that some country, or countries, will change in a way that avoids civilizational collapse. I doubt if this will happen by voluntary action. Some sort of technological solution might also arise – but it has to be soon.
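(for scale, the "one IQ point a generation" claim is just the breeder's equation R = h² × S; the heritability and selection differential below are illustrative assumptions, not numbers from the post:)

```python
h2 = 0.5    # assumed narrow-sense heritability of IQ
S  = -2.0   # assumed selection differential: parents of the next
            # generation average 2 IQ points below the population mean

R = h2 * S  # breeder's equation: response to selection per generation
print(f"change per generation: {R:+.1f} IQ points")           # -1.0
print(f"after 10 generations (~3 centuries): {10 * R:+.0f}")  # -10
```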

Bruce Charlton, Victorian IQ, Episcopalians, military officers:
https://westhunt.wordpress.com/2013/05/09/sustainability/#comment-13188
https://westhunt.wordpress.com/2013/05/09/sustainability/#comment-13207
Again, I don’t believe a word of it. As for the declining rate of innovation, you have to have a really wide-ranging understanding of modern science and technology to have any feeling for what the underlying causes are. I come closer than most, and I probably don’t know enough. You don’t know enough. Let me tell you one thing: if genetic potential for IQ had dropped 1 std, we’d see the end of progress in higher mathematics, and that has not happened at all.

Moreover, the selective trends disfavoring IQ all involve higher education among women and apparently nothing else – a trend which didn’t really get started until much more recently.

Not long enough, nor is dysgenic selection strong enough.

ranting on libertarians:
https://westhunt.wordpress.com/2013/05/09/sustainability/#comment-13348
About 40% of those Americans with credit cards keep a balance on their credit cards and pay ridiculously high interest. But that must be the right decision!
https://westhunt.wordpress.com/2013/05/09/sustainability/#comment-13499
” then that is their decision” – that’s fucking obvious. The question is whether they tend to make decisions that work very well – saying ‘that is their decision” is exactly the kind of crap I was referring to. As for “they probably have it coming” – if I’m smarter than you, which I surely am, using those smarts to rook you in every possible way must be just peachy. In fact, I’ll bet I could manage it even after warning you in advance.

On average, families in this country have paid between 10% and 14% of their income in debt service over the past few decades. That fraction averages considerably higher in low-income families – more like 18%. A quarter of those low income families are putting over 40% of their income into debt service. That’s mostly stuff other than credit-card debt.

Is this Straussian?

hmm:
Examining Arguments Made by Interest Rate Cap Advocates: https://www.mercatus.org/system/files/peirce_reframing_ch13.pdf

https://twitter.com/tcjfs/status/964972690435133440
https://archive.is/r34J8
Interest rate caps on $1,000 installment loans, by US state, today and in 1935
west-hunter  civilization  dysgenics  fertility  legacy  risk  mutation  genetic-load  discussion  rant  iq  demographics  gnon  sapiens  trends  malthus  leviathan  long-short-run  science-anxiety  error  biodet  duty  s:*  malaise  big-picture  debt  randy-ayndy  recent-selection  demographic-transition  order-disorder  deep-materialism  🌞  age-generation  scitariat  rhythm  allodium  behavioral-gen  nihil  zeitgeist  rot  the-bones  prudence  darwinian  flux-stasis  counter-revolution  modernity  microfoundations  multi  poast  civil-liberty  is-ought  track-record  time-preference  temperance  patience  antidemos  money  compensation  class  coming-apart  pro-rata  behavioral-econ  blowhards  history  early-modern  britain  religion  christianity  protestant-catholic  gender  science  innovation  frontier  the-trenches  speedometer  military  elite  optimate  data  intervention  aphorism  alt-inst  ethics  morality  straussian  intelligence  class-warfare  authoritarianism  hari-seldon  interests  crooked  twitter  social  back 
march 2017 by nhaliday
There’s good eating on one of those | West Hunter
Recently, Y.-H. Percival Zhang and colleagues demonstrated a method of converting cellulose into starch and glucose. Zhang thinks that it can be scaled up into an effective industrial process, one that could produce a thousand calories of starch for less than a dollar from cellulosic waste. This would be a good thing. It’s not just that there are 7 billion people – the problem is that we have hardly any food reserves (about 74 days at last report).
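(rough scale of what such a process would have to supply — population and intake are round-number assumptions:)

```python
population   = 7e9       # people, as in the post
kcal_per_day = 2_000     # subsistence intake per person, round number
usd_per_kcal = 1 / 1000  # Zhang's target: $1 per 1,000 kcal of starch

daily_kcal = population * kcal_per_day
daily_cost = daily_kcal * usd_per_kcal

print(f"daily need: {daily_kcal:.1e} kcal")               # 1.4e+13
print(f"daily cost at target: ${daily_cost / 1e9:.0f}B")  # ~$14B/day
print(f"per year: ${daily_cost * 365 / 1e12:.1f}T")       # ~$5T/yr
```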

Prepare for Nuclear Winter: http://www.overcomingbias.com/2017/09/prepare-for-nuclear-winter.html
If a 1km asteroid were to hit the Earth, the dust it kicked up would block most sunlight over most of the world for 3 to 10 years. There’s only a one in a million chance of that happening per year, however. Whew. However, there’s a ten times bigger chance that a super volcano, such as the one hiding under Yellowstone, might explode, for a similar result. And I’d put the chance of a full scale nuclear war at ten to one hundred times larger than that: one in ten thousand to one thousand per year. Over a century, that becomes a one to ten percent chance. Not whew; grimace instead.
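(Hanson's per-year-to-per-century conversion, made explicit — a minimal sketch assuming independent years:)

```python
def horizon_risk(annual_p, years=100):
    """Chance of at least one occurrence over a horizon,
    assuming independence across years."""
    return 1 - (1 - annual_p) ** years

for p in (1e-4, 1e-3):   # his range for full-scale nuclear war
    print(f"annual {p:.0e} -> century {horizon_risk(p):.1%}")
# annual 1e-04 -> century 1.0%
# annual 1e-03 -> century 9.5%   (the "one to ten percent")
```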

There is a substantial chance that a full scale nuclear war would produce a nuclear winter, with a similar effect: sunlight is blocked for 3-10 years or more. Yes, there are good criticisms of the more extreme forecasts, but there’s still a big chance the sun gets blocked in a full scale nuclear war, and there’s even a substantial chance of the same result in a mere regional war, where only 100 nukes explode (the world now has 15,000 nukes).

...

Yeah, probably a few people live on, and so humanity doesn’t go extinct. But the only realistic chance most of us have of surviving in this scenario is to use our vast industrial and scientific abilities to make food. We actually know of many plausible ways to make more than enough food to feed everyone for ten years, even with no sunlight. And even if big chunks of the world economy are in shambles. But for that to work, we must preserve enough social order to make use of at least the core of key social institutions.

http://www.overcomingbias.com/2017/09/mre-futures-to-not-starve.html

Nuclear War Survival Skills: http://oism.org/nwss/nwss.pdf
Updated and Expanded 1987 Edition

Nuclear winter: https://en.wikipedia.org/wiki/Nuclear_winter

Yellowstone supervolcano may blow sooner than thought — and could wipe out life on the planet: https://www.usatoday.com/story/news/nation/2017/10/12/yellowstone-supervolcano-may-blow-sooner-than-thought-could-wipe-out-life-planet/757337001/
http://www.foxnews.com/science/2017/10/12/yellowstone-supervolcano-could-blow-faster-than-thought-destroy-all-mankind.html
http://fortune.com/2017/10/12/yellowstone-park-supervolcano/
https://www.sciencenews.org/article/supervolcano-blast-would-blanket-us-ash
west-hunter  discussion  study  commentary  bio  food  energy-resources  technology  risk  the-world-is-just-atoms  agriculture  wild-ideas  malthus  objektbuch  threat-modeling  scitariat  scale  biophysical-econ  allodium  nihil  prepping  ideas  dirty-hands  magnitude  multi  ratty  hanson  planning  nuclear  arms  deterrence  institutions  alt-inst  securities  markets  pdf  org:gov  white-paper  survival  time  earth  war  wiki  reference  environment  sky  news  org:lite  hmm  idk  org:biz  org:sci  simulation  maps  usa  geoengineering 
march 2017 by nhaliday