nhaliday + threat-modeling   62

Lindy effect - Wikipedia
The Lindy effect is a theory that the future life expectancy of some non-perishable things like a technology or an idea is proportional to their current age, so that every additional period of survival implies a longer remaining life expectancy.[1] Where the Lindy effect applies, mortality rate decreases with time. In contrast, living creatures and mechanical things follow a bathtub curve where, after "childhood", the mortality rate increases with time. Because life expectancy is probabilistically derived, a thing may become extinct before its "expected" survival. In other words, one needs to gauge both the age and "health" of the thing to determine continued survival.
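A minimal sketch of the distinction, under assumed lifetime distributions (a Pareto power law for the Lindy regime, an exponential as the memoryless baseline; both parameter choices are mine, purely illustrative): expected remaining lifetime grows with observed age in the power-law case but stays flat in the memoryless one.

```python
import random

def expected_remaining(sample, age, n=200_000):
    """Monte Carlo estimate of E[lifetime - age | lifetime > age]."""
    survivors = [t for t in (sample() for _ in range(n)) if t > age]
    return sum(t - age for t in survivors) / len(survivors)

random.seed(0)
power_law = lambda: random.paretovariate(3.0)  # heavy-tailed lifetimes: Lindy regime
memoryless = lambda: random.expovariate(1.0)   # exponential: no Lindy effect

for age in (1.0, 2.0, 4.0):
    print(f"age {age}: power-law remaining ~ {expected_remaining(power_law, age):.2f}, "
          f"exponential remaining ~ {expected_remaining(memoryless, age):.2f}")
```

For a Pareto lifetime with alpha = 3, the expected remaining life of something that has survived to age t is t/2, so every doubling of observed age doubles the forecast; the exponential's forecast never moves, and a bathtub-curve hazard would shrink it.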
wiki  reference  concept  metabuch  ideas  street-fighting  planning  comparison  time  distribution  flux-stasis  history  measure  correlation  arrows  branches  pro-rata  manifolds  aging  stylized-facts  age-generation  robust  technology  thinking  cost-benefit  conceptual-vocab  methodology  threat-modeling  efficiency  neurons  tools  track-record 
2 days ago by nhaliday
Preventing the Collapse of Civilization [video] | Hacker News
- Jonathan Blow

NB: DevGAMM is a game industry conference

- loss of technological knowledge (Antikythera mechanism, aqueducts, etc.)
- hardware driving most gains, not software
- software's actually less robust, often poorly designed and overengineered these days
- *lists bugs he's encountered recently*
- knowledge of trivia comes to displace general, deep knowledge
- does at least acknowledge value of DRY, reusing code, abstraction saving dev time
hn  commentary  video  presentation  techtariat  carmack  pragmatic  contrarianism  pessimism  sv  tech  unix  rhetoric  critique  programming  engineering  pls  worrydream  software  hardware  performance  robust  trends  multiplicative  roots  impact  comparison  history  iron-age  the-classics  mediterranean  conquest-empire  gibbon  technology  the-world-is-just-atoms  flux-stasis  increase-decrease  games  graphics  hmm  idk  systems  os  abstraction  intricacy  worse-is-better/the-right-thing  compilers  build-packaging  microsoft  osx  apple  reflection  assembly  c(pp)  expert-experience  things  knowledge  detail-architecture  thick-thin  trivia  info-dynamics  caching  frameworks  generalization  systematic-ad-hoc  universalism-particularism  analytical-holistic  structure  tainter  libraries  tradeoffs  prepping  threat-modeling  network-structure  writing  risk  local-global  trade  composition-decomposition  coupling-cohesion  parsimony  civilization  complex-systems  system-design  multi  error  list  debugging 
4 weeks ago by nhaliday
Complexity no Bar to AI - Gwern.net
Critics of AI risk suggest that diminishing returns to computing (formalized asymptotically) mean AI will be weak; this argument relies on a large number of questionable premises, ignores additional resources, constant factors, and nonlinear returns to small intelligence advantages, and is highly unlikely to hold. (computer science, transhumanism, AI, R)
created: 1 June 2014; modified: 1 Feb 2018; status: finished; confidence: likely; importance: 10
ratty  gwern  analysis  faq  ai  risk  speedometer  intelligence  futurism  cs  computation  complexity  tcs  linear-algebra  nonlinearity  convexity-curvature  average-case  adversarial  article  time-complexity  singularity  iteration-recursion  magnitude  multiplicative  lower-bounds  no-go  performance  hardware  humanity  psychology  cog-psych  psychometrics  iq  distribution  moments  complement-substitute  hanson  ems  enhancement  parable  detail-architecture  universalism-particularism  neuro  ai-control  environment  climate-change  threat-modeling  security  theory-practice  hacker  academia  realness  crypto  rigorous-crypto  usa  government 
april 2018 by nhaliday
The Hanson-Yudkowsky AI-Foom Debate - Machine Intelligence Research Institute
How Deviant Recent AI Progress Lumpiness?: http://www.overcomingbias.com/2018/03/how-deviant-recent-ai-progress-lumpiness.html
I seem to disagree with most people working on artificial intelligence (AI) risk. While with them I expect rapid change once AI is powerful enough to replace almost all human workers, I expect this change to be spread across the world, not concentrated in one main localized AI system. The efforts of AI risk folks to design AI systems whose values won’t drift might stop global AI value drift if there is just one main AI system. But doing so in a world of many AI systems at similar ability levels requires strong global governance of AI systems, which is a tall order anytime soon. Their continued focus on preventing single-system drift suggests that they expect a single main AI system.

The main reason that I understand to expect relatively local AI progress is if AI progress is unusually lumpy, i.e., arriving in unusually few, larger packages rather than in the usual many smaller packages. If one AI team finds a big lump, it might jump way ahead of the other teams.

However, we have a vast literature on the lumpiness of research and innovation more generally, which clearly says that usually most of the value in innovation is found in many small innovations. We have also so far seen this in computer science (CS) and AI, even if there have been historical examples where much value was found in particular big innovations, such as nuclear weapons or the origin of humans.

Apparently many people associated with AI risk, including the star machine learning (ML) researchers that they often idolize, find it intuitively plausible that AI and ML progress is exceptionally lumpy. Such researchers often say, “My project is ‘huge’, and will soon do it all!” A decade ago my ex-co-blogger Eliezer Yudkowsky and I argued here on this blog about our differing estimates of AI progress lumpiness. He recently offered AlphaGo Zero as evidence of AI lumpiness:


In this post, let me give another example (beyond two big lumps in a row) of what could change my mind. I offer a clear observable indicator, for which data should be available now: deviant citation lumpiness in recent ML research. One standard measure of research impact is citations; bigger, lumpier developments gain more citations than smaller ones. And it turns out that the lumpiness of citations is remarkably constant across research fields! See this March 3 paper in Science:
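As an illustration of the kind of lumpiness metric at stake (the top-share metric and the log-normal citation model below are my assumptions, not the post's or the Science paper's): one simple measure is the share of a field's citations captured by its top 1% of papers.

```python
import random

random.seed(4)

def top_share(counts, frac=0.01):
    """Share of all citations captured by the most-cited `frac` of papers."""
    ranked = sorted(counts, reverse=True)
    k = max(1, int(len(ranked) * frac))
    return sum(ranked[:k]) / sum(ranked)

# hypothetical field: per-paper citation counts roughly log-normal
field = [random.lognormvariate(2.0, 1.3) for _ in range(20_000)]
print(f"top 1% of papers hold {top_share(field):.0%} of all citations")
```

If citation lumpiness really is near-constant across fields, this statistic computed on recent ML papers should look like every other field's; a markedly fatter top share would be the "deviant lumpiness" Hanson asks for.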

I Still Don’t Get Foom: http://www.overcomingbias.com/2014/07/30855.html
All of which makes it look like I’m the one with the problem; everyone else gets it. Even so, I’m gonna try to explain my problem again, in the hope that someone can explain where I’m going wrong. Here goes.

“Intelligence” just means an ability to do mental/calculation tasks, averaged over many tasks. I’ve always found it plausible that machines will continue to do more kinds of mental tasks better, and eventually be better at pretty much all of them. But what I’ve found it hard to accept is a “local explosion.” This is where a single machine, built by a single project using only a tiny fraction of world resources, goes in a short time (e.g., weeks) from being so weak that it is usually beaten by a single human with the usual tools, to so powerful that it easily takes over the entire world. Yes, smarter machines may greatly increase overall economic growth rates, and yes such growth may be uneven. But this degree of unevenness seems implausibly extreme. Let me explain.

If we count by economic value, humans now do most of the mental tasks worth doing. Evolution has given us a brain chock-full of useful well-honed modules. And the fact that most mental tasks require the use of many modules is enough to explain why some of us are smarter than others. (There’d be a common “g” factor in task performance even with independent module variation.) Our modules aren’t that different from those of other primates, but because ours are different enough to allow lots of cultural transmission of innovation, we’ve out-competed other primates handily.
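Hanson's parenthetical can be checked with a toy simulation (the sizes and the mean-over-modules scoring rule are my assumptions, purely illustrative): give each person independent module qualities, score each task as the average quality over a random subset of modules, and the shared modules alone induce positively correlated task scores, i.e., a common "g"-like factor, with no underlying general ability built in.

```python
import random, math

random.seed(1)
n_people, n_modules, n_tasks, k = 500, 40, 10, 20

# each person's module qualities: independent draws, no common factor built in
people = [[random.gauss(0, 1) for _ in range(n_modules)] for _ in range(n_people)]

# each task draws on k randomly chosen modules; score = mean quality over them
tasks = [random.sample(range(n_modules), k) for _ in range(n_tasks)]
scores = [[sum(p[m] for m in task) / k for task in tasks] for p in people]

def corr(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / math.sqrt(vx * vy)

cols = list(zip(*scores))  # one column of scores per task, across people
pairs = [corr(cols[i], cols[j]) for i in range(n_tasks) for j in range(i + 1, n_tasks)]
print(f"mean correlation between task scores: {sum(pairs) / len(pairs):.2f}")
```

Two tasks sharing s of their k modules correlate at roughly s/k, so the expected overlap alone produces a positive manifold across all task pairs.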

We’ve had computers for over seventy years, and have slowly built up libraries of software modules for them. Like brains, computers do mental tasks by combining modules. An important mental task is software innovation: improving these modules, adding new ones, and finding new ways to combine them. Ideas for new modules are sometimes inspired by the modules we see in our brains. When an innovation team finds an improvement, they usually sell access to it, which gives them resources for new projects, and lets others take advantage of their innovation.


In Bostrom’s graph above the line for an initially small project and system has a much higher slope, which means that in a short time it becomes vastly better at software innovation. Better than the entire rest of the world put together. And my key question is: how could it plausibly do that? Since the rest of the world is already trying the best it can to usefully innovate, and to abstract to promote such innovation, what exactly gives one small project such a huge advantage to let it innovate so much faster?


In fact, most software innovation seems to be driven by hardware advances, instead of innovator creativity. Apparently, good ideas are available but must usually wait until hardware is cheap enough to support them.

Yes, sometimes architectural choices have wider impacts. But I was an artificial intelligence researcher for nine years, ending twenty years ago, and I never saw an architecture choice make a huge difference, relative to other reasonable architecture choices. For most big systems, overall architecture matters a lot less than getting lots of detail right. Researchers have long wandered the space of architectures, mostly rediscovering variations on what others found before.

Some hope that a small project could be much better at innovation because it specializes in that topic, and much better understands new theoretical insights into the basic nature of innovation or intelligence. But I don’t think those are actually topics where one can usefully specialize much, or where we’ll find much useful new theory. To be much better at learning, the project would instead have to be much better at hundreds of specific kinds of learning. Which is very hard to do in a small project.

What does Bostrom say? Alas, not much. He distinguishes several advantages of digital over human minds, but all software shares those advantages. Bostrom also distinguishes five paths: better software, brain emulation (i.e., ems), biological enhancement of humans, brain-computer interfaces, and better human organizations. He doesn’t think interfaces would work, and sees organizations and better biology as only playing supporting roles.


Similarly, while you might imagine someday standing in awe in front of a superintelligence that embodies all the power of a new age, superintelligence just isn’t the sort of thing that one project could invent. As “intelligence” is just the name we give to being better at many mental tasks by using many good mental modules, there’s no one place to improve it. So I can’t see a plausible way one project could increase its intelligence vastly faster than could the rest of the world.

Takeoff speeds: https://sideways-view.com/2018/02/24/takeoff-speeds/
Futurists have argued for years about whether the development of AGI will look more like a breakthrough within a small group (“fast takeoff”), or a continuous acceleration distributed across the broader economy or a large firm (“slow takeoff”).

I currently think a slow takeoff is significantly more likely. This post explains some of my reasoning and why I think it matters. Mostly the post lists arguments I often hear for a fast takeoff and explains why I don’t find them compelling.

(Note: this is not a post about whether an intelligence explosion will occur. That seems very likely to me. Quantitatively I expect it to go along these lines. So e.g. while I disagree with many of the claims and assumptions in Intelligence Explosion Microeconomics, I don’t disagree with the central thesis or with most of the arguments.)
ratty  lesswrong  subculture  miri-cfar  ai  risk  ai-control  futurism  books  debate  hanson  big-yud  prediction  contrarianism  singularity  local-global  speed  speedometer  time  frontier  distribution  smoothness  shift  pdf  economics  track-record  abstraction  analogy  links  wiki  list  evolution  mutation  selection  optimization  search  iteration-recursion  intelligence  metameta  chart  analysis  number  ems  coordination  cooperate-defect  death  values  formal-values  flux-stasis  philosophy  farmers-and-foragers  malthus  scale  studying  innovation  insight  conceptual-vocab  growth-econ  egalitarianism-hierarchy  inequality  authoritarianism  wealth  near-far  rationality  epistemic  biases  cycles  competition  arms  zero-positive-sum  deterrence  war  peace-violence  winner-take-all  technology  moloch  multi  plots  research  science  publishing  humanity  labor  marginal  urban-rural  structure  composition-decomposition  complex-systems  gregory-clark  decentralized  heavy-industry  magnitude  multiplicative  endogenous-exogenous  models  uncertainty  decision-theory  time-prefer 
april 2018 by nhaliday
Eternity in six hours: intergalactic spreading of intelligent life and sharpening the Fermi paradox
We do this by demonstrating that traveling between galaxies – indeed even launching a colonisation project for the entire reachable universe – is a relatively simple task for a star-spanning civilization, requiring modest amounts of energy and resources. We start by demonstrating that humanity itself could likely accomplish such a colonisation project in the foreseeable future, should we want to, and then demonstrate that there are millions of galaxies that could have reached us by now, using similar methods. This results in a considerable sharpening of the Fermi paradox.
pdf  study  article  essay  anthropic  fermi  space  expansionism  bostrom  ratty  philosophy  xenobio  ideas  threat-modeling  intricacy  time  civilization  🔬  futurism  questions  paradox  risk  physics  engineering  interdisciplinary  frontier  technology  volo-avolo  dirty-hands  ai  automation  robotics  duplication  iteration-recursion  von-neumann  data  scale  magnitude  skunkworks  the-world-is-just-atoms  hard-tech  ems  bio  bits  speedometer  nature  model-organism  mechanics  phys-energy  relativity  electromag  analysis  spock  nitty-gritty  spreading  hanson  street-fighting  speed  gedanken  nibble 
march 2018 by nhaliday
Existential Risks: Analyzing Human Extinction Scenarios
Would you endorse choosing policy to max the expected duration of civilization, at least as a good first approximation?
Can anyone suggest a different first approximation that would get more votes?

How useful would it be to agree on a relatively-simple first-approximation observable-after-the-fact metric for what we want from the future universe, such as total life years experienced, or civilization duration?

We're Underestimating the Risk of Human Extinction: https://www.theatlantic.com/technology/archive/2012/03/were-underestimating-the-risk-of-human-extinction/253821/
An Oxford philosopher argues that we are not adequately accounting for technology's risks—but his solution to the problem is not for Luddites.

Anderson: You have argued that we underrate existential risks because of a particular kind of bias called observation selection effect. Can you explain a bit more about that?

Bostrom: The idea of an observation selection effect is maybe best explained by first considering the simpler concept of a selection effect. Let's say you're trying to estimate how large the largest fish in a given pond is, and you use a net to catch a hundred fish and the biggest fish you find is three inches long. You might be tempted to infer that the biggest fish in this pond is not much bigger than three inches, because you've caught a hundred of them and none of them are bigger than three inches. But if it turns out that your net could only catch fish up to a certain length, then the measuring instrument that you used would introduce a selection effect: it would only select from a subset of the domain you were trying to sample.

Now that's a kind of standard fact of statistics, and there are methods for trying to correct for it and you obviously have to take that into account when considering the fish distribution in your pond. An observation selection effect is a selection effect introduced not by limitations in our measurement instrument, but rather by the fact that all observations require the existence of an observer. This becomes important, for instance, in evolutionary biology. For instance, we know that intelligent life evolved on Earth. Naively, one might think that this piece of evidence suggests that life is likely to evolve on most Earth-like planets. But that would be to overlook an observation selection effect. For no matter how small the proportion of all Earth-like planets that evolve intelligent life, we will find ourselves on a planet that did. Our data point, that intelligent life arose on our planet, is predicted equally well by the hypothesis that intelligent life is very improbable even on Earth-like planets as by the hypothesis that intelligent life is highly probable on Earth-like planets. When it comes to human extinction and existential risk, there are certain controversial ways that observation selection effects might be relevant.
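Bostrom's net example can be made concrete with a toy simulation (the log-normal length distribution and its parameters are mine, purely illustrative): the maximum of the censored catch is capped by the instrument, and says nothing about the true maximum in the pond.

```python
import random

random.seed(2)
NET_LIMIT = 3.0  # inches: the net only holds fish up to this length (per the example)

# hypothetical pond: fish lengths (inches) roughly log-normal
pond = [random.lognormvariate(0.5, 0.6) for _ in range(1000)]

catchable = [f for f in pond if f <= NET_LIMIT]  # the subset the net can sample
catch = random.sample(catchable, 100)            # "catch a hundred fish"

print(f"largest fish actually in the pond: {max(pond):.1f} in")
print(f"largest fish in the catch:         {max(catch):.1f} in")
```

However many fish you haul in, the sample maximum can never exceed the net's limit, which is exactly why inferring "no fish bigger than three inches" from the catch is unsafe.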
bostrom  ratty  miri-cfar  skunkworks  philosophy  org:junk  list  top-n  frontier  speedometer  risk  futurism  local-global  scale  death  nihil  technology  simulation  anthropic  nuclear  deterrence  environment  climate-change  arms  competition  ai  ai-control  genetics  genomics  biotech  parasites-microbiome  disease  offense-defense  physics  tails  network-structure  epidemiology  space  geoengineering  dysgenics  ems  authoritarianism  government  values  formal-values  moloch  enhancement  property-rights  coordination  cooperate-defect  flux-stasis  ideas  prediction  speculation  humanity  singularity  existence  cybernetics  study  article  letters  eden-heaven  gedanken  multi  twitter  social  discussion  backup  hanson  metrics  optimization  time  long-short-run  janus  telos-atelos  poll  forms-instances  threat-modeling  selection  interview  expert-experience  malthus  volo-avolo  intel  leviathan  drugs  pharma  data  estimate  nature  longevity  expansionism  homo-hetero  utopia-dystopia 
march 2018 by nhaliday
Unaligned optimization processes as a general problem for society
TL;DR: There are lots of systems in society which seem to fit the pattern of “the incentives for this system are a pretty good approximation of what we actually want, so the system produces good results until it gets powerful, at which point it gets terrible results.”


Here are some more places where this idea could come into play:

- Marketing—humans try to buy things that will make our lives better, but our process for determining this is imperfect. A more powerful optimization process produces extremely good advertising to sell us things that aren’t actually going to make our lives better.
- Politics—we get extremely effective demagogues who pit us against our essential good values.
- Lobbying—as industries get bigger, the optimization process to choose great lobbyists for industries gets larger, but the process to make regulators robust doesn’t get correspondingly stronger. So regulatory capture gets worse and worse. Rent-seeking gets more and more significant.
- Online content—in a weaker internet, sites can’t be addictive except via being good content. In the modern internet, people can feel addicted to things that they wish they weren’t addicted to. We didn’t use to have the social expertise to make clickbait nearly as well as we do it today.
- News—Hyperpartisan news sources are much more worth it if distribution is cheaper and the market is bigger. News sources get an advantage from being truthful, but as society gets bigger, this advantage gets proportionally smaller.


For these reasons, I think it’s quite plausible that humans are fundamentally unable to have a “good” society with a population greater than some threshold, particularly if all these people have access to modern technology. Humans don’t have the rigidity to maintain social institutions in the face of that kind of optimization process. I think it is unlikely but possible (10%?) that this threshold population is smaller than the current population of the US, and that the US will crumble due to the decay of these institutions in the next fifty years if nothing totally crazy happens.
ratty  thinking  metabuch  reflection  metameta  big-yud  clever-rats  ai-control  ai  risk  scale  quality  ability-competence  network-structure  capitalism  randy-ayndy  civil-liberty  marketing  institutions  economics  political-econ  politics  polisci  advertising  rent-seeking  government  coordination  internet  attention  polarization  media  truth  unintended-consequences  alt-inst  efficiency  altruism  society  usa  decentralized  rhetoric  prediction  population  incentives  intervention  criminal-justice  property-rights  redistribution  taxes  externalities  science  monetary-fiscal  public-goodish  zero-positive-sum  markets  cost-benefit  regulation  regularizer  order-disorder  flux-stasis  shift  smoothness  phase-transition  power  definite-planning  optimism  pessimism  homo-hetero  interests  eden-heaven  telos-atelos  threat-modeling  alignment 
february 2018 by nhaliday
Fermi paradox - Wikipedia
Rare Earth hypothesis: https://en.wikipedia.org/wiki/Rare_Earth_hypothesis
Fine-tuned Universe: https://en.wikipedia.org/wiki/Fine-tuned_Universe
something to keep in mind:
Puddle theory is a term coined by Douglas Adams to satirize arguments that the universe is made for man.[54][55] As stated in Adams' book The Salmon of Doubt:[56]
Imagine a puddle waking up one morning and thinking, “This is an interesting world I find myself in, an interesting hole I find myself in, fits me rather neatly, doesn't it? In fact, it fits me staggeringly well, must have been made to have me in it!” This is such a powerful idea that as the sun rises in the sky and the air heats up and as, gradually, the puddle gets smaller and smaller, it's still frantically hanging on to the notion that everything's going to be all right, because this World was meant to have him in it, was built to have him in it; so the moment he disappears catches him rather by surprise. I think this may be something we need to be on the watch out for.
article  concept  paradox  wiki  reference  fermi  anthropic  space  xenobio  roots  speculation  ideas  risk  threat-modeling  civilization  nihil  🔬  deep-materialism  new-religion  futurism  frontier  technology  communication  simulation  intelligence  eden  war  nuclear  deterrence  identity  questions  multi  explanans  physics  theos  philosophy  religion  chemistry  bio  hmm  idk  degrees-of-freedom  lol  troll  existence 
january 2018 by nhaliday
[1709.01149] Biotechnology and the lifetime of technical civilizations
The number of people able to end Earth's technical civilization has heretofore been small. Emerging dual-use technologies, such as biotechnology, may give similar power to thousands or millions of individuals. To quantitatively investigate the ramifications of such a marked shift on the survival of both terrestrial and extraterrestrial technical civilizations, this paper presents a two-parameter model for civilizational lifespans, i.e. the quantity L in Drake's equation for the number of communicating extraterrestrial civilizations. One parameter characterizes the population lethality of a civilization's biotechnology and the other characterizes the civilization's psychosociology. L is demonstrated to be less than the inverse of the product of these two parameters. Using empiric data from Pubmed to inform the biotechnology parameter, the model predicts human civilization's median survival time as decades to centuries, even with optimistic psychosociological parameter values, thereby positioning biotechnology as a proximate threat to human civilization. For an ensemble of civilizations having some median calculated survival time, the model predicts that, after 80 times that duration, only one in 1024 civilizations will survive -- a tempo and degree of winnowing compatible with Hanson's "Great Filter." Thus, assuming that civilizations universally develop advanced biotechnology, before they become vigorous interstellar colonizers, the model provides a resolution to the Fermi paradox.
preprint  article  gedanken  threat-modeling  risk  biotech  anthropic  fermi  ratty  hanson  models  xenobio  space  civilization  frontier  hmm  speedometer  society  psychology  social-psych  anthropology  cultural-dynamics  disease  parasites-microbiome  maxim-gun  prepping  science-anxiety  technology  magnitude  scale  data  prediction  speculation  ideas  🌞  org:mat  study  offense-defense  arms  unintended-consequences  spreading  explanans  sociality  cybernetics 
october 2017 by nhaliday
Overcoming Bias : A Tangled Task Future
So we may often retain systems that inherit the structure of the human brain, and the structures of the social teams and organizations by which humans have worked together. All of which is another way to say: descendants of humans may have a long future as workers. We may have another future besides being retirees or iron-fisted peons ruling over gods. Even in a competitive future with no friendly singleton to ensure preferential treatment, something recognizably like us may continue. And even win.
ratty  hanson  speculation  automation  labor  economics  ems  futurism  prediction  complex-systems  network-structure  intricacy  thinking  engineering  management  law  compensation  psychology  cog-psych  ideas  structure  gray-econ  competition  moloch  coordination  cooperate-defect  risk  ai  ai-control  singularity  number  humanity  complement-substitute  cybernetics  detail-architecture  legacy  threat-modeling  degrees-of-freedom  composition-decomposition  order-disorder  analogy  parsimony  institutions  software  coupling-cohesion 
june 2017 by nhaliday
spaceships - Can there be a space age without petroleum (crude oil)? - Worldbuilding Stack Exchange

What was really important to our development of technology was not oil, but coal. Access to large deposits of high-quality coal largely fueled the industrial revolution, and it was the industrial revolution that really got us on the first rungs of the technological ladder.

Oil is a fantastic fuel for an advanced civilisation, but it's not essential. Indeed, I would argue that our ability to dig oil out of the ground is a crutch, one that we should have discarded long ago. The reason oil is so essential to us today is that all our infrastructure is based on it, but if we'd never had oil we could still have built a similar infrastructure. Solar power was first displayed to the public in 1878. Wind power has been used for centuries. Hydroelectric power is just a modification of the same technology as wind power.

Without oil, a civilisation in the industrial age would certainly be able to progress and advance to the space age. Perhaps not as quickly as we did, but probably more sustainably.

Without coal, though... that's another matter.

What would the industrial age be like without oil and coal?: https://worldbuilding.stackexchange.com/questions/45919/what-would-the-industrial-age-be-like-without-oil-and-coal

Out of the ashes: https://aeon.co/essays/could-we-reboot-a-modern-civilisation-without-fossil-fuels
It took a lot of fossil fuels to forge our industrial world. Now they're almost gone. Could we do it again without them?

But charcoal-based industry didn’t die out altogether. In fact, it survived to flourish in Brazil. Because it has substantial iron deposits but few coalmines, Brazil is the largest charcoal producer in the world and the ninth biggest steel producer. We aren’t talking about a cottage industry here, and this makes Brazil a very encouraging example for our thought experiment.

The trees used in Brazil’s charcoal industry are mainly fast-growing eucalyptus, cultivated specifically for the purpose. The traditional method for creating charcoal is to pile chopped staves of air-dried timber into a great dome-shaped mound and then cover it with turf or soil to restrict airflow as the wood smoulders. The Brazilian enterprise has scaled up this traditional craft to an industrial operation. Dried timber is stacked into squat, cylindrical kilns, built of brick or masonry and arranged in long lines so that they can be easily filled and unloaded in sequence. The largest sites can sport hundreds of such kilns. Once filled, their entrances are sealed and a fire is lit from the top.
q-n-a  stackex  curiosity  gedanken  biophysical-econ  energy-resources  long-short-run  technology  civilization  industrial-revolution  heavy-industry  multi  modernity  frontier  allodium  the-world-is-just-atoms  big-picture  ideas  risk  volo-avolo  news  org:mag  org:popup  direct-indirect  retrofit  dirty-hands  threat-modeling  duplication  iteration-recursion  latin-america  track-record  trivia  cocktail  data 
june 2017 by nhaliday
[1705.03394] That is not dead which can eternal lie: the aestivation hypothesis for resolving Fermi's paradox
If a civilization wants to maximize computation it appears rational to aestivate until the far future in order to exploit the low temperature environment: this can produce a 10^30 multiplier of achievable computation. We hence suggest the "aestivation hypothesis": the reason we are not observing manifestations of alien civilizations is that they are currently (mostly) inactive, patiently waiting for future cosmic eras. This paper analyzes the assumptions going into the hypothesis and how physical law and observational evidence constrain the motivations of aliens compatible with the hypothesis.


simpler explanation (just different math for Drake equation):
Dissolving the Fermi Paradox: http://www.jodrellbank.manchester.ac.uk/media/eps/jodrell-bank-centre-for-astrophysics/news-and-events/2017/uksrn-slides/Anders-Sandberg---Dissolving-Fermi-Paradox-UKSRN.pdf
Overall the argument is that point estimates should not be shoved into a Drake equation and then multiplied by each other, as that requires excess certainty and masks much of the ambiguity of our knowledge about the distributions. Instead, a Bayesian approach should be used, after which the fate of humanity looks much better. Here is one part of the presentation:
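A sketch of the move from point estimates to distributions (the log-uniform ranges below are illustrative stand-ins I chose, not the presentation's actual priors): sample each Drake factor from a wide distribution, multiply the draws, and look at the whole distribution of N rather than a single product of best guesses.

```python
import random, math, statistics

random.seed(3)

def loguniform(lo, hi):
    """Sample with uniform density in log space between lo and hi."""
    return math.exp(random.uniform(math.log(lo), math.log(hi)))

def draw_N():
    # Illustrative uncertainty ranges only -- not the presentation's fitted priors.
    R  = loguniform(1, 100)      # star formation rate per year
    fp = loguniform(0.1, 1)      # fraction of stars with planets
    ne = loguniform(0.1, 10)     # habitable planets per planetary system
    fl = loguniform(1e-30, 1)    # probability of abiogenesis (vast uncertainty)
    fi = loguniform(1e-3, 1)     # intelligence, given life
    fc = loguniform(1e-2, 1)     # detectable communication, given intelligence
    L  = loguniform(1e2, 1e8)    # longevity of the communicating phase, years
    return R * fp * ne * fl * fi * fc * L

samples = [draw_N() for _ in range(100_000)]
p_empty = sum(n < 1 for n in samples) / len(samples)
print(f"mean N ~ {statistics.fmean(samples):.3g}; P(N < 1) ~ {p_empty:.2f}")
```

The toy version shows the qualitative point: a mean of N suggesting a crowded galaxy can coexist with a large probability that N < 1, so an empty sky needn't be surprising once the uncertainty is carried through honestly.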

Life Versus Dark Energy: How An Advanced Civilization Could Resist the Accelerating Expansion of the Universe: https://arxiv.org/abs/1806.05203
The presence of dark energy in our universe is causing space to expand at an accelerating rate. As a result, over the next approximately 100 billion years, all stars residing beyond the Local Group will fall beyond the cosmic horizon and become not only unobservable, but entirely inaccessible, thus limiting how much energy could one day be extracted from them. Here, we consider the likely response of a highly advanced civilization to this situation. In particular, we argue that in order to maximize its access to useable energy, a sufficiently advanced civilization would choose to expand rapidly outward, build Dyson Spheres or similar structures around encountered stars, and use the energy that is harnessed to accelerate those stars away from the approaching horizon and toward the center of the civilization. We find that such efforts will be most effective for stars with masses in the range of M∼(0.2−1)M⊙, and could lead to the harvesting of stars within a region extending out to several tens of Mpc in radius, potentially increasing the total amount of energy that is available to a future civilization by a factor of several thousand. We also discuss the observable signatures of a civilization elsewhere in the universe that is currently in this state of stellar harvesting.
preprint  study  essay  article  bostrom  ratty  anthropic  philosophy  space  xenobio  computation  physics  interdisciplinary  ideas  hmm  cocktail  temperature  thermo  information-theory  bits  🔬  threat-modeling  time  scale  insight  multi  commentary  liner-notes  pdf  slides  error  probability  ML-MAP-E  composition-decomposition  econotariat  marginal-rev  fermi  risk  org:mat  questions  paradox  intricacy  multiplicative  calculation  street-fighting  methodology  distribution  expectancy  moments  bayesian  priors-posteriors  nibble  measurement  existence  technology  geoengineering  magnitude  spatial  density  spreading  civilization  energy-resources  phys-energy  measure  direction  speculation  structure 
may 2017 by nhaliday
One more time | West Hunter
One of our local error sources suggested that it would be impossible to rebuild technical civilization, once fallen. Now if every human were dead I’d agree, but in most other scenarios it wouldn’t be particularly difficult, assuming that the survivors were no more silly and fractious than people are today.  So assume a mild disaster, something like the effect of myxomatosis on the rabbits of Australia, or perhaps toe-to-toe nuclear combat with the Russkis – ~90% casualties worldwide.

Books are everywhere. In the type of scenario I sketched out, almost no knowledge would be lost – so Neolithic tech is irrelevant. Look, if a single copy of the 1911 Britannica survived, all would be well.

You could of course harvest metals from the old cities. But even if you didn’t, the idea that there is no more copper or zinc or tin in the ground is just silly. “Recoverable ore” is mostly an economic concept.

Moreover, if we’re talking wiring and electrical uses, one can use aluminum, which makes up 8% of the Earth’s crust.

Some of those books tell you how to win.

Look, assume that some communities strive to relearn how to make automatic weapons and some don’t. How does that story end? Do I have to explain everything?

I guess so!

Well, perhaps having a zillion times more books around would make a difference. That and all the “X for Dummies” books, which I think the Romans didn’t have.

A lot of Classical civ wasn’t very useful: on the whole they didn’t invent much. On the whole, technology advanced quite a bit more rapidly in Medieval times.

How much coal and oil are in the ground that can still be extracted with 19th century tech? Honest question; I don’t know.
Lots of coal left. Not so much oil (using simple methods), but one could make it from low-grade coal, with the Fischer-Tropsch process. Sasol does this.

Then again, a recovering society wouldn’t need much at first.

reply to: https://westhunt.wordpress.com/2015/05/17/one-more-time/#comment-69220
That’s more like it.

#1. Consider Grand Coulee Dam. Gigawatts. Feeling of power!
#2. Of course.
#3. Might be easier to make superconducting logic circuits with MgB2, starting over.

Your typical biker guy is more mechanically minded than the average Joe. Welding, electrical stuff, this and that.

If fossil fuels were unavailable -or just uneconomical at first- we’d be back to charcoal for our Stanley Steamers and railroads. We’d still have both.

The French, and others, used wood-gasifier trucks during WWII.

Teslas are of course a joke.
west-hunter  scitariat  civilization  risk  nihil  gedanken  frontier  allodium  technology  energy-resources  knowledge  the-world-is-just-atoms  discussion  speculation  analysis  biophysical-econ  big-picture  🔬  ideas  multi  history  iron-age  the-classics  medieval  europe  poast  the-great-west-whale  the-trenches  optimism  volo-avolo  mostly-modern  world-war  gallic  track-record  musk  barons  transportation  driving  contrarianism  agriculture  retrofit  industrial-revolution  dirty-hands  books  competition  war  group-selection  comparison  mediterranean  conquest-empire  gibbon  speedometer  class  threat-modeling  duplication  iteration-recursion  trivia  cocktail  encyclopedic  definite-planning  embodied  gnosis-logos  kumbaya-kult 
may 2017 by nhaliday
How many times over could the world's current supply of nuclear weapons destroy the world? - Quora
A Common Story: “There are enough nuclear weapons to destroy the world many times over.” This is nothing more than poorly crafted fiction, an urban legend. This common conclusion is not based on any factual data. It is based solely on hype, hysteria, propaganda and fear mongering.

If you take every weapon in existence today (approximately 6,500 megatons across 15,000 warheads, an average yield of 433 KT) and put a single bomb in its own 100-square-mile grid cell (10 miles x 10 miles), one bomb per grid, you will contain >95% of the destructive force of each bomb, on average, within the grid it is in. This means the total landmass to receive a destructive force from all the world's nuclear bombs is an area of 1.5 million square miles. Not quite half of the United States, and 1/38 of the world's total land mass… that's it!
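The arithmetic above is easy to check. The warhead and yield figures are as quoted in the answer (not independently verified); the US and world land areas are approximate, commonly cited values added here for the comparison.

```python
# Back-of-envelope check of the figures quoted above.
warheads = 15_000
total_yield_mt = 6_500

avg_yield_kt = total_yield_mt * 1_000 / warheads   # ~433 KT
grid_area_sq_mi = 10 * 10                          # one 10 mi x 10 mi cell per bomb
covered = warheads * grid_area_sq_mi               # total area receiving destructive force

us_land_sq_mi = 3.8e6      # approx. US land area (assumption for comparison)
world_land_sq_mi = 57e6    # approx. world land area (assumption for comparison)

print(f"average yield: {avg_yield_kt:.0f} KT")
print(f"area covered: {covered:,} sq mi")
print(f"fraction of US: {covered / us_land_sq_mi:.2f}")
print(f"fraction of world land: 1/{world_land_sq_mi / covered:.0f}")
```

The numbers come out as claimed: 1.5 million square miles, a bit under half the US, about 1/38 of the world's land.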
q-n-a  qra  arms  nuclear  technology  war  meta:war  impact  deterrence  foreign-policy  usa  world  risk  nihil  scale  trivia  threat-modeling  peace-violence 
may 2017 by nhaliday
What is the likelihood we run out of fossil fuels before we can switch to renewable energy sources? - Quora
1) Can we de-carbon our primary energy production before global warming severely damages human civilization? In the short term this means switching from coal to natural gas, and in the long term replacing both coal and gas generation with carbon-neutral sources such as renewables or nuclear. The developed world cannot accomplish this alone -- it requires worldwide action, and most of the pain will be felt by large developing nations such as India and China. Ultimately this is a political and economic problem. The technology to eliminate most carbon from electricity generation exists today at fairly reasonable cost.

2) Can we develop a better transportation energy storage technology than oil, before market forces drive prices to levels that severely damage the global economy? Fossil fuels are a source of energy, but primarily we use oil in vehicles because it is an exceptional energy TRANSPORT medium. Renewables cannot meet this need because battery technology is completely uncompetitive for most fuel consumers -- prices are an order of magnitude too high and energy density is an order of magnitude too low for adoption of all-electric vehicles outside developed-world urban centers. (Heavy trucking, cargo ships, airplanes, etc will never be all-electric with chemical batteries. There are hard physical limits to the energy density of electrochemical reactions. I'm not convinced passenger vehicles will go all-electric in our lifetimes either.) There are many important technologies in existence that will gain increasing traction in the next 50 years such as natural gas automobiles and improved gas/electric hybrids, but ultimately we need a better way to store power than fossil fuels. _This is a deep technological problem that will not be solved by incremental improvements in battery chemistry or any process currently in the R&D pipeline_.
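The "order of magnitude" gap can be made concrete with rough, commonly cited energy-density figures (the specific numbers below are my illustrative assumptions, not Carlyle's):

```python
# Rough energy-density comparison behind the "order of magnitude" claim.
# All figures are approximate and assumed for illustration.
gasoline_wh_per_kg = 12_900   # chemical energy content of gasoline
engine_efficiency = 0.25      # typical tank-to-wheel efficiency
liion_wh_per_kg = 250         # usable lithium-ion pack density
motor_efficiency = 0.90       # electric drivetrain efficiency

effective_gasoline = gasoline_wh_per_kg * engine_efficiency  # ~3,200 Wh/kg
effective_liion = liion_wh_per_kg * motor_efficiency         # ~225 Wh/kg
ratio = effective_gasoline / effective_liion
print(f"effective energy density ratio: {ratio:.0f}x")
```

Even after crediting electric drivetrains with much higher efficiency, the effective gap remains on the order of 10x, which is the core of the argument about trucking, shipping, and aviation.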

Based on these two unresolved issues, _I place the odds of us avoiding fossil-fuel-related energy issues (major climate or economic damage) at less than 10%_. The impetus for the major changes required will not be sufficiently urgent until the world is seeing severe and undeniable impacts. Civilization will certainly survive -- but there will be no small amount of human suffering during the transition to whatever comes next.

- Ryan Carlyle
q-n-a  qra  expert  energy-resources  climate-change  environment  risk  civilization  nihil  prediction  threat-modeling  world  futurism  biophysical-econ  stock-flow  transportation  technology  economics  long-short-run  no-go  speedometer  modernity  expert-experience 
may 2017 by nhaliday
Annotating Greg Cochran’s interview with James Miller
opinion of Scott and Hanson: https://westhunt.wordpress.com/2017/04/05/interview-2/#comment-90238
Greg's methodist: https://westhunt.wordpress.com/2017/04/05/interview-2/#comment-90256
You have to consider the relative strengths of Japan and the USA. USA was ~10x stronger, industrially, which is what mattered. Technically superior (radar, Manhattan project). Almost entirely self-sufficient in natural resources. Japan was sure to lose, and too crazy to quit, which meant that they would lose after being smashed flat.
There’s a fairly common way of looking at things in which the bad guys are not at fault because they’re bad guys, born that way, and thus can’t help it. Well, we can’t help it either, so the hell with them. I don’t think we had to respect Japan’s innate need to fuck everybody in China to death.

2nd part: https://pinboard.in/u:nhaliday/b:9ab84243b967

some additional things:
- political correctness, the Cathedral and the left (personnel continuity but not ideology/value) at start
- joke: KT impact = asteroid mining, every mass extinction = intelligent life destroying itself
- Alawites: not really Muslim, women liberated because "they don't have souls", ended up running shit in Syria because they were the only ones that wanted to help the British during the colonial era
- solution to Syria: "put the Alawites in NYC"
- Zimbabwe was OK for a while, if South Africa goes sour, just "put the Boers in NYC" (Miller: left would probably say they are "culturally incompatible", lol)
- story about Lincoln and his great-great-great-grandfather
- skepticism of free speech
- free speech, authoritarianism, and defending against the Mongols
- Scott crazy (not in a terrible way), LW crazy (genetics), ex.: polyamory
- TFP or microbio are better investments than stereotypical EA stuff
- just ban AI worldwide (bully other countries to enforce)
- bit of a back-and-forth about macroeconomics
- not sure climate change will be huge issue. world's been much warmer before and still had a lot of mammals, etc.
- he quite likes Pseudoerasmus
- shits on modern conservatism/Bret Stephens a bit

- mentions Japan having industrial base a tenth the size of the US's and no chance of winning WW2 around 11m mark
- describes himself as "fairly religious" around 20m mark
- 27m30s: Eisenhower was smart, read Carlyle, classical history, etc.

but was Nixon smarter?: https://www.gnxp.com/WordPress/2019/03/18/open-thread-03-18-2019/
The Scandals of Meritocracy. Virtue vs. competence. Would you rather have a boss who is evil but competent, or good but incompetent? The reality is you have to balance the two. Richard Nixon was probably smarter than Dwight Eisenhower in raw g, but Eisenhower was probably a better person.
org:med  west-hunter  scitariat  summary  links  podcast  audio  big-picture  westminster  politics  culture-war  academia  left-wing  ideology  biodet  error  crooked  bounded-cognition  stories  history  early-modern  africa  developing-world  death  mostly-modern  deterrence  japan  asia  war  meta:war  risk  ai  climate-change  speculation  agriculture  environment  prediction  religion  islam  iraq-syria  gender  dominant-minority  labor  econotariat  cracker-econ  coalitions  infrastructure  parasites-microbiome  medicine  low-hanging  biotech  terrorism  civil-liberty  civic  social-science  randy-ayndy  law  polisci  government  egalitarianism-hierarchy  expression-survival  disease  commentary  authoritarianism  being-right  europe  nordic  cohesion  heuristic  anglosphere  revolution  the-south  usa  thinking  info-dynamics  yvain  ssc  lesswrong  ratty  subculture  values  descriptive  epistemic  cost-disease  effective-altruism  charity  econ-productivity  technology  rhetoric  metameta  ai-control  critique  sociology  arms  paying-rent  parsimony  writing  realness  migration  eco 
april 2017 by nhaliday
There’s good eating on one of those | West Hunter
Recently, Y.-H. Percival Zhang and colleagues demonstrated a method of converting cellulose into starch and glucose. Zhang thinks that it can be scaled up into an effective industrial process, one that could produce a thousand calories of starch for less than a dollar from cellulosic waste. This would be a good thing. It’s not just that there are 7 billion people – the problem is that we have hardly any food reserves (about 74 days at last report).

Prepare for Nuclear Winter: http://www.overcomingbias.com/2017/09/prepare-for-nuclear-winter.html
If a 1km asteroid were to hit the Earth, the dust it kicked up would block most sunlight over most of the world for 3 to 10 years. There’s only a one in a million chance of that happening per year, however. Whew. However, there’s a ten times bigger chance that a super volcano, such as the one hiding under Yellowstone, might explode, for a similar result. And I’d put the chance of a full scale nuclear war at ten to one hundred times larger than that: one in ten thousand to one thousand per year. Over a century, that becomes a one to ten percent chance. Not whew; grimace instead.
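Hanson's conversion from annual odds to per-century odds follows from treating each year as an independent trial; a minimal check of the "one to ten percent" claim:

```python
# P(at least one event in `years` independent annual trials).
def century_risk(annual_p, years=100):
    return 1 - (1 - annual_p) ** years

# Annual probabilities as given in the post.
for label, annual in [("asteroid (1 km)", 1e-6),
                      ("supervolcano (10x asteroid)", 1e-5),
                      ("nuclear war, low end", 1e-4),
                      ("nuclear war, high end", 1e-3)]:
    print(f"{label}: annual {annual:.0e} -> century {century_risk(annual):.4f}")
```

The nuclear-war range of 1/10,000 to 1/1,000 per year indeed comes out to roughly 1% to 10% per century.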

There is a substantial chance that a full scale nuclear war would produce a nuclear winter, with a similar effect: sunlight is blocked for 3-10 years or more. Yes, there are good criticisms of the more extreme forecasts, but there’s still a big chance the sun gets blocked in a full scale nuclear war, and there’s even a substantial chance of the same result in a mere regional war, where only 100 nukes explode (the world now has 15,000 nukes).


Yeah, probably a few people live on, and so humanity doesn’t go extinct. But the only realistic chance most of us have of surviving in this scenario is to use our vast industrial and scientific abilities to make food. We actually know of many plausible ways to make more than enough food to feed everyone for ten years, even with no sunlight. And even if big chunks of the world economy are in shambles. But for that to work, we must preserve enough social order to make use of at least the core of key social institutions.


Nuclear War Survival Skills: http://oism.org/nwss/nwss.pdf
Updated and Expanded 1987 Edition

Nuclear winter: https://en.wikipedia.org/wiki/Nuclear_winter

Yellowstone supervolcano may blow sooner than thought — and could wipe out life on the planet: https://www.usatoday.com/story/news/nation/2017/10/12/yellowstone-supervolcano-may-blow-sooner-than-thought-could-wipe-out-life-planet/757337001/
west-hunter  discussion  study  commentary  bio  food  energy-resources  technology  risk  the-world-is-just-atoms  agriculture  wild-ideas  malthus  objektbuch  threat-modeling  scitariat  scale  biophysical-econ  allodium  nihil  prepping  ideas  dirty-hands  magnitude  multi  ratty  hanson  planning  nuclear  arms  deterrence  institutions  alt-inst  securities  markets  pdf  org:gov  white-paper  survival  time  earth  war  wiki  reference  environment  sky  news  org:lite  hmm  idk  org:biz  org:sci  simulation  maps  usa  geoengineering 
march 2017 by nhaliday
Evolution of Resistance Against CRISPR/Cas9 Gene Drive | Genetics
CRISPR/Cas9 gene drive (CGD) promises to be a highly adaptable approach for spreading genetically engineered alleles throughout a species, even if those alleles impair reproductive success. CGD has been shown to be effective in laboratory crosses of insects, yet it remains unclear to what extent potential resistance mechanisms will affect the dynamics of this process in large natural populations. Here we develop a comprehensive population genetic framework for modeling CGD dynamics, which incorporates potential resistance mechanisms as well as random genetic drift. Using this framework, we calculate the probability that resistance against CGD evolves from standing genetic variation, de novo mutation of wild-type alleles, or cleavage repair by nonhomologous end joining (NHEJ)—a likely by-product of CGD itself. We show that resistance to standard CGD approaches should evolve almost inevitably in most natural populations, unless repair of CGD-induced cleavage via NHEJ can be effectively suppressed, or resistance costs are on par with those of the driver. The key factor determining the probability that resistance evolves is the overall rate at which resistance alleles arise at the population level by mutation or NHEJ. By contrast, the conversion efficiency of the driver, its fitness cost, and its introduction frequency have only minor impact. Our results shed light on strategies that could facilitate the engineering of drivers with lower resistance potential, and motivate the possibility to embrace resistance as a possible mechanism for controlling a CGD approach. This study highlights the need for careful modeling of the population dynamics of CGD prior to the actual release of a driver construct into the wild.
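The qualitative dynamics can be illustrated with a toy haploid simulation. This is an assumption-laden sketch, not the paper's framework (which models diploids with standing variation, de novo mutation, and explicit drift); all parameter values below are invented for illustration. The mechanism it captures is the abstract's key one: NHEJ repair keeps injecting resistant alleles at the population level, and if the drive carries a fitness cost that resistance avoids, resistance tends to establish.

```python
import random

def simulate_drive(pop_size=10_000, drive_freq0=0.05, conversion=0.9,
                   nhej_rate=1e-3, drive_cost=0.1, resistance_cost=0.0,
                   generations=200, seed=1):
    """Toy haploid caricature of CRISPR gene-drive dynamics.

    Alleles: 'W' wild type, 'D' drive, 'R' resistant. Each generation a
    W allele "meets" a drive with probability equal to the drive's
    frequency, and is then converted to D (prob. `conversion`) or turned
    into R by NHEJ mis-repair (prob. `nhej_rate`). Reproduction resamples
    the population with fitness weights, supplying both selection and
    genetic drift. Returns final allele frequencies.
    """
    rng = random.Random(seed)
    n_drive = int(pop_size * drive_freq0)
    pop = ['D'] * n_drive + ['W'] * (pop_size - n_drive)
    fitness = {'W': 1.0, 'D': 1.0 - drive_cost, 'R': 1.0 - resistance_cost}
    for _ in range(generations):
        d_freq = pop.count('D') / pop_size
        converted = []
        for allele in pop:
            if allele == 'W' and rng.random() < d_freq:
                r = rng.random()
                if r < conversion:
                    allele = 'D'                # homing succeeds
                elif r < conversion + nhej_rate:
                    allele = 'R'                # NHEJ creates resistance
            converted.append(allele)
        weights = [fitness[a] for a in converted]
        pop = rng.choices(converted, weights=weights, k=pop_size)
    return {a: pop.count(a) / pop_size for a in 'WDR'}

freqs = simulate_drive()
print(freqs)  # the drive sweeps through W; R, once arisen, is favored over D
```

Raising `nhej_rate` or `pop_size` (i.e., the population-level rate at which resistance alleles arise) makes resistance establishment more reliable, while changing `conversion` matters comparatively little, mirroring the abstract's conclusion.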
study  org:nat  bio  genetics  evolution  population-genetics  models  CRISPR  unintended-consequences  geoengineering  mutation  risk  parasites-microbiome  threat-modeling  selfish-gene  cooperate-defect  red-queen 
february 2017 by nhaliday
The Great Filter | West Hunter
Let us imagine that we found out that nervous systems had evolved twice (which seems to be the case). And suppose that you spent a lot of time worrying about the Fermi Paradox – and had previously thought that nervous system evolution was the unlikely event that explains the great silence, the bottleneck that explained why we don’t see signs of alien intelligent life. Thus in our past: we’re safe. Now you’re worried: maybe the Great Filter lies in our future, and the End approaches. But not just that: you assume that the political class noticed this too, and will start neglecting the future (cough, cough) because they too believe that there isn’t going to be one.
Worrying about the Great Filter might not be crazy, but assuming that politicians are hep to such things and worry about them is. If you think that, you have less common sense than a monotreme. And that’s real common. I’ve had analogous arguments with people: they didn’t have any common sense either.
west-hunter  discussion  troll  risk  government  evolution  neuro  eden  antiquity  bio  fermi  threat-modeling  scitariat  anthropic  nihil  new-religion  xenobio  deep-materialism  ideas 
february 2017 by nhaliday
The Membrane – spottedtoad
All of which is to say that the Internet, which shares many qualities in common with an assemblage of living things except for those clear boundaries and defenses, might well not trend toward increased usability or easier exchange of information over the longer term, even if that is what we have experienced heretofore. The history of evolution is every bit as much a history of parasitism and counterparasitism as it is any kind of story of upward movement toward greater complexity or order. There is no reason to think that we (and still less national or political entities) will necessarily experience technology as a means of enablement and Cool Stuff We Can Do rather than a perpetual set of defenses against scammers of our money and attention. There’s the respect that makes Fake News the news that matters forever more.

THE MADCOM FUTURE: http://www.atlanticcouncil.org/images/publications/The_MADCOM_Future_RW_0926.pdf

ai robocalls/phonetrees/Indian Ocean call centers~biologicalization of corporations thru automation&global com tech

fly-by-night scams double mitotically,covered by outer membrane slime&peptidoglycan

trillion $ corps w/nonspecific skin/neutrophils/specific B/T cells against YOU
ratty  unaffiliated  contrarianism  walls  internet  hacker  risk  futurism  speculation  wonkish  chart  red-queen  parasites-microbiome  analogy  prediction  unintended-consequences  security  open-closed  multi  pdf  white-paper  propaganda  ai  offense-defense  ecology  cybernetics  pessimism  twitter  social  discussion  backup  bio  automation  cooperate-defect  coordination  attention  crypto  money  corporation  accelerationism  threat-modeling  alignment 
december 2016 by nhaliday
Overcoming Bias : In Praise of Low Needs
We humans have come a long way since we first became human; we’ve innovated and grown our ability to achieve human ends by perhaps a factor of ten million. Not at all shabby, even though it may be small compared to the total factor of growth and innovation that life achieved before humans arrived. But even if humanity’s leap is a great achievement, I fear that we have much further to go than we have come.

The universe seems almost entirely dead out there. There’s a chance it will eventually be densely filled with life, and that our descendants may help to make that happen. Some worry about the quality of that life filling the universe, and yes there are issues there. But I worry mostly about the difference between life and death. Our descendants may kill themselves or stop growing, and fail to fill the universe with life. Any life.

To fill the universe with life requires that we grow far more than our previous leap factor of ten million. More like three to ten factors that big still to go. (See Added below.) So think of all the obstacles we’ve overcome so far, obstacles that appeared when we reached new scales of size and levels of ability. If we were lucky to make it this far, we’ll have to be much more lucky to make it all the way.


Added 28Oct: Assume humanity’s leap factor is 10^7. Three of those is 10^21. As there are 10^24 stars in the observable universe, that much growth could come from filling one in a thousand of those stars with as many rich humans as Earth now has. Ten of humanity’s leap is 10^70, and there are now about 10^10 humans on Earth. As there are about 10^80 atoms in the observable universe, that much growth could come from finding a way to implement one human-like creature per atom.
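Hanson's added arithmetic checks out exactly in powers of ten (all figures are from the post itself):

```python
leap = 10**7          # humanity's growth factor so far
stars = 10**24        # stars in the observable universe
humans_now = 10**10   # humans on Earth today
atoms = 10**80        # atoms in the observable universe

# Three leaps: fill 1-in-1,000 stars, each with an Earth-sized rich population.
three_leaps = leap**3                        # 10^21
future_pop = (stars // 1000) * humans_now    # 10^21 stars x 10^10 humans
assert future_pop // humans_now == three_leaps

# Ten leaps: one human-like creature per atom.
ten_leaps = leap**10                         # 10^70
assert atoms // humans_now == ten_leaps

print(three_leaps, ten_leaps)
```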
hanson  contrarianism  stagnation  trends  values  farmers-and-foragers  essay  rhetoric  new-religion  ratty  spreading  phalanges  malthus  formal-values  flux-stasis  economics  growth-econ  status  fashun  signaling  anthropic  fermi  nihil  death  risk  futurism  hierarchy  ranking  discipline  temperance  threat-modeling  existence  wealth  singularity  smoothness  discrete  scale  magnitude  population  physics  estimate  uncertainty  flexibility  rigidity  capitalism  heavy-industry  the-world-is-just-atoms  nature  corporation  institutions  coarse-fine 
october 2016 by nhaliday
Overcoming Bias : Beware General Visible Prey
So, bottom line, the future great filter scenario that most concerns me is one where our solar-system-bound descendants have killed most of nature, can’t yet colonize other stars, are general predators and prey of each other, and have fallen into a short-term-predatory-focus equilibrium where predators can easily see and travel to most all prey. Yes there are about a hundred billion comets way out there circling the sun, but even that seems a small enough number for predators to carefully map and track all of them.
hanson  risk  prediction  futurism  speculation  pessimism  war  ratty  space  big-picture  fermi  threat-modeling  equilibrium  slippery-slope  anthropic  chart  deep-materialism  new-religion  ideas  bio  nature  plots  expansionism  malthus  marginal  convexity-curvature  humanity  farmers-and-foragers  diversity  entropy-like  homo-hetero  existence  volo-avolo  technology  frontier  intel  travel  time-preference  communication  civilization  egalitarianism-hierarchy  peace-violence  ecology  cooperate-defect  dimensionality  whole-partial-many  temperance  patience  thinking  long-short-run  prepping  offense-defense 
october 2016 by nhaliday
weaponizing smallpox | West Hunter
As I have said before, it seems likely to me that the Soviet Union put so much effort into treaty-violating biological warfare because the guys at the top believed in it – because they had seen it work, the same reason that they were such tank enthusiasts. One more point on the likely use of tularemia at Stalingrad: in the summer of ’42 the Germans had occupied regions holding 40% of the Soviet Union’s population. The Soviets had a tularemia program: if not then [“Not One Step Back!”], when would they have used it? When would Stalin have used it? Imagine that someone intent on the destruction of the American republic and the extermination of its people [remember the Hunger Plan?] had taken over everything west of the Mississippi: would that be too early to pull out all the stops? Reminds me of an old Mr Boffo cartoon: you see a monster, taller than skyscrapers, stomping his way through the city. That’s trouble. But then you notice that he’s a hand puppet: that’s serious trouble. Perhaps Stalin was waiting for serious trouble, for example if the Norse Gods had come in on the side of the Nazis.

Anyhow, the Soviets had a big smallpox program. In some ways smallpox is almost the ultimate biological weapon – very contagious, while some strains are highly lethal. And it’s controllable – you can easily shield your own guys via vaccination. Of course back in the 1970s, almost everyone was vaccinated, so it was also completely useless.

We kept vaccinating people as long as smallpox was still running around in the Third World. But when it was eradicated in 1978, people stopped. There seemed to be no reason – and so, as new unvaccinated generations arose, the military efficacy of smallpox has gone up and up and up. It got to the point where the World Health Organization threw away its stockpile of vaccine, a couple hundred million units, just to save on the electric bill for the refrigerators.

Consider that the Soviet Union was always the strongest proponent of worldwide eradication of smallpox, dating back to the 1950s. Successful eradication would eventually make smallpox a superweapon: does it seem possible that the people running the Soviet Union had this in mind as a long-term goal? Potentiation through ‘eradication’? Did the left hand know what the strangling hand had in mind, and shape policies accordingly? Of course.

D.A. Henderson, the man that led the eradication campaign, died just a few days ago. He was aware of this possibility.

Dr. Henderson strenuously argued that the samples should be destroyed because, in his view, any amount of smallpox was too dangerous to tolerate. A side effect of the eradication program — and one of the “horrendous ironies of history,” said “Hot Zone” author Preston — is that since no one in generations has been exposed to the virus, most of the world’s population would be vulnerable to it in the event of an outbreak.

“I feel very — what should we say? — dispirited,” Dr. Henderson told the Times in 2002. “Here we are, regressing to defend against something we thought was permanently defeated. We shouldn’t have to be doing this.”

Ken Alibek believes that, following the collapse of the Soviet Union in 1991, unemployed or badly-paid scientists are likely to have sold samples of smallpox clandestinely and gone to work in rogue states engaged in illicit biological weapons development. DA Henderson agrees that this is a plausible scenario and is upset by the legacy it leaves. 'If the [Russian bio-weapons] programme had not taken place we would not I think be worrying about smallpox in the same way. One can feel extremely bitter and extremely angry about this because I think they've subjected the entire world to a risk which was totally unnecessary.'

War in the East: https://westhunt.wordpress.com/2012/02/02/war-in-the-east/
The books generally say that biological warfare is ineffective, but then they would say that, wouldn’t they? There is reason to think it has worked, and it may have made a difference.


We know of course that this offensive eventually turned into a disaster in which the German Sixth Army was lost. But nobody knew that then. The Germans were moving forward with little to stop them: they were scary SOBs. Don’t let anyone tell you otherwise. The Soviet leadership was frightened, enough so that they sent out a general backs-to-the-wall, no-retreat order that told the real scale of losses. That was the Soviet mood in the summer of 42.

That’s the historical background. Now for the clues. First, Ken Alibek was a bioweapons scientist back in the USSR. In his book, Biohazard, he tells how, as a student, he was given the assignment of explaining a mysterious pattern of tularemia epidemics back in the war. To him, it looked artificial, whereupon his instructor said something to the effect of “you never thought that, you never said that. Do you want a job?” Second, Antony Beevor mentions the mysteriously poor health of German troops at Stalingrad – well before being surrounded (p210-211). Third, the fact that there were large tularemia epidemics in the Soviet Union during the war – particularly in the ‘oblasts temporarily occupied by the Fascist invaders’, described in History and Incidence of Tularemia in the Soviet Union, by Robert Pollitzer.

Fourth, personal communications from a friend who once worked at Los Alamos. Back in the 90’s, after the fall of the Soviet Union, there was a time when you could hire a whole team of decent ex-Soviet physicists for the price of a single American. My friend was having a drink with one of his Russian contractors, son of a famous ace, who started talking about how his dad had dropped tularemia here, here, and here near Leningrad (sketching it out on a napkin) during the Great Patriotic War. Not that many people spontaneously bring up stories like that in dinner conversation…

Fifth, the huge Soviet investment in biowarfare throughout the Cold War is a hint: they really, truly, believed in it, and what better reason could there be than decisive past successes? In much the same way, our lavish funding of the NSA strongly suggested that cryptanalysis and sigint must have paid off handsomely for the Allies in WWII – far more so than publicly acknowledged, until the revelations about Enigma in the 1970s and later.

We know that tularemia is an effective biological agent: many countries have worked with it, including the Soviet Union. If the Russians had had this capability in the summer of ’42 (and they had sufficient technology: basically just fermentation), it is hard to imagine them not using it. I mean, we’re talking about Stalin. You think he had moral qualms? But we too would have used germ warfare if our situation had been desperate.

Sean, you don’t know what you’re talking about. Anybody exposed to an aerosol form of tularemia is likely to get it: 10-50 bacteria are enough to give a 50% probability of infection. You do not need to be sickly, starved, or immunosuppressed in order to contract it, although those factors probably influence its lethality. The same is true of anthrax: if it starts growing in your lungs, you get sick. You’re not born immune. There are in fact some diseases that you _are_ born immune to (most strains of sleeping sickness, for example), or at least have built-in defenses against (Epstein-Barr, cf TLRs).
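The "10-50 bacteria give a 50% probability of infection" figure fits the standard exponential dose-response model used in quantitative microbial risk assessment. Applying that model here is my illustration, with the per-organism parameter back-solved from the quoted ID50 range:

```python
import math

def p_infection(dose, id50):
    """Exponential dose-response: P = 1 - exp(-dose/k), with k set so
    that a dose equal to id50 infects with probability exactly 0.5."""
    k = id50 / math.log(2)
    return 1 - math.exp(-dose / k)

for id50 in (10, 50):   # the quoted range of infectious doses
    print(f"ID50={id50}: P(1 organism)={p_infection(1, id50):.3f}, "
          f"P(100 organisms)={p_infection(100, id50):.3f}")
```

Even a single organism carries a few percent infection risk under this model, which is why an aerosolized agent with such a low infectious dose is considered effective regardless of the target's health.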

A few other facts I’ve just found: First, the Soviets had a tularemia vaccine, which was used to an unclear extent at Stalingrad. At the time nobody else did.

Next, as far as I can tell, the Stalingrad epidemic is the only large-scale pneumonic tularemia epidemic that has ever occurred.

Next cool fact: during the Cold War, the Soviets were somewhat more interested in tularemia than other powers. At the height of the US biowarfare program, we produced less than two tons per year. The Soviets produced over one thousand tons of F. tularensis per year in that period.

Next question, one which deserves a serious, extended treatment. Why are so many people so very very good at coming up with wrong answers? Why do they apply Occam’s razor backwards? This is particularly common in biology. I’m not talking about Croddy in Military Medicine: he probably had orders to lie, and you can see hints of that if you read carefully.

Joining the Army might work. In general not available to private individuals, for reasons that are largely bullshit.
war  disease  speculation  military  russia  history  len:long  west-hunter  technology  multi  c:**  parasites-microbiome  mostly-modern  arms  scitariat  communism  maxim-gun  biotech  ideas  world-war  questions  poast  occam  parsimony  trivia  data  stylized-facts  scale  bio  epidemiology  🌞  nietzschean  food  death  nihil  axioms  morality  strategy  unintended-consequences  risk  news  org:rec  prepping  profile  postmortem  people  crooked  org:anglo  thick-thin  alt-inst  flux-stasis  flexibility  threat-modeling  twitter  social  discussion  backup  prudence  government  spreading  gender  sex  sexuality  elite  ability-competence  rant  pharma  drugs  medicine  politics  ideology  impetus  big-peeps  statesmen 
september 2016 by nhaliday
Are You Living in a Computer Simulation?
Bostrom's anthropic arguments

In sum, if your descendants might make simulations of lives like yours, then you might be living in a simulation. And while you probably cannot learn much detail about the specific reasons for and nature of the simulation you live in, you can draw general conclusions by making analogies to the types and reasons of simulations today. If you might be living in a simulation then all else equal it seems that you should care less about others, live more for today, make your world look likely to become eventually rich, expect to and try to participate in pivotal events, be entertaining and praiseworthy, and keep the famous people around you happy and interested in you.
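The core of Bostrom's argument reduces to simple bookkeeping: if even a small fraction of civilizations reach a posthuman stage and each runs many ancestor-simulations, then almost all observers with experiences like ours are simulated. A sketch of that arithmetic in a simplified one-equation form (the variable names and the reduction are mine, not Bostrom's exact notation):

```python
def simulated_fraction(f_posthuman, n_sims_each):
    """Fraction of observers with human-type experiences who live in
    simulations, assuming a fraction f_posthuman of civilizations reach
    a posthuman stage and each runs n_sims_each ancestor-simulations of
    population comparable to its own real history."""
    simulated_populations = f_posthuman * n_sims_each
    return simulated_populations / (simulated_populations + 1)

# Even modest numbers push the fraction toward 1:
# 1% of civilizations each running 1000 simulations -> 10/11 of observers simulated.
frac = simulated_fraction(0.01, 1000)
```

This is why the argument takes the form of a trilemma: to avoid a high simulated fraction you must drive either f_posthuman or n_sims_each toward zero.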

Theological Implications of the Simulation Argument: https://www.tandfonline.com/doi/pdf/10.1080/15665399.2010.10820012
Nick Bostrom’s Simulation Argument (SA) has many intriguing theological implications. We work out some of them here. We show how the SA can be used to develop novel versions of the Cosmological and Design Arguments. We then develop some of the affinities between Bostrom’s naturalistic theogony and more traditional theological topics. We look at the resurrection of the body and at theodicy. We conclude with some reflections on the relations between the SA and Neoplatonism (friendly) and between the SA and theism (less friendly).

lesswrong  philosophy  weird  idk  thinking  insight  links  summary  rationality  ratty  bostrom  sampling-bias  anthropic  theos  simulation  hanson  decision-making  advice  mystic  time-preference  futurism  letters  entertainment  multi  morality  humility  hypocrisy  wealth  malthus  power  drama  gedanken  pdf  article  essay  religion  christianity  the-classics  big-peeps  iteration-recursion  aesthetics  nietzschean  axioms  gwern  analysis  realness  von-neumann  space  expansionism  duplication  spreading  sequential  cs  computation  outcome-risk  measurement  empirical  questions  bits  information-theory  efficiency  algorithms  physics  relativity  ems  neuro  data  scale  magnitude  complexity  risk  existence  threat-modeling  civilization  forms-instances 
september 2016 by nhaliday
