nhaliday + threat-modeling   67

Linus's Law - Wikipedia
Linus's Law is a claim about software development, named in honor of Linus Torvalds and formulated by Eric S. Raymond in his essay and book The Cathedral and the Bazaar (1999).[1][2] The law states that "given enough eyeballs, all bugs are shallow";

--

In Facts and Fallacies about Software Engineering, Robert Glass refers to the law as a "mantra" of the open source movement, but calls it a fallacy due to the lack of supporting evidence and because research has indicated that the rate at which additional bugs are uncovered does not scale linearly with the number of reviewers; rather, there is a small maximum number of useful reviewers, between two and four, and additional reviewers above this number uncover bugs at a much lower rate.[4] While closed-source practitioners also promote stringent, independent code analysis during a software project's development, they focus on in-depth review by a few and not primarily the number of "eyeballs".[5][6]

Although detection of even deliberately inserted flaws[7][8] can be attributed to Raymond's claim, the persistence of the Heartbleed security bug in a critical piece of code for two years has been considered a refutation of Raymond's dictum.[9][10][11][12] Larry Seltzer suspects that the availability of source code may cause some developers and researchers to perform less extensive tests than they would with closed source software, making it easier for bugs to remain.[12] In 2015, the Linux Foundation's executive director Jim Zemlin argued that the complexity of modern software has increased to such levels that specific resource allocation is desirable to improve its security. Regarding some of 2014's largest global open source software vulnerabilities, he says, "In these cases, the eyeballs weren't really looking".[11] Large-scale experiments or peer-reviewed surveys to test how well the mantra holds in practice have not been performed.
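One toy way to see why returns to additional reviewers saturate (a sketch of the standard independent-reviewer model, not the methodology of the cited research; the per-reviewer detection probability p is an assumption):

    # Illustrative only: each reviewer independently catches a given bug with probability p,
    # so k reviewers catch it with probability 1 - (1 - p)**k, which flattens out quickly.
    p = 0.3
    for k in (1, 2, 3, 4, 8, 16, 100):
        print(k, round(1 - (1 - p) ** k, 4))
    # 1 -> 0.3, 2 -> 0.51, 4 -> 0.76, 8 -> 0.9424, 16 -> 0.9967, 100 -> ~1.0

Under this model almost all of the detection probability is captured by the first handful of reviewers, which is consistent with the two-to-four figure quoted above.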

Given enough eyeballs, all bugs are shallow? Revisiting Eric Raymond with bug bounty programs: https://academic.oup.com/cybersecurity/article/3/2/81/4524054

https://hbfs.wordpress.com/2009/03/31/how-many-eyeballs-to-make-a-bug-shallow/
wiki  reference  aphorism  ideas  stylized-facts  programming  engineering  linux  worse-is-better/the-right-thing  correctness  debugging  checking  best-practices  security  error  scale  ubiquity  collaboration  oss  realness  empirical  evidence-based  multi  study  info-econ  economics  intricacy  plots  manifolds  techtariat  cracker-prog  os  systems  magnitude  quantitative-qualitative  number  threat-modeling 
5 weeks ago by nhaliday
xkcd: Security
being serious for a moment the proper defense against this seems to be anonymity
comics  lol  security  crypto  opsec  tradecraft  pic  threat-modeling  pragmatic  the-world-is-just-atoms  software  anonymity  cynicism-idealism  embodied  peace-violence  crypto-anarchy 
july 2019 by nhaliday
The Existential Risk of Math Errors - Gwern.net
How big is this upper bound? Mathematicians have often made errors in proofs. But it’s rarer for ideas to be accepted for a long time and then rejected. We can divide errors into 2 basic cases corresponding to type I and type II errors:

1. Mistakes where the theorem is still true, but the proof was incorrect (type I)
2. Mistakes where the theorem was false, and the proof was also necessarily incorrect (type II)

Before someone comes up with a final answer, a mathematician may have many levels of intuition in formulating & working on the problem, but we’ll consider the final end-product where the mathematician feels satisfied that he has solved it. Case 1 is perhaps the most common case, with innumerable examples; this is sometimes due to mistakes in the proof that anyone would accept as a mistake, but many of these cases are due to changing standards of proof. For example, when David Hilbert discovered errors in Euclid’s proofs which no one noticed before, the theorems were still true, and the gaps were more due to Hilbert being a modern mathematician thinking in terms of formal systems (which of course Euclid did not think in). (David Hilbert himself turns out to be a useful example of the other kind of error: his famous list of 23 problems was accompanied by definite opinions on the outcome of each problem and sometimes timings, several of which were wrong or questionable.) Similarly, early calculus used ‘infinitesimals’ which were sometimes treated as being 0 and sometimes treated as an indefinitely small non-zero number; this was incoherent and, strictly speaking, practically all of the calculus results were wrong because they relied on an incoherent concept - but of course the results were some of the greatest mathematical work ever conducted, and when later mathematicians put calculus on a more rigorous footing, they immediately re-derived those results (sometimes with important qualifications), and doubtless as modern math evolves other fields have sometimes needed to go back and clean up the foundations and will in the future.

...

Isaac Newton, incidentally, gave two proofs of the same solution to a problem in probability, one via enumeration and the other more abstract; the enumeration was correct, but the other proof was totally wrong, and this was not noticed for a long time, leading Stigler to remark:

...

TYPE I > TYPE II?
“Lefschetz was a purely intuitive mathematician. It was said of him that he had never given a completely correct proof, but had never made a wrong guess either.”
- Gian-Carlo Rota

Case 2 is disturbing, since it is a case in which we wind up with false beliefs and also false beliefs about our beliefs (we no longer know that we don’t know). Case 2 could lead to extinction.

...

Except, errors do not seem to be evenly & randomly distributed between case 1 and case 2. There seem to be far more case 1s than case 2s, as already mentioned in the early calculus example: far more than 50% of the early calculus results were correct when checked more rigorously. Richard Hamming attributes to Ralph Boas a comment that, while editing Mathematical Reviews, “of the new results in the papers reviewed most are true but the corresponding proofs are perhaps half the time plain wrong”.

...

Gian-Carlo Rota gives us an example with Hilbert:

...

Olga labored for three years; it turned out that all mistakes could be corrected without any major changes in the statement of the theorems. There was one exception, a paper Hilbert wrote in his old age, which could not be fixed; it was a purported proof of the continuum hypothesis, you will find it in a volume of the Mathematische Annalen of the early thirties.

...

Leslie Lamport advocates for machine-checked proofs and a more rigorous style of proofs similar to natural deduction, noting a mathematician acquaintance guesses at a broad error rate of 1/3 and that he routinely found mistakes in his own proofs and, worse, believed false conjectures.

[more on these "structured proofs":
https://academia.stackexchange.com/questions/52435/does-anyone-actually-publish-structured-proofs
https://mathoverflow.net/questions/35727/community-experiences-writing-lamports-structured-proofs
]

We can probably add software to that list: early software engineering work found that, dismayingly, bug rates seem to be simply a function of lines of code, and one would expect diseconomies of scale. So one would expect that in going from the ~4,000 lines of code of the Microsoft DOS operating system kernel to the ~50,000,000 lines of code in Windows Server 2003 (with full systems of applications and libraries being even larger: the comprehensive Debian repository in 2007 contained ~323,551,126 lines of code), the number of active bugs at any time would be… fairly large. Mathematical software is hopefully better, but practitioners still run into issues (eg Durán et al 2014, Fonseca et al 2017) and I don’t know of any research pinning down how buggy key mathematical systems like Mathematica are or how much published mathematics may be erroneous due to bugs. This general problem led to predictions of doom and spurred much research into automated proof-checking, static analysis, and functional languages.
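For a sense of what "simply a function of lines of code" implies at these sizes, here is a back-of-envelope (the 1-25 delivered defects per KLOC range is a commonly cited industry rule of thumb I am assuming, not a figure from the essay):

    # Back-of-envelope only; 1-25 defects per KLOC is an assumed rule of thumb.
    systems = [("MS-DOS kernel", 4_000),
               ("Windows Server 2003", 50_000_000),
               ("Debian repository, 2007", 323_551_126)]
    for name, loc in systems:
        low, high = loc / 1000 * 1, loc / 1000 * 25
        print(f"{name}: roughly {low:,.0f} to {high:,.0f} defects")

Even the optimistic end of the range puts a modern operating system in the tens of thousands of latent defects.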

[related:
https://mathoverflow.net/questions/11517/computer-algebra-errors
I don't know any interesting bugs in symbolic algebra packages but I know a true, enlightening and entertaining story about something that looked like a bug but wasn't.

Define sinc x = (sin x)/x.

Someone found the following result in an algebra package: ∫_0^∞ dx sinc x = π/2
They then found the following results:

...

So of course when they got:

∫_0^∞ dx sinc x · sinc(x/3) · sinc(x/5) ⋯ sinc(x/15) = (467807924713440738696537864469/935615849440640907310521750000) π

hmm:
Which means that nobody knows Fourier analysis nowadays. Very sad and discouraging story... – fedja Jan 29 '10 at 18:47
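The resolution is the well-known Borwein-integral phenomenon (added here as an aside, since the thread's punchline is elided above): the pattern ∫_0^∞ dx sinc x · sinc(x/3) ⋯ sinc(x/(2n+1)) = π/2 holds exactly only while 1/3 + 1/5 + ... + 1/(2n+1) stays below 1, and sinc(x/15) is the first factor that pushes the sum over. A quick check:

    from fractions import Fraction

    # The pi/2 pattern persists only while 1/3 + 1/5 + ... + 1/(2n+1) < 1.
    s = Fraction(0)
    for k in range(3, 16, 2):
        s += Fraction(1, k)
        print(f"through 1/{k}: sum = {float(s):.4f}  (< 1: {s < 1})")
    # ...which first fails at 1/15, exactly where the package's answer stopped being pi/2.

    # And the quoted coefficient really is just shy of 1/2:
    coeff = Fraction(467807924713440738696537864469,
                     935615849440640907310521750000)
    print(float(Fraction(1, 2) - coeff))   # ~7.4e-12; times pi, the integral misses pi/2 by ~2.3e-11

So the package was right, and the apparent "bug" was in the humans' pattern-matching.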

--

Because the most popular systems are all commercial, they tend to guard their bug database rather closely -- making them public would seriously cut their sales. For example, for the open source project Sage (which is quite young), you can get a list of all the known bugs from this page. 1,582 known issues as of Feb 16th, 2010 (which includes feature requests, problems with documentation, etc).

That is an order of magnitude less than the commercial systems. And it's not because it is better, it is because it is younger and smaller. It might be better, but until Sage does a lot of analysis (about 40% of CAS bugs are there) and a fancy user interface (another 40%), it is too hard to compare.

I once ran a graduate course whose core topic was studying the fundamental disconnect between the algebraic nature of CAS and the analytic nature of what it is mostly used for. There are issues of logic -- CASes work more or less in an intensional logic, while most of analysis is stated in a purely extensional fashion. There is no well-defined 'denotational semantics' for expressions-as-functions, which strongly contributes to the deeper bugs in CASes.]

...

Should such widely-believed conjectures as P≠NP or the Riemann hypothesis turn out to be false, then because they are assumed by so many existing proofs, a far larger math holocaust would ensue - and our previous estimates of error rates will turn out to have been substantial underestimates. But it may be a cloud with a silver lining, if it doesn’t come at a time of danger.

https://mathoverflow.net/questions/338607/why-doesnt-mathematics-collapse-down-even-though-humans-quite-often-make-mista

more on formal methods in programming:
https://www.quantamagazine.org/formal-verification-creates-hacker-proof-code-20160920/
https://intelligence.org/2014/03/02/bob-constable/

https://softwareengineering.stackexchange.com/questions/375342/what-are-the-barriers-that-prevent-widespread-adoption-of-formal-methods
Update: measured effort
In the October 2018 issue of Communications of the ACM there is an interesting article, "Formally Verified Software in the Real World", with some estimates of the effort.

Interestingly (based on OS development for military equipment), it seems that producing formally proved software requires 3.3 times more effort than with traditional engineering techniques. So it's really costly.

On the other hand, it requires 2.3 times less effort to get high-security software this way than with traditionally engineered software if you add the effort to make such software certified at a high security level (EAL 7). So if you have high reliability or security requirements there is definitely a business case for going formal.
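Reading the two ratios together (a back-of-envelope that assumes both figures are measured against the same traditional-engineering baseline, which the article may not state this cleanly):

    # Hypothetical baseline: traditional engineering effort = 1.0 unit
    traditional = 1.0
    formal = 3.3 * traditional             # formally verified development, per the quoted figure
    certified_traditional = 2.3 * formal   # traditional development plus EAL 7 certification,
                                           # if formal really is 2.3x cheaper than that route
    print(formal, certified_traditional)   # 3.3 vs ~7.6: certification, not verification, dominates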

WHY DON'T PEOPLE USE FORMAL METHODS?: https://www.hillelwayne.com/post/why-dont-people-use-formal-methods/
You can see examples of how all of these look at Let’s Prove Leftpad. HOL4 and Isabelle are good examples of “independent theorem” specs, SPARK and Dafny have “embedded assertion” specs, and Coq and Agda have “dependent type” specs.

If you squint a bit it looks like these three forms of code spec map to the three main domains of automated correctness checking: tests, contracts, and types. This is not a coincidence. Correctness is a spectrum, and formal verification is one extreme of that spectrum. As we reduce the rigour (and effort) of our verification we get simpler and narrower checks, whether that means limiting the explored state space, using weaker types, or pushing verification to the runtime. Any means of total specification then becomes a means of partial specification, and vice versa: many consider Cleanroom a formal verification technique, which primarily works by pushing code review far beyond what’s humanly possible.

...

The question, then: “is 90/95/99% correct significantly cheaper than 100% correct?” The answer is very yes. We all are comfortable saying that a codebase we’ve well-tested and well-typed is mostly correct modulo a few fixes in prod, and we’re even writing more than four lines of code a day. In fact, the vast… [more]
ratty  gwern  analysis  essay  realness  truth  correctness  reason  philosophy  math  proofs  formal-methods  cs  programming  engineering  worse-is-better/the-right-thing  intuition  giants  old-anglo  error  street-fighting  heuristic  zooming  risk  threat-modeling  software  lens  logic  inference  physics  differential  geometry  estimate  distribution  robust  speculation  nonlinearity  cost-benefit  convexity-curvature  measure  scale  trivia  cocktail  history  early-modern  europe  math.CA  rigor  news  org:mag  org:sci  miri-cfar  pdf  thesis  comparison  examples  org:junk  q-n-a  stackex  pragmatic  tradeoffs  cracker-prog  techtariat  invariance  DSL  chart  ecosystem  grokkability  heavyweights  CAS  static-dynamic  lower-bounds  complexity  tcs  open-problems  big-surf  ideas  certificates-recognition  proof-systems  PCP  mediterranean  SDP  meta:prediction  epistemic  questions  guessing  distributed  overflow  nibble  soft-question  track-record  big-list  hmm  frontier  state-of-art  move-fast-(and-break-things)  grokkability-clarity  technical-writing  trust 
july 2019 by nhaliday
Lindy effect - Wikipedia
The Lindy effect is a theory that the future life expectancy of some non-perishable things like a technology or an idea is proportional to their current age, so that every additional period of survival implies a longer remaining life expectancy.[1] Where the Lindy effect applies, mortality rate decreases with time. In contrast, living creatures and mechanical things follow a bathtub curve where, after "childhood", the mortality rate increases with time. Because life expectancy is probabilistically derived, a thing may become extinct before its "expected" survival. In other words, one needs to gauge both the age and "health" of the thing to determine continued survival.
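A minimal sketch of the kind of lifetime distribution the entry is describing (my own illustration, not from the article): for a Pareto-tailed lifetime with tail index α > 1, the expected remaining life given survival to age t is t/(α - 1), so it grows in proportion to current age.

    import numpy as np

    # Assumed Pareto(alpha=2) lifetimes; expected remaining life given survival to age t
    # is t/(alpha - 1) = t here, i.e. it doubles every time the observed age doubles.
    rng = np.random.default_rng(0)
    alpha, x_min = 2.0, 1.0
    u = 1.0 - rng.random(2_000_000)            # uniform on (0, 1]
    lifetimes = x_min * u ** (-1.0 / alpha)    # inverse-CDF sampling
    for t in (2, 4, 8):
        survivors = lifetimes[lifetimes > t]
        print(t, round(float(survivors.mean() - t), 2))   # ~2, ~4, ~8, up to sampling noise

A bathtub-curve lifetime (the "living creatures and mechanical things" case) shows the opposite behavior: remaining expectancy shrinks as age increases.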
wiki  reference  concept  metabuch  ideas  street-fighting  planning  comparison  time  distribution  flux-stasis  history  measure  correlation  arrows  branches  pro-rata  manifolds  aging  stylized-facts  age-generation  robust  technology  thinking  cost-benefit  conceptual-vocab  methodology  threat-modeling  efficiency  neurons  tools  track-record  ubiquity 
june 2019 by nhaliday
One week of bugs
If I had to guess, I'd say I probably work around hundreds of bugs in an average week, and thousands in a bad week. It's not unusual for me to run into a hundred new bugs in a single week. But I often get skepticism when I mention that I run into multiple new (to me) bugs per day, and that this is inevitable if we don't change how we write tests. Well, here's a log of one week of bugs, limited to bugs that were new to me that week. After a brief description of the bugs, I'll talk about what we can do to improve the situation. The obvious answer is to spend more effort on testing, but everyone already knows we should do that and no one does it. That doesn't mean it's hopeless, though.

...

Here's where I'm supposed to write an appeal to take testing more seriously and put real effort into it. But we all know that's not going to work. It would take 90k LOC of tests to get Julia to be as well tested as a poorly tested prototype (falsely assuming linear complexity in size). That's two person-years of work, not even including time to debug and fix bugs (which probably brings it closer to four or five years). Who's going to do that? No one. Writing tests is like writing documentation. Everyone already knows you should do it. Telling people they should do it adds zero information.

Given that people aren't going to put any effort into testing, what's the best way to do it?

Property-based testing. Generative testing. Random testing. Concolic Testing (which was done long before the term was coined). Static analysis. Fuzzing. Statistical bug finding. There are lots of options. Some of them are actually the same thing because the terminology we use is inconsistent and buggy. I'm going to arbitrarily pick one to talk about, but they're all worth looking into.
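As a minimal illustration of the random-testing idea from that list (the buggy function below is my own toy example, not something from the post):

    # Generate random inputs and compare an implementation against a trusted reference
    # (sometimes called differential or oracle testing).
    import random

    def my_sort(xs):            # deliberately buggy: silently drops duplicates
        return sorted(set(xs))

    def reference_sort(xs):
        return sorted(xs)

    random.seed(0)
    for _ in range(10_000):
        xs = [random.randint(0, 5) for _ in range(random.randint(0, 8))]
        if my_sort(xs) != reference_sort(xs):
            print("counterexample:", xs)
            break

The same loop structure scales up to fuzzing real parsers or comparing two database engines; the hard part is usually the oracle, not the input generation.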

...

There are a lot of great resources out there, but if you're just getting started, I found this description of types of fuzzers to be one of the most helpful (and simplest) things I've read.

John Regehr has a Udacity course on software testing. I haven't worked through it yet (Pablo Torres just pointed to it), but given the quality of Dr. Regehr's writing, I expect the course to be good.

For more on my perspective on testing, there's this.

Everything's broken and nobody's upset: https://www.hanselman.com/blog/EverythingsBrokenAndNobodysUpset.aspx
https://news.ycombinator.com/item?id=4531549

https://hypothesis.works/articles/the-purpose-of-hypothesis/
From the perspective of a user, the purpose of Hypothesis is to make it easier for you to write better tests.

From my perspective as the primary author, that is of course also a purpose of Hypothesis. I write a lot of code, it needs testing, and the idea of trying to do that without Hypothesis has become nearly unthinkable.

But, on a large scale, the true purpose of Hypothesis is to drag the world kicking and screaming into a new and terrifying age of high quality software.

Software is everywhere. We have built a civilization on it, and it’s only getting more prevalent as more services move online and embedded and “internet of things” devices become cheaper and more common.

Software is also terrible. It’s buggy, it’s insecure, and it’s rarely well thought out.

This combination is clearly a recipe for disaster.

The state of software testing is even worse. It’s uncontroversial at this point that you should be testing your code, but it’s a rare codebase whose authors could honestly claim that they feel its testing is sufficient.

Much of the problem here is that it’s too hard to write good tests. Tests take up a vast quantity of development time, but they mostly just laboriously encode exactly the same assumptions and fallacies that the authors had when they wrote the code, so they miss exactly the same bugs that the authors missed when they wrote the code.
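A minimal sketch of the sort of property-based test Hypothesis enables, using a leftpad-style function (the function and the three properties are illustrative choices of mine, loosely following the Let's Prove Leftpad exercise mentioned earlier, not code from this article):

    from hypothesis import given, strategies as st

    def leftpad(s: str, n: int, c: str) -> str:
        """Pad s on the left with c until it is at least n characters long."""
        return c * max(0, n - len(s)) + s

    @given(st.text(), st.integers(min_value=0, max_value=200), st.characters())
    def test_leftpad(s, n, c):
        out = leftpad(s, n, c)
        assert len(out) == max(n, len(s))            # correct length
        assert out.endswith(s)                       # original string preserved at the end
        assert set(out[:len(out) - len(s)]) <= {c}   # everything added is the pad character

    if __name__ == "__main__":
        test_leftpad()   # Hypothesis generates and shrinks many random cases automatically

Instead of hand-picking a few examples, the test states what must hold for all inputs and lets the library hunt for a counterexample.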

Preventing the Collapse of Civilization [video]: https://news.ycombinator.com/item?id=19945452
- Jonathan Blow

NB: DevGAMM is a game industry conference

- loss of technological knowledge (Antikythera mechanism, aqueducts, etc.)
- hardware driving most gains, not software
- software's actually less robust, often poorly designed and overengineered these days
- *list of bugs he's encountered recently*:
https://youtu.be/pW-SOdj4Kkk?t=1387
- knowledge increasingly becomes trivia rather than general, deep knowledge
- he does at least acknowledge the value of DRY, reusing code, and abstraction in saving dev time
techtariat  dan-luu  tech  software  error  list  debugging  linux  github  robust  checking  oss  troll  lol  aphorism  webapp  email  google  facebook  games  julia  pls  compilers  communication  mooc  browser  rust  programming  engineering  random  jargon  formal-methods  expert-experience  prof  c(pp)  course  correctness  hn  commentary  video  presentation  carmack  pragmatic  contrarianism  pessimism  sv  unix  rhetoric  critique  worrydream  hardware  performance  trends  multiplicative  roots  impact  comparison  history  iron-age  the-classics  mediterranean  conquest-empire  gibbon  technology  the-world-is-just-atoms  flux-stasis  increase-decrease  graphics  hmm  idk  systems  os  abstraction  intricacy  worse-is-better/the-right-thing  build-packaging  microsoft  osx  apple  reflection  assembly  things  knowledge  detail-architecture  thick-thin  trivia  info-dynamics  caching  frameworks  generalization  systematic-ad-hoc  universalism-particularism  analytical-holistic  structure  tainter  libraries  tradeoffs  prepping  threat-modeling  network-structure  writing  risk  local-glob 
may 2019 by nhaliday
Complexity no Bar to AI - Gwern.net
Critics of AI risk suggest diminishing returns to computing (formalized asymptotically) means AI will be weak; this argument relies on a large number of questionable premises and ignores additional resources, constant factors, and nonlinear returns to small intelligence advantages, and so is highly unlikely to hold. (computer science, transhumanism, AI, R)
created: 1 June 2014; modified: 01 Feb 2018; status: finished; confidence: likely; importance: 10
ratty  gwern  analysis  faq  ai  risk  speedometer  intelligence  futurism  cs  computation  complexity  tcs  linear-algebra  nonlinearity  convexity-curvature  average-case  adversarial  article  time-complexity  singularity  iteration-recursion  magnitude  multiplicative  lower-bounds  no-go  performance  hardware  humanity  psychology  cog-psych  psychometrics  iq  distribution  moments  complement-substitute  hanson  ems  enhancement  parable  detail-architecture  universalism-particularism  neuro  ai-control  environment  climate-change  threat-modeling  security  theory-practice  hacker  academia  realness  crypto  rigorous-crypto  usa  government 
april 2018 by nhaliday
The Hanson-Yudkowsky AI-Foom Debate - Machine Intelligence Research Institute
How Deviant Recent AI Progress Lumpiness?: http://www.overcomingbias.com/2018/03/how-deviant-recent-ai-progress-lumpiness.html
I seem to disagree with most people working on artificial intelligence (AI) risk. While with them I expect rapid change once AI is powerful enough to replace most all human workers, I expect this change to be spread across the world, not concentrated in one main localized AI system. The efforts of AI risk folks to design AI systems whose values won’t drift might stop global AI value drift if there is just one main AI system. But doing so in a world of many AI systems at similar abilities levels requires strong global governance of AI systems, which is a tall order anytime soon. Their continued focus on preventing single system drift suggests that they expect a single main AI system.

The main reason that I understand to expect relatively local AI progress is if AI progress is unusually lumpy, i.e., arriving in unusually fewer larger packages rather than in the usual many smaller packages. If one AI team finds a big lump, it might jump way ahead of the other teams.

However, we have a vast literature on the lumpiness of research and innovation more generally, which clearly says that usually most of the value in innovation is found in many small innovations. We have also so far seen this in computer science (CS) and AI, even if there have been historical examples where much value was found in particular big innovations, such as nuclear weapons or the origin of humans.

Apparently many people associated with AI risk, including the star machine learning (ML) researchers that they often idolize, find it intuitively plausible that AI and ML progress is exceptionally lumpy. Such researchers often say, “My project is ‘huge’, and will soon do it all!” A decade ago my ex-co-blogger Eliezer Yudkowsky and I argued here on this blog about our differing estimates of AI progress lumpiness. He recently offered Alpha Go Zero as evidence of AI lumpiness:

...

In this post, let me give another example (beyond two big lumps in a row) of what could change my mind. I offer a clear observable indicator, for which data should be available now: deviant citation lumpiness in recent ML research. One standard measure of research impact is citations; bigger, lumpier developments gain more citations than smaller ones. And it turns out that the lumpiness of citations is remarkably constant across research fields! See this March 3 paper in Science:

I Still Don’t Get Foom: http://www.overcomingbias.com/2014/07/30855.html
All of which makes it look like I’m the one with the problem; everyone else gets it. Even so, I’m gonna try to explain my problem again, in the hope that someone can explain where I’m going wrong. Here goes.

“Intelligence” just means an ability to do mental/calculation tasks, averaged over many tasks. I’ve always found it plausible that machines will continue to do more kinds of mental tasks better, and eventually be better at pretty much all of them. But what I’ve found it hard to accept is a “local explosion.” This is where a single machine, built by a single project using only a tiny fraction of world resources, goes in a short time (e.g., weeks) from being so weak that it is usually beat by a single human with the usual tools, to so powerful that it easily takes over the entire world. Yes, smarter machines may greatly increase overall economic growth rates, and yes such growth may be uneven. But this degree of unevenness seems implausibly extreme. Let me explain.

If we count by economic value, humans now do most of the mental tasks worth doing. Evolution has given us a brain chock-full of useful well-honed modules. And the fact that most mental tasks require the use of many modules is enough to explain why some of us are smarter than others. (There’d be a common “g” factor in task performance even with independent module variation.) Our modules aren’t that different from those of other primates, but because ours are different enough to allow lots of cultural transmission of innovation, we’ve out-competed other primates handily.

We’ve had computers for over seventy years, and have slowly built up libraries of software modules for them. Like brains, computers do mental tasks by combining modules. An important mental task is software innovation: improving these modules, adding new ones, and finding new ways to combine them. Ideas for new modules are sometimes inspired by the modules we see in our brains. When an innovation team finds an improvement, they usually sell access to it, which gives them resources for new projects, and lets others take advantage of their innovation.

...

In Bostrom’s graph above the line for an initially small project and system has a much higher slope, which means that it becomes in a short time vastly better at software innovation. Better than the entire rest of the world put together. And my key question is: how could it plausibly do that? Since the rest of the world is already trying the best it can to usefully innovate, and to abstract to promote such innovation, what exactly gives one small project such a huge advantage to let it innovate so much faster?

...

In fact, most software innovation seems to be driven by hardware advances, instead of innovator creativity. Apparently, good ideas are available but must usually wait until hardware is cheap enough to support them.

Yes, sometimes architectural choices have wider impacts. But I was an artificial intelligence researcher for nine years, ending twenty years ago, and I never saw an architecture choice make a huge difference, relative to other reasonable architecture choices. For most big systems, overall architecture matters a lot less than getting lots of detail right. Researchers have long wandered the space of architectures, mostly rediscovering variations on what others found before.

Some hope that a small project could be much better at innovation because it specializes in that topic, and much better understands new theoretical insights into the basic nature of innovation or intelligence. But I don’t think those are actually topics where one can usefully specialize much, or where we’ll find much useful new theory. To be much better at learning, the project would instead have to be much better at hundreds of specific kinds of learning. Which is very hard to do in a small project.

What does Bostrom say? Alas, not much. He distinguishes several advantages of digital over human minds, but all software shares those advantages. Bostrom also distinguishes five paths: better software, brain emulation (i.e., ems), biological enhancement of humans, brain-computer interfaces, and better human organizations. He doesn’t think interfaces would work, and sees organizations and better biology as only playing supporting roles.

...

Similarly, while you might imagine someday standing in awe in front of a super intelligence that embodies all the power of a new age, superintelligence just isn’t the sort of thing that one project could invent. As “intelligence” is just the name we give to being better at many mental tasks by using many good mental modules, there’s no one place to improve it. So I can’t see a plausible way one project could increase its intelligence vastly faster than could the rest of the world.

Takeoff speeds: https://sideways-view.com/2018/02/24/takeoff-speeds/
Futurists have argued for years about whether the development of AGI will look more like a breakthrough within a small group (“fast takeoff”), or a continuous acceleration distributed across the broader economy or a large firm (“slow takeoff”).

I currently think a slow takeoff is significantly more likely. This post explains some of my reasoning and why I think it matters. Mostly the post lists arguments I often hear for a fast takeoff and explains why I don’t find them compelling.

(Note: this is not a post about whether an intelligence explosion will occur. That seems very likely to me. Quantitatively I expect it to go along these lines. So e.g. while I disagree with many of the claims and assumptions in Intelligence Explosion Microeconomics, I don’t disagree with the central thesis or with most of the arguments.)
ratty  lesswrong  subculture  miri-cfar  ai  risk  ai-control  futurism  books  debate  hanson  big-yud  prediction  contrarianism  singularity  local-global  speed  speedometer  time  frontier  distribution  smoothness  shift  pdf  economics  track-record  abstraction  analogy  links  wiki  list  evolution  mutation  selection  optimization  search  iteration-recursion  intelligence  metameta  chart  analysis  number  ems  coordination  cooperate-defect  death  values  formal-values  flux-stasis  philosophy  farmers-and-foragers  malthus  scale  studying  innovation  insight  conceptual-vocab  growth-econ  egalitarianism-hierarchy  inequality  authoritarianism  wealth  near-far  rationality  epistemic  biases  cycles  competition  arms  zero-positive-sum  deterrence  war  peace-violence  winner-take-all  technology  moloch  multi  plots  research  science  publishing  humanity  labor  marginal  urban-rural  structure  composition-decomposition  complex-systems  gregory-clark  decentralized  heavy-industry  magnitude  multiplicative  endogenous-exogenous  models  uncertainty  decision-theory  time-prefer 
april 2018 by nhaliday
Eternity in six hours: intergalactic spreading of intelligent life and sharpening the Fermi paradox
We do this by demonstrating that traveling between galaxies – indeed even launching a colonisation project for the entire reachable universe – is a relatively simple task for a star-spanning civilization, requiring modest amounts of energy and resources. We start by demonstrating that humanity itself could likely accomplish such a colonisation project in the foreseeable future, should we want to, and then demonstrate that there are millions of galaxies that could have reached us by now, using similar methods. This results in a considerable sharpening of the Fermi paradox.
pdf  study  article  essay  anthropic  fermi  space  expansionism  bostrom  ratty  philosophy  xenobio  ideas  threat-modeling  intricacy  time  civilization  🔬  futurism  questions  paradox  risk  physics  engineering  interdisciplinary  frontier  technology  volo-avolo  dirty-hands  ai  automation  robotics  duplication  iteration-recursion  von-neumann  data  scale  magnitude  skunkworks  the-world-is-just-atoms  hard-tech  ems  bio  bits  speedometer  nature  model-organism  mechanics  phys-energy  relativity  electromag  analysis  spock  nitty-gritty  spreading  hanson  street-fighting  speed  gedanken  nibble 
march 2018 by nhaliday
Existential Risks: Analyzing Human Extinction Scenarios
https://twitter.com/robinhanson/status/981291048965087232
https://archive.is/dUTD5
Would you endorse choosing policy to max the expected duration of civilization, at least as a good first approximation?
Can anyone suggest a different first approximation that would get more votes?

https://twitter.com/robinhanson/status/981335898502545408
https://archive.is/RpygO
How useful would it be to agree on a relatively-simple first-approximation observable-after-the-fact metric for what we want from the future universe, such as total life years experienced, or civilization duration?

We're Underestimating the Risk of Human Extinction: https://www.theatlantic.com/technology/archive/2012/03/were-underestimating-the-risk-of-human-extinction/253821/
An Oxford philosopher argues that we are not adequately accounting for technology's risks—but his solution to the problem is not for Luddites.

Anderson: You have argued that we underrate existential risks because of a particular kind of bias called observation selection effect. Can you explain a bit more about that?

Bostrom: The idea of an observation selection effect is maybe best explained by first considering the simpler concept of a selection effect. Let's say you're trying to estimate how large the largest fish in a given pond is, and you use a net to catch a hundred fish and the biggest fish you find is three inches long. You might be tempted to infer that the biggest fish in this pond is not much bigger than three inches, because you've caught a hundred of them and none of them are bigger than three inches. But if it turns out that your net could only catch fish up to a certain length, then the measuring instrument that you used would introduce a selection effect: it would only select from a subset of the domain you were trying to sample.

Now that's a kind of standard fact of statistics, and there are methods for trying to correct for it and you obviously have to take that into account when considering the fish distribution in your pond. An observation selection effect is a selection effect introduced not by limitations in our measurement instrument, but rather by the fact that all observations require the existence of an observer. This becomes important, for instance, in evolutionary biology. For instance, we know that intelligent life evolved on Earth. Naively, one might think that this piece of evidence suggests that life is likely to evolve on most Earth-like planets. But that would be to overlook an observation selection effect. For no matter how small the proportion of all Earth-like planets that evolve intelligent life, we will find ourselves on a planet that did. Our data point, that intelligent life arose on our planet, is predicted equally well by the hypothesis that intelligent life is very improbable even on Earth-like planets as by the hypothesis that intelligent life is highly probable on Earth-like planets. When it comes to human extinction and existential risk, there are certain controversial ways that observation selection effects might be relevant.
bostrom  ratty  miri-cfar  skunkworks  philosophy  org:junk  list  top-n  frontier  speedometer  risk  futurism  local-global  scale  death  nihil  technology  simulation  anthropic  nuclear  deterrence  environment  climate-change  arms  competition  ai  ai-control  genetics  genomics  biotech  parasites-microbiome  disease  offense-defense  physics  tails  network-structure  epidemiology  space  geoengineering  dysgenics  ems  authoritarianism  government  values  formal-values  moloch  enhancement  property-rights  coordination  cooperate-defect  flux-stasis  ideas  prediction  speculation  humanity  singularity  existence  cybernetics  study  article  letters  eden-heaven  gedanken  multi  twitter  social  discussion  backup  hanson  metrics  optimization  time  long-short-run  janus  telos-atelos  poll  forms-instances  threat-modeling  selection  interview  expert-experience  malthus  volo-avolo  intel  leviathan  drugs  pharma  data  estimate  nature  longevity  expansionism  homo-hetero  utopia-dystopia 
march 2018 by nhaliday
Unaligned optimization processes as a general problem for society
TL;DR: There are lots of systems in society which seem to fit the pattern of “the incentives for this system are a pretty good approximation of what we actually want, so the system produces good results until it gets powerful, at which point it gets terrible results.”

...

Here are some more places where this idea could come into play:

- Marketing—humans try to buy things that will make our lives better, but our process for determining this is imperfect. A more powerful optimization process produces extremely good advertising to sell us things that aren’t actually going to make our lives better.
- Politics—we get extremely effective demagogues who pit us against our essential good values.
- Lobbying—as industries get bigger, the optimization process to choose great lobbyists for industries gets larger, but the process to make regulators robust doesn’t get correspondingly stronger. So regulatory capture gets worse and worse. Rent-seeking gets more and more significant.
- Online content—in a weaker internet, sites can’t be addictive except via being good content. In the modern internet, people can feel addicted to things that they wish they weren’t addicted to. We didn’t use to have the social expertise to make clickbait nearly as well as we do it today.
- News—Hyperpartisan news sources are much more worth it if distribution is cheaper and the market is bigger. News sources get an advantage from being truthful, but as society gets bigger, this advantage gets proportionally smaller.

...

For these reasons, I think it’s quite plausible that humans are fundamentally unable to have a “good” society with a population greater than some threshold, particularly if all these people have access to modern technology. Humans don’t have the rigidity to maintain social institutions in the face of that kind of optimization process. I think it is unlikely but possible (10%?) that this threshold population is smaller than the current population of the US, and that the US will crumble due to the decay of these institutions in the next fifty years if nothing totally crazy happens.
ratty  thinking  metabuch  reflection  metameta  big-yud  clever-rats  ai-control  ai  risk  scale  quality  ability-competence  network-structure  capitalism  randy-ayndy  civil-liberty  marketing  institutions  economics  political-econ  politics  polisci  advertising  rent-seeking  government  coordination  internet  attention  polarization  media  truth  unintended-consequences  alt-inst  efficiency  altruism  society  usa  decentralized  rhetoric  prediction  population  incentives  intervention  criminal-justice  property-rights  redistribution  taxes  externalities  science  monetary-fiscal  public-goodish  zero-positive-sum  markets  cost-benefit  regulation  regularizer  order-disorder  flux-stasis  shift  smoothness  phase-transition  power  definite-planning  optimism  pessimism  homo-hetero  interests  eden-heaven  telos-atelos  threat-modeling  alignment 
february 2018 by nhaliday
Fermi paradox - Wikipedia
Rare Earth hypothesis: https://en.wikipedia.org/wiki/Rare_Earth_hypothesis
Fine-tuned Universe: https://en.wikipedia.org/wiki/Fine-tuned_Universe
something to keep in mind:
Puddle theory is a term coined by Douglas Adams to satirize arguments that the universe is made for man.[54][55] As stated in Adams' book The Salmon of Doubt:[56]
Imagine a puddle waking up one morning and thinking, “This is an interesting world I find myself in, an interesting hole I find myself in, fits me rather neatly, doesn't it? In fact, it fits me staggeringly well, must have been made to have me in it!” This is such a powerful idea that as the sun rises in the sky and the air heats up and as, gradually, the puddle gets smaller and smaller, it's still frantically hanging on to the notion that everything's going to be all right, because this World was meant to have him in it, was built to have him in it; so the moment he disappears catches him rather by surprise. I think this may be something we need to be on the watch out for.
article  concept  paradox  wiki  reference  fermi  anthropic  space  xenobio  roots  speculation  ideas  risk  threat-modeling  civilization  nihil  🔬  deep-materialism  new-religion  futurism  frontier  technology  communication  simulation  intelligence  eden  war  nuclear  deterrence  identity  questions  multi  explanans  physics  theos  philosophy  religion  chemistry  bio  hmm  idk  degrees-of-freedom  lol  troll  existence 
january 2018 by nhaliday
[1709.01149] Biotechnology and the lifetime of technical civilizations
The number of people able to end Earth's technical civilization has heretofore been small. Emerging dual-use technologies, such as biotechnology, may give similar power to thousands or millions of individuals. To quantitatively investigate the ramifications of such a marked shift on the survival of both terrestrial and extraterrestrial technical civilizations, this paper presents a two-parameter model for civilizational lifespans, i.e. the quantity L in Drake's equation for the number of communicating extraterrestrial civilizations. One parameter characterizes the population lethality of a civilization's biotechnology and the other characterizes the civilization's psychosociology. L is demonstrated to be less than the inverse of the product of these two parameters. Using empirical data from PubMed to inform the biotechnology parameter, the model predicts human civilization's median survival time as decades to centuries, even with optimistic psychosociological parameter values, thereby positioning biotechnology as a proximate threat to human civilization. For an ensemble of civilizations having some median calculated survival time, the model predicts that, after 80 times that duration, only one in 10^24 civilizations will survive -- a tempo and degree of winnowing compatible with Hanson's "Great Filter." Thus, assuming that civilizations universally develop advanced biotechnology, before they become vigorous interstellar colonizers, the model provides a resolution to the Fermi paradox.
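To get intuition for why lifetimes collapse as the number of capable individuals grows, here is a toy hazard-rate model (a hedged sketch of my own, not the paper's actual parameterization):

    # Toy model: N capable individuals, each with an independent annual probability p
    # of ending technical civilization; survival time is then roughly geometric.
    def expected_lifetime_years(n_capable, p_annual):
        yearly_hazard = 1 - (1 - p_annual) ** n_capable
        return 1 / yearly_hazard

    for n in (1_000, 1_000_000, 1_000_000_000):
        print(n, round(expected_lifetime_years(n, 1e-9), 1))
    # 1e3 -> ~1,000,000 years; 1e6 -> ~1,000 years; 1e9 -> ~1.6 years

Expected lifetime scales roughly as 1/(N·p), the same inverse-product shape as the paper's bound.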
preprint  article  gedanken  threat-modeling  risk  biotech  anthropic  fermi  ratty  hanson  models  xenobio  space  civilization  frontier  hmm  speedometer  society  psychology  social-psych  anthropology  cultural-dynamics  disease  parasites-microbiome  maxim-gun  prepping  science-anxiety  technology  magnitude  scale  data  prediction  speculation  ideas  🌞  org:mat  study  offense-defense  arms  unintended-consequences  spreading  explanans  sociality  cybernetics 
october 2017 by nhaliday
Overcoming Bias : A Tangled Task Future
So we may often retain systems that inherit the structure of the human brain, and the structures of the social teams and organizations by which humans have worked together. All of which is another way to say: descendants of humans may have a long future as workers. We may have another future besides being retirees or iron-fisted peons ruling over gods. Even in a competitive future with no friendly singleton to ensure preferential treatment, something recognizably like us may continue. And even win.
ratty  hanson  speculation  automation  labor  economics  ems  futurism  prediction  complex-systems  network-structure  intricacy  thinking  engineering  management  law  compensation  psychology  cog-psych  ideas  structure  gray-econ  competition  moloch  coordination  cooperate-defect  risk  ai  ai-control  singularity  number  humanity  complement-substitute  cybernetics  detail-architecture  legacy  threat-modeling  degrees-of-freedom  composition-decomposition  order-disorder  analogy  parsimony  institutions  software  coupling-cohesion 
june 2017 by nhaliday
spaceships - Can there be a space age without petroleum (crude oil)? - Worldbuilding Stack Exchange
Yes...probably

What was really important to our development of technology was not oil, but coal. Access to large deposits of high-quality coal largely fueled the industrial revolution, and it was the industrial revolution that really got us on the first rungs of the technological ladder.

Oil is a fantastic fuel for an advanced civilisation, but it's not essential. Indeed, I would argue that our ability to dig oil out of the ground is a crutch, one that we should have discarded long ago. The reason oil is so essential to us today is that all our infrastructure is based on it, but if we'd never had oil we could still have built a similar infrastructure. Solar power was first displayed to the public in 1878. Wind power has been used for centuries. Hydroelectric power is just a modification of the same technology as wind power.

Without oil, a civilisation in the industrial age would certainly be able to progress and advance to the space age. Perhaps not as quickly as we did, but probably more sustainably.

Without coal, though...that's another matter

What would the industrial age be like without oil and coal?: https://worldbuilding.stackexchange.com/questions/45919/what-would-the-industrial-age-be-like-without-oil-and-coal

Out of the ashes: https://aeon.co/essays/could-we-reboot-a-modern-civilisation-without-fossil-fuels
It took a lot of fossil fuels to forge our industrial world. Now they're almost gone. Could we do it again without them?

But charcoal-based industry didn’t die out altogether. In fact, it survived to flourish in Brazil. Because it has substantial iron deposits but few coalmines, Brazil is the largest charcoal producer in the world and the ninth biggest steel producer. We aren’t talking about a cottage industry here, and this makes Brazil a very encouraging example for our thought experiment.

The trees used in Brazil’s charcoal industry are mainly fast-growing eucalyptus, cultivated specifically for the purpose. The traditional method for creating charcoal is to pile chopped staves of air-dried timber into a great dome-shaped mound and then cover it with turf or soil to restrict airflow as the wood smoulders. The Brazilian enterprise has scaled up this traditional craft to an industrial operation. Dried timber is stacked into squat, cylindrical kilns, built of brick or masonry and arranged in long lines so that they can be easily filled and unloaded in sequence. The largest sites can sport hundreds of such kilns. Once filled, their entrances are sealed and a fire is lit from the top.
q-n-a  stackex  curiosity  gedanken  biophysical-econ  energy-resources  long-short-run  technology  civilization  industrial-revolution  heavy-industry  multi  modernity  frontier  allodium  the-world-is-just-atoms  big-picture  ideas  risk  volo-avolo  news  org:mag  org:popup  direct-indirect  retrofit  dirty-hands  threat-modeling  duplication  iteration-recursion  latin-america  track-record  trivia  cocktail  data 
june 2017 by nhaliday
[1705.03394] That is not dead which can eternal lie: the aestivation hypothesis for resolving Fermi's paradox
If a civilization wants to maximize computation it appears rational to aestivate until the far future in order to exploit the low temperature environment: this can produce a 10^30 multiplier of achievable computation. We hence suggest the "aestivation hypothesis": the reason we are not observing manifestations of alien civilizations is that they are currently (mostly) inactive, patiently waiting for future cosmic eras. This paper analyzes the assumptions going into the hypothesis and how physical law and observational evidence constrain the motivations of aliens compatible with the hypothesis.

http://aleph.se/andart2/space/the-aestivation-hypothesis-popular-outline-and-faq/

simpler explanation (just different math for the Drake equation):
Dissolving the Fermi Paradox: http://www.jodrellbank.manchester.ac.uk/media/eps/jodrell-bank-centre-for-astrophysics/news-and-events/2017/uksrn-slides/Anders-Sandberg---Dissolving-Fermi-Paradox-UKSRN.pdf
http://marginalrevolution.com/marginalrevolution/2017/07/fermi-paradox-resolved.html
Overall the argument is that point estimates should not be shoved into a Drake equation and then multiplied together, as that requires excess certainty and masks much of the ambiguity of our knowledge about the distributions. Instead, a Bayesian approach should be used, after which the fate of humanity looks much better. Here is one part of the presentation:
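A minimal Monte Carlo sketch of the point-estimates-versus-distributions argument (the parameter ranges below are my own illustrative choices, not the ones used in the slides):

    import numpy as np

    rng = np.random.default_rng(0)
    n = 200_000

    def log_uniform(lo, hi, size):
        return 10 ** rng.uniform(np.log10(lo), np.log10(hi), size)

    # Wide, assumed priors over the Drake-equation factors:
    R  = log_uniform(1, 100, n)       # star formation rate per year
    fp = rng.uniform(0.1, 1.0, n)     # fraction of stars with planets
    ne = log_uniform(0.1, 10, n)      # habitable planets per such star
    fl = log_uniform(1e-30, 1.0, n)   # probability life arises -- the hugely uncertain term
    fi = log_uniform(1e-3, 1.0, n)    # ...that it becomes intelligent
    fc = log_uniform(1e-2, 1.0, n)    # ...that it becomes detectable
    L  = log_uniform(1e2, 1e8, n)     # years a civilization stays detectable

    N = R * fp * ne * fl * fi * fc * L
    print("median N:", float(np.median(N)))
    print("P(N < 1):", float(np.mean(N < 1)))   # a large share of the mass says we're alone

Once the uncertainty in each factor is represented as a distribution rather than a single optimistic number, a big chunk of the posterior puts fewer than one detectable civilization in the galaxy, which is the sense in which the paradox dissolves.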

Life Versus Dark Energy: How An Advanced Civilization Could Resist the Accelerating Expansion of the Universe: https://arxiv.org/abs/1806.05203
The presence of dark energy in our universe is causing space to expand at an accelerating rate. As a result, over the next approximately 100 billion years, all stars residing beyond the Local Group will fall beyond the cosmic horizon and become not only unobservable, but entirely inaccessible, thus limiting how much energy could one day be extracted from them. Here, we consider the likely response of a highly advanced civilization to this situation. In particular, we argue that in order to maximize its access to useable energy, a sufficiently advanced civilization would choose to expand rapidly outward, build Dyson Spheres or similar structures around encountered stars, and use the energy that is harnessed to accelerate those stars away from the approaching horizon and toward the center of the civilization. We find that such efforts will be most effective for stars with masses in the range of M∼(0.2−1)M⊙, and could lead to the harvesting of stars within a region extending out to several tens of Mpc in radius, potentially increasing the total amount of energy that is available to a future civilization by a factor of several thousand. We also discuss the observable signatures of a civilization elsewhere in the universe that is currently in this state of stellar harvesting.
preprint  study  essay  article  bostrom  ratty  anthropic  philosophy  space  xenobio  computation  physics  interdisciplinary  ideas  hmm  cocktail  temperature  thermo  information-theory  bits  🔬  threat-modeling  time  scale  insight  multi  commentary  liner-notes  pdf  slides  error  probability  ML-MAP-E  composition-decomposition  econotariat  marginal-rev  fermi  risk  org:mat  questions  paradox  intricacy  multiplicative  calculation  street-fighting  methodology  distribution  expectancy  moments  bayesian  priors-posteriors  nibble  measurement  existence  technology  geoengineering  magnitude  spatial  density  spreading  civilization  energy-resources  phys-energy  measure  direction  speculation  structure 
may 2017 by nhaliday
One more time | West Hunter
One of our local error sources suggested that it would be impossible to rebuild technical civilization, once fallen. Now if every human were dead I’d agree, but in most other scenarios it wouldn’t be particularly difficult, assuming that the survivors were no more silly and fractious than people are today.  So assume a mild disaster, something like the effect of myxomatosis on the rabbits of Australia, or perhaps toe-to-toe nuclear combat with the Russkis – ~90%  casualties worldwide.

https://westhunt.wordpress.com/2015/05/17/one-more-time/#comment-69221
Books are everywhere. In the type of scenario I sketched out, almost no knowledge would be lost – so Neolithic tech is irrelevant. Look, if a single copy of the 1911 Britannica survived, all would be well.

You could of course harvest metals from the old cities. But even if you didn’t, the idea that there is no more copper or zinc or tin in the ground is just silly. “Recoverable ore” is mostly an economic concept.

Moreover, if we’re talking wiring and electrical uses, one can use aluminum, which makes up 8% of the Earth’s crust.

https://westhunt.wordpress.com/2015/05/17/one-more-time/#comment-69368
Some of those books tell you how to win.

Look, assume that some communities strive to relearn how to make automatic weapons and some don’t. How does that story end? Do I have to explain everything?

I guess so!

https://westhunt.wordpress.com/2015/05/17/one-more-time/#comment-69334
Well, perhaps having a zillion times more books around would make a difference. That and all the “X for Dummies” books, which I think the Romans didn’t have.

A lot of Classical civ wasn’t very useful: on the whole they didn’t invent much. On the whole, technology advanced quite a bit more rapidly in Medieval times.

https://westhunt.wordpress.com/2015/05/17/one-more-time/#comment-69225
How much coal and oil are in the ground that can still be extracted with 19th century tech? Honest question; I don’t know.
--
Lots of coal left. Not so much oil (using simple methods), but one could make it from low-grade coal, with the Fischer-Tropsch process. Sasol does this.

Then again, a recovering society wouldn’t need much at first.

https://westhunt.wordpress.com/2015/05/17/one-more-time/#comment-69223
reply to: https://westhunt.wordpress.com/2015/05/17/one-more-time/#comment-69220
That’s more like it.

#1. Consider Grand Coulee Dam. Gigawatts. Feeling of power!
#2. Of course.
#3. Might be easier to make superconducting logic circuits with MgB2, starting over.

https://westhunt.wordpress.com/2015/05/17/one-more-time/#comment-69325
Your typical biker guy is more mechanically minded than the average Joe. Welding, electrical stuff, this and that.

https://westhunt.wordpress.com/2015/05/17/one-more-time/#comment-69260
If fossil fuels were unavailable -or just uneconomical at first- we’d be back to charcoal for our Stanley Steamers and railroads. We’d still have both.

The French, and others, used wood-gasifier trucks during WWII.

https://westhunt.wordpress.com/2015/05/17/one-more-time/#comment-69407
Teslas are of course a joke.
west-hunter  scitariat  civilization  risk  nihil  gedanken  frontier  allodium  technology  energy-resources  knowledge  the-world-is-just-atoms  discussion  speculation  analysis  biophysical-econ  big-picture  🔬  ideas  multi  history  iron-age  the-classics  medieval  europe  poast  the-great-west-whale  the-trenches  optimism  volo-avolo  mostly-modern  world-war  gallic  track-record  musk  barons  transportation  driving  contrarianism  agriculture  retrofit  industrial-revolution  dirty-hands  books  competition  war  group-selection  comparison  mediterranean  conquest-empire  gibbon  speedometer  class  threat-modeling  duplication  iteration-recursion  trivia  cocktail  encyclopedic  definite-planning  embodied  gnosis-logos  kumbaya-kult 
may 2017 by nhaliday
How many times over could the world's current supply of nuclear weapons destroy the world? - Quora
A Common Story: “There are enough nuclear weapons to destroy the world many times over.” This is nothing more than poorly crafted fiction, an urban legend. This common conclusion is not based in any factual data. It is based solely on hype, hysteria, propaganda and fear mongering.

If you take every weapon in existence today, approximately 6500 megatons between 15,000 warheads with an average yield of 433 KT, and put a single bomb in its own 100 square mile grid… one bomb per grid (10 miles x 10 miles), you will contain >95% of the destructive force of each bomb on average within the grid it is in. This means the total landmass to receive a destructive force from all the world's nuclear bombs is an area of 1.5 million square miles. Not quite half of the United States and 1/38 of the world's total land mass…. that's it!
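The arithmetic in the quoted answer checks out roughly as follows (the land-area figure is my own approximation):

    warheads = 15_000
    avg_yield_kt = 433
    total_megatons = warheads * avg_yield_kt / 1_000    # ~6,500 MT, matching the quote
    grid_cell_sq_mi = 10 * 10                           # one bomb per 10 mi x 10 mi cell
    covered_sq_mi = warheads * grid_cell_sq_mi          # 1,500,000 sq mi
    earth_land_sq_mi = 57_500_000                       # approximate total land area
    print(total_megatons, covered_sq_mi, round(earth_land_sq_mi / covered_sq_mi))   # ratio ~38

None of this speaks to fallout, climate effects, or the targeting of population centers; it only bounds the directly blanketed area.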
q-n-a  qra  arms  nuclear  technology  war  meta:war  impact  deterrence  foreign-policy  usa  world  risk  nihil  scale  trivia  threat-modeling  peace-violence 
may 2017 by nhaliday
What is the likelihood we run out of fossil fuels before we can switch to renewable energy sources? - Quora
1) Can we de-carbon our primary energy production before global warming severely damages human civilization? In the short term this means switching from coal to natural gas, and in the long term replacing both coal and gas generation with carbon-neutral sources such as renewables or nuclear. The developed world cannot accomplish this alone -- it requires worldwide action, and most of the pain will be felt by large developing nations such as India and China. Ultimately this is a political and economic problem. The technology to eliminate most carbon from electricity generation exists today at fairly reasonable cost.

2) Can we develop a better transportation energy storage technology than oil, before market forces drive prices to levels that severely damage the global economy? Fossil fuels are a source of energy, but primarily we use oil in vehicles because it is an exceptional energy TRANSPORT medium. Renewables cannot meet this need because battery technology is completely uncompetitive for most fuel consumers -- prices are an order of magnitude too high and energy density is an order of magnitude too low for adoption of all-electric vehicles outside developed-world urban centers. (Heavy trucking, cargo ships, airplanes, etc will never be all-electric with chemical batteries. There are hard physical limits to the energy density of electrochemical reactions. I'm not convinced passenger vehicles will go all-electric in our lifetimes either.) There are many important technologies in existence that will gain increasing traction in the next 50 years such as natural gas automobiles and improved gas/electric hybrids, but ultimately we need a better way to store power than fossil fuels. _This is a deep technological problem that will not be solved by incremental improvements in battery chemistry or any process currently in the R&D pipeline_.

Based on these two unresolved issues, _I place the odds of us avoiding fossil-fuel-related energy issues (major climate or economic damage) at less than 10%_. The impetus for the major changes required will not be sufficiently urgent until the world is seeing severe and undeniable impacts. Civilization will certainly survive -- but there will be no small amount of human suffering during the transition to whatever comes next.

- Ryan Carlyle
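
A hedged sketch of the energy-density gap behind point 2. The figures below are rough reference values assumed here (not numbers from the answer), just to show why chemical batteries trail liquid fuels by roughly an order of magnitude in delivered energy per kilogram:

gasoline_wh_per_kg = 12_200    # ~44 MJ/kg of chemical energy (assumed reference value)
engine_efficiency = 0.25       # typical tank-to-wheels efficiency of a gasoline car (assumption)
liion_wh_per_kg = 250          # decent lithium-ion pack, pack level (assumption)
motor_efficiency = 0.90        # electric drivetrain efficiency (assumption)

delivered_gasoline = gasoline_wh_per_kg * engine_efficiency   # ~3,050 Wh/kg at the wheels
delivered_battery = liion_wh_per_kg * motor_efficiency        # ~225 Wh/kg at the wheels

print(delivered_gasoline / delivered_battery)   # ~13x -> roughly the order-of-magnitude gap cited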
q-n-a  qra  expert  energy-resources  climate-change  environment  risk  civilization  nihil  prediction  threat-modeling  world  futurism  biophysical-econ  stock-flow  transportation  technology  economics  long-short-run  no-go  speedometer  modernity  expert-experience 
may 2017 by nhaliday
Annotating Greg Cochran’s interview with James Miller
https://westhunt.wordpress.com/2017/04/05/interview-2/
opinion of Scott and Hanson: https://westhunt.wordpress.com/2017/04/05/interview-2/#comment-90238
Greg's methodist: https://westhunt.wordpress.com/2017/04/05/interview-2/#comment-90256
https://westhunt.wordpress.com/2017/04/05/interview-2/#comment-90299
You have to consider the relative strengths of Japan and the USA. USA was ~10x stronger, industrially, which is what mattered. Technically superior (radar, Manhattan project). Almost entirely self-sufficient in natural resources. Japan was sure to lose, and too crazy to quit, which meant that they would lose after being smashed flat.
--
There’s a fairly common way of looking at things in which the bad guys are not at fault because they’re bad guys, born that way, and thus can’t help it. Well, we can’t help it either, so the hell with them. I don’t think we had to respect Japan’s innate need to fuck everybody in China to death.

https://westhunt.wordpress.com/2017/03/25/ramble-on/
https://westhunt.wordpress.com/2017/03/24/topics/
https://soundcloud.com/user-519115521/greg-cochran-part-1
2nd part: https://pinboard.in/u:nhaliday/b:9ab84243b967

some additional things:
- political correctness, the Cathedral and the left (personnel continuity but not ideology/value) at start
- joke: KT impact = asteroid mining, every mass extinction = intelligent life destroying itself
- Alawites: not really Muslim, women liberated because "they don't have souls", ended up running shit in Syria because they were only ones that wanted to help the British during colonial era
- solution to Syria: "put the Alawites in NYC"
- Zimbabwe was OK for a while, if South Africa goes sour, just "put the Boers in NYC" (Miller: left would probably say they are "culturally incompatible", lol)
- story about Lincoln and his great-great-great-grandfather
- skepticism of free speech
- free speech, authoritarianism, and defending against the Mongols
- Scott crazy (not in a terrible way), LW crazy (genetics), ex.: polyamory
- TFP or microbio are better investments than stereotypical EA stuff
- just ban AI worldwide (bully other countries to enforce)
- bit of a back-and-forth about macroeconomics
- not sure climate change will be huge issue. world's been much warmer before and still had a lot of mammals, etc.
- he quite likes Pseudoerasmus
- shits on modern conservatism/Bret Stephens a bit

- mentions Japan having industrial base a tenth the size of the US's and no chance of winning WW2 around 11m mark
- describes himself as "fairly religious" around 20m mark
- 27m30s: Eisenhower was smart, read Carlyle, classical history, etc.

but was Nixon smarter?: https://www.gnxp.com/WordPress/2019/03/18/open-thread-03-18-2019/
The Scandals of Meritocracy. Virtue vs. competence. Would you rather have a boss who is evil but competent, or good but incompetent? The reality is you have to balance the two. Richard Nixon was probably smarter than Dwight Eisenhower in raw g, but Eisenhower was probably a better person.
org:med  west-hunter  scitariat  summary  links  podcast  audio  big-picture  westminster  politics  culture-war  academia  left-wing  ideology  biodet  error  crooked  bounded-cognition  stories  history  early-modern  africa  developing-world  death  mostly-modern  deterrence  japan  asia  war  meta:war  risk  ai  climate-change  speculation  agriculture  environment  prediction  religion  islam  iraq-syria  gender  dominant-minority  labor  econotariat  cracker-econ  coalitions  infrastructure  parasites-microbiome  medicine  low-hanging  biotech  terrorism  civil-liberty  civic  social-science  randy-ayndy  law  polisci  government  egalitarianism-hierarchy  expression-survival  disease  commentary  authoritarianism  being-right  europe  nordic  cohesion  heuristic  anglosphere  revolution  the-south  usa  thinking  info-dynamics  yvain  ssc  lesswrong  ratty  subculture  values  descriptive  epistemic  cost-disease  effective-altruism  charity  econ-productivity  technology  rhetoric  metameta  ai-control  critique  sociology  arms  paying-rent  parsimony  writing  realness  migration  eco 
april 2017 by nhaliday
There’s good eating on one of those | West Hunter
Recently, Y.-H. Percival Zhang and colleagues demonstrated a method of converting cellulose into starch and glucose. Zhang thinks that it can be scaled up into an effective industrial process, one that could produce a thousand calories of starch for less than a dollar from cellulosic waste. This would be a good thing. It’s not just that there are 7 billion people – the problem is that we have hardly any food reserves (about 74 days at last report).
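
To gauge the scale implied by the quoted numbers, a minimal sketch (the population and per-person calorie need are round assumptions made here; the $1 per 1,000 kcal cost is Zhang's claim as reported above):

population = 7_000_000_000
kcal_per_person_per_day = 2_000          # rough maintenance diet (assumption)
usd_per_1000_kcal = 1.0                  # claimed production cost of cellulosic starch

world_kcal_per_day = population * kcal_per_person_per_day        # 1.4e13 kcal/day
cost_per_day = world_kcal_per_day / 1000 * usd_per_1000_kcal     # ~$14 billion per day

print(cost_per_day)         # ~1.4e10 -> order of $14B/day to feed everyone this way
print(cost_per_day * 74)    # ~1e12  -> rough cost of replacing the ~74-day reserve once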

Prepare for Nuclear Winter: http://www.overcomingbias.com/2017/09/prepare-for-nuclear-winter.html
If a 1km asteroid were to hit the Earth, the dust it kicked up would block most sunlight over most of the world for 3 to 10 years. There’s only a one in a million chance of that happening per year. Whew. However, there’s a ten times bigger chance that a supervolcano, such as the one hiding under Yellowstone, might explode, for a similar result. And I’d put the chance of a full scale nuclear war at ten to one hundred times larger than that: one in ten thousand to one in one thousand per year. Over a century, that becomes a one to ten percent chance. Not whew; grimace instead.

There is a substantial chance that a full scale nuclear war would produce a nuclear winter, with a similar effect: sunlight is blocked for 3-10 years or more. Yes, there are good criticisms of the more extreme forecasts, but there’s still a big chance the sun gets blocked in a full scale nuclear war, and there’s even a substantial chance of the same result in a mere regional war, where only 100 nukes explode (the world now has 15,000 nukes).

...

Yeah, probably a few people live on, and so humanity doesn’t go extinct. But the only realistic chance most of us have of surviving in this scenario is to use our vast industrial and scientific abilities to make food. We actually know of many plausible ways to make more than enough food to feed everyone for ten years, even with no sunlight. And even if big chunks of the world economy are in shambles. But for that to work, we must preserve enough social order to make use of at least the core of key social institutions.
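
Hanson's jump from per-year odds to a per-century chance quoted above is just repeated trials; a minimal sketch of that conversion, using his annual rates and assuming independence across years (the independence assumption is added here):

def prob_over_years(annual_p, years=100):
    # chance of at least one occurrence in `years` independent yearly trials
    return 1 - (1 - annual_p) ** years

print(prob_over_years(1e-6))   # ~1e-4: 1-km asteroid impact
print(prob_over_years(1e-5))   # ~1e-3: supervolcano, ten times more likely
print(prob_over_years(1e-4))   # ~0.01: full scale nuclear war, "one in ten thousand" end
print(prob_over_years(1e-3))   # ~0.10: full scale nuclear war, "one in one thousand" end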

http://www.overcomingbias.com/2017/09/mre-futures-to-not-starve.html

Nuclear War Survival Skills: http://oism.org/nwss/nwss.pdf
Updated and Expanded 1987 Edition

Nuclear winter: https://en.wikipedia.org/wiki/Nuclear_winter

Yellowstone supervolcano may blow sooner than thought — and could wipe out life on the planet: https://www.usatoday.com/story/news/nation/2017/10/12/yellowstone-supervolcano-may-blow-sooner-than-thought-could-wipe-out-life-planet/757337001/
http://www.foxnews.com/science/2017/10/12/yellowstone-supervolcano-could-blow-faster-than-thought-destroy-all-mankind.html
http://fortune.com/2017/10/12/yellowstone-park-supervolcano/
https://www.sciencenews.org/article/supervolcano-blast-would-blanket-us-ash
west-hunter  discussion  study  commentary  bio  food  energy-resources  technology  risk  the-world-is-just-atoms  agriculture  wild-ideas  malthus  objektbuch  threat-modeling  scitariat  scale  biophysical-econ  allodium  nihil  prepping  ideas  dirty-hands  magnitude  multi  ratty  hanson  planning  nuclear  arms  deterrence  institutions  alt-inst  securities  markets  pdf  org:gov  white-paper  survival  time  earth  war  wiki  reference  environment  sky  news  org:lite  hmm  idk  org:biz  org:sci  simulation  maps  usa  geoengineering  insurance 
march 2017 by nhaliday
Evolution of Resistance Against CRISPR/Cas9 Gene Drive | Genetics
CRISPR/Cas9 gene drive (CGD) promises to be a highly adaptable approach for spreading genetically engineered alleles throughout a species, even if those alleles impair reproductive success. CGD has been shown to be effective in laboratory crosses of insects, yet it remains unclear to what extent potential resistance mechanisms will affect the dynamics of this process in large natural populations. Here we develop a comprehensive population genetic framework for modeling CGD dynamics, which incorporates potential resistance mechanisms as well as random genetic drift. Using this framework, we calculate the probability that resistance against CGD evolves from standing genetic variation, de novo mutation of wild-type alleles, or cleavage repair by nonhomologous end joining (NHEJ)—a likely by-product of CGD itself. We show that resistance to standard CGD approaches should evolve almost inevitably in most natural populations, unless repair of CGD-induced cleavage via NHEJ can be effectively suppressed, or resistance costs are on par with those of the driver. The key factor determining the probability that resistance evolves is the overall rate at which resistance alleles arise at the population level by mutation or NHEJ. By contrast, the conversion efficiency of the driver, its fitness cost, and its introduction frequency have only minor impact. Our results shed light on strategies that could facilitate the engineering of drivers with lower resistance potential, and motivate the possibility to embrace resistance as a possible mechanism for controlling a CGD approach. This study highlights the need for careful modeling of the population dynamics of CGD prior to the actual release of a driver construct into the wild.
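
For intuition about the dynamics the abstract describes, here is a deliberately simplified deterministic sketch with three alleles — wild-type W, drive D, and an NHEJ-generated resistance allele R — under random mating, multiplicative viability costs, and germline homing in D/W heterozygotes. The model structure and parameter values are illustrative assumptions, not the paper's actual framework:

def step(freqs, c=0.9, h=0.01, s_d=0.1, s_r=0.0):
    # c: homing (conversion) efficiency, h: NHEJ resistance rate per cut,
    # s_d / s_r: fitness costs of the drive / resistance alleles
    w, d, r = freqs
    p = {"W": w, "D": d, "R": r}
    fit = {"W": 1.0, "D": 1.0 - s_d, "R": 1.0 - s_r}
    out = {"W": 0.0, "D": 0.0, "R": 0.0}
    for a in "WDR":
        for b in "WDR":
            f = p[a] * p[b] * fit[a] * fit[b]   # ordered genotype frequency x viability
            if {a, b} == {"W", "D"}:
                # the W allele is cut: converted to D with prob c,
                # misrepaired into resistant R with prob h, left as W otherwise
                out["D"] += f * (1 + c) / 2
                out["R"] += f * h / 2
                out["W"] += f * (1 - c - h) / 2
            else:
                out[a] += f / 2
                out[b] += f / 2
    total = sum(out.values())
    return tuple(out[x] / total for x in "WDR")

freqs = (0.99, 0.01, 0.0)        # release the drive at 1% into a wild-type population
for _ in range(100):
    freqs = step(freqs)
print(freqs)   # the drive sweeps first; the cost-free R allele then takes over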
study  org:nat  bio  genetics  evolution  population-genetics  models  CRISPR  unintended-consequences  geoengineering  mutation  risk  parasites-microbiome  threat-modeling  selfish-gene  cooperate-defect  red-queen 
february 2017 by nhaliday
The Great Filter | West Hunter
Let us imagine that we found out that nervous systems had evolved twice (which seems to be the case). And suppose that you spent a lot of time worrying about the Fermi Paradox – and had previously thought that nervous system evolution was the unlikely event that explains the great silence, the bottleneck that explains why we don’t see signs of alien intelligent life. Thus in our past: we’re safe. Now you’re worried: maybe the Great Filter lies in our future, and the End approaches. But not just that: you assume that the political class noticed this too, and will start neglecting the future (cough, cough) because they too believe there isn’t going to be one.
Worrying about the Great Filter might not be crazy, but assuming that politicians are hep to such things and worry about them is. If you think that, you have less common sense than a monotreme. And that’s real common. I’ve had analogous arguments with people: they didn’t have any common sense either.
west-hunter  discussion  troll  risk  government  evolution  neuro  eden  antiquity  bio  fermi  threat-modeling  scitariat  anthropic  nihil  new-religion  xenobio  deep-materialism  ideas 
february 2017 by nhaliday
The Membrane – spottedtoad
All of which is to say that the Internet, which shares many qualities in common with an assemblage of living things except for those clear boundaries and defenses, might well not trend toward increased usability or easier exchange of information over the longer term, even if that is what we have experienced heretofore. The history of evolution is every bit as much a history of parasitism and counterparasitism as it is any kind of story of upward movement toward greater complexity or order. There is no reason to think that we (and still less national or political entities) will necessarily experience technology as a means of enablement and Cool Stuff We Can Do rather than a perpetual set of defenses against scammers of our money and attention. There’s the respect that makes Fake News the news that matters forever more.

THE MADCOM FUTURE: http://www.atlanticcouncil.org/images/publications/The_MADCOM_Future_RW_0926.pdf
HOW ARTIFICIAL INTELLIGENCE WILL ENHANCE COMPUTATIONAL PROPAGANDA, REPROGRAM HUMAN CULTURE, AND THREATEN DEMOCRACY... AND WHAT CAN BE DONE ABOUT IT.

https://twitter.com/toad_spotted/status/984065056437653505
https://archive.is/fZLyb
ai robocalls/phonetrees/Indian Ocean call centers~biologicalization of corporations thru automation&global com tech

fly-by-night scams double mitotically,covered by outer membrane slime&peptidoglycan

trillion $ corps w/nonspecific skin/neutrophils/specific B/T cells against YOU

https://warontherocks.com/2019/08/the-coming-automation-of-propaganda/
ratty  unaffiliated  contrarianism  walls  internet  hacker  risk  futurism  speculation  wonkish  chart  red-queen  parasites-microbiome  analogy  prediction  unintended-consequences  security  open-closed  multi  pdf  white-paper  propaganda  ai  offense-defense  ecology  cybernetics  pessimism  twitter  social  discussion  backup  bio  automation  cooperate-defect  coordination  attention  crypto  money  corporation  accelerationism  threat-modeling  alignment  cost-benefit  interface  interface-compatibility 
december 2016 by nhaliday
Overcoming Bias : In Praise of Low Needs
We humans have come a long way since we first became human; we’ve innovated and grown our ability to achieve human ends by perhaps a factor of ten million. Not at all shabby, even though it may be small compared to the total factor of growth and innovation that life achieved before humans arrived. But even if humanity’s leap is a great achievement, I fear that we have much further to go than we have come.

The universe seems almost entirely dead out there. There’s a chance it will eventually be densely filled with life, and that our descendants may help to make that happen. Some worry about the quality of that life filling the universe, and yes there are issues there. But I worry mostly about the difference between life and death. Our descendants may kill themselves or stop growing, and fail to fill the universe with life. Any life.

To fill the universe with life requires that we grow far more than our previous leap factor of ten million. More like three to ten factors that big still to go. (See Added below.) So think of all the obstacles we’ve overcome so far, obstacles that appeared when we reached new scales of size and levels of ability. If we were lucky to make it this far, we’ll have to be much more lucky to make it all the way.

...

Added 28Oct: Assume humanity’s leap factor is 10^7. Three of those is 10^21. As there are 10^24 stars in observable universe, that much growth could come from filling one in a thousand of those stars with as many rich humans as Earth now has. Ten of humanity’s leap is 10^70, and there are now about 10^10 humans on Earth. As there are about 10^80 atoms in the observable universe, that much growth could come from finding a way to implement one human like creature per atom.
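
The added note is powers-of-ten arithmetic; a minimal check working only in base-10 exponents (the star, atom, and population counts are the ones Hanson uses):

leap = 7          # one "humanity leap" = growth by a factor of 10^7
humans_now = 10   # ~10^10 people today
stars = 24        # ~10^24 stars in the observable universe
atoms = 80        # ~10^80 atoms in the observable universe

# fill one star in a thousand with an Earth's worth of rich humans:
future_people = (stars - 3) + humans_now       # exponent 31, i.e. 10^31 people
print((future_people - humans_now) / leap)     # 3.0 -> three more leaps

# implement one human-like creature per atom:
print((atoms - humans_now) / leap)             # 10.0 -> ten more leaps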
hanson  contrarianism  stagnation  trends  values  farmers-and-foragers  essay  rhetoric  new-religion  ratty  spreading  phalanges  malthus  formal-values  flux-stasis  economics  growth-econ  status  fashun  signaling  anthropic  fermi  nihil  death  risk  futurism  hierarchy  ranking  discipline  temperance  threat-modeling  existence  wealth  singularity  smoothness  discrete  scale  magnitude  population  physics  estimate  uncertainty  flexibility  rigidity  capitalism  heavy-industry  the-world-is-just-atoms  nature  corporation  institutions  coarse-fine 
october 2016 by nhaliday
Overcoming Bias : Beware General Visible Prey
So, bottom line, the future great filter scenario that most concerns me is one where our solar-system-bound descendants have killed most of nature, can’t yet colonize other stars, are general predators and prey of each other, and have fallen into a short-term-predatory-focus equilibrium where predators can easily see and travel to most all prey. Yes there are about a hundred billion comets way out there circling the sun, but even that seems a small enough number for predators to carefully map and track all of them.
hanson  risk  prediction  futurism  speculation  pessimism  war  ratty  space  big-picture  fermi  threat-modeling  equilibrium  slippery-slope  anthropic  chart  deep-materialism  new-religion  ideas  bio  nature  plots  expansionism  malthus  marginal  convexity-curvature  humanity  farmers-and-foragers  diversity  entropy-like  homo-hetero  existence  volo-avolo  technology  frontier  intel  travel  time-preference  communication  civilization  egalitarianism-hierarchy  peace-violence  ecology  cooperate-defect  dimensionality  whole-partial-many  temperance  patience  thinking  long-short-run  prepping  offense-defense 
october 2016 by nhaliday
weaponizing smallpox | West Hunter
As I have said before, it seems likely to me that the Soviet Union put so much effort into treaty-violating biological warfare because the guys at the top believed in it – because they had seen it work, the same reason that they were such tank enthusiasts. One more point on the likely use of tularemia at Stalingrad: in the summer of ’42 the Germans had occupied regions holding 40% of the Soviet Union’s population. The Soviets had a tularemia program: if not then [“Not One Step Back!”], when would they have used it? When would Stalin have used it? Imagine that someone intent on the destruction of the American republic and the extermination of its people [remember the Hunger Plan?] had taken over everything west of the Mississippi: would that be too early to pull out all the stops? Reminds me of an old Mr Boffo cartoon: you see a monster, taller than skyscrapers, stomping his way through the city. That’s trouble. But then you notice that he’s a hand puppet: that’s serious trouble. Perhaps Stalin was waiting for serious trouble, for example if the Norse Gods had come in on the side of the Nazis.

Anyhow, the Soviets had a big smallpox program. In some ways smallpox is almost the ultimate biological weapon – very contagious, while some strains are highly lethal. And it’s controllable – you can easily shield your own guys via vaccination. Of course back in the 1970s, almost everyone was vaccinated, so it was also completely useless.

We kept vaccinating people as long as smallpox was still running around in the Third World. But when it was eradicated in 1978, people stopped. There seemed to be no reason – and so, as new unvaccinated generations arose, the military efficacy of smallpox has gone up and up and up. It got to the point where the World Health Organization threw away its stockpile of vaccine, a couple hundred million units, just to save on the electric bill for the refrigerators.

Consider that the Soviet Union was always the strongest proponent of worldwide eradication of smallpox, dating back to the 1950s. Successful eradication would eventually make smallpox a superweapon: does it seem possible that the people running the Soviet Union had this in mind as a long-term goal? Potentiation through ‘eradication’? Did the left hand know what the strangling hand had in mind, and shape policies accordingly? Of course.

D.A. Henderson, the man that led the eradication campaign, died just a few days ago. He was aware of this possibility.

https://www.washingtonpost.com/local/obituaries/da-henderson-disease-detective-who-eradicated-smallpox-dies-at-87/2016/08/20/b270406e-63dd-11e6-96c0-37533479f3f5_story.html
Dr. Henderson strenuously argued that the samples should be destroyed because, in his view, any amount of smallpox was too dangerous to tolerate. A side effect of the eradication program — and one of the “horrendous ironies of history,” said “Hot Zone” author Preston — is that since no one in generations has been exposed to the virus, most of the world’s population would be vulnerable to it in the event of an outbreak.

“I feel very — what should we say? — dispirited,” Dr. Henderson told the Times in 2002. “Here we are, regressing to defend against something we thought was permanently defeated. We shouldn’t have to be doing this.”

http://www.bbc.co.uk/history/worldwars/coldwar/pox_weapon_01.shtml#four
Ken Alibek believes that, following the collapse of the Soviet Union in 1991, unemployed or badly-paid scientists are likely to have sold samples of smallpox clandestinely and gone to work in rogue states engaged in illicit biological weapons development. DA Henderson agrees that this is a plausible scenario and is upset by the legacy it leaves. 'If the [Russian bio-weapons] programme had not taken place we would not I think be worrying about smallpox in the same way. One can feel extremely bitter and extremely angry about this because I think they've subjected the entire world to a risk which was totally unnecessary.'

also:
War in the East: https://westhunt.wordpress.com/2012/02/02/war-in-the-east/
The books generally say that biological warfare is ineffective, but then they would say that, wouldn’t they? There is reason to think it has worked, and it may have made a difference.

...

We know of course that this offensive eventually turned into a disaster in which the German Sixth Army was lost. But nobody knew that then. The Germans were moving forward with little to stop them: they were scary SOBs. Don’t let anyone tell you otherwise. The Soviet leadership was frightened, enough so that they sent out a general backs-to-the-wall, no-retreat order that told the real scale of losses. That was the Soviet mood in the summer of 42.

That’s the historical background. Now for the clues. First, Ken Alibek was a bioweapons scientist back in the USSR. In his book, Biohazard, he tells how, as a student, he was given the assignment of explaining a mysterious pattern of tularemia epidemics back in the war. To him, it looked artificial, whereupon his instructor said something to the effect of “you never thought that, you never said that. Do you want a job?” Second, Antony Beevor mentions the mysteriously poor health of German troops at Stalingrad – well before being surrounded (p210-211). Third, the fact that there were large tularemia epidemics in the Soviet Union during the war – particularly in the ‘oblasts temporarily occupied by the Fascist invaders’, described in History and Incidence of Tularemia in the Soviet Union, by Robert Pollitzer.

Fourth, personal communications from a friend who once worked at Los Alamos. Back in the 90’s, after the fall of the Soviet Union, there was a time when you could hire a whole team of decent ex-Soviet physicists for the price of a single American. My friend was having a drink with one of his Russian contractors, son of a famous ace, who started talking about how his dad had dropped tularemia here, here, and here near Leningrad (sketching it out on a napkin) during the Great Patriotic War. Not that many people spontaneously bring up stories like that in dinner conversation…

Fifth, the huge Soviet investment in biowarfare throughout the Cold War is a hint: they really, truly, believed in it, and what better reason could there be than decisive past successes? In much the same way, our lavish funding of the NSA strongly suggested that cryptanalysis and sigint must have paid off handsomely for the Allies in WWII – far more so than publicly acknowledged, until the revelations about Enigma in the 1970s and later.

We know that tularemia is an effective biological agent: many countries have worked with it, including the Soviet Union. If the Russians had had this capability in the summer of ’42 (and they had sufficient technology: basically just fermentation), it is hard to imagine them not using it. I mean, we’re talking about Stalin. You think he had moral qualms? But we too would have used germ warfare if our situation had been desperate.

https://westhunt.wordpress.com/2012/02/02/war-in-the-east/#comment-1330
Sean, you don’t know what you’re talking about. Anybody exposed to an aerosol form of tularemia is likely to get it: 10-50 bacteria are enough to give a 50% probability of infection. You do not need to be sickly, starved, or immunosuppressed in order to contract it, although those factors probably influence its lethality. The same is true of anthrax: if it starts growing in your lungs, you get sick. You’re not born immune. There are in fact some diseases that you _are_ born immune to (most strains of sleeping sickness, for example), or at least have built-in defenses against (Epstein-Barr, cf TLRs).

A few other facts I’ve just found: First, the Soviets had a tularemia vaccine, which was used to an unclear extent at Stalingrad. At the time nobody else did.

Next, as far as I can tell, the Stalingrad epidemic is the only large-scale pneumonic tularemia epidemic that has ever occurred.

Next cool fact: during the Cold War, the Soviets were somewhat more interested in tularemia than other powers. At the height of the US biowarfare program, we produced less than two tons per year. The Soviets produced over one thousand tons of F. tularensis per year in that period.

Next question, one which deserves a serious, extended treatment. Why are so many people so very very good at coming up with wrong answers? Why do they apply Occam’s razor backwards? This is particularly common in biology. I’m not talking about Croddy in Military Medicine: he probably had orders to lie, and you can see hints of that if you read carefully.

https://twitter.com/gcochran99/status/952248214576443393
https://archive.is/tEcgK
Joining the Army might work. In general not available to private individuals, for reasons that are largely bullshit.
war  disease  speculation  military  russia  history  len:long  west-hunter  technology  multi  c:**  parasites-microbiome  mostly-modern  arms  scitariat  communism  maxim-gun  biotech  ideas  world-war  questions  poast  occam  parsimony  trivia  data  stylized-facts  scale  bio  epidemiology  🌞  nietzschean  food  death  nihil  axioms  morality  strategy  unintended-consequences  risk  news  org:rec  prepping  profile  postmortem  people  crooked  org:anglo  thick-thin  alt-inst  flux-stasis  flexibility  threat-modeling  twitter  social  discussion  backup  prudence  government  spreading  gender  sex  sexuality  elite  ability-competence  rant  pharma  drugs  medicine  politics  ideology  impetus  big-peeps  statesmen 
september 2016 by nhaliday
Are You Living in a Computer Simulation?
Bostrom's anthropic arguments

https://www.jetpress.org/volume7/simulation.htm
In sum, if your descendants might make simulations of lives like yours, then you might be living in a simulation. And while you probably cannot learn much detail about the specific reasons for and nature of the simulation you live in, you can draw general conclusions by making analogies to the types and reasons of simulations today. If you might be living in a simulation then all else equal it seems that you should care less about others, live more for today, make your world look likely to become eventually rich, expect to and try to participate in pivotal events, be entertaining and praiseworthy, and keep the famous people around you happy and interested in you.

Theological Implications of the Simulation Argument: https://www.tandfonline.com/doi/pdf/10.1080/15665399.2010.10820012
Nick Bostrom’s Simulation Argument (SA) has many intriguing theological implications. We work out some of them here. We show how the SA can be used to develop novel versions of the Cosmological and Design Arguments. We then develop some of the affinities between Bostrom’s naturalistic theogony and more traditional theological topics. We look at the resurrection of the body and at theodicy. We conclude with some reflections on the relations between the SA and Neoplatonism (friendly) and between the SA and theism (less friendly).

https://www.gwern.net/Simulation-inferences
lesswrong  philosophy  weird  idk  thinking  insight  links  summary  rationality  ratty  bostrom  sampling-bias  anthropic  theos  simulation  hanson  decision-making  advice  mystic  time-preference  futurism  letters  entertainment  multi  morality  humility  hypocrisy  wealth  malthus  power  drama  gedanken  pdf  article  essay  religion  christianity  the-classics  big-peeps  iteration-recursion  aesthetics  nietzschean  axioms  gwern  analysis  realness  von-neumann  space  expansionism  duplication  spreading  sequential  cs  computation  outcome-risk  measurement  empirical  questions  bits  information-theory  efficiency  algorithms  physics  relativity  ems  neuro  data  scale  magnitude  complexity  risk  existence  threat-modeling  civilization  forms-instances 
september 2016 by nhaliday
