nhaliday + parsimony   85

Ask HN: How do you manage your one-man project? | Hacker News
The main thing is to not fall into the "productivity porn" trap of trying to find the best tool instead of actually getting stuff done - when something simple is more than enough.
hn  discussion  productivity  workflow  exocortex  management  prioritizing  parsimony  recommendations  software  desktop  app  webapp  notetaking  discipline  q-n-a 
29 days ago by nhaliday
58 Bytes of CSS to look great nearly everywhere | Hacker News
Author mentions this took a long time to arrive at.
I recommend "Web Design in 4 Minutes" from the CSS guru behind Bulma:

https://jgthms.com/web-design-in-4-minutes/
[ed.: lottsa sensible criticism of the above in the comments]
https://news.ycombinator.com/item?id=12166687
hn  commentary  techtariat  design  form-design  howto  web  frontend  minimum-viable  efficiency  minimalism  parsimony  move-fast-(and-break-things)  tutorial  multi  mobile  init  advice 
5 weeks ago by nhaliday
The Future of Mathematics? [video] | Hacker News
https://news.ycombinator.com/item?id=20909404
Kevin Buzzard (the Lean guy)

- general reflection on proof assistants/theorem provers
- Tom Hales's Formal Abstracts project, etc
- thinks that, of the available theorem provers, Lean is "[the only one currently available that may be capable of formalizing all of mathematics eventually]" (goes into more detail right at the end, e.g. quotient types)
hn  commentary  discussion  video  talks  presentation  math  formal-methods  expert-experience  msr  frontier  state-of-art  proofs  rigor  education  higher-ed  optimism  prediction  lens  search  meta:research  speculation  exocortex  skunkworks  automation  research  math.NT  big-surf  software  parsimony  cost-benefit  intricacy  correctness  programming  pls  python  functional  haskell  heavyweights  research-program  review  reflection  multi  pdf  slides  oly  experiment  span-cover  git  vcs  teaching  impetus  academia  composition-decomposition  coupling-cohesion  database  trust  types  plt  lifts-projections  induction  critique  beauty  truth  elegance  aesthetics 
5 weeks ago by nhaliday
Unix philosophy - Wikipedia
1. Make each program do one thing well. To do a new job, build afresh rather than complicate old programs by adding new "features".
2. Expect the output of every program to become the input to another, as yet unknown, program. Don't clutter output with extraneous information. Avoid stringently columnar or binary input formats. Don't insist on interactive input.
3. Design and build software, even operating systems, to be tried early, ideally within weeks. Don't hesitate to throw away the clumsy parts and rebuild them.
4. Use tools in preference to unskilled help to lighten a programming task, even if you have to detour to build the tools and expect to throw some of them out after you've finished using them.
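[ed.: rules 1-2 in miniature - a sketch (names made up) of a tiny filter that does one thing and speaks plain lines on stdin/stdout, so it composes with as-yet-unknown programs:]

```python
# A do-one-thing filter: emit each input line the first time it is seen
# (the job `awk '!seen[$0]++'` does). Plain lines in, plain lines out,
# so it drops into a pipeline next to programs that know nothing about it.
def dedupe(lines):
    seen = set()
    for line in lines:
        if line not in seen:
            seen.add(line)
            yield line

# Pipeline use:  producer | python dedupe.py | consumer
# (inside dedupe.py: sys.stdout.writelines(dedupe(sys.stdin)))
```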
wiki  concept  philosophy  lens  ideas  design  system-design  programming  engineering  systems  unix  subculture  composition-decomposition  coupling-cohesion  metabuch  skeleton  hi-order-bits  summary  list  top-n  quotes  aphorism  minimalism  minimum-viable  best-practices  intricacy  parsimony  protocol-metadata 
12 weeks ago by nhaliday
Organizing complexity is the most important skill in software development | Hacker News
- John D. Cook

https://news.ycombinator.com/item?id=9758063
Organization is the hardest part for me personally in getting better as a developer. How to build a structure that is easy to change and extend. Any tips where to find good books or online sources?
hn  commentary  techtariat  reflection  lens  engineering  programming  software  intricacy  parsimony  structure  coupling-cohesion  composition-decomposition  multi  poast  books  recommendations  abstraction  complex-systems  system-design  design  code-organizing  human-capital 
july 2019 by nhaliday
Panel: Systems Programming in 2014 and Beyond | Lang.NEXT 2014 | Channel 9
- Bjarne Stroustrup, Niko Matsakis, Andrei Alexandrescu, Rob Pike
- 2014 so pretty outdated but rare to find a discussion with people like this together
- pretty sure Jonathan Blow asked a couple questions
- Rob Pike compliments Rust at one point. Also kinda softly rags on dynamic typing at one point ("unit testing is what they have instead of static types").
video  presentation  debate  programming  pls  c(pp)  systems  os  rust  d-lang  golang  computer-memory  legacy  devtools  formal-methods  concurrency  compilers  syntax  parsimony  google  intricacy  thinking  cost-benefit  degrees-of-freedom  facebook  performance  people  rsc  cracker-prog  critique  types  checking  api  flux-stasis  engineering  time  wire-guided  worse-is-better/the-right-thing  static-dynamic  latency-throughput  techtariat 
july 2019 by nhaliday
python - Executing multi-line statements in the one-line command-line? - Stack Overflow
you could do
> echo -e "import sys\nfor r in range(10): print 'rob'" | python
or w/out pipes:
> python -c "exec(\"import sys\nfor r in range(10): print 'rob'\")"
or
> (echo "import sys" ; echo "for r in range(10): print 'rob'") | python

[ed.: In fish
> python -c "import sys"\n"for r in range(10): print 'rob'"]
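[ed.: the answers above are Python 2 (`print 'rob'`). A hypothetical Python 3 spelling of the same exec() trick, since print is a function now:]

```python
# Python 3 version of the exec() one-liner; print needs parentheses.
# Shell form:  python3 -c "exec(\"for r in range(10): print('rob')\")"
code = "for r in range(10): print('rob')"
exec(code)
```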
q-n-a  stackex  programming  yak-shaving  pls  python  howto  terminal  parsimony  syntax  gotchas 
july 2019 by nhaliday
The End of the Editor Wars » Linux Magazine
Moreover, even if you assume a broad margin of error, the polls aren't even close. With all the various text editors available today, Vi and Vim continue to be the choice of over a third of users, while Emacs is well back in the pack, no longer a competitor for the most popular text editor.

https://www.quora.com/Are-there-more-Emacs-or-Vim-users
I believe Vim is actually more popular, but it's hard to find any real data on it. The best source I've seen is the annual StackOverflow developer survey where 15.2% of developers used Vim compared to a mere 3.2% for Emacs.

Oddly enough, the report noted that "Data scientists and machine learning developers are about 3 times more likely to use Emacs than any other type of developer," which is not necessarily what I would have expected.

[ed. NB: Vim still dominates overall.]

https://pinboard.in/u:nhaliday/b:6adc1b1ef4dc

Time To End The vi/Emacs Debate: https://cacm.acm.org/blogs/blog-cacm/226034-time-to-end-the-vi-emacs-debate/fulltext

Vim, Emacs and their forever war. Does it even matter any more?: https://blog.sourcerer.io/vim-emacs-and-their-forever-war-does-it-even-matter-any-more-697b1322d510
Like an episode of “Silicon Valley”, a discussion of Emacs vs. Vim used to have a polarizing effect that would guarantee a stimulating conversation, regardless of an engineer’s actual alignment. But nowadays, diehard Emacs and Vim users are getting much harder to find. Maybe I’m in the wrong orbit, but looking around today, I see that engineers are equally or even more likely to choose any one of a number of great (for any given definition of ‘great’) modern editors or IDEs such as Sublime Text, Visual Studio Code, Atom, IntelliJ (… or one of its siblings), Brackets, Visual Studio or Xcode, to name a few. It’s not surprising really — many top engineers weren’t even born when these editors were at version 1.0, and GUIs (for better or worse) hadn’t been invented.

...

… both forums have high traffic and up-to-the-minute comment and discussion threads. Some of the available statistics paint a reasonably healthy picture — Stackoverflow’s 2016 developer survey ranks Vim 4th out of 24 with 26.1% of respondents in the development environments category claiming to use it. Emacs came 15th with 5.2%. In combination, over 30% is, actually, quite impressive considering they’ve been around for several decades.

What’s odd, however, is that if you ask someone — say a random developer — to express a preference, the likelihood is that they will favor one or the other even if they have used neither in anger. Maybe the meme has spread so widely that all responses are now predominantly ritualistic, and represent something more fundamental than peoples’ mere preference for an editor? There’s a rather obvious political hypothesis waiting to be made — that Emacs is the leftist, socialist, centralized state, while Vim represents the right and the free market, specialization and capitalism red in tooth and claw.

How is Emacs/Vim used in companies like Google, Facebook, or Quora? Are there any libraries or tools they share in public?: https://www.quora.com/How-is-Emacs-Vim-used-in-companies-like-Google-Facebook-or-Quora-Are-there-any-libraries-or-tools-they-share-in-public
In Google there's a fair amount of vim and emacs. I would say at least every other engineer uses one or another.

Among Software Engineers, emacs seems to be more popular, about 2:1. Among Site Reliability Engineers, vim is more popular, about 9:1.
--
People use both at Facebook, with (in my opinion) slightly better tooling for Emacs than Vim. We share a master.emacs and master.vimrc file, which contains the bare essentials (like syntactic highlighting for the Hack language). We also share a Ctags file that's updated nightly with a cron script.

Beyond the essentials, there's a group for Emacs users at Facebook that provides tips, tricks, and major-modes created by people at Facebook. That's where Adam Hupp first developed his excellent mural-mode (ahupp/mural), which does for Ctags what iDo did for file finding and buffer switching.
--
For emacs, it was very informal at Google. There wasn't a huge community of Emacs users at Google, so there wasn't much more than a wiki and a couple language styles matching Google's style guides.

https://trends.google.com/trends/explore?date=all&geo=US&q=%2Fm%2F07zh7,%2Fm%2F01yp0m

https://www.quora.com/Why-is-interest-in-Emacs-dropping
And it is still that. It’s just that emacs is no longer unique, and neither is Lisp.

Dynamically typed scripting languages with garbage collection are a dime a dozen now. Anybody in their right mind developing an extensible text editor today would just use python, ruby, lua, or JavaScript as the extension language and get all the power of Lisp combined with vibrant user communities and millions of lines of ready-made libraries that Stallman and Steele could only dream of in the 70s.

In fact, in many ways emacs and elisp have fallen behind: 40 years after Lambda, the Ultimate Imperative, elisp is still dynamically scoped, and it still doesn’t support multithreading — when I try to use dired to list the files on a slow NFS mount, the entire editor hangs just as thoroughly as it might have in the 1980s. And when I say “doesn’t support multithreading,” I don’t mean there is some other clever trick for continuing to do work while waiting on a system call, like asynchronous callbacks or something. There’s start-process which forks a whole new process, and that’s about it. It’s a concurrency model straight out of 1980s UNIX land.

But being essentially just a decent text editor has robbed emacs of much of its competitive advantage. In a world where every developer tool is scriptable with languages and libraries an order of magnitude more powerful than cranky old elisp, the reason to use emacs is not that it lets a programmer hit a button and evaluate the current expression interactively (which must have been absolutely amazing at one point in the past).

https://www.reddit.com/r/emacs/comments/bh5kk7/why_do_many_new_users_still_prefer_vim_over_emacs/

more general comparison, not just popularity:
Differences between Emacs and Vim: https://stackoverflow.com/questions/1430164/differences-between-Emacs-and-vim

https://www.reddit.com/r/emacs/comments/9hen7z/what_are_the_benefits_of_emacs_over_vim/

https://unix.stackexchange.com/questions/986/what-are-the-pros-and-cons-of-vim-and-emacs

https://www.quora.com/Why-is-Vim-the-programmers-favorite-editor
- Adrien Lucas Ecoffet,

Because it is hard to use. Really.

However, the second part of this sentence applies to just about every good editor out there: if you really learn Sublime Text, you will become super productive. If you really learn Emacs, you will become super productive. If you really learn Visual Studio… you get the idea.

Here’s the thing though, you never actually need to really learn your text editor… Unless you use vim.

...

For many people new to programming, this is the first time they have been a power user of… well, anything! And because they’ve been told how great Vim is, many of them will keep at it and actually become productive, not because Vim is particularly more productive than any other editor, but because it didn’t provide them with a way to not be productive.

They then go on to tell their friends how great Vim is, and their friends go on to become power users and tell their friends in turn, and so forth. All these people believe they became productive because they changed their text editor. Little do they realize that they became productive because their text editor changed them[1].

This is in no way a criticism of Vim. I myself was a beneficiary of such a phenomenon when I learned to type using the Dvorak layout: at that time, I believed that Dvorak would help you type faster. Now I realize the evidence is mixed and that Dvorak might not be much better than Qwerty. However, learning Dvorak forced me to develop good typing habits because I could no longer rely on looking at my keyboard (since I was still using a Qwerty physical keyboard), and this has made me a much more productive typist.

Technical Interview Performance by Editor/OS/Language: https://triplebyte.com/blog/technical-interview-performance-by-editor-os-language
[ed.: I'm guessing this is confounded to all hell.]

The #1 most common editor we see used in interviews is Sublime Text, with Vim close behind.

Emacs represents a fairly small market share today at just about a quarter the userbase of Vim in our interviews. This nicely matches the 4:1 ratio of Google Search Trends for the two editors.

...

Vim takes the prize here, but PyCharm and Emacs are close behind. We’ve found that users of these editors tend to pass our interview at an above-average rate.

On the other end of the spectrum is Eclipse: it appears that someone using either Vim or Emacs is more than twice as likely to pass our technical interview as an Eclipse user.

...

In this case, we find that the average Ruby, Swift, and C# users tend to be stronger, with Python and Javascript in the middle of the pack.

...

Here’s what happens after we select engineers to work with and send them to onsites:

[Python does best.]

There are no wild outliers here, but let’s look at the C++ segment. While C++ programmers have the most challenging time passing Triplebyte’s technical interview on average, the ones we choose to work with tend to have a relatively easier time getting offers at each onsite.

The Rise of Microsoft Visual Studio Code: https://triplebyte.com/blog/editor-report-the-rise-of-visual-studio-code
This chart shows the rates at which each editor's users pass our interview compared to the mean pass rate for all candidates. First, notice the preeminence of Emacs and Vim! Engineers who use these editors pass our interview at significantly higher rates than other engineers. And the effect size is not small. Emacs users pass our interview at a rate 50… [more]
news  linux  oss  tech  editors  devtools  tools  comparison  ranking  flux-stasis  trends  ubiquity  unix  increase-decrease  multi  q-n-a  qra  data  poll  stackex  sv  facebook  google  integration-extension  org:med  politics  stereotypes  coalitions  decentralized  left-wing  right-wing  chart  scale  time-series  distribution  top-n  list  discussion  ide  parsimony  intricacy  cost-benefit  tradeoffs  confounding  analysis  crosstab  pls  python  c(pp)  jvm  microsoft  golang  hmm  correlation  debate  critique  quora  contrarianism  ecosystem  DSL 
june 2019 by nhaliday
Interview with Donald Knuth | Interview with Donald Knuth | InformIT
Andrew Binstock and Donald Knuth converse on the success of open source, the problem with multicore architecture, the disappointing lack of interest in literate programming, the menace of reusable code, and that urban legend about winning a programming contest with a single compilation.

Reusable vs. re-editable code: https://hal.archives-ouvertes.fr/hal-01966146/document
- Konrad Hinsen

https://www.johndcook.com/blog/2008/05/03/reusable-code-vs-re-editable-code/
I think whether code should be editable or in “an untouchable black box” depends on the number of developers involved, as well as their talent and motivation. Knuth is a highly motivated genius working in isolation. Most software is developed by large teams of programmers with varying degrees of motivation and talent. I think the further you move away from Knuth along these three axes the more important black boxes become.
nibble  interview  giants  expert-experience  programming  cs  software  contrarianism  carmack  oss  prediction  trends  linux  concurrency  desktop  comparison  checking  debugging  stories  engineering  hmm  idk  algorithms  books  debate  flux-stasis  duplication  parsimony  best-practices  writing  documentation  latex  intricacy  structure  hardware  caching  workflow  editors  composition-decomposition  coupling-cohesion  exposition  technical-writing  thinking  cracker-prog  code-organizing  grokkability  multi  techtariat  commentary  pdf  reflection  essay  examples  python  data-science  libraries  grokkability-clarity 
june 2019 by nhaliday
One week of bugs
If I had to guess, I'd say I probably work around hundreds of bugs in an average week, and thousands in a bad week. It's not unusual for me to run into a hundred new bugs in a single week. But I often get skepticism when I mention that I run into multiple new (to me) bugs per day, and that this is inevitable if we don't change how we write tests. Well, here's a log of one week of bugs, limited to bugs that were new to me that week. After a brief description of the bugs, I'll talk about what we can do to improve the situation. The obvious answer is to spend more effort on testing, but everyone already knows we should do that and no one does it. That doesn't mean it's hopeless, though.

...

Here's where I'm supposed to write an appeal to take testing more seriously and put real effort into it. But we all know that's not going to work. It would take 90k LOC of tests to get Julia to be as well tested as a poorly tested prototype (falsely assuming linear complexity in size). That's two person-years of work, not even including time to debug and fix bugs (which probably brings it closer to four or five years). Who's going to do that? No one. Writing tests is like writing documentation. Everyone already knows you should do it. Telling people they should do it adds zero information[1].

Given that people aren't going to put any effort into testing, what's the best way to do it?

Property-based testing. Generative testing. Random testing. Concolic Testing (which was done long before the term was coined). Static analysis. Fuzzing. Statistical bug finding. There are lots of options. Some of them are actually the same thing because the terminology we use is inconsistent and buggy. I'm going to arbitrarily pick one to talk about, but they're all worth looking into.
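[ed.: a minimal hand-rolled sketch of the random/property-based idea (function names are made up; real tools like Hypothesis also shrink failing inputs to small counterexamples):]

```python
import random

# Throw random inputs at a function and check invariants that must hold
# for *every* input, rather than hand-picking example cases.
def check_sort_property(sort_fn, trials=200, seed=0):
    rng = random.Random(seed)
    for _ in range(trials):
        xs = [rng.randint(-100, 100) for _ in range(rng.randint(0, 20))]
        out = sort_fn(xs)
        assert sorted(out) == sorted(xs), (xs, out)  # a permutation of the input
        assert all(a <= b for a, b in zip(out, out[1:])), (xs, out)  # ordered
    return True
```

check_sort_property(sorted) passes; a buggy sort would typically trip an assertion within a handful of trials.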

...

There are a lot of great resources out there, but if you're just getting started, I found this description of types of fuzzers to be one of the most helpful (and simplest) things I've read.

John Regehr has a udacity course on software testing. I haven't worked through it yet (Pablo Torres just pointed to it), but given the quality of Dr. Regehr's writing, I expect the course to be good.

For more on my perspective on testing, there's this.

Everything's broken and nobody's upset: https://www.hanselman.com/blog/EverythingsBrokenAndNobodysUpset.aspx
https://news.ycombinator.com/item?id=4531549

https://hypothesis.works/articles/the-purpose-of-hypothesis/
From the perspective of a user, the purpose of Hypothesis is to make it easier for you to write better tests.

From my perspective as the primary author, that is of course also a purpose of Hypothesis. I write a lot of code, it needs testing, and the idea of trying to do that without Hypothesis has become nearly unthinkable.

But, on a large scale, the true purpose of Hypothesis is to drag the world kicking and screaming into a new and terrifying age of high quality software.

Software is everywhere. We have built a civilization on it, and it’s only getting more prevalent as more services move online and embedded and “internet of things” devices become cheaper and more common.

Software is also terrible. It’s buggy, it’s insecure, and it’s rarely well thought out.

This combination is clearly a recipe for disaster.

The state of software testing is even worse. It’s uncontroversial at this point that you should be testing your code, but it’s a rare codebase whose authors could honestly claim that they feel its testing is sufficient.

Much of the problem here is that it’s too hard to write good tests. Tests take up a vast quantity of development time, but they mostly just laboriously encode exactly the same assumptions and fallacies that the authors had when they wrote the code, so they miss exactly the same bugs that you missed when they wrote the code.

Preventing the Collapse of Civilization [video]: https://news.ycombinator.com/item?id=19945452
- Jonathan Blow

NB: DevGAMM is a game industry conference

- loss of technological knowledge (Antikythera mechanism, aqueducts, etc.)
- hardware driving most gains, not software
- software's actually less robust, often poorly designed and overengineered these days
- *list of bugs he's encountered recently*:
https://youtu.be/pW-SOdj4Kkk?t=1387
- knowledge of trivia becomes more than general, deep knowledge
- does at least acknowledge value of DRY, reusing code, abstraction saving dev time
techtariat  dan-luu  tech  software  error  list  debugging  linux  github  robust  checking  oss  troll  lol  aphorism  webapp  email  google  facebook  games  julia  pls  compilers  communication  mooc  browser  rust  programming  engineering  random  jargon  formal-methods  expert-experience  prof  c(pp)  course  correctness  hn  commentary  video  presentation  carmack  pragmatic  contrarianism  pessimism  sv  unix  rhetoric  critique  worrydream  hardware  performance  trends  multiplicative  roots  impact  comparison  history  iron-age  the-classics  mediterranean  conquest-empire  gibbon  technology  the-world-is-just-atoms  flux-stasis  increase-decrease  graphics  hmm  idk  systems  os  abstraction  intricacy  worse-is-better/the-right-thing  build-packaging  microsoft  osx  apple  reflection  assembly  things  knowledge  detail-architecture  thick-thin  trivia  info-dynamics  caching  frameworks  generalization  systematic-ad-hoc  universalism-particularism  analytical-holistic  structure  tainter  libraries  tradeoffs  prepping  threat-modeling  network-structure  writing  risk  local-glob 
may 2019 by nhaliday
quality - Is the average number of bugs per loc the same for different programming languages? - Software Engineering Stack Exchange
Contrary to intuition, the number of errors per 1000 lines of code does seem to be relatively constant, regardless of the specific language involved. Steve McConnell, author of Code Complete and Software Estimation: Demystifying the Black Art, goes over this area in some detail.

I don't have my copies readily to hand - they're sitting on my bookshelf at work - but a quick Google found a relevant quote:

Industry Average: "about 15 - 50 errors per 1000 lines of delivered code."
(Steve) further says this is usually representative of code that has some level of structured programming behind it, but probably includes a mix of coding techniques.

Quoted from Code Complete, found here: http://mayerdan.com/ruby/2012/11/11/bugs-per-line-of-code-ratio/

If memory serves correctly, Steve goes into a thorough discussion of this, showing that the figures are constant across languages (C, C++, Java, Assembly and so on) and despite difficulties (such as defining what "line of code" means).

Most importantly he has lots of citations for his sources - he's not offering unsubstantiated opinions, but has the references to back them up.

[ed.: I think this is delivered code? So after testing, debugging, etc. I'm more interested in the metric for the moment after you've gotten something to compile.

edit: cf https://pinboard.in/u:nhaliday/b:0a6eb68166e6]
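[ed.: hypothetical back-of-the-envelope using McConnell's range:]

```python
# McConnell's industry average: roughly 15-50 errors per 1000 lines of
# delivered code. Scale it to a project's size for a crude prior.
def expected_defects(loc, defects_per_kloc):
    return loc / 1000 * defects_per_kloc

# A 20k-line project: ~300 to ~1000 latent defects at delivery.
low, high = expected_defects(20_000, 15), expected_defects(20_000, 50)
```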
q-n-a  stackex  programming  engineering  nitty-gritty  error  flux-stasis  books  recommendations  software  checking  debugging  pro-rata  pls  comparison  parsimony  measure  data  objektbuch  speculation  accuracy  density  correctness  estimate  street-fighting  multi  quality  stylized-facts  methodology 
april 2019 by nhaliday
ellipsis - Why is the subject omitted in sentences like "Thought you'd never ask"? - English Language & Usage Stack Exchange
This is due to a phenomenon that occurs in intimate conversational spoken English called "Conversational Deletion". It was discussed and exemplified quite thoroughly in a 1974 PhD dissertation in linguistics at the University of Michigan that I had the honor of directing.

Thrasher, Randolph H. Jr. 1974. Shouldn't Ignore These Strings: A Study of Conversational Deletion, Ph.D. Dissertation, Linguistics, University of Michigan, Ann Arbor

...

"The phenomenon can be viewed as erosion of the beginning of sentences, deleting (some, but not all) articles, dummies, auxiliaries, possessives, conditional if, and [most relevantly for this discussion -jl] subject pronouns. But it only erodes up to a point, and only in some cases.

"Whatever is exposed (in sentence initial position) can be swept away. If erosion of the first element exposes another vulnerable element, this too may be eroded. The process continues until a hard (non-vulnerable) element is encountered." [ibidem p.9]

Dad calls this and some similar omissions "Kiplinger style": https://en.wikipedia.org/wiki/Kiplinger
q-n-a  stackex  anglo  language  writing  speaking  linguistics  thesis  trivia  cocktail  parsimony  compression  multi  wiki  organization  technical-writing  protocol-metadata  simplification-normalization 
march 2019 by nhaliday
Which benchmark programs are faster? | Computer Language Benchmarks Game
old:
https://salsa.debian.org/benchmarksgame-team/archive-alioth-benchmarksgame
https://web.archive.org/web/20170331153459/http://benchmarksgame.alioth.debian.org/
includes Scala

very outdated but more languages: https://web.archive.org/web/20110401183159/http://shootout.alioth.debian.org:80/

OCaml seems to offer the best tradeoff of performance vs parsimony (Haskell not so much :/)
https://blog.chewxy.com/2019/02/20/go-is-average/
http://blog.gmarceau.qc.ca/2009/05/speed-size-and-dependability-of.html
old official: https://web.archive.org/web/20130731195711/http://benchmarksgame.alioth.debian.org/u64q/code-used-time-used-shapes.php
https://web.archive.org/web/20121125103010/http://shootout.alioth.debian.org/u64q/code-used-time-used-shapes.php
Haskell does better here

other PL benchmarks:
https://github.com/kostya/benchmarks
BF 2.0:
Kotlin, C++ (GCC), Rust < Nim, D (GDC,LDC), Go, MLton < Crystal, Go (GCC), C# (.NET Core), Scala, Java, OCaml < D (DMD) < C# Mono < Javascript V8 < F# Mono, Javascript Node, Haskell (MArray) << LuaJIT << Python PyPy < Haskell < Racket <<< Python << Python3
mandel.b:
C++ (GCC) << Crystal < Rust, D (GDC), Go (GCC) < Nim, D (LDC) << C# (.NET Core) < MLton << Kotlin << OCaml << Scala, Java << D (DMD) << Go << C# Mono << Javascript Node << Haskell (MArray) << LuaJIT < Python PyPy << F# Mono <<< Racket
https://github.com/famzah/langs-performance
C++, Rust, Java w/ custom non-stdlib code < Python PyPy < C# .Net Core < Javascript Node < Go, unoptimized C++ (no -O2) << PHP << Java << Python3 << Python
comparison  pls  programming  performance  benchmarks  list  top-n  ranking  systems  time  multi  🖥  cost-benefit  tradeoffs  data  analysis  plots  visualization  measure  intricacy  parsimony  ocaml-sml  golang  rust  jvm  javascript  c(pp)  functional  haskell  backup  scala  realness  generalization  accuracy  techtariat  crosstab  database  repo  objektbuch  static-dynamic  gnu 
december 2018 by nhaliday
Who We Are | West Hunter
I’m going to review David Reich’s new book, Who We Are and How We Got Here. Extensively: in a sense I’ve already been doing this for a long time. Probably there will be a podcast. The GoFundMe link is here. You can also send money via Paypal (Use the donate button), or bitcoins to 1Jv4cu1wETM5Xs9unjKbDbCrRF2mrjWXr5. In-kind donations, such as orichalcum or mithril, are always appreciated.

This is the book about the application of ancient DNA to prehistory and history.

height difference between northern and southern europeans: https://westhunt.wordpress.com/2018/03/29/who-we-are-1/
mixing, genocide of males, etc.: https://westhunt.wordpress.com/2018/03/29/who-we-are-2-purity-of-essence/
rapid change in polygenic traits (appearance by Kevin Mitchell and funny jab at Brad Delong ("regmonkey")): https://westhunt.wordpress.com/2018/03/30/rapid-change-in-polygenic-traits/
schiz, bipolar, and IQ: https://westhunt.wordpress.com/2018/03/30/rapid-change-in-polygenic-traits/#comment-105605
Dan Graur being dumb: https://westhunt.wordpress.com/2018/04/02/the-usual-suspects/
prediction of neanderthal mixture and why: https://westhunt.wordpress.com/2018/04/03/who-we-are-3-neanderthals/
New Guineans tried to use Denisovan admixture to avoid UN sanctions (by "not being human"): https://westhunt.wordpress.com/2018/04/04/who-we-are-4-denisovans/
also some commentary on decline of Out-of-Africa, including:
"Homo Naledi, a small-brained hominin identified from recently discovered fossils in South Africa, appears to have hung around way later (up to 200,000 years ago, maybe later) than would be the case if modern humans had occupied that area back then. To be blunt, we would have eaten them."

Live Not By Lies: https://westhunt.wordpress.com/2018/04/08/live-not-by-lies/
Next he slams people that suspect that upcoming genetic analysis will, in most cases, confirm traditional stereotypes about race – the way the world actually looks.

The people Reich dumps on are saying perfectly reasonable things. He criticizes Henry Harpending for saying that he’d never seen an African with a hobby. Of course, Henry had actually spent time in Africa, and that’s what he’d seen. The implication is that people in Malthusian farming societies – which Africa was not – were selected to want to work, even where there was no immediate necessity to do so. Thus hobbies, something like a gerbil running in an exercise wheel.

He criticized Nicholas Wade, for saying that different races have different dispositions. Wade’s book wasn’t very good, but of course personality varies by race: Darwin certainly thought so. You can see differences at birth. Cover a baby’s nose with a cloth: Chinese and Navajo babies quietly breathe through their mouth, European and African babies fuss and fight.

Then he attacks Watson, for asking when Reich was going to look at Jewish genetics – the kind that has led to greater-than-average intelligence. Watson was undoubtedly trying to get a rise out of Reich, but it’s a perfectly reasonable question. Ashkenazi Jews are smarter than the average bear and everybody knows it. Selection is the only possible explanation, and the conditions in the Middle Ages – white-collar job specialization and a high degree of endogamy – were just what the doctor ordered.

Watson’s a prick, but he’s a great prick, and what he said was correct. Henry was a prince among men, and Nick Wade is a decent guy as well. Reich is totally out of line here: he’s being a dick.

Now Reich may be trying to burnish his anti-racist credentials, which surely need some renewal after he pointed out that race as colloquially used is pretty reasonable, that there’s no reason pops can’t be different, that people who said otherwise (like Lewontin, Gould, Montagu, etc.) were lying, that Aryans conquered Europe and India, while we’re tied to the train tracks with scary genetic results coming straight at us. I don’t care: he’s being a weasel, slandering the dead and abusing the obnoxious old genius who laid the foundations of his field. Reich will also get old someday: perhaps he too will someday lose track of all the nonsense he’s supposed to say, or just stop caring. Maybe he already has… I’m pretty sure that Reich does not like lying – which is why he wrote this section of the book (not at all logically necessary for his exposition of the ancient DNA work) – but the complex juggling of lies and truth required to get past the demented gatekeepers of our society may not be his forte. It has been said that if it were discovered that someone in the business was secretly an android, David Reich would be the prime suspect. No Talleyrand he.

https://westhunt.wordpress.com/2018/04/12/who-we-are-6-the-americas/
The population that accounts for the vast majority of Native American ancestry, which we will call Amerinds, came into existence somewhere in northern Asia. It was formed from a mix of Ancient North Eurasians and a population related to the Han Chinese – about 40% ANE and 60% proto-Chinese. It looks as if most of the paternal ancestry was from the ANE, while almost all of the maternal ancestry was from the proto-Han. [Aryan-Transpacific ?!?] This formation story – ANE boys, East-end girls – is similar to the formation story for the Indo-Europeans.

https://westhunt.wordpress.com/2018/04/18/who-we-are-7-africa/
In some ways, on some questions, learning more from genetics has left us less certain. At this point we really don’t know where anatomically modern humans originated. Greater genetic variety in sub-Saharan Africa has traditionally been considered a sign that AMH originated there, but it is possible that we originated elsewhere, perhaps in North Africa or the Middle East, and gained extra genetic variation when we moved into sub-Saharan Africa and mixed with various archaic groups that already existed. One consideration is that finding recent archaic admixture in a population may well be a sign that modern humans didn’t arise in that region (like language substrates) – which makes South Africa and West Africa look less likely. The long-continued existence of Homo naledi in South Africa suggests that modern humans may not have been there for all that long – if we had co-existed with Homo naledi, they probably wouldn’t have lasted long. The oldest known skull that is (probably) AMH was recently found in Morocco, while modern human remains, already known from about 100,000 years ago in Israel, have recently been found in northern Saudi Arabia.

Meanwhile, work by Nick Patterson suggests that modern humans were formed by a fusion between two long-isolated populations, a bit less than half a million years ago.

So: genomics has made the recent history of Africa pretty clear. Bantu agriculturalists expanded and replaced hunter-gatherers; farmers and herders from the Middle East settled North Africa, Egypt, and northeast Africa; while Nilotic herdsmen expanded south from the Sudan. There are traces of earlier patterns and peoples, but today, only traces. As for questions further back in time, such as the origins of modern humans – we thought we knew, and now we know we don’t. But that’s progress.

https://westhunt.wordpress.com/2018/04/18/reichs-journey/
David Reich’s professional path must have shaped his perspective on the social sciences. Look at the record. He starts his professional career examining the role of genetics in the elevated prostate cancer risk seen in African-American men. Various social-science fruitcakes opposed him even looking at the question of ancestry (African vs European). But they were wrong: certain African-origin alleles explain the increased risk. Anthropologists (and human geneticists) were sure (based on nothing) that modern humans hadn’t interbred with Neanderthals – but of course that happened. Anthropologists and archaeologists knew that Gustaf Kossinna couldn’t have been right when he said that widespread material culture corresponded to widespread ethnic groups, and that migration was the primary explanation for changes in the archaeological record – but he was right. They knew that the Indo-European languages just couldn’t have been imposed by fire and sword – but Reich’s work proved them wrong. Lots of people – the usual suspects plus Hindu nationalists – were sure that the AIT (Aryan Invasion Theory) was wrong, but it looks pretty good today.

Some sociologists believed that caste in India was somehow imposed or significantly intensified by the British – but it turns out that most jatis have been almost perfectly endogamous for two thousand years or more…

It may be that Reich doesn’t take these guys too seriously anymore. Why should he?

varnas, jatis, aryan invasion theory: https://westhunt.wordpress.com/2018/04/22/who-we-are-8-india/

europe and EEF+WHG+ANE: https://westhunt.wordpress.com/2018/05/01/who-we-are-9-europe/

https://www.nationalreview.com/2018/03/book-review-david-reich-human-genes-reveal-history/
The massive mixture events that occurred in the recent past to give rise to Europeans and South Asians, to name just two groups, were likely “male mediated.” That’s another way of saying that men on the move took local women as brides or concubines. In the New World there are many examples of this, whether it be among African Americans, where most European ancestry seems to come through men, or in Latin America, where conquistadores famously took local women as paramours. Both of these examples are disquieting, and hint at the deep structural roots of patriarchal inequality and social subjugation that form the backdrop for the emergence of many modern peoples.
west-hunter  scitariat  books  review  sapiens  anthropology  genetics  genomics  history  antiquity  iron-age  world  europe  gavisti  aDNA  multi  politics  culture-war  kumbaya-kult  social-science  academia  truth  westminster  environmental-effects  embodied  pop-diff  nordic  mediterranean  the-great-west-whale  germanic  the-classics  shift  gene-flow  homo-hetero  conquest-empire  morality  diversity  aphorism  migration  migrant-crisis  EU  africa  MENA  gender  selection  speed  time  population-genetics  error  concrete  econotariat  economics  regression  troll  lol  twitter  social  media  street-fighting  methodology  robust  disease  psychiatry  iq  correlation  usa  obesity  dysgenics  education  track-record  people  counterexample  reason  thinking  fisher  giants  old-anglo  scifi-fantasy  higher-ed  being-right  stories  reflection  critique  multiplicative  iteration-recursion  archaics  asia  developing-world  civil-liberty  anglo  oceans  food  death  horror  archaeology  gnxp  news  org:mag  right-wing  age-of-discovery  latin-america  ea 
march 2018 by nhaliday
The Falling Price of Fat | Pseudoerasmus
Summary : There are too many baroque explanations for the increased prevalence of obesity. I suggest a simple mechanism : falling food prices, rising incomes.
econotariat  broad-econ  pseudoE  economics  supply-demand  food  obesity  trends  explanans  cynicism-idealism  money  compensation  cost-benefit  backup  epidemiology  public-health  roots  regularizer  parsimony 
february 2018 by nhaliday
Plague of Frogs | West Hunter
For a few years the herpetologists were concerned yet happy. Concerned, because many frog populations were crashing and some were going extinct. Happy, because confused puppies in Washington were giving them money, something that hardly ever happens to frogmen. The theory was that amphibians were ‘canaries in a coal mine’, uniquely sensitive to environmental degradation.

...

It took some time for herpetologists to admit that this chytrid fungus is the main culprit – some are still resisting. First, it was a lot like how doctors resisted Semmelweiss’ discoveries about the cause of puerperal fever – since doctors were the main method of transmission. How did this fungus get to the cloud forests of Costa Rica? On the boots of herpetologists, of course.

The second problem is Occam’s butterknife: even though this chytrid fungus is the main culprit, it’s just got to be more complicated than that. Even if it isn’t. People in the life sciences – biology and medicine – routinely reject simple hypotheses that do a good job of explaining the data for more complex hypotheses that don’t. College taught them to think – unwisely.
west-hunter  scitariat  reflection  stories  troll  lol  science  low-hanging  occam  parsimony  bio  medicine  meta:medicine  ability-competence  explanans  disease  parasites-microbiome  spreading  world  nature  environment  climate-change  hypochondria  academia  questions  epidemiology  incentives  interests 
february 2018 by nhaliday
Why Sex? And why only in Pairs? - Marginal REVOLUTION
The core conclusion is that mutations continue to rise with the number of sex-participating partners, but in simple Red Queen models the limiting features of the genotypes are the same whether there are two, three, or more partners.

Men Are Animals: http://www.overcomingbias.com/2018/06/men-are-animals.html
I agree with all the comments citing motility/sessility.
econotariat  marginal-rev  commentary  study  summary  economics  broad-econ  interdisciplinary  bio  biodet  deep-materialism  new-religion  eden  gender  sex  EGT  explanans  red-queen  parasites-microbiome  mutation  comparison  evolution  roots  🌞  population-genetics  genetics  marginal  equilibrium  number  ecology  whole-partial-many  uniqueness  parsimony  multi  cost-benefit  outcome-risk  uncertainty  moments  spatial  travel  explore-exploit  ratty  hanson 
january 2018 by nhaliday
Ptolemy's Model of the Solar System
It follows, from the above discussion, that the geocentric model of Ptolemy is equivalent to a heliocentric model in which the various planetary orbits are represented as eccentric circles, and in which the radius vector connecting a given planet to its corresponding equant revolves at a uniform rate. In fact, Ptolemy's model of planetary motion can be thought of as a version of Kepler's model which is accurate to first order in the planetary eccentricities--see Chapter 4. According to the Ptolemaic scheme, from the point of view of the earth, the orbit of the sun is described by a single circular motion, whereas that of a planet is described by a combination of two circular motions. In reality, the single circular motion of the sun represents the (approximately) circular motion of the earth around the sun, whereas the two circular motions of a typical planet represent a combination of the planet's (approximately) circular motion around the sun, and the earth's motion around the sun. Incidentally, the popular myth that Ptolemy's scheme requires an absurdly large number of circles in order to fit the observational data to any degree of accuracy has no basis in fact. Actually, Ptolemy's model of the sun and the planets, which fits the data very well, only contains 12 circles (i.e., 6 deferents and 6 epicycles).
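The deferent-plus-epicycle equivalence described above is easy to verify numerically. A sketch with illustrative radii and rates (roughly Earth and Mars, not real ephemerides): the geocentric position of a planet on circular heliocentric orbits is its own circle minus the earth's, which is exactly one deferent plus one epicycle.

```python
import math

def circle(r, theta):
    """Point on a circle of radius r at angle theta."""
    return (r * math.cos(theta), r * math.sin(theta))

# Illustrative circular orbits (AU, revolutions per year) -- not real data.
r_earth, w_earth = 1.00, 2 * math.pi
r_mars,  w_mars  = 1.52, 2 * math.pi / 1.88

t = 0.37  # an arbitrary time in years
ex, ey = circle(r_earth, w_earth * t)
mx, my = circle(r_mars, w_mars * t)
geo = (mx - ex, my - ey)  # Mars as seen from the earth

# Ptolemaic construction: deferent = Mars's circle; epicycle = a circle of
# the earth-sun radius, turning at the earth's rate, phase-shifted half a turn.
dx, dy = circle(r_mars, w_mars * t)
px, py = circle(r_earth, w_earth * t + math.pi)
ptolemy = (dx + px, dy + py)

# The two constructions agree to floating-point precision.
print(max(abs(geo[0] - ptolemy[0]), abs(geo[1] - ptolemy[1])) < 1e-12)  # True
```

This is why the "absurd number of circles" myth fails: two circles per planet already reproduce the heliocentric kinematics.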
org:junk  org:edu  nibble  physics  space  mechanics  history  iron-age  mediterranean  the-classics  science  the-trenches  the-great-west-whale  giants  models  intricacy  parsimony 
september 2017 by nhaliday
All models are wrong - Wikipedia
Box repeated the aphorism in a paper that was published in the proceedings of a 1978 statistics workshop.[2] The paper contains a section entitled "All models are wrong but some are useful". The section is copied below.

Now it would be very remarkable if any system existing in the real world could be exactly represented by any simple model. However, cunningly chosen parsimonious models often do provide remarkably useful approximations. For example, the law PV = RT relating pressure P, volume V and temperature T of an "ideal" gas via a constant R is not exactly true for any real gas, but it frequently provides a useful approximation and furthermore its structure is informative since it springs from a physical view of the behavior of gas molecules.

For such a model there is no need to ask the question "Is the model true?". If "truth" is to be the "whole truth" the answer must be "No". The only question of interest is "Is the model illuminating and useful?".
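Box's ideal-gas example in numbers, as a sketch (per mole, with R in L·atm/(mol·K)): the law is not exactly true for any real gas, but it lands within a fraction of a percent at ordinary conditions.

```python
# "All models are wrong, but some are useful": PV = nRT is false for every
# real gas, yet a very good approximation at ordinary conditions.
R = 0.082057  # gas constant in L*atm/(mol*K)

def ideal_gas_pressure(n, V, T):
    """Pressure in atm from the ideal-gas law, P = nRT/V."""
    return n * R * T / V

# One mole occupying the standard molar volume at 0 C:
P = ideal_gas_pressure(n=1.0, V=22.414, T=273.15)
print(round(P, 3))  # about 1 atm
```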
thinking  metabuch  metameta  map-territory  models  accuracy  wire-guided  truth  philosophy  stats  data-science  methodology  lens  wiki  reference  complex-systems  occam  parsimony  science  nibble  hi-order-bits  info-dynamics  the-trenches  meta:science  physics  fluid  thermo  stat-mech  applicability-prereqs  theory-practice  elegance  simplification-normalization 
august 2017 by nhaliday
Kelly criterion - Wikipedia
In probability theory and intertemporal portfolio choice, the Kelly criterion, Kelly strategy, Kelly formula, or Kelly bet, is a formula used to determine the optimal size of a series of bets. In most gambling scenarios, and some investing scenarios under some simplifying assumptions, the Kelly strategy will do better than any essentially different strategy in the long run (that is, over a span of time in which the observed fraction of bets that are successful equals the probability that any given bet will be successful). It was described by J. L. Kelly, Jr, a researcher at Bell Labs, in 1956.[1] The practical use of the formula has been demonstrated.[2][3][4]

The Kelly Criterion is to bet a predetermined fraction of assets and can be counterintuitive. In one study,[5][6] each participant was given $25 and asked to bet on a coin that would land heads 60% of the time. Participants had 30 minutes to play, so could place about 300 bets, and the prizes were capped at $250. Behavior was far from optimal. "Remarkably, 28% of the participants went bust, and the average payout was just $91. Only 21% of the participants reached the maximum. 18 of the 61 participants bet everything on one toss, while two-thirds gambled on tails at some stage in the experiment." Using the Kelly criterion and based on the odds in the experiment, the right approach would be to bet 20% of the pot on each throw (see first example in Statement below). If losing, the size of the bet gets cut; if winning, the stake increases.
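The 20% figure comes from the standard Kelly formula for a binary bet, f* = p - q/b; a minimal sketch:

```python
# Kelly fraction for a binary bet: f* = p - q/b, where p is the win
# probability, q = 1 - p, and b is the net odds paid on a win.
def kelly_fraction(p, b=1.0):
    """Optimal fraction of bankroll to stake (0 when the bet has no edge)."""
    f = p - (1.0 - p) / b
    return max(f, 0.0)

# The coin in the study: 60% heads, even-money payout -> stake 20% of the pot.
print(round(kelly_fraction(0.60), 3))  # 0.2
```

A fair coin at even money gives f* = 0, i.e. don't bet, which is why going all-in (as 18 of the 61 participants did) is so far from optimal.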
nibble  betting  investing  ORFE  acm  checklists  levers  probability  algorithms  wiki  reference  atoms  extrema  parsimony  tidbits  decision-theory  decision-making  street-fighting  mental-math  calculation 
august 2017 by nhaliday
A Little More Nuance
economics lowest, though still increasing (also most successful frankly, wonder why? :P):
In this view, the trajectories of the disciplines relative to one another are sharpened. I have to say that if you’d asked me ex ante to rank fields by nuance I would have come up with an ordering much like the one visible at the end of the trend lines. But it also seems that social science fields were not differentiated in this way until comparatively recently. Note that the negative trend line for Economics is relative not to the rate of nuance within field itself—which is going up, as it is everywhere—but rather with respect to the base rate. The trend line for Philosophy is also worth remarking on. It differs quite markedly from the others, as it has a very high nuance rate in the first few decades of the twentieth century, which then sharply declines, and rejoins the upward trend in the 1980s. I have not looked at the internal structure of this trend any further, but it is very tempting to read it as the post-WWI positivists bringing the hammer down on what they saw as nonsense in their own field. That’s putting it much too sharply, of course, but then again that’s partly why we’re here in the first place.

https://twitter.com/GabrielRossman/status/879698510077059074
hmm: https://kieranhealy.org/files/papers/fuck-nuance.pdf
scitariat  social-science  sociology  philosophy  history  letters  psychology  sapiens  anthropology  polisci  economics  anglo  language  data  trends  pro-rata  visualization  mostly-modern  academia  intricacy  vague  parsimony  clarity  multi  twitter  social  commentary  jargon  pdf  study  essay  rhetoric  article  time-series  lexical  grokkability-clarity 
june 2017 by nhaliday
Overcoming Bias : A Tangled Task Future
So we may often retain systems that inherit the structure of the human brain, and the structures of the social teams and organizations by which humans have worked together. All of which is another way to say: descendants of humans may have a long future as workers. We may have another future besides being retirees or iron-fisted peons ruling over gods. Even in a competitive future with no friendly singleton to ensure preferential treatment, something recognizably like us may continue. And even win.
ratty  hanson  speculation  automation  labor  economics  ems  futurism  prediction  complex-systems  network-structure  intricacy  thinking  engineering  management  law  compensation  psychology  cog-psych  ideas  structure  gray-econ  competition  moloch  coordination  cooperate-defect  risk  ai  ai-control  singularity  number  humanity  complement-substitute  cybernetics  detail-architecture  legacy  threat-modeling  degrees-of-freedom  composition-decomposition  order-disorder  analogy  parsimony  institutions  software  coupling-cohesion 
june 2017 by nhaliday
Rheumatoid Arthritis | West Hunter
It causes characteristic changes in the bones. Key point: it is vanishingly rare in Old World skeletons before the 17th century. Those changes, however, have been seen in some pre-Columbian Amerindian skeletons [work by Bruce Rothschild].

The obvious explanation is that RA is caused by some pathogen that originated in the Americas and later spread to the rest of the world.  Like the French disease.

https://westhunt.wordpress.com/2012/05/09/montezumas-revenge/
Everybody knows that the Amerindians were devastated by new infectious diseases after Columbus discovered America and made it stick. Smallpox, falciparum malaria, yellow fever, bubonic plague, cholera, measles, whooping cough, etc : by some estimates, the Amerindian population dropped by about 90%, worse than the Black Plague, which only killed off half of Europe. Naturally, you wonder what ailments the Americas exported to the rest of the world.

We know of two for sure. First, syphilis: the first known epidemic was in 1495, in Naples, during a French invasion. By 1520 it had reached Africa and China.

From the timing of the first epidemic, and the apparent newness of the disease, many have suspected that it was an import from the New World. Some, like Bartolome de las Casas, had direct knowledge: Las Casas was in Seville in 1493, his father and uncle sailed with Columbus on the second voyage, and he himself traveled to the New World in 1502, where he spent most of the rest of his life working with the Amerindians. Ruiz Diaz de Isla, a Spanish physician, reported treating some of Columbus’s crew for syphilis, and that he had observed its rapid spread in Barcelona.

I have seen someone object to this scenario, on the grounds that the two years after Columbus’s return surely couldn’t have been long enough to generate a major outbreak. I think maybe that guy doesn’t get out much. It has always looked plausible, considering paleopathological evidence (bone changes) and the timing of the first epidemic. Recent analysis shows that some American strains of pinta (a treponemal skin disease) are genetically closest to the venereal strains. I’d say the Columbian theory is pretty well established, at this point.

Interestingly, before the genetic evidence, this was one of the longest-running disputes among historians. As far as I can tell, part of the problem was (and is) that many in the social sciences routinely apply Ockham’s razor in reverse. Simple explanations are bad, even when they fit all the facts. You see this in medicine, too.

...

There are two other diseases that are suspected of originating in the Americas. The first is typhus, gaol fever, caused by a Rickettsial organism and usually spread by lice. Sometimes it recurs after many years, in a mild form called Brill’s disease, rather like chickenpox and shingles. This means that typhus is always waiting in the wings: if the world gets sufficiently messed up, it will reappear.

Typhus shows up most often in war, usually in cool countries. There is a claim that there was a clear epidemic in Granada in 1489, which would definitely predate Columbus, but descriptions of disease symptoms by premodern physicians are amazingly unreliable. The first really reliable description seems to have been by Fracastoro, in 1546 (according to Hans Zinsser in Rats, Lice, and History). The key hint is the existence of a very closely related organism in American flying squirrels.

Thinking about it, I have the impression that the legions of the Roman Republic didn’t have high casualties due to infectious disease, while that was the dominant cause of death in more recent European armies, up until the 20th century. If smallpox, measles, syphilis, bubonic plague, perhaps typhus, simply hadn’t arrived yet, this makes sense. Falciparum malaria wasn’t much of a factor in northern Italy until Imperial times…

The second possibly American disease is rheumatoid arthritis. We don’t even know that it has an infectious cause – but we do know that it causes characteristic skeletal changes, and that no clear-cut pre-Columbian rheumatoid skeletons are known from the Old World, while a number have been found in the lower South. To me, this makes some infectious cause seem likely: it would very much be worth following this up with the latest molecular genetic methods.

American crops like maize and potatoes more than canceled the demographic impact of syphilis and typhus. But although the Old World produced more dangerous pathogens than the Americas, due to size, longer time depth of agriculture, and more domesticated animals, luck played a role, too. Something as virulent as smallpox or falciparum malaria could have existed in the Americas, and if it had, Europe would have been devastated.

https://westhunt.wordpress.com/2012/05/09/montezumas-revenge/#comment-2910
Malaria came from Africa, probably. There are old primate versions. Smallpox, dunno: I have heard people suggest viral infections of cows and monkeys as ancestral. Measles is derived from rinderpest, probably less than two thousand years ago.

Falciparum malaria has been around for a while, but wasn’t found near Rome during the Republic. It seems to have gradually moved north in Italy during classical times, maybe because the range of the key mosquito species was increasing. By early medieval times it was a big problem around Rome.

Smallpox probably did not exist in classical Greece: there is no clear description in the literature of the time. It may have arrived in the Greco-Roman world in 165 AD, as the Antonine plague.

The Pathogenesis of Rheumatoid Arthritis: http://sci-hub.tw/http://www.nejm.org/doi/full/10.1056/NEJMra1004965

https://westhunt.wordpress.com/2017/08/27/age-of-discovery-pandora/
In the Age of Discovery, Europeans were playing with fire. Every voyage of exploration risked bringing back some new plague. From the New World, syphilis, probably typhus and rheumatoid arthritis. From India, cholera. HIV, recently, from Africa. Comparably important new pests attacking important crops and domesticated animals also arrived, such as grape phylloxera (which wiped out most of the vineyards of Europe) and potato blight (an oomycete or ‘water mold’, from central Mexico).

If one of those plagues had been as potent as smallpox or falciparum malaria, you probably wouldn’t be reading this.
west-hunter  scitariat  discussion  ideas  speculation  critique  disease  parasites-microbiome  usa  age-of-discovery  europe  embodied  history  early-modern  multi  spreading  random  counterfactual  🌞  occam  parsimony  archaeology  cost-benefit  india  asia  africa  agriculture  uncertainty  outcome-risk  red-queen  epidemiology  thick-thin  pdf  piracy  study  article  survey  iron-age  the-classics  mediterranean  novelty  poast  explanans  roots  prioritizing 
may 2017 by nhaliday
Annotating Greg Cochran’s interview with James Miller
https://westhunt.wordpress.com/2017/04/05/interview-2/
opinion of Scott and Hanson: https://westhunt.wordpress.com/2017/04/05/interview-2/#comment-90238
Greg's methodist: https://westhunt.wordpress.com/2017/04/05/interview-2/#comment-90256
https://westhunt.wordpress.com/2017/04/05/interview-2/#comment-90299
You have to consider the relative strengths of Japan and the USA. USA was ~10x stronger, industrially, which is what mattered. Technically superior (radar, Manhattan project). Almost entirely self-sufficient in natural resources. Japan was sure to lose, and too crazy to quit, which meant that they would lose after being smashed flat.
--
There’s a fairly common way of looking at things in which the bad guys are not at fault because they’re bad guys, born that way, and thus can’t help it. Well, we can’t help it either, so the hell with them. I don’t think we had to respect Japan’s innate need to fuck everybody in China to death.

https://westhunt.wordpress.com/2017/03/25/ramble-on/
https://westhunt.wordpress.com/2017/03/24/topics/
https://soundcloud.com/user-519115521/greg-cochran-part-1
2nd part: https://pinboard.in/u:nhaliday/b:9ab84243b967

some additional things:
- political correctness, the Cathedral and the left (personnel continuity but not ideology/value) at start
- joke: KT impact = asteroid mining, every mass extinction = intelligent life destroying itself
- Alawites: not really Muslim, women liberated because "they don't have souls", ended up running shit in Syria because they were only ones that wanted to help the British during colonial era
- solution to Syria: "put the Alawites in NYC"
- Zimbabwe was OK for a while, if South Africa goes sour, just "put the Boers in NYC" (Miller: left would probably say they are "culturally incompatible", lol)
- story about Lincoln and his great-great-great-grandfather
- skepticism of free speech
- free speech, authoritarianism, and defending against the Mongols
- Scott crazy (not in a terrible way), LW crazy (genetics), ex.: polyamory
- TFP or microbio are better investments than stereotypical EA stuff
- just ban AI worldwide (bully other countries to enforce)
- bit of a back-and-forth about macroeconomics
- not sure climate change will be huge issue. world's been much warmer before and still had a lot of mammals, etc.
- he quite likes Pseudoerasmus
- shits on modern conservatism/Bret Stephens a bit

- mentions Japan having industrial base a tenth the size of the US's and no chance of winning WW2 around 11m mark
- describes himself as "fairly religious" around 20m mark
- 27m30s: Eisenhower was smart, read Carlyle, classical history, etc.

but was Nixon smarter?: https://www.gnxp.com/WordPress/2019/03/18/open-thread-03-18-2019/
The Scandals of Meritocracy. Virtue vs. competence. Would you rather have a boss who is evil but competent, or good but incompetent? The reality is you have to balance the two. Richard Nixon was probably smarter than Dwight Eisenhower in raw g, but Eisenhower was probably a better person.
org:med  west-hunter  scitariat  summary  links  podcast  audio  big-picture  westminster  politics  culture-war  academia  left-wing  ideology  biodet  error  crooked  bounded-cognition  stories  history  early-modern  africa  developing-world  death  mostly-modern  deterrence  japan  asia  war  meta:war  risk  ai  climate-change  speculation  agriculture  environment  prediction  religion  islam  iraq-syria  gender  dominant-minority  labor  econotariat  cracker-econ  coalitions  infrastructure  parasites-microbiome  medicine  low-hanging  biotech  terrorism  civil-liberty  civic  social-science  randy-ayndy  law  polisci  government  egalitarianism-hierarchy  expression-survival  disease  commentary  authoritarianism  being-right  europe  nordic  cohesion  heuristic  anglosphere  revolution  the-south  usa  thinking  info-dynamics  yvain  ssc  lesswrong  ratty  subculture  values  descriptive  epistemic  cost-disease  effective-altruism  charity  econ-productivity  technology  rhetoric  metameta  ai-control  critique  sociology  arms  paying-rent  parsimony  writing  realness  migration  eco 
april 2017 by nhaliday
Mean field theory - Wikipedia
In physics and probability theory, mean field theory (MFT also known as self-consistent field theory) studies the behavior of large and complex stochastic models by studying a simpler model. Such models consider a large number of small individual components which interact with each other. The effect of all the other individuals on any given individual is approximated by a single averaged effect, thus reducing a many-body problem to a one-body problem.
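A concrete instance: the mean-field Ising ferromagnet, where each spin sees only the average magnetization of its z neighbors, reducing the many-body problem to the self-consistency condition m = tanh(zJm/T). A minimal sketch (units with k_B = 1; parameters illustrative):

```python
import math

def mean_field_magnetization(T, J=1.0, z=4, tol=1e-10):
    """Solve the mean-field self-consistency m = tanh(z*J*m/T) by
    fixed-point iteration (k_B = 1)."""
    m = 0.9  # start from an ordered guess to land on the nonzero branch
    for _ in range(10000):
        m_new = math.tanh(z * J * m / T)
        if abs(m_new - m) < tol:
            break
        m = m_new
    return m_new

# Below the mean-field critical temperature T_c = z*J = 4 the spins order;
# above it the only solution is m = 0.
print(round(mean_field_magnetization(T=2.0), 3))  # ordered phase, m near 1
print(round(mean_field_magnetization(T=8.0), 3))  # disordered phase, m = 0
```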
concept  atoms  models  physics  stat-mech  ising  approximation  parsimony  wiki  reference  nibble 
march 2017 by nhaliday
INFECTIOUS CAUSATION OF DISEASE: AN EVOLUTIONARY PERSPECTIVE
A New Germ Theory: https://www.theatlantic.com/magazine/archive/1999/02/a-new-germ-theory/377430/
The dictates of evolution virtually demand that the causes of some of humanity's chronic and most baffling "noninfectious" illnesses will turn out to be pathogens -- that is the radical view of a prominent evolutionary biologist

A LATE-SEPTEMBER heat wave enveloped Amherst College, and young people milled about in shorts or sleeveless summer frocks, or read books on the grass. Inside the red-brick buildings framing the leafy quadrangle students listened to lectures on Ellison and Emerson, on Paul Verlaine and the Holy Roman Empire. Few suspected that strains of the organism that causes cholera were growing nearby, in the Life Sciences Building. If they had known, they would probably not have grasped the implications. But these particular strains of cholera make Paul Ewald smile; they are strong evidence that he is on the right track. Knowing the rules of evolutionary biology, he believes, can change the course of infectious disease.

https://www.theatlantic.com/past/docs/issues/99feb/germ2.htm
I HAVE a motto," Gregory Cochran told me recently. "'Big old diseases are infectious.' If it's common, higher than one in a thousand, I get suspicious. And if it's old, if it has been around for a while, I get suspicious."

https://www.theatlantic.com/past/docs/issues/99feb/germ3.htm
pdf  study  speculation  bio  evolution  sapiens  parasites-microbiome  red-queen  disease  west-hunter  🌞  unit  nibble  len:long  biodet  EGT  wild-ideas  big-picture  epidemiology  deep-materialism  🔬  spearhead  scitariat  maxim-gun  ideas  lens  heterodox  darwinian  equilibrium  medicine  heuristic  spreading  article  psychiatry  QTL  distribution  behavioral-gen  genetics  population-genetics  missing-heritability  gender  sex  sexuality  cardio  track-record  aging  popsci  natural-experiment  japan  asia  meta:medicine  profile  ability-competence  empirical  theory-practice  data  magnitude  scale  cost-benefit  is-ought  occam  parsimony  stress  GWAS  roots  explanans  embodied  obesity  geography  canada  britain  anglo  trivia  cocktail  shift  aphorism  stylized-facts  evidence  inference  psycho-atoms 
february 2017 by nhaliday
Information Processing: Epistasis vs additivity
On epistasis: why it is unimportant in polygenic directional selection: http://rstb.royalsocietypublishing.org/content/365/1544/1241.short
- James F. Crow

The Evolution of Multilocus Systems Under Weak Selection: http://www.genetics.org/content/genetics/134/2/627.full.pdf
- Thomas Nagylaki

Data and Theory Point to Mainly Additive Genetic Variance for Complex Traits: http://journals.plos.org/plosgenetics/article?id=10.1371/journal.pgen.1000008
The relative proportion of additive and non-additive variation for complex traits is important in evolutionary biology, medicine, and agriculture. We address a long-standing controversy and paradox about the contribution of non-additive genetic variation, namely that knowledge about biological pathways and gene networks imply that epistasis is important. Yet empirical data across a range of traits and species imply that most genetic variance is additive. We evaluate the evidence from empirical studies of genetic variance components and find that additive variance typically accounts for over half, and often close to 100%, of the total genetic variance. We present new theoretical results, based upon the distribution of allele frequencies under neutral and other population genetic models, that show why this is the case even if there are non-additive effects at the level of gene action. We conclude that interactions at the level of genes are not likely to generate much interaction at the level of variance.
hsu  scitariat  commentary  links  study  list  evolution  population-genetics  genetics  methodology  linearity  nonlinearity  comparison  scaling-up  nibble  lens  bounded-cognition  ideas  bio  occam  parsimony  🌞  summary  quotes  multi  org:nat  QTL  stylized-facts  article  explanans  sapiens  biodet  selection  variance-components  metabuch  thinking  models  data  deep-materialism  chart  behavioral-gen  evidence-based  empirical  mutation  spearhead  model-organism  bioinformatics  linear-models  math  magnitude  limits  physics  interdisciplinary  stat-mech 
february 2017 by nhaliday
Not Final! | West Hunter
In mathematics we often prove that some proposition is true by showing that  the alternative is false.  The principle can sometimes work in other disciplines, but it’s tricky.  You have to have a very good understanding  to know that some things are impossible (or close enough to impossible).   You can do it fairly often in physics, less often in biology.
west-hunter  science  history  reflection  epistemic  occam  contradiction  parsimony  noise-structure  scitariat  info-dynamics  hetero-advantage  sapiens  evolution  disease  sexuality  ideas  genetics  s:*  thinking  the-trenches  no-go  thick-thin  theory-practice  inference  apollonian-dionysian  elegance  applicability-prereqs  necessity-sufficiency 
november 2016 by nhaliday
Hidden Games | West Hunter
Since we are arguably a lot smarter than ants or bees, you might think that most adaptive personality variation in humans would be learned (a response to exterior cues) rather than heritable. Maybe some is, but much variation looks heritable. People don’t seem to learn to be aggressive or meek – they just are, and in those tendencies resemble their biological parents. I wish I (or anyone else) understood better why this is so, but there are some notions floating around that may explain it. One is that jacks of all trades are masters of none: if you play the same role all the time, you’ll be better at it than someone who keeps switching personalities. It could be the case that such switching is physiologically difficult and/or expensive. And in at least some cases, being predictable has social value. Someone who is known to be implacably aggressive will win at ‘chicken’. Being known as the sort of guy who would rush into a burning building to save ugly strangers may pay off, even though actually running into that blaze does not.

...

This kind of game-theoretic genetic variation, driving distinct behavioral strategies, can have some really odd properties. For one thing, there can be more than one possible stable mix of behavioral types even in identical ecological situations. It’s a bit like dropping a marble onto a hilly landscape with many unconnected valleys – it will roll to the bottom of some valley, but initial conditions determine which valley. Small perturbations will not knock the marble out of the valley it lands in. In the same way, two human populations could fall into different states, different stable mixes of behavioral traits, for no reason at all other than chance and then stay there indefinitely. Populations are even more likely to fall into qualitatively different stable states when the ecological situations are significantly different.
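The marble-and-valleys picture corresponds to replicator dynamics with more than one stable rest point: identical payoffs, different starting mixes, different long-run states. A toy sketch with a generic two-strategy coordination game (the payoffs are illustrative, not a model of any particular trait):

```python
def replicator(x, payoff, steps=2000, dt=0.01):
    """Discrete-time replicator dynamics for a symmetric 2-strategy game.
    x is the initial share of strategy A; payoff is a 2x2 matrix."""
    (a, b), (c, d) = payoff
    for _ in range(steps):
        fA = a * x + b * (1 - x)        # fitness of A-types
        fB = c * x + d * (1 - x)        # fitness of B-types
        fbar = x * fA + (1 - x) * fB    # population mean fitness
        x += dt * x * (fA - fbar)       # replicator update
        x = min(max(x, 0.0), 1.0)
    return x

# hypothetical coordination payoffs: each type does best among its own kind
coord = [[2, 0], [0, 1]]
# same "ecology", different initial conditions, different stable outcomes:
share_from_40 = replicator(0.40, coord)   # rolls into the all-A valley
share_from_25 = replicator(0.25, coord)   # rolls into the all-B valley
```

Starting on either side of the unstable interior mix (x = 1/3 here), the population settles into a different valley and stays there, which is exactly the marble analogy.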

...

What this means, I think, is that it is entirely possible that human societies fall into fundamentally different patterns because of genetic influences on behavior that are best understood via evolutionary game theory. Sometimes one population might have a psychological type that doesn’t exist at all in another society, or the distribution could be substantially different. Sometimes these different social patterns will be predictable results of different ecological situations, sometimes the purest kind of chance. Sometimes the internal dynamics of these genetic systems will produce oscillatory (or chaotic!) changes in gene frequencies over time, which means changes in behavior and personality over time. In some cases, these internal genetic dynamics may be the fundamental reason for the rise and fall of empires. Societies in one stable distribution, in a particular psychological/behavioral/life history ESS, may simply be unable to replicate some of the institutions found in peoples in a different ESS.

Evolutionary forces themselves vary according to what ESS you’re in. Which ESS you’re in may be the most fundamental ethnic fact, and explain the most profound ethnic behavioral differences.

Look, everyone is always looking for the secret principles that underlie human society and history, some algebra that takes mounds of historical and archaeological data – the stuff that happens – and explains it in some compact way, lets us understand it, just as continental drift made a comprehensible story out of geology. On second thought, ‘everyone’ means that smallish fraction of researchers that are slaves of curiosity…

This approach isn’t going to explain everything – nothing will. But it might explain a lot, which would make it a hell of a lot more valuable than modern sociology or cultural anthropology. I would hope that an analysis of this sort might help explain fundamental long-term flavor differences between different human societies, differences in life-history strategies especially (dads versus cads, etc). If we get particularly lucky, maybe we’ll have some notions of why the Mayans got bored with civilization, why Chinese kids are passive at birth while European and African kids are feisty. We’ll see.

Of course we could be wrong. It’s going to have to be tested and checked: it’s not magic. It is based on the realization that the sort of morphs and game-theoretic balances we see in some nonhuman species are if anything more likely to occur in humans, because our societies are so complex, because the effectiveness of a course of action so often depends on the psychologies of other individuals – that and the obvious fact that people are not the same everywhere.
west-hunter  sapiens  game-theory  evolution  personality  thinking  essay  adversarial  GT-101  EGT  scitariat  tradeoffs  equilibrium  strategy  distribution  sociality  variance-components  flexibility  rigidity  diversity  biodet  behavioral-gen  nature  within-without  roots  explanans  psychology  social-psych  evopsych  intricacy  oscillation  pro-rata  iteration-recursion  insight  curiosity  letters  models  theory-practice  civilization  latin-america  farmers-and-foragers  age-of-discovery  china  asia  sinosphere  europe  the-great-west-whale  africa  developmental  empirical  humanity  courage  virtu  theory-of-mind  reputation  cybernetics  random  degrees-of-freedom  manifolds  occam  parsimony  turchin  broad-econ  deep-materialism  cultural-dynamics  anthropology  cliometrics  hari-seldon  learning  ecology  context  leadership  cost-benefit  apollonian-dionysian  detail-architecture  history  antiquity  pop-diff  comparison  plots  being-becoming 
november 2016 by nhaliday
Paul Krugman Is an "Evolution Groupie" - Evonomics
Let me give you an example. William Hamilton’s wonderfully named paper “Geometry for the Selfish Herd” imagines a group of frogs sitting at the edge of a circular pond, from which a snake may emerge – and he supposes that the snake will grab and eat the nearest frog. Where will the frogs sit? To compress his argument, Hamilton points out that if there are two groups of frogs around the pool, each group has an equal chance of being targeted, and so does each frog within each group – which means that the chance of being eaten is less if you are a frog in the larger group. Thus if you are a frog trying to maximize your chance of survival, you will want to be part of the larger group; and the equilibrium must involve clumping of all the frogs as close together as possible.

Notice what is missing from this analysis. Hamilton does not talk about the evolutionary dynamics by which frogs might acquire a sit-with-the-other-frogs instinct; he does not take us through the intermediate steps along the evolutionary path in which frogs had not yet completely “realized” that they should stay with the herd. Why not? Because to do so would involve him in enormous complications that are basically irrelevant to his point, whereas – ahem – leapfrogging straight over these difficulties to look at the equilibrium in which all frogs maximize their chances given what the other frogs do is a very parsimonious, sharp-edged way of gaining insight.
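Krugman's compressed version of Hamilton reduces to simple arithmetic; a toy sketch of the equilibrium logic (my framing and numbers, not Hamilton's notation):

```python
from fractions import Fraction

def eaten_prob(group_sizes):
    """Per-frog predation probability when the snake targets each group
    with equal probability and each frog within a group equally often."""
    k = len(group_sizes)
    return [Fraction(1, k * n) for n in group_sizes]

# two hypothetical groups of 5 and 50 frogs around the pond:
p_small, p_large = eaten_prob([5, 50])
```

Since the per-frog risk is 1/(k·n), any frog lowers its risk by joining the bigger clump; iterating that incentive drives everyone into a single huddle, which is the equilibrium Krugman praises as parsimonious.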
essay  economics  evolution  interdisciplinary  methodology  reflection  krugman  heterodox  🎩  econotariat  c:**  equilibrium  parsimony  complex-systems  lens  competition  news  org:sci  org:mag  elegance 
november 2016 by nhaliday
Beauty is Fit | Carcinisation
Cage’s music is an example of the tendency for high-status human domains to ignore fit with human nervous systems in favor of fit with increasingly rarified abstract cultural systems. Human nervous systems are limited. Representation of existing forms, and generating pleasure and poignancy in human minds, are often disdained as solved problems. Domains unhinged from the desires and particularities of human nervous systems and bodies become inhuman; human flourishing, certainly, is not a solved problem. However, human nervous systems themselves create and seek out “fit” of the more abstract sort; the domain of abstract systems is part of the natural human environment, and the forms that exist there interact with humans as symbiotes. Theorems and novels and money and cathedrals rely on humans for reproduction, like parasites, but offer many benefits to humans in exchange. Humans require an environment that fits their nervous systems, but part of the definition of “fit” in this case is the need for humans to feel that they are involved in something greater (and perhaps more abstract) than this “animal” kind of fit.
essay  reflection  carcinisation  🦀  aesthetics  postrat  thinking  culture  insight  operational  minimalism  parsimony  beauty  elegance 
november 2016 by nhaliday
Epistemic learned helplessness - Jackdaws love my big sphinx of quartz
I don’t think I’m overselling myself too much to expect that I could argue circles around the average uneducated person. Like I mean that on most topics, I could demolish their position and make them look like an idiot. Reduce them to some form of “Look, everything you say fits together and I can’t explain why you’re wrong, I just know you are!” Or, more plausibly, “Shut up I don’t want to talk about this!”

And there are people who can argue circles around me. Maybe not on every topic, but on topics where they are experts and have spent their whole lives honing their arguments. When I was young I used to read pseudohistory books; Immanuel Velikovsky’s Ages in Chaos is a good example of the best this genre has to offer. I read it and it seemed so obviously correct, so perfect, that I could barely bring myself to bother to search out rebuttals.

And then I read the rebuttals, and they were so obviously correct, so devastating, that I couldn’t believe I had ever been so dumb as to believe Velikovsky.

And then I read the rebuttals to the rebuttals, and they were so obviously correct that I felt silly for ever doubting.

And so on for several more iterations, until the labyrinth of doubt seemed inescapable. What finally broke me out wasn’t so much the lucidity of the consensus view so much as starting to sample different crackpots. Some were almost as bright and rhetorically gifted as Velikovsky, all presented insurmountable evidence for their theories, and all had mutually exclusive ideas. After all, Noah’s Flood couldn’t have been a cultural memory both of the fall of Atlantis and of a change in the Earth’s orbit, let alone of a lost Ice Age civilization or of megatsunamis from a meteor strike. So given that at least some of those arguments are wrong and all seemed practically proven, I am obviously just gullible in the field of ancient history. Given a total lack of independent intellectual steering power and no desire to spend thirty years building an independent knowledge base of Near Eastern history, I choose to just accept the ideas of the prestigious people with professorships in Archaeology, rather than those of the universally reviled crackpots who write books about Venus being a comet.

You could consider this a form of epistemic learned helplessness, where I know any attempt to evaluate the arguments is just going to be a bad idea so I don’t even try. If you have a good argument that the Early Bronze Age worked completely differently from the way mainstream historians believe, I just don’t want to hear about it. If you insist on telling me anyway, I will nod, say that your argument makes complete sense, and then totally refuse to change my mind or admit even the slightest possibility that you might be right.

(This is the correct Bayesian action: if I know that a false argument sounds just as convincing as a true argument, argument convincingness provides no evidence either way. I should ignore it and stick with my prior.)
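The parenthetical is just Bayes' rule with a likelihood ratio of 1; a minimal check with hypothetical numbers:

```python
def posterior(prior, p_convincing_if_true, p_convincing_if_false):
    """Bayes' rule: updated P(claim) after hearing a convincing argument."""
    num = p_convincing_if_true * prior
    return num / (num + p_convincing_if_false * (1 - prior))

# if false arguments sound exactly as convincing as true ones, the
# likelihood ratio is 1 and the posterior equals the prior:
stuck = posterior(0.1, 0.9, 0.9)      # ≈ 0.1, no update

# evidence only moves you when convincingness tracks truth:
moved = posterior(0.1, 0.9, 0.3)      # > 0.1
```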

...

Even the smartest people I know have a commendable tendency not to take certain ideas seriously. Bostrom’s simulation argument, the anthropic doomsday argument, Pascal’s Mugging – I’ve never heard anyone give a coherent argument against any of these, but I’ve also never met anyone who fully accepts them and lives life according to their implications.

A friend tells me of a guy who once accepted fundamentalist religion because of Pascal’s Wager. I will provisionally admit that this person “takes ideas seriously”. Everyone else gets partial credit, at best.

...

Responsible doctors are at the other end of the spectrum from terrorists here. I once heard someone rail against how doctors totally ignored all the latest and most exciting medical studies. The same person, practically in the same breath, then railed against how 50% to 90% of medical studies are wrong. These two observations are not unrelated. Not only are there so many terrible studies, but pseudomedicine (not the stupid homeopathy type, but the type that links everything to some obscure chemical on an out-of-the-way metabolic pathway) has, for me, proven much like pseudohistory – unless I am an expert in that particular subsubfield of medicine, it can sound very convincing even when it’s very wrong.

The medical establishment offers a shiny tempting solution. First, a total unwillingness to trust anything, no matter how plausible it sounds, until it’s gone through an endless cycle of studies and meta-analyses. Second, a bunch of Institutes and Collaborations dedicated to filtering through all these studies and analyses and telling you what lessons you should draw from them.

I’m glad that some people never develop epistemic learned helplessness, or develop only a limited amount of it, or only in certain domains. It seems to me that although these people are more likely to become terrorists or Velikovskians or homeopaths, they’re also the only people who can figure out if something basic and unquestionable is wrong, and make this possibility well-known enough that normal people start becoming willing to consider it.

But I’m also glad epistemic learned helplessness exists. It seems like a pretty useful social safety valve most of the time.
yvain  essay  thinking  rationality  philosophy  reflection  ratty  ssc  epistemic  🤖  2013  minimalism  intricacy  p:null  info-dynamics  truth  reason  s:**  contrarianism  subculture  inference  bayesian  priors-posteriors  debate  rhetoric  pessimism  nihil  spreading  flux-stasis  robust  parsimony  dark-arts  illusion 
october 2016 by nhaliday
weaponizing smallpox | West Hunter
As I have said before, it seems likely to me that the Soviet Union put so much effort into treaty-violating biological warfare because the guys at the top believed in it – because they had seen it work, the same reason that they were such tank enthusiasts. One more point on the likely use of tularemia at Stalingrad: in the summer of ’42 the Germans had occupied regions holding 40% of the Soviet Union’s population. The Soviets had a tularemia program: if not then [“Not One Step Back!”], when would they have used it? When would Stalin have used it? Imagine that someone intent on the destruction of the American republic and the extermination of its people [remember the Hunger Plan?] had taken over everything west of the Mississippi: would that be too early to pull out all the stops? Reminds me of an old Mr Boffo cartoon: you see a monster, taller than skyscrapers, stomping his way through the city. That’s trouble. But then you notice that he’s a hand puppet: that’s serious trouble. Perhaps Stalin was waiting for serious trouble, for example if the Norse Gods had come in on the side of the Nazis.

Anyhow, the Soviets had a big smallpox program. In some ways smallpox is almost the ultimate biological weapon – very contagious, while some strains are highly lethal. And it’s controllable – you can easily shield your own guys via vaccination. Of course back in the 1970s, almost everyone was vaccinated, so it was also completely useless.

We kept vaccinating people as long as smallpox was still running around in the Third World. But when it was eradicated in 1978, people stopped. There seemed to be no reason – and so, as new unvaccinated generations arose, the military efficacy of smallpox has gone up and up and up. It got to the point where the World Health Organization threw away its stockpile of vaccine, a couple hundred million units, just to save on the electric bill for the refrigerators.

Consider that the Soviet Union was always the strongest proponent of worldwide eradication of smallpox, dating back to the 1950s. Successful eradication would eventually make smallpox a superweapon: does it seem possible that the people running the Soviet Union had this in mind as a long-term goal? Potentiation through ‘eradication’? Did the left hand know what the strangling hand had in mind, and shape policies accordingly? Of course.

D.A. Henderson, the man that led the eradication campaign, died just a few days ago. He was aware of this possibility.

https://www.washingtonpost.com/local/obituaries/da-henderson-disease-detective-who-eradicated-smallpox-dies-at-87/2016/08/20/b270406e-63dd-11e6-96c0-37533479f3f5_story.html
Dr. Henderson strenuously argued that the samples should be destroyed because, in his view, any amount of smallpox was too dangerous to tolerate. A side effect of the eradication program — and one of the “horrendous ironies of history,” said “Hot Zone” author Preston — is that since no one in generations has been exposed to the virus, most of the world’s population would be vulnerable to it in the event of an outbreak.

“I feel very — what should we say? — dispirited,” Dr. Henderson told the Times in 2002. “Here we are, regressing to defend against something we thought was permanently defeated. We shouldn’t have to be doing this.”

http://www.bbc.co.uk/history/worldwars/coldwar/pox_weapon_01.shtml#four
Ken Alibek believes that, following the collapse of the Soviet Union in 1991, unemployed or badly-paid scientists are likely to have sold samples of smallpox clandestinely and gone to work in rogue states engaged in illicit biological weapons development. DA Henderson agrees that this is a plausible scenario and is upset by the legacy it leaves. 'If the [Russian bio-weapons] programme had not taken place we would not I think be worrying about smallpox in the same way. One can feel extremely bitter and extremely angry about this because I think they've subjected the entire world to a risk which was totally unnecessary.'

also:
War in the East: https://westhunt.wordpress.com/2012/02/02/war-in-the-east/
The books generally say that biological warfare is ineffective, but then they would say that, wouldn’t they? There is reason to think it has worked, and it may have made a difference.

...

We know of course that this offensive eventually turned into a disaster in which the German Sixth Army was lost. But nobody knew that then. The Germans were moving forward with little to stop them: they were scary SOBs. Don’t let anyone tell you otherwise. The Soviet leadership was frightened, enough so that they sent out a general backs-to-the-wall, no-retreat order that told the real scale of losses. That was the Soviet mood in the summer of 42.

That’s the historical background. Now for the clues. First, Ken Alibek was a bioweapons scientist back in the USSR. In his book, Biohazard, he tells how, as a student, he was given the assignment of explaining a mysterious pattern of tularemia epidemics back in the war. To him, it looked artificial, whereupon his instructor said something to the effect of “you never thought that, you never said that. Do you want a job?” Second, Antony Beevor mentions the mysteriously poor health of German troops at Stalingrad – well before being surrounded (p210-211). Third, the fact that there were large tularemia epidemics in the Soviet Union during the war – particularly in the ‘oblasts temporarily occupied by the Fascist invaders’, described in History and Incidence of Tularemia in the Soviet Union, by Robert Pollitzer.

Fourth, personal communications from a friend who once worked at Los Alamos. Back in the 90’s, after the fall of the Soviet Union, there was a time when you could hire a whole team of decent ex-Soviet physicists for the price of a single American. My friend was having a drink with one of his Russian contractors, son of a famous ace, who started talking about how his dad had dropped tularemia here, here, and here near Leningrad (sketching it out on a napkin) during the Great Patriotic War. Not that many people spontaneously bring up stories like that in dinner conversation…

Fifth, the huge Soviet investment in biowarfare throughout the Cold War is a hint: they really, truly, believed in it, and what better reason could there be than decisive past successes? In much the same way, our lavish funding of the NSA strongly suggested that cryptanalysis and sigint must have paid off handsomely for the Allies in WWII – far more so than publicly acknowledged, until the revelations about Enigma in the 1970s and later.

We know that tularemia is an effective biological agent: many countries have worked with it, including the Soviet Union. If the Russians had had this capability in the summer of ’42 (and they had sufficient technology: basically just fermentation), it is hard to imagine them not using it. I mean, we’re talking about Stalin. You think he had moral qualms? But we too would have used germ warfare if our situation had been desperate.

https://westhunt.wordpress.com/2012/02/02/war-in-the-east/#comment-1330
Sean, you don’t know what you’re talking about. Anybody exposed to an aerosol form of tularemia is likely to get it: 10-50 bacteria are enough to give a 50% probability of infection. You do not need to be sickly, starved, or immunosuppressed in order to contract it, although those factors probably influence its lethality. The same is true of anthrax: if it starts growing in your lungs, you get sick. You’re not born immune. There are in fact some diseases that you _are_ born immune to (most strains of sleeping sickness, for example), or at least have built-in defenses against (Epstein-Barr, cf TLRs).
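The quoted dose figures fit the standard single-hit (independent-action) dose-response model, in which each inhaled organism independently has a small chance of starting an infection. A sketch assuming an ID50 of 25 organisms, the midpoint of the 10-50 range quoted above:

```python
def infection_prob(n, id50):
    """Single-hit dose-response: calibrate the per-organism infection
    probability p so that id50 organisms give a 50% attack rate, then
    P(infection | n organisms) = 1 - (1 - p)^n."""
    p = 1 - 0.5 ** (1 / id50)   # per-organism probability (≈ 0.027 here)
    return 1 - (1 - p) ** n

half = infection_prob(25, 25)     # 50% by construction
heavy = infection_prob(250, 25)   # a tenfold dose is near-certain
```

On this model prior health barely enters the infection probability at all, which is the point of the comment: at aerosol doses like these, essentially anyone exposed is likely to contract it.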

A few other facts I’ve just found: First, the Soviets had a tularemia vaccine, which was used to an unclear extent at Stalingrad. At the time nobody else did.

Next, as far as I can tell, the Stalingrad epidemic is the only large-scale pneumonic tularemia epidemic that has ever occurred.

Next cool fact: during the Cold War, the Soviets were somewhat more interested in tularemia than other powers. At the height of the US biowarfare program, we produced less than two tons per year. The Soviets produced over one thousand tons of F. tularensis per year in that period.

Next question, one which deserves a serious, extended treatment. Why are so many people so very very good at coming up with wrong answers? Why do they apply Occam’s razor backwards? This is particularly common in biology. I’m not talking about Croddy in Military Medicine: he probably had orders to lie, and you can see hints of that if you read carefully.

https://twitter.com/gcochran99/status/952248214576443393
https://archive.is/tEcgK
Joining the Army might work. In general not available to private individuals, for reasons that are largely bullshit.
war  disease  speculation  military  russia  history  len:long  west-hunter  technology  multi  c:**  parasites-microbiome  mostly-modern  arms  scitariat  communism  maxim-gun  biotech  ideas  world-war  questions  poast  occam  parsimony  trivia  data  stylized-facts  scale  bio  epidemiology  🌞  nietzschean  food  death  nihil  axioms  morality  strategy  unintended-consequences  risk  news  org:rec  prepping  profile  postmortem  people  crooked  org:anglo  thick-thin  alt-inst  flux-stasis  flexibility  threat-modeling  twitter  social  discussion  backup  prudence  government  spreading  gender  sex  sexuality  elite  ability-competence  rant  pharma  drugs  medicine  politics  ideology  impetus  big-peeps  statesmen 
september 2016 by nhaliday
Noise: dinosaurs, syphilis, and all that | West Hunter
Generally speaking, I thought the paleontologists were a waste of space: innumerate, ignorant about evolution, and simply not very smart.

None of them seemed to understand that a sharp, short unpleasant event is better at causing a mass extinction, since it doesn’t give flora and fauna time to adapt.

Most seemed to think that gradual change caused by slow geological and erosion forces was ‘natural’, while extraterrestrial impact was not. But if you look at the Moon, or Mars, or the Kirkwood gaps in the asteroids, or think about the KAM theorem, it is apparent that Newtonian dynamics implies that orbits will be perturbed, and that sometimes there will be catastrophic cosmic collisions. Newtonian dynamics is as ‘natural’ as it gets: paleontologists not studying it in school and not having much math hardly makes it ‘unnatural’.

One of the more interesting general errors was not understanding how to deal with noise – incorrect observations. There’s a lot of noise in the paleontological record. Dinosaur bones can be eroded and redeposited well after their life times – well after the extinction of all dinosaurs. The fossil record is patchy: if a species is rare, it can easily look as if it went extinct well before it actually did. This means that the data we have is never going to agree with a perfectly correct hypothesis – because some of the data is always wrong. Particularly true if the hypothesis is specific and falsifiable. If your hypothesis is vague and imprecise – not even wrong – it isn’t nearly as susceptible to noise. As far as I can tell, a lot of paleontologists [along with everyone in the social sciences] think of unfalsifiability as a strength.

Done Quickly: https://westhunt.wordpress.com/2011/12/03/done-quickly/
I’ve never seen anyone talk about it much, but when you think about mass extinctions, you also have to think about rates of change.

You can think of a species occupying a point in a many-dimensional space, where each dimension represents some parameter that influences survival and/or reproduction: temperature, insolation, nutrient concentrations, oxygen partial pressure, toxin levels, yada yada yada. That point lies within a zone of habitability – the set of environmental conditions that the species can survive. Mass extinction occurs when environmental changes are so large that many species are outside their comfort zone.

The key point is that, with gradual change, species adapt. In just a few generations, you can see significant heritable responses to a new environment. Frogs have evolved much greater tolerance of acidification in 40 years (about 15 generations). Some plants in California have evolved much greater tolerance of copper in just 70 years.

As this happens, the boundaries of the comfort zone move. Extinctions occur when the rate of environmental change is greater than the rate of adaptation, or when the amount of environmental change exceeds the limit of feasible adaptation. There are such limits: bar-headed geese fly over Mt. Everest, where the oxygen partial pressure is about a third of that at sea level, but I’m pretty sure that no bird could survive on the Moon.
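The rate argument above can be caricatured with a single lag equation: a trait chases a drifting environmental optimum, and the steady-state lag is (rate of environmental change)/(rate of adaptation). Extinction happens when that lag exceeds the feasible tolerance. A toy sketch, with all parameters hypothetical:

```python
def survives(env_rate, adapt_rate, tolerance, T=100, dt=0.1):
    """Toy model: trait z tracks a moving optimum theta; the species
    dies once the lag (theta - z) exceeds its tolerance."""
    z = theta = 0.0
    for _ in range(int(T / dt)):
        theta += env_rate * dt                # environment drifts
        z += adapt_rate * (theta - z) * dt    # selection closes the gap
        if abs(theta - z) > tolerance:
            return False                      # outside the comfort zone
    return True

# steady-state lag ≈ env_rate / adapt_rate, so survival needs that
# ratio below the tolerance:
slow_change = survives(env_rate=0.5, adapt_rate=1.0, tolerance=1.0)
fast_change = survives(env_rate=2.0, adapt_rate=1.0, tolerance=1.0)
```

Gradual change (small env_rate) is survivable no matter how long it runs, while a sharp, short event (large env_rate) kills regardless of total magnitude, which is the post's argument for why abrupt causes make better mass-extinction candidates.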

...

Paleontologists prefer gradualist explanations for mass extinctions, but they must be wrong, for the most part.
disease  science  critique  rant  history  thinking  regularizer  len:long  west-hunter  thick-thin  occam  social-science  robust  parasites-microbiome  early-modern  parsimony  the-trenches  bounded-cognition  noise-structure  signal-noise  scitariat  age-of-discovery  sex  sexuality  info-dynamics  alt-inst  map-territory  no-go  contradiction  dynamical  math.DS  space  physics  mechanics  archaeology  multi  speed  flux-stasis  smoothness  evolution  environment  time  shift  death  nihil  inference  apollonian-dionysian  error  explanation  spatial  discrete  visual-understanding  consilience  traces  evidence  elegance 
september 2016 by nhaliday
Intellectual Hipsters and Meta-Contrarianism - Less Wrong
So my hypothesis is that if a certain side of an issue has very obvious points in support of it, and the other side of an issue relies on much more subtle points that the average person might not be expected to grasp, then adopting the second side of the issue will become a signal for intelligence, even if that side of the argument is wrong.
thinking  rationality  yvain  essay  community  contrarianism  lesswrong  regularizer  insight  len:short  epistemic  ratty  biases  pre-2013  occam  error  intricacy  parsimony  bounded-cognition  meta:rhetoric  signaling  info-dynamics 
august 2016 by nhaliday
The Awe Delusion
Art is a technology. If you did a Casablanca / Law & Order double feature you might notice that although Casablanca perhaps has more ‘artistic value’ (that horribly vague phrase), Law & Order tells its stories with a mind-boggling efficiency that vastly outstrips the former. Some time after 1960 filmmakers learned how to tell more story with less.
thinking  art  philosophy  vgr  postrat  literature  insight  mystic  essay  hmm  aesthetics  len:short  🦀  minimalism  beauty  parsimony  elegance 
june 2016 by nhaliday
minimalism  minimum-viable  missing-heritability  mit  mobile  model-class  model-organism  models  moloch  moments  monetary-fiscal  money  mooc  morality  mostly-modern  motivation  move-fast-(and-break-things)  msr  multi  multiplicative  murray  music  mutation  mystic  n-factor  natural-experiment  nature  near-far  necessity-sufficiency  neocons  network-structure  networking  neuro  neurons  new-religion  news  nibble  nietzschean  nihil  nitty-gritty  no-go  noise-structure  nonlinearity  nordic  nostalgia  notation  notetaking  novelty  number  numerics  obesity  objektbuch  ocaml-sml  occam  oceans  old-anglo  oly  oop  operational  optimism  optimization  order-disorder  orders  ORFE  org:anglo  org:bleg  org:com  org:edu  org:gov  org:inst  org:junk  org:mag  org:mat  org:med  org:nat  org:popup  org:rec  org:sci  organization  os  oscillation  oss  osx  outcome-risk  overflow  p:null  papers  parasites-microbiome  pareto  parsimony  paying-rent  pdf  people  performance  personality  perturbation  pessimism  pharma  philosophy  physics  pic  piracy  plan9  plots  pls  plt  poast  podcast  policy  polis  polisci  political-econ  politics  poll  pop-diff  pop-structure  popsci  population-genetics  postmortem  postrat  pragmatic  pre-2013  prediction  prediction-markets  prepping  preprint  presentation  prioritizing  priors-posteriors  privacy  pro-rata  probability  problem-solving  productivity  prof  profile  programming  project  proofs  properties  protestant-catholic  protocol-metadata  prudence  pseudoE  psychiatry  psycho-atoms  psychology  psychometrics  public-goodish  public-health  puzzles  python  q-n-a  qra  QTL  quality  questions  quora  quotes  race  random  randy-ayndy  ranking  rant  rat-pack  rationality  ratty  realness  reason  rec-math  recommendations  red-queen  reduction  reference  reflection  regression  regularizer  regulation  reinforcement  religion  replication  repo  reputation  research  research-program  responsibility 
 retention  retrofit  review  revolution  rhetoric  right-wing  rigidity  rigor  risk  robust  rock  roots  rot  rounding  rsc  russia  rust  s:*  s:**  s:***  saas  safety  sapiens  scala  scale  scaling-tech  scaling-up  sci-comp  science  scifi-fantasy  scitariat  search  security  selection  sensitivity  sequential  sex  sexuality  shift  shipping  SIGGRAPH  signal-noise  signaling  simplification-normalization  singularity  sinosphere  skeleton  skunkworks  sky  slides  smoothness  social  social-choice  social-psych  social-science  sociality  society  sociology  soft-question  software  space  span-cover  spatial  speaking  spearhead  spectral  speculation  speed  spock  spreading  ssc  stackex  stanford  startups  stat-mech  state  state-of-art  statesmen  static-dynamic  stats  status  stereotypes  stock-flow  stories  strategy  street-fighting  stress  strings  structure  study  stylized-facts  sub-super  subculture  success  sulla  summary  summer-2014  supply-demand  survey  sv  synchrony  syntax  synthesis  system-design  systematic-ad-hoc  systems  tainter  talks  tcs  tcstariat  teaching  tech  tech-infrastructure  technical-writing  technology  techtariat  telos-atelos  temperature  terminal  terrorism  tetlock  the-bones  the-classics  the-great-west-whale  the-south  the-trenches  the-world-is-just-atoms  theory-of-mind  theory-practice  theos  thermo  thesis  thick-thin  things  thinking  threat-modeling  tidbits  tightness  time  time-complexity  time-series  tip-of-tongue  tools  top-n  traces  track-record  trade  tradeoffs  tradition  travel  trees  trends  tribalism  tricks  trivia  troll  trust  truth  turchin  turing  tutorial  twitter  types  ubiquity  ui  unaffiliated  uncertainty  unintended-consequences  uniqueness  unit  universalism-particularism  unix  us-them  usa  ux  vague  values  vampire-squid  variance-components  vcs  vgr  video  virtu  visual-understanding  visualization  volo-avolo  vulgar  walter-scheidel  war  waves  web  
webapp  weird  west-hunter  westminster  white-paper  whole-partial-many  wiki  wild-ideas  wire-guided  within-without  workflow  working-stiff  world  world-war  worrydream  worse-is-better/the-right-thing  writing  X-not-about-Y  yak-shaving  yvain  zeitgeist  zooming  🌞  🎩  👽  🔬  🖥  🤖  🦀 
